llvm/bolt/lib/Core/BinarySection.cpp

//===- bolt/Core/BinarySection.cpp - Section in a binary file -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements the BinarySection class.
//
//===----------------------------------------------------------------------===//
#include "bolt/Core/BinarySection.h"
#include "bolt/Core/BinaryContext.h"
#include "bolt/Utils/Utils.h"
#include "llvm/MC/MCStreamer.h"
#include "llvm/Support/CommandLine.h"
#define DEBUG_TYPE "bolt"
using namespace llvm;
using namespace bolt;
namespace opts {
extern cl::opt<bool> PrintRelocations;
extern cl::opt<bool> HotData;
} // namespace opts
uint64_t BinarySection::Count = 0;
bool BinarySection::isELF() const { return BC.isELF(); }
bool BinarySection::isMachO() const { return BC.isMachO(); }
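// Compute a hash of the contents backing \p BD. Hashes of data referenced
// through relocations are folded in recursively; results are memoized in
// \p Cache, and the early cache insertion also keeps self-referential data
// from recursing forever.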
uint64_t
BinarySection::hash(const BinaryData &BD,
std::map<const BinaryData *, uint64_t> &Cache) const {
auto Itr = Cache.find(&BD);
if (Itr != Cache.end())
return Itr->second;
hash_code Hash =
hash_combine(hash_value(BD.getSize()), hash_value(BD.getSectionName()));
Cache[&BD] = Hash;
if (!containsRange(BD.getAddress(), BD.getSize()))
return Hash;
uint64_t Offset = BD.getAddress() - getAddress();
const uint64_t EndOffset = BD.getEndAddress() - getAddress();
auto Begin = Relocations.lower_bound(Relocation{Offset, 0, 0, 0, 0});
auto End = Relocations.upper_bound(Relocation{EndOffset, 0, 0, 0, 0});
const StringRef Contents = getContents();
while (Begin != End) {
const Relocation &Rel = *Begin++;
Hash = hash_combine(
Hash, hash_value(Contents.substr(Offset, Begin->Offset - Offset)));
if (BinaryData *RelBD = BC.getBinaryDataByName(Rel.Symbol->getName()))
Hash = hash_combine(Hash, hash(*RelBD, Cache));
Offset = Rel.Offset + Rel.getSize();
}
Hash = hash_combine(Hash,
hash_value(Contents.substr(Offset, EndOffset - Offset)));
Cache[&BD] = Hash;
return Hash;
}
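// Emit the section as data through MCStreamer under \p SectionName: switch to
// the corresponding ELF section, align it, and emit the raw contents, with
// relocated ranges replaced by relocation expressions. Relocations sharing an
// offset (composed relocations) are grouped and emitted together.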
void BinarySection::emitAsData(MCStreamer &Streamer,
const Twine &SectionName) const {
StringRef SectionContents = getContents();
MCSectionELF *ELFSection =
BC.Ctx->getELFSection(SectionName, getELFType(), getELFFlags());
Streamer.switchSection(ELFSection);
Streamer.emitValueToAlignment(getAlign());
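  // With hot data reordering enabled, bracket the emitted contents with
  // __hot_data_start/__hot_data_end labels.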
if (BC.HasRelocations && opts::HotData && isReordered())
Streamer.emitLabel(BC.Ctx->getOrCreateSymbol("__hot_data_start"));
LLVM_DEBUG(dbgs() << "BOLT-DEBUG: emitting "
<< (isAllocatable() ? "" : "non-")
<< "allocatable data section " << SectionName << '\n');
if (!hasRelocations()) {
Streamer.emitBytes(SectionContents);
} else {
uint64_t SectionOffset = 0;
for (auto RI = Relocations.begin(), RE = Relocations.end(); RI != RE;) {
auto RelocationOffset = RI->Offset;
assert(RelocationOffset < SectionContents.size() && "overflow detected");
if (SectionOffset < RelocationOffset) {
Streamer.emitBytes(SectionContents.substr(
SectionOffset, RelocationOffset - SectionOffset));
SectionOffset = RelocationOffset;
}
// Get iterators to all relocations with the same offset. Usually, there
// is only one such relocation but there can be more for composed
// relocations.
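      // For example, on RISC-V a label difference (a - b) is expressed as an
      // R_RISCV_ADD32 for 'a' and an R_RISCV_SUB32 for 'b' at the same offset,
      // and the two must be combined into a single emitted value.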
auto ROI = RI;
auto ROE = Relocations.upper_bound(RelocationOffset);
// Start from the next offset on the next iteration.
RI = ROE;
// Skip undefined symbols.
auto HasUndefSym = [this](const auto &Relocation) {
return BC.UndefinedSymbols.count(Relocation.Symbol);
};
if (std::any_of(ROI, ROE, HasUndefSym))
continue;
#ifndef NDEBUG
for (const auto &Relocation : make_range(ROI, ROE)) {
LLVM_DEBUG(
dbgs() << "BOLT-DEBUG: emitting relocation for symbol "
<< (Relocation.Symbol ? Relocation.Symbol->getName()
: StringRef("<none>"))
<< " at offset 0x" << Twine::utohexstr(Relocation.Offset)
<< " with size "
<< Relocation::getSizeForType(Relocation.Type) << '\n');
}
#endif
size_t RelocationSize = Relocation::emit(ROI, ROE, &Streamer);
SectionOffset += RelocationSize;
}
assert(SectionOffset <= SectionContents.size() && "overflow error");
if (SectionOffset < SectionContents.size())
Streamer.emitBytes(SectionContents.substr(SectionOffset));
}
if (BC.HasRelocations && opts::HotData && isReordered())
Streamer.emitLabel(BC.Ctx->getOrCreateSymbol("__hot_data_end"));
}
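// Write out pending relocations and byte patches directly over the original
// section contents in the output file. Symbol values are obtained through
// \p Resolver, encoded for the relocation type, and written at the section's
// file offset plus the relocation offset.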
void BinarySection::flushPendingRelocations(raw_pwrite_stream &OS,
SymbolResolverFuncTy Resolver) {
if (PendingRelocations.empty() && Patches.empty())
return;
const uint64_t SectionAddress = getAddress();
  // We apply relocations to the original section contents. For allocatable
  // sections, this means using their input file offsets, since the output file
  // offset could change (e.g., for a new instance of .text). For
  // non-allocatable sections, the output offset should always be a valid one.
const uint64_t SectionFileOffset =
isAllocatable() ? getInputFileOffset() : getOutputFileOffset();
LLVM_DEBUG(
dbgs() << "BOLT-DEBUG: flushing pending relocations for section "
<< getName() << '\n'
<< " address: 0x" << Twine::utohexstr(SectionAddress) << '\n'
<< " offset: 0x" << Twine::utohexstr(SectionFileOffset) << '\n');
for (BinaryPatch &Patch : Patches)
OS.pwrite(Patch.Bytes.data(), Patch.Bytes.size(),
SectionFileOffset + Patch.Offset);
for (Relocation &Reloc : PendingRelocations) {
uint64_t Value = Reloc.Addend;
if (Reloc.Symbol)
Value += Resolver(Reloc.Symbol);
Value = Relocation::encodeValue(Reloc.Type, Value,
SectionAddress + Reloc.Offset);
OS.pwrite(reinterpret_cast<const char *>(&Value),
Relocation::getSizeForType(Reloc.Type),
SectionFileOffset + Reloc.Offset);
LLVM_DEBUG(
dbgs() << "BOLT-DEBUG: writing value 0x" << Twine::utohexstr(Value)
<< " of size " << Relocation::getSizeForType(Reloc.Type)
<< " at section offset 0x" << Twine::utohexstr(Reloc.Offset)
<< " address 0x"
<< Twine::utohexstr(SectionAddress + Reloc.Offset)
<< " file offset 0x"
<< Twine::utohexstr(SectionFileOffset + Reloc.Offset) << '\n';);
}
clearList(PendingRelocations);
}
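// A reordered section owns a private copy of its data. Otherwise, output
// contents of non-allocatable, non-emitted sections are freed unless they
// alias the input file's section contents.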
BinarySection::~BinarySection() {
if (isReordered()) {
delete[] getData();
return;
}
if (!isAllocatable() && !hasValidSectionID() &&
(!hasSectionRef() ||
OutputContents.data() != getContents(Section).data())) {
delete[] getOutputData();
}
}
void BinarySection::clearRelocations() { clearList(Relocations); }
void BinarySection::print(raw_ostream &OS) const {
OS << getName() << ", "
<< "0x" << Twine::utohexstr(getAddress()) << ", " << getSize() << " (0x"
<< Twine::utohexstr(getOutputAddress()) << ", " << getOutputSize() << ")"
<< ", data = " << getData() << ", output data = " << getOutputData();
if (isAllocatable())
OS << " (allocatable)";
if (isVirtual())
OS << " (virtual)";
if (isTLS())
OS << " (tls)";
if (opts::PrintRelocations)
for (const Relocation &R : relocations())
OS << "\n " << R;
}
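// Build a relocation set with offsets adjusted to the output offsets of moved
// data. Relocations against jump tables, or against data left in place when
// not reordering in-place, are dropped.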
BinarySection::RelocationSetType
BinarySection::reorderRelocations(bool Inplace) const {
assert(PendingRelocations.empty() &&
"reordering pending relocations not supported");
RelocationSetType NewRelocations;
for (const Relocation &Rel : relocations()) {
uint64_t RelAddr = Rel.Offset + getAddress();
BinaryData *BD = BC.getBinaryDataContainingAddress(RelAddr);
BD = BD->getAtomicRoot();
assert(BD);
if ((!BD->isMoved() && !Inplace) || BD->isJumpTable())
continue;
Relocation NewRel(Rel);
uint64_t RelOffset = RelAddr - BD->getAddress();
NewRel.Offset = BD->getOutputOffset() + RelOffset;
assert(NewRel.Offset < getSize());
LLVM_DEBUG(dbgs() << "BOLT-DEBUG: moving " << Rel << " -> " << NewRel
<< "\n");
NewRelocations.emplace(std::move(NewRel));
}
return NewRelocations;
}
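// Rewrite the section contents according to \p Order: each datum is copied to
// its output offset, gaps are zero-padded, and the relocation set is replaced
// with the reordered one.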
void BinarySection::reorderContents(const std::vector<BinaryData *> &Order,
bool Inplace) {
IsReordered = true;
Relocations = reorderRelocations(Inplace);
std::string Str;
raw_string_ostream OS(Str);
const char *Src = Contents.data();
LLVM_DEBUG(dbgs() << "BOLT-DEBUG: reorderContents for " << Name << "\n");
for (BinaryData *BD : Order) {
assert((BD->isMoved() || !Inplace) && !BD->isJumpTable());
assert(BD->isAtomic() && BD->isMoveable());
const uint64_t SrcOffset = BD->getAddress() - getAddress();
assert(SrcOffset < Contents.size());
assert(SrcOffset == BD->getOffset());
while (OS.tell() < BD->getOutputOffset())
OS.write((unsigned char)0);
LLVM_DEBUG(dbgs() << "BOLT-DEBUG: " << BD->getName() << " @ " << OS.tell()
<< "\n");
OS.write(&Src[SrcOffset], BD->getOutputSize());
}
if (Relocations.empty()) {
// If there are no existing relocations, tack a phony one at the end
// of the reordered segment to force LLVM to recognize and map this
// section.
MCSymbol *ZeroSym = BC.registerNameAtAddress("Zero", 0, 0, 0);
addRelocation(OS.tell(), ZeroSym, Relocation::getAbs64(), 0xdeadbeef);
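    // The 8-byte zero written below gives the phony Abs64 relocation a slot to
    // patch at the end of the reordered contents.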
uint64_t Zero = 0;
OS.write(reinterpret_cast<const char *>(&Zero), sizeof(Zero));
}
auto *NewData = reinterpret_cast<char *>(copyByteArray(OS.str()));
Contents = OutputContents = StringRef(NewData, OS.str().size());
OutputSize = Contents.size();
}
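
// Illustrative sketch, not part of the original source: the loop in
// reorderContents() above zero-pads the output stream up to each datum's new
// offset and then copies the datum's bytes from the old contents. A
// standalone equivalent over plain offsets, with hypothetical names, could
// look like this:
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

namespace reorder_sketch {

struct Datum {
  uint64_t SrcOffset;    // offset of the datum in the original contents
  uint64_t OutputOffset; // desired offset in the reordered contents
  uint64_t Size;         // number of bytes to copy
};

// Copy each datum to its new offset, zero-filling any gaps, mirroring the
// OS.tell()/OS.write() logic in BinarySection::reorderContents().
inline std::string reorderBytes(const std::string &Contents,
                                const std::vector<Datum> &Order) {
  std::string Out;
  for (const Datum &D : Order) {
    assert(D.SrcOffset + D.Size <= Contents.size() && "datum out of bounds");
    if (Out.size() < D.OutputOffset)
      Out.append(D.OutputOffset - Out.size(), '\0');
    Out.append(Contents, D.SrcOffset, D.Size);
  }
  return Out;
}

} // namespace reorder_sketch
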
std::string BinarySection::encodeELFNote(StringRef NameStr, StringRef DescStr,
uint32_t Type) {
std::string Str;
raw_string_ostream OS(Str);
const uint32_t NameSz = NameStr.size() + 1;
const uint32_t DescSz = DescStr.size();
OS.write(reinterpret_cast<const char *>(&(NameSz)), 4);
OS.write(reinterpret_cast<const char *>(&(DescSz)), 4);
OS.write(reinterpret_cast<const char *>(&(Type)), 4);
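  // The name is written NUL-terminated and padded to a 4-byte boundary; the
  // descriptor that follows is padded the same way.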
OS << NameStr << '\0';
for (uint64_t I = NameSz; I < alignTo(NameSz, 4); ++I)
OS << '\0';
OS << DescStr;
for (uint64_t I = DescStr.size(); I < alignTo(DescStr.size(), 4); ++I)
OS << '\0';
return OS.str();
}
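
// Illustrative sketch, not part of the original source. The blob produced by
// encodeELFNote() follows the usual ELF note layout: a 4-byte namesz that
// counts the terminating NUL, a 4-byte descsz, a 4-byte type, then the name
// and the descriptor, each padded to a 4-byte boundary. A minimal standalone
// decoder for that layout, with hypothetical names, could look like this:
#include <cstdint>
#include <cstring>
#include <string>

namespace note_sketch {

struct DecodedNote {
  std::string Name;
  std::string Desc;
  uint32_t Type = 0;
};

inline uint64_t alignTo4(uint64_t V) { return (V + 3) & ~uint64_t(3); }

// Parse a note blob laid out the way encodeELFNote() writes it. Returns false
// if the blob is too small for the sizes claimed in its header.
inline bool decodeELFNote(const std::string &Blob, DecodedNote &Note) {
  if (Blob.size() < 12)
    return false;
  uint32_t NameSz = 0, DescSz = 0;
  std::memcpy(&NameSz, Blob.data(), 4);
  std::memcpy(&DescSz, Blob.data() + 4, 4);
  std::memcpy(&Note.Type, Blob.data() + 8, 4);
  const uint64_t NameEnd = 12 + alignTo4(NameSz);
  const uint64_t DescEnd = NameEnd + alignTo4(DescSz);
  if (Blob.size() < DescEnd)
    return false;
  // NameSz includes the NUL terminator appended by the encoder.
  Note.Name.assign(Blob.data() + 12, NameSz ? NameSz - 1 : 0);
  Note.Desc.assign(Blob.data() + NameEnd, DescSz);
  return true;
}

} // namespace note_sketch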