The linker supports a feature to force load an object from a static
archive if it defines an Objective-C category.
This API supports that feature by examining every section in the
module to determine whether the module defines a category.
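A minimal sketch of the kind of check this describes, assuming the category markers show up as globals placed in an "__objc_catlist" section (this is not the actual libLTO code):
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/Module.h"

// Hedged sketch, not the actual implementation: report whether any global
// in the module is placed in an Objective-C category list section.
static bool hasObjCCategory(const llvm::Module &M) {
  for (const llvm::GlobalVariable &GV : M.globals())
    if (GV.hasSection() &&
        GV.getSection().find("__objc_catlist") != llvm::StringRef::npos)
      return true;
  return false;
}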
llvm-svn: 275125
Summary:
This is a cleanup and refactoring of the interception code on Windows.
Enhancements:
* Adding support for 64-bit code
* Adding several hooking techniques (see the sketch below):
* Detour
* JumpRedirect
* HotPatch
* Trampoline
* Adding a trampoline memory pool (64-bit) and releasing the allocated memory in unittests
Cleanup:
* Adding unittests for 64-bit hooking techniques
* Enhancing the RoundUpInstruction by sharing a common decoder
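As a rough illustration of the simplest technique above (the JumpRedirect), here is a hedged sketch using plain Win32 calls; it is not the interceptor's actual code and ignores the 64-bit, trampoline, and instruction-decoding cases this patch handles:
#include <windows.h>
#include <cstdint>
#include <cstring>

// Overwrite the first five bytes of Target with "jmp rel32" to Hook.
// Assumes Hook is within +/-2GB of Target (the reach of rel32).
bool WriteJumpRedirect(void *Target, void *Hook) {
  uint8_t Patch[5];
  Patch[0] = 0xE9;  // jmp rel32 opcode
  int32_t Rel = (int32_t)((uintptr_t)Hook - ((uintptr_t)Target + 5));
  std::memcpy(&Patch[1], &Rel, sizeof(Rel));

  DWORD OldProtect;
  if (!VirtualProtect(Target, sizeof(Patch), PAGE_EXECUTE_READWRITE,
                      &OldProtect))
    return false;
  std::memcpy(Target, Patch, sizeof(Patch));
  VirtualProtect(Target, sizeof(Patch), OldProtect, &OldProtect);
  FlushInstructionCache(GetCurrentProcess(), Target, sizeof(Patch));
  return true;
}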
Reviewers: rnk
Subscribers: llvm-commits, wang0109, chrisha
Differential Revision: http://reviews.llvm.org/D22111
llvm-svn: 275123
This patch simplifies the graph builder by encoding nodes as {Value,
Dereference Level} pairs. This lets us kill edge types, and allows us to
get rid of hacks in StratifiedSets (like addAttrsBelow/...). This
simplification also allows us to remove InstantiatedRelations and
InstantiatedAttrs.
Patch by Jia Chen.
Differential Revision: http://reviews.llvm.org/D22080
llvm-svn: 275122
Do not assign file IDs to source regions located within system headers,
and do not construct counter mapping regions out of them.
This makes coverage reports less cluttered and less mysterious. E.g.,
using the "assert" macro doesn't cause assert.h to appear in reports,
and it no longer shows the "assertion failed" branch as an uncovered
region.
It also makes coverage mapping sections a bit smaller (e.g. a 1%
reduction in a stage2 build of bin/llvm-as).
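A hedged sketch of the check involved (not the verbatim CoverageMappingGen change): a region whose start location falls inside a system header is skipped before any counter mapping region is emitted for it.
#include "clang/Basic/SourceLocation.h"
#include "clang/Basic/SourceManager.h"

// Sketch only: decide whether a source region should be ignored because it
// starts inside a system header.
static bool skipSystemHeaderRegion(const clang::SourceManager &SM,
                                   clang::SourceLocation StartLoc) {
  return StartLoc.isValid() && SM.isInSystemHeader(StartLoc);
}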
llvm-svn: 275121
The issue was that we had two global variables: one holding a DebuggerList pointer and one holding a std::recursive_mutex pointer. These get initialized in Debugger::Initialize(), and everywhere that uses them does:
if (g_debugger_list_ptr && g_debugger_list_mutex_ptr)
{
  std::lock_guard<std::recursive_mutex> guard(*g_debugger_list_mutex_ptr);
  // do work while mutex is locked
}
Debugger::Terminate() was deleting and nulling out g_debugger_list_ptr, creating a race condition: one thread could evaluate the if statement to true, then another thread could call Debugger::Terminate(), take the mutex, and delete and null out g_debugger_list_ptr, and the first thread would then lock the mutex and try to use the freed g_debugger_list_ptr. The fix is simply to never delete and null out the g_debugger_list_ptr variable.
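A sketch of the fixed pattern, with hypothetical stand-ins for the globals named above (this is not the verbatim LLDB code):
#include <memory>
#include <mutex>
#include <vector>

class Debugger {};
typedef std::vector<std::shared_ptr<Debugger>> DebuggerList;
static DebuggerList *g_debugger_list_ptr = nullptr;
static std::recursive_mutex *g_debugger_list_mutex_ptr = nullptr;

// Terminate clears the list but intentionally never frees the two globals,
// so a reader that already passed the null checks can never dereference
// freed memory.
static void TerminateDebuggers() {
  if (g_debugger_list_ptr && g_debugger_list_mutex_ptr) {
    std::lock_guard<std::recursive_mutex> guard(*g_debugger_list_mutex_ptr);
    g_debugger_list_ptr->clear();
    // Deliberately no "delete g_debugger_list_ptr; g_debugger_list_ptr = nullptr;"
  }
}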
llvm-svn: 275119
Summary:
Aiming to correct the ordering of loads/stores. This patch changes the
insert point for loads to the position of the first load.
It updates the ordering method for loads to insert before, rather than after.
Before this patch the following sequence:
"load a[1], store a[1], store a[0], load a[2]"
would incorrectly vectorize to "store a[0,1], load a[1,2]".
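For illustration only (written at the C++ level rather than in the vectorizer's IR), the scalar order requires the load of a[1] to observe the value before the store to a[1]:
// Illustrative only; not the vectorizer's internal representation.
void scalarOrder(int *a, int x, int y, int &t0, int &t1) {
  t0 = a[1];  // load a[1] -- must see the value before the next store
  a[1] = x;   // store a[1]
  a[0] = y;   // store a[0]
  t1 = a[2];  // load a[2]
  // A vectorized "store a[0..1]; load a[1..2]" would make t0 observe x.
}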
The correctness check was assuming the insertion point for loads is at
the position of the first load, when in practice it was at the last
load. An alternative fix would have been to invert the correctness check.
The current fix changes insert position but also requires reordering of
instructions before the vectorized load.
Updated testcases to reflect the changes.
Reviewers: tstellarAMD, llvm-commits, jlebar, arsenm
Subscribers: mzolotukhin
Differential Revision: http://reviews.llvm.org/D22071
llvm-svn: 275117
Immediate branch targets aren't commonly used, but if they are we should make
sure they can actually be encoded. This means they must be divisible by 2 when
targeting Thumb mode, and by 4 when targeting ARM mode.
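A minimal sketch of the constraint (not the backend's actual predicate):
#include <cstdint>

// An immediate branch target must be a multiple of the instruction
// alignment of the mode being targeted: 2 for Thumb, 4 for ARM.
static bool isEncodableBranchTarget(uint64_t Imm, bool IsThumb) {
  const uint64_t Align = IsThumb ? 2 : 4;
  return Imm % Align == 0;
}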
Also do a little naming cleanup while I was changing everything around anyway.
llvm-svn: 275116
Summary:
Nested if statements can generate empty BBs whose terminator branches
unconditionally to its successor. These branches are kept in some cases
to help generate better line number information, but there is no reason
to keep the empty blocks that result from nested ifs.
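An illustrative C++ example (not the codegen change itself) of the shape that produces such a block:
void work();

// The inner join point of the nested "if" can end up as an empty basic
// block whose only job is to branch unconditionally to the outer join
// point; that empty block is what gets removed.
void example(bool a, bool b) {
  if (a) {
    if (b)
      work();
    // inner join block: empty, branches straight to the outer join
  }
  // outer join block
}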
Reviewers: mehdi_amini, dblaikie, echristo
Subscribers: mehdi_amini, cfe-commits
Differential review: http://reviews.llvm.org/D11360
llvm-svn: 275115
This cleans up a previous optimization attempt in hash, and results in
additional performance improvements over that previous attempt. Additionally
this new optimization does not hinder the power of 2 bucket count optimization.
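For context, the power-of-two bucket count optimization referred to here boils down to reducing the hash with a mask instead of a modulo; a hedged sketch (not the library's exact code):
#include <cstddef>

// When the bucket count is a power of two, the index computation reduces to
// a mask; the hash change must keep this fast path effective.
inline std::size_t constrain_hash(std::size_t h, std::size_t bucket_count) {
  return (bucket_count & (bucket_count - 1)) == 0
             ? (h & (bucket_count - 1))
             : (h % bucket_count);
}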
llvm-svn: 275114
Summary:
Setting MIMG to 0 has a bunch of unexpected side effects, including that
isVMEM returns false which leads to incorrect treatment in the hazard
recognizer. The reason I noticed it is that it also leads to incorrect
treatment in VGPR-to-SGPR copies, which is one cause of the referenced bug.
The only reason why MIMG was set to 0 is to signal the special handling of
dmasks, but that can be checked differently.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96877
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, kzhuravl, llvm-commits
Differential Revision: http://reviews.llvm.org/D22210
llvm-svn: 275113
Summary:
This patch is a refactoring of the way cmake 'targets' are grouped.
It won't affect non-UI CMake generators.
Clang/LLVM use a structured way of grouping targets, which eases
navigation through the Visual Studio UI. The Compiler-RT projects
currently differ from the way Clang/LLVM group targets.
This patch doesn't contain behavior changes.
Reviewers: kubabrecka, rnk
Subscribers: wang0109, llvm-commits, kubabrecka, chrisha
Differential Revision: http://reviews.llvm.org/D21952
llvm-svn: 275111
This will be useful once we start adding the ability to dump type
records and symbol records, since it will allow us to generate
mergeable information instead of information that specifies an
entire file.
llvm-svn: 275109
Summary:
The main bug fix here is using the 32-bit encoding of V_ADD_I32 in
materializeFrameBaseRegister and resolveFrameIndex, so that arbitrary
immediates work.
The second part is that we may now require the SegmentWaveByteOffset
even when there are initially no stack objects and VGPR spilling isn't
enabled, for stack slots that are allocated later. This means that some
bits become effectively dead and can be cleaned up.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=96602
Tested-by: Kai Wasserbäch <kai@dev.carbon-project.org>
Reviewers: arsenm, tstellarAMD
Subscribers: arsenm, llvm-commits, kzhuravl
Differential Revision: http://reviews.llvm.org/D21551
llvm-svn: 275108
Memory will be committed on demand when an exception happens while
accessing the shadow memory region.
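A hedged sketch of the technique (not the actual ASan runtime code): the shadow range is reserved up front and pages are committed lazily from an exception handler when an access violation hits the still-uncommitted part. ShadowBeg/ShadowEnd are placeholder globals assumed to be set at initialization.
#include <windows.h>
#include <cstdint>

static uintptr_t ShadowBeg, ShadowEnd;  // placeholders, set during init

static LONG CALLBACK CommitShadowOnFault(PEXCEPTION_POINTERS Info) {
  if (Info->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION)
    return EXCEPTION_CONTINUE_SEARCH;
  uintptr_t Addr = (uintptr_t)Info->ExceptionRecord->ExceptionInformation[1];
  if (Addr < ShadowBeg || Addr >= ShadowEnd)
    return EXCEPTION_CONTINUE_SEARCH;
  // Commit the faulting page, then resume the instruction that trapped.
  if (!VirtualAlloc((LPVOID)Addr, 1, MEM_COMMIT, PAGE_READWRITE))
    return EXCEPTION_CONTINUE_SEARCH;
  return EXCEPTION_CONTINUE_EXECUTION;
}

// Installed once at startup, e.g.:
//   AddVectoredExceptionHandler(/*First=*/1, CommitShadowOnFault);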
Patch by: Wei Wang
Differential Revision: http://reviews.llvm.org/D21942
llvm-svn: 275107
Make some AVX and AVX512 cast costs more precise.
Based on part of a patch by Elena Demikhovsky (D15604).
Differential Revision: http://reviews.llvm.org/D22064
llvm-svn: 275106
Blocks to be tail-merged may share more than one successor. Correct the
comment to state that they share a specific successor, SuccBB, rather
than a single successor, which is not true.
llvm-svn: 275104
This bug (llvm.org/PR28124) was introduced by r237977, which refactored
the tail call sequence to be generated in two passes instead of one.
Unfortunately, the stack adjustment produced by the first pass was not
recognized by X86FrameLowering::mergeSPUpdates() in all cases, causing
code such as the following, which clobbers the return address, to be
generated:
  popl %edi
  popl %edi
  pushl %eax
  jmp tailcallee # TAILCALL
To fix the problem, the entire stack adjustment is performed in
X86ExpandPseudo::ExpandMI() for tail calls.
Patch by Magnus Lång <margnus1@gmail.com>
Differential Revision: http://reviews.llvm.org/D21325
llvm-svn: 275103
It is an optimization pass and should not run at -O0, especially since
Fast RA will not do the required register coalescing anyway, so running
it there is a loss even from the optimization standpoint.
This also works around (but doesn't quite fix) PR28489.
llvm-svn: 275099
[asan] Fix unittest Asan-x86_64-inline-Test crashing on Windows64
REAL(memcpy) was used in several places in Asan, while REAL(memmove) was not used.
This CL chooses to patch memcpy() first, solving the crash for the unittest.
The crash looks like this:
projects\compiler-rt\lib\asan\tests\default\Asan-x86_64-inline-Test.exe
=================================================================
==22680==ERROR: AddressSanitizer: access-violation on unknown address 0x000000000000 (pc 0x000000000000 bp 0x0029d555f590 sp 0x0029d555f438 T0)
==22680==Hint: pc points to the zero page.
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: access-violation (<unknown module>)
==22680==ABORTING
Patch by: Wei Wang
Differential Revision: http://reviews.llvm.org/D22232
llvm-svn: 275098
Summary: Add support for the z13 instructions LOCHI and LOCGHI which
conditionally load immediate values. Add target instruction info hooks so
that if conversion will allow predication of LHI/LGHI.
Author: RolandF
Reviewers: uweigand
Subscribers: zhanjunl
Committing on behalf of Roland.
Differential Revision: http://reviews.llvm.org/D22117
llvm-svn: 275086
Summary: LLVM adds a new value FMRB_DoesNotReadMemory to the enumeration.
Reviewers: andrew.w.kaylor, chrisj, zinob, grosser, jdoerfert
Subscribers: Meinersbur, pollydev
Differential Revision: http://reviews.llvm.org/D22109
llvm-svn: 275085
There's a little bit of churn in this patch because the initialization
mechanism is now shared between the old and the new PM. Other than
that, it's just a pretty mechanical translation.
llvm-svn: 275082
Summary: http://reviews.llvm.org/D22118 uses metadata to store the call count, which makes it possible for the branch weights to have only one element. Also fix the assertion failure in the inliner by making the instruction type check accept "invoke" instructions as well.
Reviewers: mkuper, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22228
llvm-svn: 275079
After thinking about it, we don't really need to forbid
BuiltinTemplateDecls explicitly. The restriction doesn't really buy us
anything.
llvm-svn: 275078
In preparation for porting this pass to the new PM (which has no
doInitialization()).
Differential Revision: http://reviews.llvm.org/D22223
llvm-svn: 275074
Summary:
For sample-based PGO, using BFI to calculate the callsite count is sometimes inaccurate. This is because, with the sampling-based approach, if a callsite resides in a hot loop deeply nested in a bunch of cold branches, the callsite's BFI frequency would be inaccurately calculated due to the lack of samples in the cold branches.
E.g.
if (A1 && A2 && A3 && ..... && A10) {
  for (i = 0; i < 100000000; i++) {
    callsite();
  }
}
Assume that A1 to A10 are all 100% taken, and that the callsite has 1000 samples and is thus considered hot. Because the loop's trip count is huge, it is normal that all branches outside the loop have no samples at all. As a result, we can only use static branch probability to derive the frequency of the loop header. Assuming the static heuristic thinks each branch is 50% taken, the count calculated from BFI will be 1/(2^10) of the actual value.
In order to get a more accurate callsite count, we directly annotate the weight on the call instruction and use it directly when checking callsite hotness.
Note that this mechanism can also be shared by instrumentation-based callsite hotness analysis. A side benefit is that it breaks the Inliner's dependency on BFI, as the call count is embedded in the IR.
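A hedged sketch of what "use it directly" can look like (not the actual SampleProfile/inliner code; the assumed metadata layout is {!"branch_weights", count}):
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include <cstdint>

// Read a count annotated on the call site's !prof metadata instead of
// deriving it from BFI. Returns false if no annotation is present.
static bool getAnnotatedCallCount(const llvm::Instruction &CallSite,
                                  uint64_t &Count) {
  llvm::MDNode *MD = CallSite.getMetadata(llvm::LLVMContext::MD_prof);
  if (!MD || MD->getNumOperands() < 2)
    return false;
  if (auto *CI =
          llvm::mdconst::dyn_extract<llvm::ConstantInt>(MD->getOperand(1))) {
    Count = CI->getZExtValue();
    return true;
  }
  return false;
}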
Reviewers: davidxl, eraman, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22118
llvm-svn: 275073
Summary: Handle the case when there is only one incoming/outgoing edge for a visited basic block: use the block weight to adjust the edge weight even when the edge has been visited before. This can help reduce inaccuracies introduced by an incorrect basic block profile, as shown in the updated unittest.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22180
llvm-svn: 275072
This patch adds interceptors for dispatch_io_*, dispatch_read and dispatch_write functions. This avoids false positives when using GCD IO. Adding several test cases.
Differential Revision: http://reviews.llvm.org/D21889
llvm-svn: 275071