Commit Graph

Richard Smith
2a2c228c7a Add new 'preferred_name' attribute.
This attribute permits a typedef to be associated with a class template
specialization as a preferred way of naming that class template
specialization. This permits us to specify that (for example) the
preferred way to express 'std::basic_string<char>' is as 'std::string'.

The attribute is applied to the various class templates in libc++ that have
corresponding well-known typedef names.

This is a re-commit. The previous commit was reverted because it exposed
a pre-existing bug that has since been fixed / worked around; see
PR48434.

Differential Revision: https://reviews.llvm.org/D91311
2020-12-09 12:22:35 -08:00
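
For illustration, a minimal sketch of how the 'preferred_name' attribute described above might be applied, assuming the [[clang::preferred_name(...)]] spelling; the declarations are placeholders, not libc++'s actual code.

  // Compile-only sketch (assumed usage pattern, not taken from libc++).
  template <class CharT> struct basic_string;        // forward declaration
  using string = basic_string<char>;                 // well-known typedef

  // Naming the typedef on the template tells diagnostics and tooling to
  // print basic_string<char> as 'string'.
  template <class CharT>
  struct [[clang::preferred_name(string)]] basic_string {
    const CharT *data_ = nullptr;
  };
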
Richard Smith
997a719d5a PR48434: Work around crashes due to deserialization cycles via typedefs.
Ensure that we can deserialize a TypedefType even while in the middle of
deserializing its TypedefDecl, by removing the need to look at the
TypedefDecl while constructing the TypedefType.

This fixes all the currently-known failures for PR48434, but it's not a
complete fix, because we can still trigger deserialization cycles, which
are not supposed to happen.
2020-12-09 12:22:35 -08:00
Fangrui Song
baef18dffb [ELF] Reorganize "is only supported on" tests and fix some diagnostics 2020-12-09 12:14:00 -08:00
Duncan P. N. Exon Smith
82789228c6 Frontend: Migrate to FileEntryRef in VerifyDiagnosticConsumer.cpp, NFC
Add a `FileEntryRef` overload of `SourceManager::translateFile`, and
migrate `ParseDirective` in VerifyDiagnosticConsumer.cpp to use it and
the corresponding overload of `createFileID`.

No functionality change here.

Differential Revision: https://reviews.llvm.org/D92699
2020-12-09 11:51:43 -08:00
Peter Collingbourne
e5a28e1261 scudo: Fix quarantine allocation when MTE enabled.
Quarantines have always been broken when MTE is enabled because the
quarantine batch allocator fails to reset tags that may have been
left behind by a user allocation.

This was only noticed when running the Scudo unit tests with Scudo
as the system allocator because quarantines are turned off by
default on Android and the test binary turns them on by defining
__scudo_default_options, which affects the system allocator as well.

Differential Revision: https://reviews.llvm.org/D92881
2020-12-09 11:48:41 -08:00
Peter Collingbourne
9f8aeb0602 scudo: Split setRandomTag in two. NFCI.
Separate the IRG part from the STZG part since we will need to use
the latter on its own for some upcoming changes.

Differential Revision: https://reviews.llvm.org/D92880
2020-12-09 11:48:41 -08:00
Florian Hahn
77fd12a66e [AArch64] Add aarch64_neon_vcmla{_rot{90,180,270}} intrinsics.
Add builtins required to implement vcmla and rotated variants from
the ACLE.

Reviewed By: t.p.northover

Differential Revision: https://reviews.llvm.org/D92929
2020-12-09 19:46:49 +00:00
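
As a rough illustration of the ACLE complex intrinsics being exposed, here is a hedged sketch that accumulates a complex product from the rot0 and rot90 variants; it assumes the vcmlaq_f32 intrinsic family and a target built with FEAT_FCMA (e.g. -march=armv8.3-a), neither of which is stated in the commit itself.

  // Sketch only: complex multiply-accumulate on interleaved
  // (re, im, re, im) data.
  #include <arm_neon.h>

  float32x4_t cmla(float32x4_t acc, float32x4_t a, float32x4_t b) {
    acc = vcmlaq_f32(acc, a, b);        // acc += (a.re*b.re, a.re*b.im)
    acc = vcmlaq_rot90_f32(acc, a, b);  // acc += (-a.im*b.im, a.im*b.re)
    return acc;
  }
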
Jon Chesterfield
7c59614394 [libomptarget][amdgpu] clang-format src/rtl.cpp 2020-12-09 19:45:51 +00:00
Michael Munday
e28b6a60bc [RISCV][NFC] Regenerate RISCV CodeGen tests
Regenerated using:

./llvm/utils/update_llc_test_checks.py -u llvm/test/CodeGen/RISCV/*.ll

This has added comments to spill-related instructions and added @plt to
some symbols.

Differential Revision: https://reviews.llvm.org/D92841
2020-12-09 19:42:49 +00:00
Jianzhou Zhao
ea981165a4 [dfsan] Track field/index-level shadow values in variables
*************
* The problem
*************
See motivation examples in compiler-rt/test/dfsan/pair.cpp. The current
DFSan always uses a 16-bit shadow value for a variable of any type by
combining all shadow values of all bytes of the variable. So it cannot
distinguish two fields of a struct: each field's shadow value equals the
combined shadow value of all fields. This introduces an overtaint issue.

Consider a parsing function

   std::pair<char*, int> get_token(char* p);

where p points to a buffer to parse, the returned pair includes the next
token and the pointer to the position in the buffer after the token.

If the token is tainted, then both the returned pointer and int are
tainted. If the parser keeps on using get_token for the rest of the parsing,
all the following outputs are tainted because of the tainted pointer.

The CL is the first change to address the issue.

**************************
* The proposed improvement
**************************
Eventually all fields and indices have their own shadow values in
variables and memory.

For example, variables with type {i1, i3}, [2 x i1], {[2 x i4], i8},
[2 x {i1, i1}] have shadow values with type {i16, i16}, [2 x i16],
{[2 x i16], i16}, [2 x {i16, i16}] respectively; variables with a
primary (non-aggregate) type still have an i16 shadow value.

***************************
* A potential implementation plan
***************************

The idea is to adopt the change incrementally.

1) This CL
Support field-level accuracy at variables/args/ret in TLS mode;
load/store/alloca still use combined shadow values.

After the alloca promotion and SSA construction phases (>=-O1), we
assume alloca and memory operations are reduced. So if struct
variables do not relate to memory, their tracking is accurate at
field level.

2) Support field-level accuracy at alloca
3) Support field-level accuracy at load/store

These two should make O0 and real memory access work.

4) Support vectors if necessary.
5) Support Args mode if necessary.
6) Support passing more accurate shadow values via custom functions if
necessary.

***************
* About this CL.
***************
The CL did the following

1) extended TLS arg/ret to work with aggregate types. This is similar
to what MSan does.

2) implemented how to map between an original type/value/zero-const to
its shadow type/value/zero-const.

3) extended (insert|extract)value to use field/index-level propagation.

4) for other instructions, propagation rules combine inputs by OR.
The CL converts between aggregate and primary shadow values at these
points.

5) Custom function interfaces also need such a conversion because
all existing custom functions use i16. It is unclear whether custom
functions need more accurate shadow propagation yet.

6) Added test cases for aggregate type related cases.

Reviewed-by: morehouse

Differential Revision: https://reviews.llvm.org/D92261
2020-12-09 19:38:35 +00:00
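
To make the overtaint concrete, here is a hedged sketch in the spirit of the pair example above; the buffer, labels, and expected output are invented, and it assumes the DFSan interface of this era (dfsan_create_label/dfsan_set_label/dfsan_get_label) built with -fsanitize=dataflow.

  // Illustration only: with one combined shadow per returned pair, the
  // returned pointer inherits the taint of the returned int.
  #include <sanitizer/dfsan_interface.h>
  #include <cstdio>
  #include <utility>

  static std::pair<char *, int> get_token(char *p) {
    return {p + 1, *p - '0'};  // int derived from *p; pointer is positional
  }

  int main() {
    char buf[] = "7rest";
    dfsan_label lbl = dfsan_create_label("token", nullptr);
    dfsan_set_label(lbl, buf, 1);  // taint only the first byte

    auto [next, tok] = get_token(buf);
    dfsan_label ptr_lbl = dfsan_get_label(reinterpret_cast<long>(next));
    std::printf("pointer label: %u\n", ptr_lbl);  // nonzero => overtainted
    return tok == 7 ? 0 : 1;
  }
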
Jon Chesterfield
c9bc414840 [libomptarget][amdgpu] Let default number of teams equal number of CUs 2020-12-09 19:35:34 +00:00
Jon Chesterfield
e191d31159 [libomptarget][amdgpu] Robust handling of device_environment symbol 2020-12-09 19:21:51 +00:00
Reid Kleckner
df282215d4 Don't set up inalloca for swiftcc on i686-windows-msvc
Swiftcall does its own target-independent argument type classification,
since it is not designed to be ABI compatible with anything local on the
target that isn't LLVM-based. This means it never uses inalloca.
However, we have duplicate logic for checking for inalloca parameters
that runs before call argument setup. This logic needs to know ahead of
time if inalloca will be used later, and we can't move the
CGFunctionInfo calculation earlier.

This change gets the calling convention from either the
FunctionProtoType or ObjCMethodDecl, checks if it is swift, and if so
skips the stackbase setup.

Depends on D92883.

Differential Revision: https://reviews.llvm.org/D92944
2020-12-09 11:08:48 -08:00
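
A compile-only, hedged illustration of the distinction drawn above; the names are invented, and the inalloca/byval details depend on the i686-windows-msvc ABI the commit targets.

  // Illustration only: a by-value argument with a non-trivial copy
  // constructor is passed through the argument area (inalloca) under the
  // default convention on i686-windows-msvc, while a swiftcall callee
  // uses Swift's own classification and, per the message above, no inalloca.
  struct Big {
    Big(const Big &);  // non-trivial copy forces in-place construction
    int a[16];
  };

  void takes_big_default(Big b);
  __attribute__((swiftcall)) void takes_big_swift(Big b);

  void caller(Big b) {
    takes_big_default(b);  // default CC
    takes_big_swift(b);    // swiftcall
  }
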
Reid Kleckner
d7098ff29c De-templatify EmitCallArgs argument type checking, NFCI
This template exists to abstract over FunctionProtoType and
ObjCMethodDecl, which have similar APIs for storing parameter types. In
place of a template, use a PointerUnion with two cases to handle this.
Hopefully this improves readability, since the type of the prototype is
easier to discover. This allows me to sink this code, which is mostly
assertions, out of the header file and into the cpp file. I can also
simplify the overloaded methods for computing isGenericMethod, and get
rid of the second EmitCallArgs overload.

Differential Revision: https://reviews.llvm.org/D92883
2020-12-09 11:08:00 -08:00
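
A hedged sketch of the PointerUnion pattern the message describes, with stand-in types instead of Clang's FunctionProtoType/ObjCMethodDecl; all names below are invented for illustration.

  // Compile-only sketch: one lightweight value abstracting over two
  // unrelated "prototype-like" classes, replacing a function template.
  #include "llvm/ADT/PointerUnion.h"

  struct ProtoLike  { unsigned getNumParams() const; };
  struct MethodLike { unsigned param_size() const; };

  using PrototypeWrapper =
      llvm::PointerUnion<const ProtoLike *, const MethodLike *>;

  unsigned numParams(PrototypeWrapper P) {
    if (const auto *FP = P.dyn_cast<const ProtoLike *>())
      return FP->getNumParams();
    return P.get<const MethodLike *>()->param_size();
  }
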
Raphael Isemann
199ec40e7b [lldb][NFC] Refactor _get_bool_config_skip_if_decorator
NFC preparation for another patch. Also add some documentation for why the
error value is true (and not false).
2020-12-09 20:02:06 +01:00
Jon Chesterfield
cab9f69235 [libomptarget][amdgpu] Improve diagnostics on arch mismatch 2020-12-09 18:55:53 +00:00
Justin Bogner
e6a1187dd8 Limit the recursion depth of SelectionDAG::isSplatValue()
This method previously always recursively checked both the left-hand
side and right-hand side of binary operations for splatted (broadcast)
vector values to determine if the parent DAG node is a splat.

Like several other SelectionDAG methods, limit the recursion depth to
MaxRecursionDepth (6). This prevents stack overflow.
See also https://issuetracker.google.com/173785481

Patch by Nicolas Capens. Thanks!

Differential Revision: https://reviews.llvm.org/D92421
2020-12-09 10:35:07 -08:00
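
The depth-limiting idiom is simple enough to sketch generically; the Node type and helpers below are invented and are not SelectionDAG's real API.

  // Compile-only sketch: cap recursion depth instead of risking a stack
  // overflow on deep expression trees.
  struct Node {
    bool isBroadcast() const;
    bool isBinaryOp() const;
    const Node *lhs() const;
    const Node *rhs() const;
  };

  constexpr unsigned MaxRecursionDepth = 6;  // limit cited in the message

  bool isSplatValue(const Node *N, unsigned Depth = 0) {
    if (Depth >= MaxRecursionDepth)
      return false;  // conservatively give up
    if (N->isBroadcast())
      return true;
    if (N->isBinaryOp())
      return isSplatValue(N->lhs(), Depth + 1) &&
             isSplatValue(N->rhs(), Depth + 1);
    return false;
  }
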
Alexey Bader
be9b4bbdfc [MCJIT] Add cmake variables to customize ittapi git location and revision.
To support llorg builds, this patch provides the following changes:

1)  Added the cmake variable ITTAPI_GIT_REPOSITORY to control the location of the ITTAPI repository.
     The default value of ITTAPI_GIT_REPOSITORY is the GitHub location: https://github.com/intel/ittapi.git
     A separate cmake variable, ITTAPI_GIT_TAG, was also added for the repo tag.
2)  Added the cmake variable ITTAPI_SOURCE_DIR to control where the repo will be cloned.
     The default value of ITTAPI_SOURCE_DIR is the build area: PROJECT_BINARY_DIR

Reviewed By: etyurin, bader

Patch by ekovanov.

Differential Revision: https://reviews.llvm.org/D91935
2020-12-09 21:04:24 +03:00
Arthur Eubanks
664b187160 Reland Pin -loop-reduce to legacy PM
This was accidentally reverted by a later change.

LSR currently only runs in the codegen pass manager.
There are a couple of issues with LSR and the NPM.
1) Lots of tests assume that LCSSA isn't run before LSR. This breaks a
bunch of tests' expected output. This is fixable with some time put in.
2) LSR doesn't preserve LCSSA. See
llvm/test/Analysis/MemorySSA/update-remove-deadblocks.ll. LSR's use of
SCEVExpander is the only use of SCEVExpander where the PreserveLCSSA option is
off. Turning it on causes some code sinking out of loops to fail due to
SCEVExpander's inability to handle the newly created trivial PHI nodes in the
broken critical edge (I was looking at
llvm/test/Transforms/LoopStrengthReduce/X86/2011-11-29-postincphi.ll).
I also tried simply calling formLCSSA() at the end of LSR, but the extra
PHI nodes cause regressions in codegen tests.

We'll delay figuring these issues out until later.

This causes the number of check-llvm failures with -enable-new-pm true
by default to go from 60 to 29.

Reviewed By: asbirlea

Differential Revision: https://reviews.llvm.org/D92796
2020-12-09 09:57:57 -08:00
Fangrui Song
b4cbb87fea [CMake] Add llvm-profgen to LLVM_TEST_DEPENDS
Otherwise `check-llvm-*` may not rebuild llvm-profgen, causing llvm-profgen tests
to fail if llvm-profgen happens to be stale.
2020-12-09 09:34:51 -08:00
Jonas Devlieghere
5861234e72 [lldb] Track the API boundary using a thread_local variable.
The reproducers currently use a static variable to track the API
boundary. This is obviously incorrect when the SB API is used
concurrently. While I do not plan to support that use-case (right now),
I do want to avoid us crashing. As a first step, correctly track API
boundaries across multiple threads.

Before this patch SB API calls made by the embedded script interpreter
would be considered "behind the API boundary" and correctly ignored.
After this patch, we need to tell the reproducers to ignore the
scripting thread as a "private thread".

Differential revision: https://reviews.llvm.org/D92811
2020-12-09 08:58:40 -08:00
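
A minimal sketch of per-thread boundary tracking in the spirit of the description above, assuming a simple depth counter; this is not LLDB's actual reproducer code.

  // Illustration only: each thread keeps its own notion of whether it is
  // already inside an SB API call.
  #include <cstdint>

  static thread_local uint32_t g_api_depth = 0;

  struct APIBoundaryGuard {
    APIBoundaryGuard() { ++g_api_depth; }
    ~APIBoundaryGuard() { --g_api_depth; }
    // Only the outermost call on a thread crosses the boundary; nested
    // calls (depth > 1) are behind it and can be ignored by a recorder.
    static bool crossedBoundary() { return g_api_depth == 1; }
  };
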
Arthur Eubanks
fed7565ee2 [COFF][LTO][NPM] Use NPM for LTO with ENABLE_EXPERIMENTAL_NEW_PASS_MANAGER
Reviewed By: hans

Differential Revision: https://reviews.llvm.org/D92866
2020-12-09 08:53:50 -08:00
Mircea Trofin
f9a27df16b [FileCheck] Enforce --allow-unused-prefixes=false for llvm/test/Transforms
Explicitly opt out llvm/test/Transforms/Attributor.

Verified by flipping the default value of allow-unused-prefixes and
observing that none of the failures were under llvm/test/Transforms.

Differential Revision: https://reviews.llvm.org/D92404
2020-12-09 08:51:38 -08:00
Sam McCall
634a377bd8 [clangd] Extract per-dir CDB cache to its own threadsafe class. NFC
This is a step towards making compile_commands.json reloadable.

The idea is:
 - in addition to rare CDB loads we're soon going to have somewhat-rare CDB
   reloads and fairly-common stat() of files to validate the CDB
 - so stop doing all our work under a big global lock, instead using it to
   acquire per-directory structures with their own locks
 - each directory can be refreshed from disk every N seconds, like filecache
 - avoid locking these at all in the most common case: directory has no CDB

Differential Revision: https://reviews.llvm.org/D92381
2020-12-09 17:40:12 +01:00
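
A hedged structural sketch of the locking scheme described in the bullets above; the class and member names are invented, not clangd's.

  // Sketch only: the global lock protects just the directory map, while
  // each directory entry carries its own lock for load/stat/reload work.
  #include <map>
  #include <memory>
  #include <mutex>
  #include <string>

  class DirectoryCache {
    std::mutex Mu;  // per-directory lock
    // ... cached CDB state, time of last stat(), etc.
  public:
    void refreshIfStale() { std::lock_guard<std::mutex> L(Mu); /* ... */ }
  };

  class GlobalCache {
    std::mutex BigLock;  // guards only the map itself
    std::map<std::string, std::unique_ptr<DirectoryCache>> Dirs;
  public:
    DirectoryCache &get(const std::string &Dir) {
      std::lock_guard<std::mutex> L(BigLock);  // short critical section
      auto &Slot = Dirs[Dir];
      if (!Slot)
        Slot = std::make_unique<DirectoryCache>();
      return *Slot;  // caller then takes the per-directory lock
    }
  };
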
Louis Dionne
717b0da7a6 [libc++] Run back-deployment CI on macOS 10.15 instead of 10.14
The goal was to add back-deployment coverage for the filesystem
library, but it was introduced in macOS 10.15, not 10.14.

Differential Revision: https://reviews.llvm.org/D92937
2020-12-09 11:35:15 -05:00
LLVM GN Syncbot
cff1f4cbbc [gn build] Port b804eef090 2020-12-09 16:19:07 +00:00
LLVM GN Syncbot
da1392e1b9 [gn build] Port ac7864ec01 2020-12-09 16:19:07 +00:00
LLVM GN Syncbot
d75791ec1e [gn build] Port 5934a79196 2020-12-09 16:19:06 +00:00
Adam Czachorowski
5934a79196 [clangd] Split tweak tests into one file per tweak.
No changes to the tests themselves, other than some auto -> const auto
diagnostic fixes and formatting.

Differential Revision: https://reviews.llvm.org/D92939
2020-12-09 17:17:06 +01:00
Kazushi (Jam) Marukawa
1a2147fead [VE] Add vsum and vfsum intrinsic instructions
Add vsum and vfsum intrinsic instructions and regression tests.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92938
2020-12-10 01:11:53 +09:00
Paul C. Anagnostopoulos
6266f36226 [TableGen] Cache the vectors of records returned by getAllDerivedDefinitions().
Differential Revision: https://reviews.llvm.org/D92674
2020-12-09 10:54:04 -05:00
Sanjay Patel
b2ef264096 [VectorCombine] allow peeking through an extractelt when creating a vector load
This is an enhancement to load vectorization that is motivated by
a pattern in https://llvm.org/PR16739.
Unfortunately, it's still not enough to make a difference there.
We will have to handle multi-use cases in some better way to avoid
creating multiple overlapping loads.

Differential Revision: https://reviews.llvm.org/D92858
2020-12-09 10:36:14 -05:00
Roman Lebedev
e6f2a79d7a [InstCombine] canonicalizeSaturatedAdd(): last fold is only valid for strict comparison (PR48390)
We could create uadd.sat under incorrect circumstances
if a select with -1 as the false value was canonicalized
by swapping the T/F values. Unlike the other transforms
in the same function, it is not invariant to equality.

Some alive proofs: https://alive2.llvm.org/ce/z/emmKKL

Based on original patch by David Green!

Fixes https://bugs.llvm.org/show_bug.cgi?id=48390

Differential Revision: https://reviews.llvm.org/D92717
2020-12-09 18:19:09 +03:00
Roman Lebedev
f16320b90b [NFC][InstCombine] Add test coverage for @llvm.uadd.sat canonicalization
The non-strict variants are already handled because they are canonicalized
to strict variants by swapping the operands in both the select and icmp,
and the fold simply considers that strictness is irrelevant here.

But that isn't actually true for the last pattern, as PR48390 reports.
2020-12-09 18:19:08 +03:00
Kazushi (Jam) Marukawa
398f29fbb0 [VE] Add vfmk intrinsic instructions
Add vfmk intrinsic instructions, a few pseudo instructions to expand
vfmk intrinsic using VM512 correctly, and regression tests.

Reviewed By: simoll

Differential Revision: https://reviews.llvm.org/D92758
2020-12-10 00:08:20 +09:00
Yvan Roux
03a77d04b4 [LLD][ELF] Fix typo in relocation-model-pic.ll
Should fix non-x86 bot failures.
2020-12-09 15:38:50 +01:00
Simon Pilgrim
24184dbb82 [X86] Fold CONCAT(VPERMV3(X,Y,M0),VPERMV3(Z,W,M1)) -> VPERMV3(CONCAT(X,Z),CONCAT(Y,W),CONCAT(M0,M1))
Further prep work toward supporting different subvector sizes in combineX86ShufflesRecursively
2020-12-09 14:29:32 +00:00
Matt Morehouse
6f13445fb6 [DFSan] Add custom wrapper for epoll_wait.
The wrapper clears shadow for any events written.

Reviewed By: stephan.yichao.zhao

Differential Revision: https://reviews.llvm.org/D92891
2020-12-09 06:05:29 -08:00
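
For context, a hedged sketch of what such a custom wrapper can look like, following DFSan's documented __dfsw_ naming convention; the body is a guess at the shape of the change, not the actual compiler-rt code.

  // Sketch only: clear shadow for however many epoll_event entries the
  // kernel actually wrote, and report an untainted return value.
  #include <sys/epoll.h>
  #include <sanitizer/dfsan_interface.h>

  extern "C" int __dfsw_epoll_wait(int epfd, struct epoll_event *events,
                                   int maxevents, int timeout,
                                   dfsan_label /*epfd_label*/,
                                   dfsan_label /*events_label*/,
                                   dfsan_label /*maxevents_label*/,
                                   dfsan_label /*timeout_label*/,
                                   dfsan_label *ret_label) {
    int ret = epoll_wait(epfd, events, maxevents, timeout);
    if (ret > 0)
      dfsan_set_label(0, events, ret * sizeof(*events));
    *ret_label = 0;
    return ret;
  }
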
Muhammad Omair Javaid
10edd10348 [LLDB] Temporarily increase DEFAULT_TIMEOUT on gdbremote_testcase.py
TestLldbGdbServer.py testcases are timing out on the LLDB/AArch64 Linux
buildbot since recent changes. I am temporarily increasing
DEFAULT_TIMEOUT to 20 seconds to see the impact.
2020-12-09 18:44:21 +05:00
Anton Afanasyev
e5bf2e8989 [SLP] Use the width of value truncated just before storing
For store chain vectorization we choose the size of vector
elements to ensure we fit within the minimum and maximum vector register
sizes for the number of elements given. This patch corrects the vector
element size choice, using the width of the value truncated just before
storing rather than the width of the stored value.

Fixes PR46983

Differential Revision: https://reviews.llvm.org/D92824
2020-12-09 16:38:45 +03:00
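
An illustrative store chain of the kind the message refers to; which of the two widths (the wide source or the narrow stored type) should drive the element-size choice is exactly what the patch adjusts, so the code below only shows the pattern, with invented names.

  // Illustration only: each stored value is a truncation of a wider
  // computation, so two widths are in play when sizing vector elements.
  void trunc_store_chain(const long long *src, int *dst) {
    for (int i = 0; i < 4; ++i) {
      long long wide = src[i] * 3;  // computed at 64 bits
      dst[i] = (int)wide;           // truncated to 32 bits just before the store
    }
  }
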
Djordje Todorovic
163c223161 [Debuginfo] [CSInfo] Do not create CSInfo for undef arguments
If a function parameter is marked as "undef", prevent creation
of CallSiteInfo for that parameter.
Without this patch, the parameter's call_site_value would be incorrect.
The incorrect call site value case was reported in PR39716 and
addressed in D85111.

Patch by Nikola Tesic

Differential revision: https://reviews.llvm.org/D92471
2020-12-09 12:54:59 +01:00
Kerry McLaughlin
05edfc5475 [SVE][CodeGen] Add DAG combines for s/zext_masked_gather
This patch adds the following DAGCombines, which apply if isVectorLoadExtDesirable() returns true:
 - fold (and (masked_gather x)) -> (zext_masked_gather x)
 - fold (sext_inreg (masked_gather x)) -> (sext_masked_gather x)

LowerMGATHER has also been updated to fetch the LoadExtType associated with the
gather and also use this value to determine the correct masked gather opcode to use.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D92230
2020-12-09 11:53:19 +00:00
Sander de Smalen
d568cff696 [LoopVectorizer][SVE] Vectorize a simple loop with a scalable VF.
* Steps are scaled by `vscale`, a runtime value.
* Changes to circumvent the cost-model for now (temporary)
  so that the cost-model can be implemented separately.

This can vectorize the following loop [1]:

   void loop(int N, double *a, double *b) {
     #pragma clang loop vectorize_width(4, scalable)
     for (int i = 0; i < N; i++) {
       a[i] = b[i] + 1.0;
     }
   }

[1] This source-level example is based on the pragma proposed
separately in D89031. This patch only implements the LLVM part.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D91077
2020-12-09 11:25:21 +00:00
Sander de Smalen
adc37145de [LoopVectorizer] NFC: Remove unnecessary asserts that VF cannot be scalable.
This patch removes a number of asserts that VF is not scalable, even though
the code where these asserts live does nothing that prevents VF from being scalable.

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D91060
2020-12-09 11:25:21 +00:00
Kerry McLaughlin
4519ff4b6f [SVE][CodeGen] Add the ExtensionType flag to MGATHER
Adds the ExtensionType flag, which reflects the LoadExtType of a MaskedGatherSDNode.
Also updated SelectionDAGDumper::print_details so that details of the gather
load (is signed, is scaled & extension type) are printed.

Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D91084
2020-12-09 11:19:08 +00:00
Christian Sigg
0bf4a82a5a [mlir] Use mlir::OpState::operator->() to get to methods of mlir::Operation. This is a preparation step to remove the corresponding methods from OpState.
Reviewed By: silvas, rriddle

Differential Revision: https://reviews.llvm.org/D92878
2020-12-09 12:11:32 +01:00
Joe Ellis
80c33de2d3 [SelectionDAG] Add llvm.vector.{extract,insert} intrinsics
This commit adds two new intrinsics.

- llvm.experimental.vector.insert: used to insert a vector into another
  vector starting at a given index.

- llvm.experimental.vector.extract: used to extract a subvector from a
  larger vector starting from a given index.

The codegen work for these intrinsics has already been completed; this
commit is simply exposing the existing ISD nodes to LLVM IR.

Reviewed By: cameron.mcinally

Differential Revision: https://reviews.llvm.org/D91362
2020-12-09 11:08:41 +00:00
Cullen Rhodes
4167a0259e [IR] Support scalable vectors in CastInst::CreatePointerCast
Reviewed By: sdesmalen

Differential Revision: https://reviews.llvm.org/D92482
2020-12-09 10:39:36 +00:00
Simon Moll
3ffbc79357 [VP] Build VP SDNodes
Translate VP intrinsics to VP_* SDNodes.  The tests check whether a
matching vp_* SDNode is emitted.

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D91441
2020-12-09 11:36:51 +01:00
Alex Zinenko
f31704f8ae [OpenMPIRBuilder] Put the barrier in the exit block in createWorkshareLoop
The original code was inserting the barrier at the location given by the
caller. Make sure it is always inserted at the end of the loop exit block
instead.

Reviewed By: Meinersbur

Differential Revision: https://reviews.llvm.org/D92849
2020-12-09 11:33:04 +01:00