These two headers each contained a strange mix of definitions related to
both patterns and non-pattern transforms.
functions into Patterns.h and standalone transforms into Transforms.h.
Depends On: D155223
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D155454
This is the first, most basic, and most important step for inlining memory operations with alias scopes.
For correctness, any alias scopes of inlined operations must be replaced with deep copies. Otherwise, the same function could be inlined twice into one function and its alias scopes would suddenly extend over both inlined copies.
A simple example is `foo(a, b); foo(a2, b2)`, where `a` and `a2` may alias. If `foo` is inlined at both call sites and the original alias scopes are kept, the load and store operations from `foo` may suddenly claim that `a` and `a2` do not alias.
This is analogous to the following class/code in LLVM: 4eef2e30d6/llvm/lib/Transforms/Utils/InlineFunction.cpp (L985)
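As a hedged, schematic sketch (the attribute syntax is approximate and the SSA names are invented for illustration), each inlined copy of `foo` must receive freshly created scope attributes rather than sharing the originals:
```
// (...)
#domain_1 = #llvm.alias_scope_domain<id = distinct[0]<>>
#scope_1  = #llvm.alias_scope<id = distinct[1]<>, domain = #domain_1>
#domain_2 = #llvm.alias_scope_domain<id = distinct[2]<>>
#scope_2  = #llvm.alias_scope<id = distinct[3]<>, domain = #domain_2>

// First inlined copy of foo(a, b): its accesses carry the scopes of this copy.
llvm.store %v, %b {alias_scopes = [#scope_1]} : i32, !llvm.ptr
%0 = llvm.load %a {noalias_scopes = [#scope_1]} : !llvm.ptr -> i32

// Second inlined copy of foo(a2, b2): it must use the deep-copied #scope_2.
// Sharing #scope_1 would extend its aliasing claims across both copies.
llvm.store %w, %b2 {alias_scopes = [#scope_2]} : i32, !llvm.ptr
%1 = llvm.load %a2 {noalias_scopes = [#scope_2]} : !llvm.ptr -> i32
```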
Differential Revision: https://reviews.llvm.org/D155479
At the moment, SME-to-LLVM lowerings rely entirely on
`LLVMTypeConverter`. This patch introduces a dedicated `TypeConverter`
that inherits from `LLVMTypeConverter` (it will also be used when
lowering ArmSME Ops to LLVM).
The new type converter merely disables lowerings for `VectorType` to
prevent 2-d scalable vectors (common in the context of ArmSME), e.g.
`vector<[16]x[16]xi8>`, from entering the LLVM type converter. LLVM does
not support arrays of scalable vectors, hence the need for specialisation.
In the case of SME, such types are effectively eliminated when emitting
the LLVM IR intrinsics for SME.
Differential Revision: https://reviews.llvm.org/D155365
This patch adds a pass '-allocate-sme-tiles' to the ArmSME dialect that
implements allocation of SME ZA tiles.
It does this at the 'func.func' op level by replacing
'arm_sme.get_tile_id' ops with 'arith.constant' ops that represent the
tile number. The tiles in use in a given function are tracked by an
integer function attribute 'arm_sme.tiles_in_use' that is a 16-bit tile
mask with a bit for each 128-bit element tile (ZA0.Q-ZA15.Q), the
smallest ZA tile granule. This is initialized on the first
'arm_sme.get_tile_id' rewrite and updated on each subsequent rewrite.
Mixing of different element tile types is supported.
Section B2.3.2 of the SME spec [1] describes how the 128-bit element
tiles overlap with other element tiles.
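As a hedged sketch (the result type and the chosen tile number below are illustrative, not taken from the patch), the rewrite performed by the pass looks roughly like this:
```
// Before: an unallocated "virtual tile" id.
%tile_id = arm_sme.get_tile_id : i8

// After -allocate-sme-tiles: assuming the first 8-bit element tile is free,
// the op is replaced with a constant tile number, and 'arm_sme.tiles_in_use'
// on the enclosing func.func is updated to mark the corresponding 128-bit
// element tiles (ZA0.Q-ZA15.Q) as used.
%tile_id = arith.constant 0 : i8
```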
Depends on D154941
[1] https://developer.arm.com/documentation/ddi0616/aa
Reviewed By: awarzynski
Differential Revision: https://reviews.llvm.org/D154955
At the moment, the lowering from the Vector dialect to SME looks like
this:
* Vector --> SME LLVM IR intrinsics
This patch introduces a new lowering layer between the Vector dialect
and the Arm SME extension:
* Vector --> ArmSME dialect (custom Ops) --> SME LLVM IR intrinsics.
This is motivated by 2 considerations:
1. Storing `ZA` to memory (e.g. `vector.transfer_write`) requires an
`scf.for` loop over all rows of `ZA`. Similar logic will apply to
"load to ZA from memory". This is a rather complex transformation and
a custom Op seems justified.
2. As discussed in [1], we need to prevent the LLVM type converter from
having to convert types unsupported in LLVM, e.g.
`vector<[16]x[16]xi8>`. A dedicated abstraction layer with custom Ops
opens a path to some fine tuning (e.g. custom type converters) that
will allow us to avoid this.
To facilitate this change, two new custom SME Ops are introduced:
* `TileStoreOp`, and
* `ZeroOp`.
Note that no new functionality is added - these Ops merely model what's
already supported. In particular, the following tile size is assumed
(dimension and element size are fixed):
* `vector<[16]x[16]xi8>`
The new lowering layer is introduced via a conversion pass between the
Vector and the SME dialects. You can use the `-convert-vector-to-sme`
flag to run it. The following function:
```
func.func @example(%arg0 : memref<?x?xi8>) {
// (...)
%cst = arith.constant dense<0> : vector<[16]x[16]xi8>
vector.transfer_write %cst, %arg0 : vector<[16]x[16]xi8>, memref<?x?xi8>
return
}
```
would be lowered to:
```
func.func @example(%arg0: memref<?x?xi8>) {
// (...)
%0 = arm_sme.zero : vector<[16]x[16]xi8>
arm_sme.tile_store %arg0[%c0, %c0], %0 : memref<?x?xi8>, vector<[16]x[16]xi8>
return
}
```
Later, a mechanism will be introduced to guarantee that `arm_sme.zero`
and `arm_sme.tile_store` operate on the same virtual tile. For `i8`
elements this is not required as there is only one tile.
In order to lower the above output to LLVM, use
* `-convert-vector-to-llvm="enable-arm-sme"`.
[1] https://github.com/openxla/iree/issues/14294
Reviewed By: WanderAway
Differential Revision: https://reviews.llvm.org/D154867
This patch adds three new custom ops to the ArmSME dialect:
* arm_sme.get_tile_id - returns a scalar integer representing an SME
"virtual tile" that is not in use.
* arm_sme.cast_tile_to_vector - casts from a tile id to a 2-d scalable
vector type, which represents an SME "virtual tile".
* arm_sme.cast_vector_to_tile - casts from a 2-d scalable vector type,
which represents an SME "virtual tile", to a tile id.
The 'arm_sme.get_tile_id' op currently only supports tile 0; a follow-up
patch will implement proper tile allocation. A further follow-up patch
will demonstrate load/store to/from ZA using these ops.
See the op descriptions for further details and examples.
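As a hedged illustration (the exact assembly format in the op definitions may differ slightly), the three ops compose as follows:
```
// Retrieve an unused 8-bit element "virtual tile" id (currently always tile 0).
%tile_id = arm_sme.get_tile_id : i8

// Cast the tile id to a 2-d scalable vector type representing the tile ...
%tile = arm_sme.cast_tile_to_vector %tile_id : i8 to vector<[16]x[16]xi8>

// ... and back from the vector type to a tile id.
%id = arm_sme.cast_vector_to_tile %tile : vector<[16]x[16]xi8> to i8
```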
Thanks to @paulwalker-arm and @awarzynski for helping drive this.
Reviewed By: awarzynski, dcaballe
Differential Revision: https://reviews.llvm.org/D154941
The dealloc operation deallocates each of the given memrefs if there is no alias
to that memref in the list of retained memrefs and the corresponding
condition value is set. This condition can be used to indicate and pass on
ownership of memref values (or in other words, the responsibility of
deallocating that memref). If two memrefs alias each other, only one will be
deallocated to avoid double free situations.
The memrefs to be deallocated must be the originally allocated memrefs;
however, the memrefs to be retained may be arbitrary memrefs.
The operation returns a list of conditions corresponding to the list of
memrefs, indicating the new ownership: if a memref was deallocated, its
ownership is dropped (set to 'false'); otherwise, the result is the same
as the input condition.
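A hedged sketch of the intended usage (the exact assembly format and result arrangement may differ): `%alloc` is freed only if `%cond` is true and `%alloc` does not alias the retained memref, and the returned condition reflects the updated ownership as described above.
```
%new_ownership = bufferization.dealloc (%alloc : memref<2xf32>)
                   if (%cond)
                   retain (%retained : memref<2xf32>)
```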
Differential Revision: https://reviews.llvm.org/D155467
This patch updates one SparseTensor integration test so that the VLA
vectorisation is run conditionally based on the value of the
MLIR_RUN_ARM_SME_TESTS CMake variable.
This change opens the path to reduce the duplication of RUN lines in
"mlir/test/Integration/Dialect/SparseTensor/CPU/". ATM, there are
usually 2 RUN lines to test vectorization in SparseTensor integration
tests:
* one for VLS vectorisation,
* one for VLA vectorisation whenever that's available and which
reduces to VLS vectorisation when VLA is not supported.
When VLA is not available, VLS vectorisation is verified twice. This
duplication should be avoided - integration tests are relatively
expensive to run.
This patch makes sure that the 2nd vectorisation RUN line becomes:
```
if (SVE integration tests are enabled)
run VLA vectorisation
else
return
```
This logic is implemented using LIT's (relatively new) conditional
substitution [1]. It enables us to guarantee that all RUN lines are
unique and that the VLA vectorisation is only enabled when supported.
This patch updates only one test to set up and demonstrate the logic.
Subsequent patches will update the remaining tests.
[1] https://www.llvm.org/docs/TestingGuide.html
Differential Revision: https://reviews.llvm.org/D155403
If the paddingAttr is an ArrayAttr with two values we know that
the element type is a `ComplexType` and we should pad the value
accordingly.
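As a hedged sketch (the values and element type are purely illustrative), a two-element array such as `[0.0 : f32, 0.0 : f32]` would be materialized as a complex padding value along these lines:
```
// Padding value for a ComplexType element built from the two array values.
%pad = complex.constant [0.0 : f32, 0.0 : f32] : complex<f32>
```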
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D154908
Linalg operations can include `complex` types in the src/target types.
This should include conversion between `arith` and `complex` types when
constructing `linalg` operations.
Reviewed By: kuhar
Differential Revision: https://reviews.llvm.org/D154740
Since `StringRef::empty` can be used in constexpr context, it seems reasonable that `SMLoc::isValid` should be too. The default-ctor and `operator==` are made constexpr for consistency.
In particular, the `constexpr` keyword is needed for silencing warnings on Windows (whereas Linux allows constexpr usage without the keyword).
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D154741
The Linalg vectoriser incorrectly recognises the following
`tensor.extract` as contiguous:
```
func.func @example(%in: tensor<123x321xf32>, %arg1: tensor<1x?x8xf32>) -> tensor<1x?x8xf32> {
%c0 = arith.constant 1 : index
%2 = linalg.generic {
indexing_maps = [#map1],
iterator_types = ["parallel", "parallel", "parallel"]
} outs(%arg1 : tensor<1x?x8xf32>)
{
^bb0(%arg3: f32):
%idx_0 = linalg.index 0 : index
%idx_1 = linalg.index 1 : index
%idx = arith.addi %idx_0, %idx_1 : index
%7 = tensor.extract %in[%c0, %idx] : tensor<123x321xf32>
linalg.yield %7 : f32
} -> tensor<1x?x8xf32>
return %2 : tensor<1x?x8xf32>
}
```
However, the following index Op corresponds to the dynamic dimension
in the iteration space:
```
%idx_1 = linalg.index 1 : index
```
The vectoriser should assume that:
* this index Op _is not_ loop invariant,
* the resulting memory access is a gather load.
This is what this patch fixes.
Differential Revision: https://reviews.llvm.org/D155373
This fixes builds for 7e78ecfe10 (both CMake and Bazel) and trims unnecessary dependencies.
This is achieved by moving the functionality to test/lib/GPU, which is a more natural landing pad.
Delete the backslash; it was there to compile the TableGen file, and it looks like a space also works fine.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D155474
This revision fixes `hasTensorSemantics` and `hasBufferSemantics` for vector transfer ops, which may have a vector operand. `VectorType` implements `ShapedType` and such operands do not affect whether an op has tensor or buffer semantics. Also implement `DestinationStyleOpInterface` on `TransferReadOp` so that `hasTensorSemantics`/`hasBufferSemantics` can be called. (The op has no inits, but this makes it symmetric to `TransferWriteOp`.)
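For example (a hedged sketch, with invented SSA names), a masked `vector.transfer_read` from a tensor has a `vector<4xi1>` mask operand, yet it clearly has tensor semantics:
```
%r = vector.transfer_read %t[%c0], %pad, %mask
       {in_bounds = [true]} : tensor<?xf32>, vector<4xf32>
```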
Differential Revision: https://reviews.llvm.org/D155469
This mirrors the test-lower-to-llvm pass pipeline that provides some sanity when running e2e examples.
One peculiarity of the GPU pipeline is that we want to allow 32b indexing in kernels.
This is currently not straightforward as there are dependencies between passes.
This new test pass orders passes in a way that connects end-to-end.
Differential Revision: https://reviews.llvm.org/D155463
This work introduces `cp.async.bulk.tensor.shared.cluster.global` in the NVVM dialect, which executes a load using TMA.
Depends on D155056
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D155060
This work improves the verifier for invalid cases. It is NFC.
Reviewed By: nicolasvasilache, springerm
Differential Revision: https://reviews.llvm.org/D155448
Add a simple transform operation to the NVGPU extension that performs
software pipelining of copies to shared memory. The functionality is
extremely minimalistic in this version and only supports copies from
global to shared memory inside an `scf.for` loop with either
`vector.transfer` or `nvgpu.device_async_copy` operations when
pipelining preconditions are already satisfied in the IR. This is the
minimally useful version that uses the more general loop pipeliner in an
NVGPU-specific way. Further extensions and orthogonalizations will be
necessary.
This required a change to the loop pipeliner itself to properly
propagate errors should the predicate generator fail.
This is loosely inspired by the version in IREE, but makes fewer unsafe
assumptions and communicates decisions in a more principled way.
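A hedged sketch of how such a transform might be invoked; the op name, the `depth` attribute, and the result handling below are assumptions based on the description, not the exact spelling introduced by this patch:
```
transform.sequence failures(propagate) {
^bb0(%arg0: !transform.any_op):
  %for = transform.structured.match ops{["scf.for"]} in %arg0
           : (!transform.any_op) -> !transform.any_op
  // Hypothetical spelling: pipeline global->shared copies in the matched loop.
  %pipelined = transform.nvgpu.pipeline_shared_memory_copies %for { depth = 2 }
           : (!transform.any_op) -> !transform.any_op
}
```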
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D155223
* Move passes to `Transforms` directory.
* Add `Utils.h` (will be utilized in a subsequent change).
Differential Revision: https://reviews.llvm.org/D155427
reshape(reshape(x)) -> reshape(x) can be written directly as a fold instead of a canonicalization,
to help other passes clean up while they work.
This initially broke ReshapeConverterExpand/Collapse, which relies on creating foldable reshapes and a carefully crafted
benefit priority of patterns.
I turned this into a single pattern on reshapes, which does expand and/or collapse as needed in one go.
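A hedged sketch, assuming these are `tosa.reshape` ops (the assembly shown is approximate):
```
// Before folding: back-to-back reshapes.
%0 = tosa.reshape %arg0 {new_shape = array<i64: 2, 3>} : (tensor<6xf32>) -> tensor<2x3xf32>
%1 = tosa.reshape %0 {new_shape = array<i64: 3, 2>} : (tensor<2x3xf32>) -> tensor<3x2xf32>

// After folding: the intermediate reshape disappears.
%1 = tosa.reshape %arg0 {new_shape = array<i64: 3, 2>} : (tensor<6xf32>) -> tensor<3x2xf32>
```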
Differential Revision: https://reviews.llvm.org/D155266
This patch adds support for selecting which functions are lowered to LLVM IR
from MLIR depending on declare target information and whether host or device
code is being generated.
The approach proposed by this patch is to perform the filtering in two stages:
- An MLIR transformation pass, which is added to the Flang translation flow
after the `OMPEarlyOutliningPass`. The functions that are kept are those
that match the OpenMP processor (host or device) the compiler invocation
is targeting, according to the presence of the `-fopenmp-is-target-device`
compiler option and declare target information. All functions containing an
`omp.target` are also kept, regardless of the declare target information of
the function, due to the need for keeping target regions visible for both
host and device compilation.
- A filtering step during translation to LLVM IR, which is performed for those
functions that were kept because of the presence of a target region inside.
If the targeted OpenMP processor does not match the declare target
information of the function, then it is removed from the LLVM IR after its
contents have been processed and translated. Since they should only contain
an omp.target operation which, in turn, should have been outlined into
another LLVM IR function, the wrapper can be deleted at that point.
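A hedged sketch of the second case (the IR shape is approximate and the function name is invented): a wrapper function that contains a target region is kept by the MLIR filtering pass for both host and device compilation, translated to LLVM IR, and then removed if its declare target information does not match the targeted OpenMP processor.
```
// Kept by the filtering pass because it contains an omp.target region;
// during device translation, the wrapper itself is deleted after the
// target region has been outlined into its own LLVM IR function.
func.func @host_wrapper(%x : i32) {
  omp.target {
    // ... target region body ...
    omp.terminator
  }
  return
}
```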
Depends on D150328 and D150329.
Differential Revision: https://reviews.llvm.org/D147641
This is essentially a follow-up to https://reviews.llvm.org/D155072
This adds support for also passing properties as `ref` parameters to `custom`. This requires the property to have been bound previously and will error otherwise. This makes it possible for an implementation of `custom` to take previously parsed data into account, creating nice context-dependent grammars :-)
Differential Revision: https://reviews.llvm.org/D155297
This feature was introduced in `D123492`.
Sorting the operands of commutative operations by their pointer values is incorrect when checking equivalence of ops in separate regions (where the lhs and rhs operands are marked as equivalent but are not the same value).
It was also discussed in `D123492` and `D129480` that the correct solution would be to stable sort the operands in canonicalization (based on some numbering in the region maybe), but until that lands, reverting this change will unblock us and other users.
An example of a pass that might not work properly because of this is `DuplicateFunctionEliminationPass`.
Reviewed By: mehdi_amini, jpienaar
Differential Revision: https://reviews.llvm.org/D154699
Not every NumPy type (e.g., the `ml_dtypes.bfloat16` NumPy extension
type) has a type in the Python buffer protocol, so exporting such a
buffer with `PyBUF_FORMAT` may fail.
However, we don't care about the self-reported type of a buffer if the
user provides an explicit type. In the case that an explicit type is
provided, don't request the format from the buffer protocol, which
allows arrays whose element types are unknown to the buffer protocol to
be passed.
Reviewed By: jpienaar, ftynse
Differential Revision: https://reviews.llvm.org/D155209
No matter how one constructs their SerializeTo* pass, we want to
ensure that the LLVM initialization code runs once and only once. This
commit adds a static once_flag to ensure that.
I've run into mysterious segfaults when calling MLIR GPU compiles from
multiple threads, and this commit is a potential fix for the issue.
Reviewed By: fmorac
Differential Revision: https://reviews.llvm.org/D155226
(1) Without the check, the results may silently be wrong, so the check is needed.
(2) Add a pruning step to guarantee the 2:4 property.
Note that, in the longer run, we may want to split out the pruning step somehow,
or make it optional.
Reviewed By: K-Wu
Differential Revision: https://reviews.llvm.org/D155320
* Rename functions with underscores to camel case.
* Return C++ bools for "in_bounds" values instead of an `ArrayAttr`.
Differential Revision: https://reviews.llvm.org/D155277
Using MLIR attributes instead of metadata has many advantages:
* No indirection: Attributes can simply refer to each other seamlessly without having to use the indirection of `SymbolRefAttr`. This also gives us correctness by construction in a lot of places
* Multithreading safe: The Attribute infrastructure gives us thread-safety for free. Creating operations and inserting them into a block is not thread-safe. This is a major use case for e.g. the inliner in MLIR which runs in parallel
* Easier to create: There is no need for a builder or a metadata region
This patch therefore does exactly that. It leverages the new distinct attributes to create distinct access groups in a deterministic and thread-safe manner.
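A hedged sketch of the resulting representation (attribute syntax approximate): the access group is a distinct attribute attached directly to the memory access, with no symbol indirection and no metadata region.
```
#group = #llvm.access_group<id = distinct[0]<>>
%0 = llvm.load %ptr {access_groups = [#group]} : !llvm.ptr -> i32
```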
Differential Revision: https://reviews.llvm.org/D155285
Using MLIR attributes instead of metadata has many advantages:
* No indirection: Attributes can simply refer to each other seamlessly without having to use the indirection of `SymbolRefAttr`. This also gives us correctness by construction in a lot of places
* Multithreading safe: The Attribute infrastructure gives us thread-safety for free. Creating operations and inserting them into a block is not thread-safe. This is a major use case for e.g. the inliner in MLIR which runs in parallel
* Easier to create: There is no need for a builder or a metadata region
This patch therefore does exactly that. It leverages the new distinct attributes to create distinct alias domains and scopes in a deterministic and thread-safe manner.
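A hedged sketch of the resulting representation (attribute syntax approximate, description string invented): the scope refers to its domain directly as an attribute, and both are made unique via distinct attributes.
```
#domain = #llvm.alias_scope_domain<id = distinct[0]<>, description = "example domain">
#scope  = #llvm.alias_scope<id = distinct[1]<>, domain = #domain>
%0 = llvm.load %ptr {alias_scopes = [#scope]} : !llvm.ptr -> i32
```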
Differential Revision: https://reviews.llvm.org/D155159
There was a bug in `TransferWriteNonPermutationLowering`, a pattern that extends the permutation map of a TransferWriteOp with leading transfer dimensions of size one. These newly added transfer dimensions are always in-bounds, because the starting point of any dimension is in-bounds. VectorToSCF inserts out-of-bounds checks based on the "in_bounds" attribute, and dims that are marked as out-of-bounds but are actually always in-bounds lead to unnecessary "scf.if" ops.
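A hedged sketch of the fixed output (the map, shapes, and SSA names are approximate, not taken from the patch): the leading transfer dimension added by the pattern is marked in-bounds, while the original dimension keeps its bound-check status.
```
// The newly added leading dimension of size 1 is always in-bounds (true);
// the original dimension keeps its original setting (false here).
%b = vector.broadcast %v : vector<4xf32> to vector<1x4xf32>
vector.transfer_write %b, %dest[%c0, %c0]
    {in_bounds = [true, false],
     permutation_map = affine_map<(d0, d1) -> (d0, d1)>}
    : vector<1x4xf32>, memref<?x?xf32>
```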
Differential Revision: https://reviews.llvm.org/D155196