Reduce confusion by having the name of the public method match that of the private method that handles the recursion. Also add some comments to SparseTensorStorage::fromCOO to help clarify what the recursive calls are doing in the dense case.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D108954
The output tensor was added for tiling purposes. With the use of
`TilingInterface` for tiling pad operations, there is no need for an
explicit operand for the shape of the result of the `linalg.pad_tensor`
op. The interface allows the tiling pattern to dynamically query the
value that can be used as the "init" needed for tiling.
Differential Revision: https://reviews.llvm.org/D108613
Currently the builtin dialect is the default namespace used for parsing
and printing. As such, `module` and `func` don't need to be prefixed.
For dialects that define new regions for their own
purposes (like SPIR-V modules, for example), it can be beneficial to
change the default dialect in order to improve readability.
Differential Revision: https://reviews.llvm.org/D107236
This aligns the printer with the parser contract: the operation isn't part of the user-controllable part of the syntax.
Differential Revision: https://reviews.llvm.org/D108804
This makes the hook return a printer if available, instead of using LogicalResult to
indicate if a printer was available (and invoked). This allows the caller to detect that
the dialect has a printer for a given operation without actually invoking the printer.
It'll be leveraged in a future revision to move printing the op name itself under control
of the ASMPrinter.
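A generic, hedged sketch of the pattern (the names and types below are hypothetical stand-ins, not the actual MLIR hook): returning a nullable callable lets the caller check for availability separately from invocation.

```c++
#include <functional>
#include <iostream>
#include <string>

// Hypothetical stand-in types; the real hook works on operations and the ASMPrinter.
using Printer = std::function<void(std::ostream &)>;

// Before: a LogicalResult-style hook that prints as a side effect.
// After: hand back the printer (or nullptr) and let the caller decide when to run it.
Printer getRegisteredPrinter(const std::string &opName) {
  if (opName == "known.op")
    return [](std::ostream &os) { os << "known.op printed\n"; };
  return nullptr; // no custom printer registered
}

int main() {
  if (Printer p = getRegisteredPrinter("known.op")) // detect availability first...
    p(std::cout);                                   // ...then invoke if desired
}
```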
Differential Revision: https://reviews.llvm.org/D108803
Don't assert fail on strided memrefs when dropping unit dims.
Instead just leave them unchanged.
Differential Revision: https://reviews.llvm.org/D108205
* This allows multiple MLIR-API embedding downstreams to co-exist in the same process.
* I believe this is the last thing needed to enable isolated embedding.
Differential Revision: https://reviews.llvm.org/D108605
An interface to allow for tiling of operations is introduced. The
tiling of the linalg.pad_tensor operation is modified to use this
interface.
Differential Revision: https://reviews.llvm.org/D108611
The StringAttr version doesn't need a context, so we can just use the
existing `SymbolRefAttr::get` form. The StringRef version isn't preferred,
so we want to encourage people to use StringAttr instead.
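A minimal sketch of the two forms, assuming an `mlir::Builder` is in scope (the "callee" name is just an example):

```c++
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinAttributes.h"

// The StringAttr form needs no MLIRContext argument, while the discouraged
// StringRef form still has to pass one.
static mlir::SymbolRefAttr makeCalleeRef(mlir::Builder &builder) {
  mlir::StringAttr name = builder.getStringAttr("callee");
  return mlir::SymbolRefAttr::get(name);
  // Discouraged StringRef form:
  // return mlir::SymbolRefAttr::get(builder.getContext(), "callee");
}
```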
There is an additional form of getSymbolRefAttr that takes a (SymbolTrait
implementing) operation. This should also be moved, but I'll do that as
a separate patch.
Differential Revision: https://reviews.llvm.org/D108922
* It is pretty clear that no one has tried this yet since it was both incomplete and broken.
* Fixes a symbol hiding issue that kept even the generic builder from constructing an operation with successors.
* Adds ODS support for successors.
* Adds CAPI `mlirBlockGetParentRegion`, `mlirRegionEqual` + tests (and missing test for `mlirBlockGetParentOperation`).
* Adds Python property: `Block.region`.
* Adds Python methods: `Block.create_before` and `Block.create_after`.
* Adds Python property: `InsertionPoint.block`.
* Adds new blocks.py test to verify a plausible CFG construction case.
Differential Revision: https://reviews.llvm.org/D108898
41d4aa7de6 introduced incorrect code in
extraTraitClassDeclaration: `this` refers to the trait class and not the
operation class so `->getContext()` is not valid. Use `$_op` instead.
SymbolRefAttr is fundamentally a base string plus a sequence
of nested references. Instead of storing the string data as
a copied StringRef, store it as an already-uniqued StringAttr.
This makes a lot of things simpler and more efficient because:
1) references to the symbol are already stored as StringAttr's:
there is no need to copy the string data into MLIRContext
multiple times.
2) This allows pointer comparisons instead of string
comparisons (or redundant uniquing) within SymbolTable.cpp.
3) This allows SymbolTable to hold a DenseMap instead of a
StringMap (which again copies the string data and slows
lookup).
This is a moderately invasive patch, so I kept a lot of
compatibility APIs around. It would be nice to explore changing
getName() to return a StringAttr for example (right now you have
to use getNameAttr()), and eliminate things like the StringRef
version of getSymbol.
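To illustrate point 2, a small sketch of why the uniqued storage pays off (assuming the context-first `StringAttr::get` overload):

```c++
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/MLIRContext.h"
#include <cassert>

// Attributes are uniqued in the MLIRContext: equal strings yield the same
// StringAttr, so symbol comparisons reduce to pointer comparisons.
void uniquedSymbolNames(mlir::MLIRContext &ctx) {
  mlir::StringAttr a = mlir::StringAttr::get(&ctx, "my_symbol");
  mlir::StringAttr b = mlir::StringAttr::get(&ctx, "my_symbol");
  assert(a == b); // same underlying uniqued storage
}
```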
Differential Revision: https://reviews.llvm.org/D108899
* Add `DimOfIterArgFolder`.
* Move existing cross-dialect canonicalization patterns to `LoopCanonicalization.cpp`.
* Rename `SCFAffineOpCanonicalization` pass to `SCFForLoopCanonicalization`.
* Expand documentation of scf.for: The type of loop-carried variables may not change across iterations. (Not even the dynamic type.)
Differential Revision: https://reviews.llvm.org/D108806
* Add batched versions of all `addId` variants, so that multiple IDs can be added at a time.
* Rename `addId` and its variants to `insertId` and `appendId`. Most external users call `appendId`. Splitting `addId` into two functions also makes it possible to provide batched versions of both. (Otherwise, the overloads would be ambiguous when calling `addId`.)
Differential Revision: https://reviews.llvm.org/D108532
Drop mgpuMemHostRegisterMemRef's dependence on LLVM Support. This
method is the only one in CUDA runtime wrappers library that creates
a dependence on libLLVMSupport due to its use of SmallVector and
ArrayRef. The code can be written just as easily and compactly without those ADTs.
The dependence on LLVMSupport adds a significant amount of additional
complexity for external projects that want to link this library in (either
statically or as a shared object), since libLLVMSupport includes numerous
other objects that are sensitive to C++ compiler version and ABI.
Differential Revision: https://reviews.llvm.org/D108684
Switch to `extract`, which is needed to support `tosa.reverse` with dynamic shapes.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D108744
This prepares general sparse-to-sparse conversions. The code that
needs to be generated using this new feature is now simply:
(1) coo = sparse_tensor_1->asCOO(); // source format1
(2) sparse_tensor_2 = newSparseTensor(coo); // destination format2
By using COO as an intermediate, we can do *all* conversions without
having to implement the full O(N^2) conversion matrix. Note that we
can always improve particular conversions individually if a faster
solution is required.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D108681
This allows using a different type when accessing a parameter than the
one used for storage. This enables returning parameters by reference,
using more optimized/convenient reference results, and more.
Differential Revision: https://reviews.llvm.org/D108593
This allows for parsing strings that have escape sequences, which require constructing
a string (as they can't be represented by looking at the Token contents directly).
Differential Revision: https://reviews.llvm.org/D108589
This allows for iterating and interacting with the uses of a specific subset of
results as opposed to just the full range.
Differential Revision: https://reviews.llvm.org/D108586
Includes the quantized version of the average pool lowering to the linalg dialect,
along with a lit test for the transform. It is not 100% correct, as the
multiplier/shift should be done in i64; however, the resulting rounding
difference is negligible.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D108676
Lowering to table was incorrect, as it did not apply a 128 offset before
extracting the value from the table. Fixed this and corrected the tensor length
of the input table.
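A brief illustration of the offset for the i8 case (a sketch of the indexing math, not the actual lowering code):

```c++
#include <cstdint>

// Signed i8 inputs span [-128, 127]; adding 128 maps them onto the valid
// indices [0, 255] of a 256-entry table.
int32_t tableIndex(int8_t value) {
  return static_cast<int32_t>(value) + 128;
}
// e.g. tableIndex(-128) == 0, tableIndex(127) == 255
```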
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D108436
* Add support for affine.max ops to SCF loop peeling pattern.
* Add support for affine.max ops to `AffineMinSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalizationPattern` to `AffineOpSCFCanonicalizationPattern`.
* Rename `AffineMinSCFCanonicalization` pass to `SCFAffineOpCanonicalization`.
Differential Revision: https://reviews.llvm.org/D108009
The emplace methods are variadic and should take all the constructor arguments directly, since they call the constructor themselves in order to avoid the cost of constructing and then moving/copying temporaries.
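For example (standard C++, not specific to this codebase):

```c++
#include <string>
#include <vector>

struct Entry {
  Entry(int id, std::string name) : id(id), name(std::move(name)) {}
  int id;
  std::string name;
};

void fillEntries(std::vector<Entry> &entries) {
  entries.push_back(Entry(1, "a")); // constructs a temporary, then moves it
  entries.emplace_back(2, "b");     // constructs in place, no temporary
}
```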
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D108670
When padding quantized operations, the padding value needs to equal the zero point
of the input. Corrected the pass to adjust the padding value accordingly when the input is quantized.
Reviewed By: NatashaKnk
Differential Revision: https://reviews.llvm.org/D108440
Add notes about discarding functions with private visibility in Chapter 4 of the Toy tutorial.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D108026
Recent changes outside the sparse compiler exposed the need to run a
new pass (lower-affine), but this only became apparent with private testing.
By adding some vectorized runs to the integration tests, we will detect the need
for such changes earlier and, of course, also widen codegen coverage.
Reviewed By: gussmith23
Differential Revision: https://reviews.llvm.org/D108667
This canonicalization simplifies affine.min operations inside "for loop"-like operations (e.g., scf.for and scf.parallel) based on two invariants:
* iv >= lb
* iv < lb + step * ((ub - lb - 1) floorDiv step) + 1
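For example, with lb = 0, ub = 10, and step = 4, the iv takes the values 0, 4, 8, and the second invariant gives iv < 0 + 4 * ((10 - 0 - 1) floorDiv 4) + 1 = 9, i.e., exactly iv <= 8 for the last iteration.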
This commit adds a new pass `canonicalize-scf-affine-min` (instead of a canonicalization pattern) to avoid dependencies between the Affine dialect and the SCF dialect.
Differential Revision: https://reviews.llvm.org/D107731
Introduces new Ops to represent 1. alias.scope metadata in LLVM, and 2. domains for these scopes. These correspond to the metadata described in https://llvm.org/docs/LangRef.html#noalias-and-alias-scope-metadata. Lists of scopes are modeled the same way as access groups - as an ArrayAttr on the Op (added in https://reviews.llvm.org/D97944).
Lowering 'noalias' attributes on function parameters is already supported. However, lowering `noalias` metadata on individual Ops was not; this change adds it. LLVM uses the same keyword for both, but this change introduces a separate attribute name, 'noalias_scopes', to represent this distinct concept.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D107870
I have found myself typing this code several times in different places
by now, so it is time to make it a general utility instead. Given
a permutation, it returns the permuted position of the input;
for example, (i,j,k) -> (k,i,j) yields position 1 for input 0.
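A small stand-alone sketch of the idea (not the actual MLIR utility; the names and encoding below are illustrative):

```c++
#include <cassert>
#include <vector>

// Given a permutation, expressed as the permuted sequence of original
// positions, return where original position `pos` ends up.
unsigned permutedPosition(const std::vector<unsigned> &perm, unsigned pos) {
  for (unsigned i = 0, e = perm.size(); i < e; ++i)
    if (perm[i] == pos)
      return i;
  assert(false && "invalid permutation");
  return ~0u;
}

int main() {
  // (i,j,k) -> (k,i,j) is encoded as {2, 0, 1}: input 0 (i) lands at position 1.
  std::vector<unsigned> perm = {2, 0, 1};
  assert(permutedPosition(perm, 0) == 1);
}
```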
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D108347