These includes have been deprecated in favor of BuiltinDialect.h, which contains the definitions of ModuleOp and FuncOp.
Differential Revision: https://reviews.llvm.org/D91572
This revision adds support in the parser/printer for "deferrable" aliases, i.e. those that can be resolved after printing has finished. This allows for printing aliases for operation locations after the module instead of before, i.e. this is now supported:
```
"foo.op"() : () -> () loc(#loc)
#loc = loc("some_location")
```
Differential Revision: https://reviews.llvm.org/D91227
The tokens are already handled by the lexer. This revision exposes them
through the parser interface.
This revision also adds missing functions for question mark parsing and
completes the list of valid punctuation tokens in the documentation.
Differential Revision: https://reviews.llvm.org/D90907
Locations often get very long and clutter up operations when printed inline with them. This revision adds support for using aliases with trailing operation locations, and makes printing with aliases the default behavior. Aliases in the trailing location take the form `loc(<alias>)`, such as `loc(#loc0)`. As with all aliases, `mlir-print-local-scope` can be used to disable them and get the inline behavior.
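For illustration, a minimal sketch of the new default output (the op name and source file are placeholders):
```mlir
"foo.op"() : () -> () loc(#loc0)
#loc0 = loc("example.cpp":10:8)
```
With `mlir-print-local-scope` the location would instead be printed inline as `loc("example.cpp":10:8)`.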
Differential Revision: https://reviews.llvm.org/D90652
- Change syntax for FuncOp to be `func <visibility>? @name` instead of printing the
visibility in the attribute dictionary (see the example after this list).
- Since printFunctionLikeOp() and parseFunctionLikeOp() are also used by other
operations, make the "inline visibility" an opt-in feature.
- Updated unit test to use and check the new syntax.
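For illustration, a sketch of the updated syntax (function names are placeholders):
```mlir
// Visibility is now printed inline rather than in the attribute dictionary.
func private @internal_helper(%arg0: i32) -> i32 {
  return %arg0 : i32
}
// Public visibility remains implicit.
func @public_entry() {
  return
}
```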
Differential Revision: https://reviews.llvm.org/D90859
The new construct represents a generic loop with two regions: one executed
before the loop condition is evaluated and another after that. This construct
can be used to express both a "while" loop and a "do-while" loop, depending on
where the main payload is located. It is intended as an intermediate
abstraction for lowering, which will be added later. This form is relatively
easy to target from higher-level abstractions and supports transformations such
as loop rotation and LICM.
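A hedged sketch of what such a loop can look like, with placeholder callees for the condition and payload computations:
```mlir
%res = scf.while (%arg1 = %init) : (f32) -> f32 {
  // "Before" region: compute the condition and forward the iteration value.
  %condition = call @evaluate_condition(%arg1) : (f32) -> i1
  scf.condition(%condition) %arg1 : f32
} do {
^bb0(%arg2: f32):
  // "After" region: the payload of a conventional "while" loop.
  %next = call @payload(%arg2) : (f32) -> f32
  scf.yield %next : f32
}
```
Placing the payload in the "before" region instead gives "do-while" semantics.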
Differential Revision: https://reviews.llvm.org/D90255
- Verify that attributes parsed using a custom parser do not have duplicates.
- If there are duplicates in the attribute dictionary in the input, they get caught during
dictionary parsing (see the example after this list).
- This check verifies that there is no duplication between the parsed dictionary and any
attributes that might be added by the custom parser (or when the custom parsing code
adds duplicate attributes).
- Fixes https://bugs.llvm.org/show_bug.cgi?id=48025
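As a sketch of the first case (the op name is a placeholder), a duplicate key in the textual dictionary is rejected while parsing the dictionary itself:
```mlir
// Rejected while parsing the dictionary: duplicate key `attr`.
"foo.op"() {attr = 1, attr = 2} : () -> ()
```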
Differential Revision: https://reviews.llvm.org/D90502
* Use function_ref instead of std::function in several methods
* Use ::get instead of ::getChecked for IntegerType.
- It is already fully verified and constructing an mlir::Location can be extremely costly during parsing.
- Add standard dialect operations to define global variables with memref types and to
retrieve the memref corresponding to a named global variable (see the sketch after this list).
- Extend unit tests to test verification for these operations.
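A hedged sketch of these operations; the op names (`global_memref`, `get_global_memref`) and the exact syntax are assumptions based on the description above:
```mlir
// Define a constant global variable with a memref type and an initializer.
global_memref constant @c : memref<2xi32> = dense<[1, 4]>

func @use_global() -> memref<2xi32> {
  // Retrieve the memref for the named global.
  %0 = get_global_memref @c : memref<2xi32>
  return %0 : memref<2xi32>
}
```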
Differential Revision: https://reviews.llvm.org/D90337
This patch fixes bug 46091 (https://bugs.llvm.org/show_bug.cgi?id=46091).
Raw data for a dense-element attribute is written in little endian (LE) format.
This commit converts the format to big endian (BE) in the `AttributeParser` on a
BE machine. Also, when outputting on a BE machine, the BE format is converted
to LE in the `AsmPrinter`.
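For illustration, a sketch of the kind of attribute affected; assuming two i32 elements, the 8 raw bytes below decode to `[1, 2]` on a little-endian host (the op name is a placeholder):
```mlir
// Raw data for two i32 values, stored in little-endian byte order.
"foo.op"() {value = dense<"0x0100000002000000"> : tensor<2xi32>} : () -> ()
```
Without the conversion, a big-endian host would interpret the same bytes as different values.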
Differential Revision: https://reviews.llvm.org/D80695
This revision optimizes the parsing of hex strings by using the checked variant of llvm::fromHex, and adding a specialized method to Token for extracting hex strings. This leads to a large decrease in compile time when parsing large hex constants (one example: 2.6 seconds -> 370 milliseconds).
Differential Revision: https://reviews.llvm.org/D90266
Instead of storing a StringRef, we keep an Identifier, which would otherwise require a lock on the context to retrieve.
This will allow getting an Identifier for any registered Operation for "free".
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86994
This adds some initial support for regions and does not support formatting the specific arguments of a region. For now this can be achieved by using a custom directive that formats the arguments and then parses the region.
Differential Revision: https://reviews.llvm.org/D86760
The PDL Interpreter dialect provides a lower level abstraction compared to the PDL dialect, and is targeted towards low level optimization and interpreter code generation. The dialect operations encapsulate low-level pattern match and rewrite "primitives", such as navigating the IR (Operation::getOperand), creating new operations (OpBuilder::create), etc. Many of the operations within this dialect also fuse branching control flow with some form of a predicate comparison operation. This type of fusion reduces the amount of work that an interpreter must do when executing.
An example of this representation is shown below:
```mlir
// The following high level PDL pattern:
pdl.pattern : benefit(1) {
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite %root {
    pdl.replace %root with (%inputOperand)
  }
}

// May be represented in the interpreter dialect as follows:
module {
  func @matcher(%arg0: !pdl.operation) {
    pdl_interp.check_operation_name of %arg0 is "foo.op" -> ^bb2, ^bb1
  ^bb1:
    pdl_interp.return
  ^bb2:
    pdl_interp.check_operand_count of %arg0 is 1 -> ^bb3, ^bb1
  ^bb3:
    pdl_interp.check_result_count of %arg0 is 1 -> ^bb4, ^bb1
  ^bb4:
    %0 = pdl_interp.get_operand 0 of %arg0
    pdl_interp.is_not_null %0 : !pdl.value -> ^bb5, ^bb1
  ^bb5:
    %1 = pdl_interp.get_result 0 of %arg0
    pdl_interp.is_not_null %1 : !pdl.value -> ^bb6, ^bb1
  ^bb6:
    pdl_interp.record_match @rewriters::@rewriter(%0, %arg0 : !pdl.value, !pdl.operation) : benefit(1), loc([%arg0]), root("foo.op") -> ^bb1
  }

  module @rewriters {
    func @rewriter(%arg0: !pdl.value, %arg1: !pdl.operation) {
      pdl_interp.replace %arg1 with(%arg0)
      pdl_interp.return
    }
  }
}
```
Differential Revision: https://reviews.llvm.org/D84579
PDL presents a high level abstraction for the rewrite pattern infrastructure available in MLIR. This abstraction allows for representing patterns transforming MLIR, as MLIR. This allows for applying all of the benefits that the general MLIR infrastructure provides, to the infrastructure itself. This means that pattern matching can be more easily verified for correctness, targeted by frontends, and optimized.
PDL abstracts over various different aspects of patterns and core MLIR data structures. Patterns are specified via a `pdl.pattern` operation. These operations contain a region body for the "matcher" code, and terminate with a `pdl.rewrite` that either dispatches to an external rewriter or contains a region for the rewrite specified via `pdl`. The types of values in `pdl` are handle types to MLIR C++ types, with `!pdl.attribute`, `!pdl.operation`, and `!pdl.type` directly mapping to `mlir::Attribute`, `mlir::Operation*`, and `mlir::Type` respectively.
An example pattern is shown below:
```mlir
// pdl.pattern contains metadata similarly to a `RewritePattern`.
pdl.pattern : benefit(1) {
  // External input operand values are specified via `pdl.input` operations.
  // Result types are constrained via `pdl.type` operations.
  %resultType = pdl.type
  %inputOperand = pdl.input
  %root, %results = pdl.operation "foo.op"(%inputOperand) -> %resultType
  pdl.rewrite(%root) {
    pdl.replace %root with (%inputOperand)
  }
}
```
This is a culmination of the work originally discussed here: https://groups.google.com/a/tensorflow.org/g/mlir/c/j_bn74ByxlQ
Differential Revision: https://reviews.llvm.org/D84578
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser is lazily loading Dialects in the context as it encounters them
during parsing. This is the only purpose for registering dialects and not loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager is loading Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy tutorial
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can be in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded in
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
The current modeling of LLVM IR types in MLIR is based on the LLVMType class
that wraps a raw `llvm::Type *` and delegates uniquing, printing and parsing to
LLVM itself. This model makes thread-safe type manipulation hard and is being
progressively replaced with a cleaner MLIR model that replicates the type
system. Introduce a set of classes reflecting the LLVM IR type system in MLIR
instead of wrapping the existing types. These are currently introduced as
separate classes without affecting the dialect flow, and are exercised through
a test dialect. Once feature parity is reached, the old implementation will be
gradually substituted with the new one.
Depends On D84171
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D84339
This adds a `parseOptionalAttribute` method to the OpAsmParser that allows for parsing optional attributes, in a similar fashion to how optional types are parsed. This also enables the use of attribute values as the first element of an assembly format optional group.
Differential Revision: https://reviews.llvm.org/D83712
Depending on where the 0 dimension is within the shape, the parser will currently reject .mlir generated by the printer.
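For example (a sketch; the op name is a placeholder and the exact printed form may differ), the printer emits an empty `dense<>` attribute for zero-element shapes, which the parser rejected when the zero dimension was not the outermost one:
```mlir
"foo.op"() {value = dense<> : tensor<1x0xi32>} : () -> ()
```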
Differential Revision: https://reviews.llvm.org/D83445
At this point Parser has grown to be over 5000 lines and can be very difficult to navigate/update/etc. This commit splits Parser.cpp into several sub files focused on parsing specific types of entities; e.g., Attributes, Types, etc.
Differential Revision: https://reviews.llvm.org/D81299
Modify structure type in SPIR-V dialect to support:
1) Multiple decorations per structure member
2) Key-value based decorations (e.g., MatrixStride)
This commit kept the Offset decoration separate from members'
decorations container for easier implementation and logical clarity.
As such, all references to Structure layoutinfo are now offsetinfo,
and any member layout defining decoration (e.g., RowMajor for Matrix)
will be added to the members' decorations container along with its
value if any.
Differential Revision: https://reviews.llvm.org/D81426
This patch is a follow-up on https://reviews.llvm.org/D81127
BF16 constants were represented as 64-bit floating point values due to the lack
of support for BF16 in APFloat. APFloat was recently extended to support
BF16, so this patch fixes the BF16 constant representation to be 16-bit.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D81218
This still allows `if (value)` while requiring an explicit cast when not
in a boolean context. This means things like `std::set<Value>` will no
longer compile.
Differential Revision: https://reviews.llvm.org/D80497
This is a wrapper around a vector of NamedAttributes that keeps track of whether it is sorted and makes some minimal effort to remain sorted (doing more, e.g., appending attributes in sorted order, could be done in a follow-up). If a DictionaryAttr is queried, it caches the returned DictionaryAttr along with the sorted state.
Change MutableDictionaryAttr to always return a non-null Attribute even when empty (reserve null cases for errors). To this end change the getter to take a context as input so that the empty DictionaryAttr could be queried. Also create one instance of the empty dictionary attribute that could be reused without needing to lock context etc.
Update infer type op interface to use DictionaryAttr and use NamedAttrList to avoid incurring multiple conversion costs.
Fix bug in sorting helper function.
Differential Revision: https://reviews.llvm.org/D79463
The types of forward references are checked to match their other uses, but they
are not checked against the definition.
```mlir
func @forward_reference_type_check() -> (i8) {
  br ^bb2

^bb1:
  return %1 : i8

^bb2:
  %1 = "bar"() : () -> (f32)
  br ^bb1
}
```
Would be parsed and the use site of '%1' would be silently changed to
'f32'.
This commit adds a test for this case, and a check during parsing for
the types to match.
Patch by Matthew Parkinson <mattpark@microsoft.com>
Closes D79317.
This revision allows for creating DenseElementsAttrs and accessing elements using std::complex<APInt>/std::complex<APFloat>. This allows for opaquely accessing and transforming complex values. This is used by the printer/parser to provide pretty printing for complex values. The form for complex values matches that of std::complex, i.e.:
```
// `(` element `,` element `)`
dense<(10,10)> : tensor<complex<i64>>
```
Differential Revision: https://reviews.llvm.org/D79296
This revision adds support for storing ComplexType elements inside of a DenseElementsAttr. We store complex objects as an array of two elements, matching the definition of std::complex. There is no current attribute storage for ComplexType, but DenseElementsAttr provides API for access/creation using std::complex<>. Given that the internal implementation of DenseElementsAttr is already fairly opaque, the only real complexity here is in the printing/parsing. This revision keeps it simple for now and always uses hex when printing complex elements. A followup will add prettier syntax for this.
Differential Revision: https://reviews.llvm.org/D79281
- Exports MLIR targets to be used out-of-tree.
- mimics `add_clang_library` and `add_flang_library`.
- Fixes libMLIR.so
After https://reviews.llvm.org/D77515 libMLIR.so was no longer containing
any object files. We originally had a kludge there that made it work with
the static initializers, and when switching away from that to the way the
clang shlib does it, I noticed that MLIR doesn't create an `obj.{name}` target,
and doesn't export its targets to `lib/cmake/mlir`.
This is due to MLIR using `add_llvm_library` under the hood, which adds
the target to `llvmexports`.
Differential Revision: https://reviews.llvm.org/D78773
[MLIR] Fix libMLIR.so and LLVM_LINK_LLVM_DYLIB
Primarily, this patch moves all mlir references to LLVM libraries into
either LLVM_LINK_COMPONENTS or LINK_COMPONENTS. This enables magic in
the llvm cmake files to automatically replace reference to LLVM components
with references to libLLVM.so when necessary. Among other things, this
completes fixing libMLIR.so, which has been broken for some configurations
since D77515.
Unlike previously, the pattern is now that mlir libraries should almost
always use add_mlir_library. Previously, some libraries still used
add_llvm_library. However, this confuses the export of targets for use
out of tree because libraries specified with add_llvm_library are exported
by LLVM. Instead, users which don't need to be, or can't be, linked into libMLIR.so
can specify EXCLUDE_FROM_LIBMLIR.
A common error mode is linking with LLVM libraries outside of LINK_COMPONENTS.
This almost always results in symbol confusion or multiply defined options
in LLVM when the same object file is included as a static library and
as part of libLLVM.so. To catch these errors more directly, there's now
mlir_check_all_link_libraries.
To simplify usage of add_mlir_library, we assume that all mlir
libraries depend on LLVMSupport, so it's not necessary to separately specify
it.
tested with:
BUILD_SHARED_LIBS=on,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB,
BUILD_SHARED_LIBS=off + LLVM_BUILD_LLVM_DYLIB + LLVM_LINK_LLVM_DYLIB.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79067
[MLIR] Move from using target_link_libraries to LINK_LIBS
This allows us to correctly generate dependencies for derived targets,
such as targets which are created for object libraries.
By: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Differential Revision: https://reviews.llvm.org/D79243
Three commits have been squashed to avoid intermediate build breakage.
In cmake, dependencies on generated files require some sophistication in the build system. At build time, files are parsed to determine which headers they depend on and these dependencies are injected into the build system. This works well with ninja, but has some constraints with the makefile generator. According to the cmake documentation, this only works reliably within the same directory.
This patch expands the usage of mlir-headers to include all generated headers and adds an mlir-generic-headers target which triggers generation of dialect-independent headers. These targets are used to express dependencies on generated headers. This is mostly handled in AddMLIR.cmake and only a few CMakeLists.txt files need to change.
Differential Revision: https://reviews.llvm.org/D79242
These libraries are distinct from other things in Analysis in that they
operate only on core IR concepts. This also simplifies dependencies
so that Dialect -> Analysis -> Parser -> IR. Previously, the parser depended
on portions of the Analysis directory as well, which sometimes
caused issues with the way the cmake makefile generator discovers
dependencies on generated files during compilation.
Differential Revision: https://reviews.llvm.org/D79240
Makes the relationship and function clearer. Accordingly rename getAttrList to getMutableAttrDict.
Differential Revision: https://reviews.llvm.org/D79125
This revision refactors the structure of the operand storage such that there is no additional memory cost for resizable operand lists until it is required. This is done by using two different internal representations for the operand storage:
* One using trailing operands
* One using a dynamically allocated std::vector<OpOperand>
This allows for removing the resizable operand list bit, and will free up APIs from needing to workaround non-resizable operand lists.
Differential Revision: https://reviews.llvm.org/D78875