Friendlier wrapper for transform.foreach.
To facilitate that friendliness, makes it so that OpResult.owner returns
the relevant OpView instead of Operation. For good measure, also changes
Value.owner to return OpView instead of Operation, thereby ensuring
consistency. That is, makes it so that all op-returning .owner
accessors return OpView (and thereby give access to all the goodies
available on registered OpViews).
Reland of #171544 due to fixup for integration test.
Friendlier wrapper for `transform.foreach`.
To facilitate that friendliness, makes it so that `OpResult.owner`
returns the relevant `OpView` instead of `Operation`. For good measure,
also changes `Value.owner` to return `OpView` instead of `Operation`,
thereby ensuring consistency. That is, makes it so that all
op-returning `.owner` accessors return `OpView` (and thereby give access
to all the goodies available on registered `OpView`s).
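A minimal sketch of what this enables (my own example, not taken from the patch's tests; it assumes the usual upstream Python packaging where the arith dialect is registered):
```python
from mlir import ir
from mlir.dialects import arith

with ir.Context():
    m = ir.Module.parse("""
      %c = arith.constant 1 : i32
      %s = arith.addi %c, %c : i32
    """)
    add = m.body.operations[1]
    # .owner now returns the registered OpView (arith.AddIOp) rather than a
    # plain ir.Operation, so named accessors such as lhs/rhs are available.
    owner = add.results[0].owner
    assert isinstance(owner, arith.AddIOp)
    print(owner.lhs, owner.rhs)
```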
This bug was introduced by #108323, where the loc and ip were not
properly set. It may lead to errors when operations are not inserted
into the IR in linear order.
There were two bugs lurking in `mlir.ir.loc_tracebacks()`:
1) The default None parameter was not handled correctly (it was passed
to a C++ function that expects ints).
2) The `yield` was incorrectly indented, meaning `loc_tracebacks()`
could not be nested (a "generator didn't yield" exception would be
raised).
Added testing of `loc_tracebacks()` by replacing the custom context
manager in the auto_location.py test with the `loc_tracebacks()` API.
Had to harden the test against line number differences.
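A rough sketch of what now works (my own example, not from the patch; it assumes the context-manager form referenced above and an implicit location in scope):
```python
from mlir import ir
from mlir.dialects import arith

with ir.Context(), ir.Location.unknown():
    m = ir.Module.create()
    with ir.InsertionPoint(m.body):
        # Exercises both fixes: no explicit frame-limit argument (the None
        # default) and a nested loc_tracebacks() scope, which previously
        # raised "generator didn't yield".
        with ir.loc_tracebacks():
            with ir.loc_tracebacks():
                arith.ConstantOp(ir.IndexType.get(), 1)
    m.operation.print(enable_debug_info=True)  # shows the captured locations
```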
---------
Co-authored-by: James Molloy <jmolloy@google.com>
Disallow implicit casting, which is surprising, and, IME, usually
indicative of copy-paste errors.
Because the initial value must be a scalar, I don't expect this to
affect any data movement.
The C++ index switch op has utilities for `getCaseBlock(int i)` and
`getDefaultBlock()`, so these have been added.
Optional body builder args have been added: one for the default case and
one for the switch cases.
Updates the derived Op-classes for the main transform ops to have all
the arguments, etc., of the auto-generated classes. Additionally,
updates and adds missing snake_case wrappers for the derived classes;
these shadow the snake_case wrappers of the auto-generated classes,
which were hitherto exposed alongside the derived classes.
Adds the first XeGPU transform op, `xegpu.set_desc_layout`, which attaches a `xegpu.layout` attribute to the descriptor that a `xegpu.create_nd_tdesc` op returns.
Add builders on the Python side that match the builders on the C++ side, add tests for launching GPU kernels and regions, and correct some small documentation mistakes. This reflects the API decisions already made in the func dialect's Python bindings and makes working with the GPU dialect's bindings more similar to the C++ interface.
By allowing `transform.smt.constrain_params`'s region to yield SMT-vars,
op instances can declare relationships, through constraints, on incoming
params-as-SMT-vars and outgoing SMT-vars-as-params. This makes it
possible to declare that computations on params should be performed.
The semantics are that the yielded SMT-vars should be from any valid
satisfying assignment/model of the constraints in the region.
Adds initial support for Python bindings to the OpenACC dialect.
* The bindings do not provide any niceties yet, just the barebones
exposure of the dialect to Python. Construction of OpenACC ops is
therefore verbose and somewhat inconvenient, as evidenced by the test.
* The test only constructs one module, but I attempted to use enough
operations to be meaningful. It does not test all the ops exposed, but
does contain a realistic example of a memcpy idiom.
The func dialect provides a more pythonic interface for constructing
operations, but the gpu dialect does not; this is the first PR to
provide the same conveniences for the gpu dialect, starting with the
gpu.func op.
Changes to linalg `structured.fuse` transform op:
* Adds an optional `use_forall` boolean argument which generates a tiled
`scf.forall` loop instead of `scf.for` loops.
* `tile_sizes` can now be any parameter or handle.
* `tile_interchange` can now be any parameter or handle.
* IR formatting changes from `transform.structured.fuse %0 [4, 8] ...`
to `transform.structured.fuse %0 tile_sizes [4, 8] ...`
* Boolean arguments are now `UnitAttr`s and should be set via the op's
attr-dict: `{apply_cleanup, use_forall}`.
This is a follow-up PR for #162699.
Currently, in the function where we define rewrite patterns, the `op` we
receive is of type `ir.Operation` rather than a specific `OpView` type
(such as `arith.AddIOp`). This means we can’t conveniently access
certain parts of the operation — for example, we need to use
`op.operands[0]` instead of `op.lhs`. The following example code
illustrates this situation.
```python
def to_muli(op, rewriter):
    # op is typed ir.Operation instead of arith.AddIOp
    pass

patterns.add(arith.AddIOp, to_muli)
```
In this PR, we convert the operation to its corresponding `OpView`
subclass before invoking the rewrite pattern callback, making it much
easier to write patterns.
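A sketch of the same pattern after this change (mirroring the example above; the rewriter body is elided):
```python
def to_muli(op: arith.AddIOp, rewriter):
    # op now arrives as arith.AddIOp, so the named accessors are available
    # instead of op.operands[0] / op.operands[1].
    lhs, rhs = op.lhs, op.rhs
    ...

patterns.add(arith.AddIOp, to_muli)
```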
---------
Co-authored-by: Maksim Levental <maksim.levental@gmail.com>
This op enables expressing uncertainty regarding what should be
happening at particular places in transform-dialect schedules. In
particular, it enables representing a choice among alternative regions.
This choice is resolved through providing a `selected_region` argument.
When this argument is provided, the semantics are such that it is valid
to rewrite the op through substituting in the selected region -- with
the op's interpreted semantics corresponding to exactly this.
This op represents another piece of the puzzle w.r.t. a toolkit for
expressing autotuning problems with the transform dialect. Note that
this goes beyond tuning knobs _on_ transforms, going further by making
it tunable which (sequences of) transforms are to be applied.
Transform op to request that a tensor value live in a specific memory
space after bufferization.
Co-authored-by: Nicolas Vasilache <Nico.Vasilache@amd.com>
Co-authored-by: Alex Zinenko <ftynse@gmail.com>
Stubgen doesn't work when cross-compiling (stubgen will run in the host
interpreter and then fail to find the extension module for the host
arch). So disable it when `CMAKE_CROSSCOMPILING=ON`.
Introduces a Transform-dialect SMT-extension so that we can have an op
to express constraints on Transform-dialect params, in particular when
these params are knobs -- see transform.tune.knob -- and can hence be
seen as symbolic variables. This op allows expressing joint constraints
over multiple params/knobs together.
While the op's semantics are clearly defined, per SMT-LIB, the interpreted
semantics -- i.e. the `apply()` method -- for now just default to failure. In
the future we should support attaching an implementation so that users
can bring their own solver and thereby control the performance of
interpreting the op. For now the main usage is to walk schedule IR and
collect these constraints so that knobs can be rewritten to constants that
satisfy the constraints.
This is a reland of https://github.com/llvm/llvm-project/pull/155741 which
was reverted at https://github.com/llvm/llvm-project/pull/157831. This
version is narrower in scope - it only turns on automatic stub
generation for `MLIRPythonExtension.Core._mlir` and **does not do
anything automatically**. Specifically, the only CMake code added to
`AddMLIRPython.cmake` is the `mlir_generate_type_stubs` function which
is then used only in a manual way. The API for
`mlir_generate_type_stubs` is:
```
Arguments:
MODULE_NAME: The fully-qualified name of the extension module (used for importing in python).
DEPENDS_TARGETS: List of targets these type stubs depend on being built; usually corresponding to the
specific extension module (e.g., something like StandalonePythonModules.extension._standaloneDialectsNanobind.dso)
and the core bindings extension module (e.g., something like StandalonePythonModules.extension._mlir.dso).
OUTPUT_DIR: The root output directory to emit the type stubs into.
OUTPUTS: List of expected outputs.
DEPENDS_TARGET_SRC_DEPS: List of cpp sources for extension library (for generating a DEPFILE).
IMPORT_PATHS: List of paths to add to PYTHONPATH for stubgen.
PATTERN_FILE: (Optional) Pattern file (see https://nanobind.readthedocs.io/en/latest/typing.html#pattern-files).
Outputs:
NB_STUBGEN_CUSTOM_TARGET: The target corresponding to generation which other targets can depend on.
```
Downstream users should use `mlir_generate_type_stubs` in coordination
with `declare_mlir_python_sources` to turn on stub generation for their
own downstream dialect extensions and upstream dialect extensions if
they so choose. The Standalone example demonstrates this.
Note, downstream will also need to set
`-DMLIR_PYTHON_PACKAGE_PREFIX=...` correctly for their bindings.
In this PR we add basic Python bindings for the IRDL dialect, so that
Python users can create and load IRDL dialects from Python. This allows users, to
some extent, to define dialects in Python without having to modify
MLIR’s CMake/TableGen/C++ code and rebuild, making prototyping more
convenient.
A basic example is shown below (and also in the added test case):
```python
# create a module with IRDL dialects
module = Module.create()
with InsertionPoint(module.body):
    dialect = irdl.DialectOp("irdl_test")
    with InsertionPoint(dialect.body):
        op = irdl.OperationOp("test_op")
        with InsertionPoint(op.body):
            f32 = irdl.is_(TypeAttr.get(F32Type.get()))
            irdl.operands_([f32], ["input"], [irdl.Variadicity.single])
# load the module
irdl.load_dialects(module)
# use the op defined in IRDL
m = Module.parse("""
module {
  %a = arith.constant 1.0 : f32
  "irdl_test.test_op"(%a) : (f32) -> ()
}
""")
```
In https://github.com/llvm/llvm-project/pull/155741 I broke the cardinal
rule of CMake: nothing happens when you think it happens 🤷. Meaning:
`declare_mlir_python_sources(SOURCES_GLOB
"_mlir_libs/${_module_name}/**/*.pyi")` wasn't picking up any sources
_because they hadn't been generated yet_. This of course makes sense in
retrospect (the stubs are generated as part of the build process, post
extension compile, rather than during the configure process).
Thus, the API needs to be:
```
GENERATE_TYPE_STUBS: List of generated type stubs expected from stubgen, relative to _mlir_libs.
```
Partially as a result of this omission, the stubs weren't being
installed into either the build dir or the install dir. That is also
fixed now:
**Source dir (for easy reference):**
![source dir](https://github.com/user-attachments/assets/a569f066-c2bd-4361-91f3-1c75181e51da)
**Build dir (for forthcoming typechecker tests):**
![build dir](https://github.com/user-attachments/assets/017859f9-fddb-49ee-85e5-915f5b5f7377)
**Install dir:**
![install dir](https://github.com/user-attachments/assets/8051be7e-898c-4ec8-a11e-e2408b241a56)
This PR turns on automatic type stub generation (rather than relying on
hand-written/updated stubs). It uses nanobind's [stubgen
facility](https://nanobind.readthedocs.io/en/latest/typing.html#stub-generation).
If you would like to enable this functionality, you can add
`GENERATE_TYPE_STUBS` to `declare_mlir_python_extension`.
Retry landing https://github.com/llvm/llvm-project/pull/153373
## Major changes from previous attempt
- remove the test in CAPI because no existing tests in CAPI deal with
sanitizer exemptions
- update `mlir/docs/Dialects/GPU.md` to reflect the new behavior: load
GPU binaries in global ctors instead of loading them at the call site
- skip the test on AArch64 since we have an issue with initialization there
---------
Co-authored-by: Mehdi Amini <joker.eph@gmail.com>
This PR implements "automatic" location inference in the bindings. The
way it works is that it walks the frame stack, collecting source locations
(Python captures these in the frame itself). It is inspired by JAX's
[implementation](523ddcfbca/jax/_src/interpreters/mlir.py (L462))
but moves the frame stack traversal into the bindings for better
performance.
The system supports registering "included" and "excluded" filenames;
frames originating from functions in included filenames **will not** be
filtered and frames originating from functions in excluded filenames
**will** be filtered (in that order). This allows excluding all the
generated `*_ops_gen.py` files.
The system is also "toggleable" and off by default to save people who
have their own systems (such as JAX) from the added cost.
Note, the system stores the entire stacktrace (subject to
`locTracebackFramesLimit`) in the `Location` using specifically a
`CallSiteLoc`. This can be useful for profiling tools (flamegraphs
etc.).
Shoutout to the folks at JAX for coming up with a good system.
---------
Co-authored-by: Jacques Pienaar <jpienaar@google.com>
This patch specializes the Python bindings for ForallOp and
InParallelOp, similar to the existing one for ForOp. These bindings
create the regions and blocks properly and expose some additional
helpers.
- Introduces a `large_resource_limit` parameter across Python bindings,
enabling the eliding of resource strings exceeding a specified character
limit during IR printing.
- To maintain backward compatibility, when using the `operation.print()`
API, if `large_resource_limit` is None and `large_elements_limit` is set,
the latter will be used to elide the resource strings as well. This
behavior was introduced by https://github.com/llvm/llvm-project/pull/125738.
- For printing via the pass manager, `large_resource_limit` and
`large_elements_limit` are completely independent of each other.
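A minimal sketch of the printing behavior described above (my own example; only the parameter names given in this description are assumed):
```python
from mlir import ir

with ir.Context():
    m = ir.Module.parse("module {}")
    # Old behavior, kept for backward compatibility: the elements limit also
    # elides resource strings above the limit when no resource limit is given.
    m.operation.print(large_elements_limit=16)
    # New parameter: elide only resource strings longer than the limit.
    m.operation.print(large_resource_limit=16)
```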