Summary:
Canonicalization and folding patterns in StandardOps may interfere with the needs
of Linalg. This revision introduces specific foldings for dynamic memrefs that can
be proven to be static.
Very concretely, the folding determines whether a memref_cast whose source shape is
static can be folded away into the parent Linalg op that consumes it, rewriting:
```mlir
%1 = memref_cast %0 : memref<8x16xf32> to memref<?x?xf32>
%2 = linalg.slice %1 ... : memref<?x?xf32> ...
// or
%1 = memref_cast %0 : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>
       to memref<?x?xf32>
linalg.generic(%1 ...) : memref<?x?xf32> ...
```
into
```mlir
%2 = linalg.slice %0 ... : memref<8x16xf32> ...
// or
linalg.generic(%0 ... : memref<8x16xf32, affine_map<(i, j)->(16 * i + j)>>)
```
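For reference, the logic looks roughly like the following minimal C++ sketch, written against the MLIR API of the time (memref_cast was still a Standard dialect op, `MemRefCastOp`); the helper names `canFoldIntoConsumer` and `foldMemRefCastOperands` are illustrative and not necessarily the exact symbols added by this revision. The cast is foldable when its source type is fully static, has the same rank as the result, agrees with every static extent of the result, and matches the layout map if the result carries one.

```cpp
#include "mlir/Dialect/StandardOps/IR/Ops.h" // MemRefCastOp (Standard dialect at the time)
#include "mlir/IR/Operation.h"

using namespace mlir;

/// Returns true if `castOp` only erases static shape information, i.e. its
/// source type is fully static and compatible with the result type, so a
/// consuming Linalg op can use the source value directly.
static bool canFoldIntoConsumer(MemRefCastOp castOp) {
  auto sourceType = castOp.getOperand().getType().dyn_cast<MemRefType>();
  auto resultType = castOp.getType().dyn_cast<MemRefType>();
  if (!sourceType || !resultType)
    return false;

  // If the result carries a layout map, it must match the source layout
  // (second example above).
  if (!resultType.getAffineMaps().empty() &&
      sourceType.getAffineMaps() != resultType.getAffineMaps())
    return false;

  // Require a fully static source of the same rank.
  if (!sourceType.hasStaticShape() ||
      sourceType.getRank() != resultType.getRank())
    return false;

  // Every static extent of the result must match the source extent.
  for (auto it : llvm::zip(sourceType.getShape(), resultType.getShape())) {
    int64_t resultSize = std::get<1>(it);
    if (!ShapedType::isDynamic(resultSize) && std::get<0>(it) != resultSize)
      return false;
  }
  return true;
}

/// Replaces provably-static memref_cast operands of a Linalg op with the
/// cast source, as in the rewrites shown above.
static LogicalResult foldMemRefCastOperands(Operation *op) {
  bool folded = false;
  for (OpOperand &operand : op->getOpOperands()) {
    auto castOp = operand.get().getDefiningOp<MemRefCastOp>();
    if (castOp && canFoldIntoConsumer(castOp)) {
      operand.set(castOp.getOperand());
      folded = true;
    }
  }
  return success(folded);
}
```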
Reviewers: ftynse, aartbik, jsetoain, tetuante, asaadaldien
Subscribers: mehdi_amini, rriddle, jpienaar, burmako, shauheen, antiagainst, arpith-jacob, mgester, lucyrfox, liufengdb, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D73565