Commit Graph

18 Commits

Author SHA1 Message Date
Nicolas Vasilache
21638dcda9 [MLIR] Extend vectorization to 2+-D patterns
This CL adds support for vectorization using more interesting 2-D and 3-D
patterns. Note in particular that we match some fairly complex, imperfectly
nested 2-D patterns with quite a minimal change to the implementation: we just
add a bit of recursion to traverse the matched patterns and actually vectorize
the loops.

For instance, vectorizing the following loop by 128:
```
for %i3 = 0 to %0 {
  %7 = affine_apply (d0) -> (d0)(%i3)
  %8 = load %arg0[%c0_0, %7] : memref<?x?xf32>
}
```

Currently generates:
```
#map0 = ()[s0] -> (s0 + 127)
#map1 = (d0) -> (d0)
for %i3 = 0 to #map0()[%0] step 128 {
  %9 = affine_apply #map1(%i3)
  %10 = alloc() : memref<1xvector<128xf32>>
  %11 = "n_d_unaligned_load"(%arg0, %c0_0, %9, %10, %c0) :
    (memref<?x?xf32>, index, index, memref<1xvector<128xf32>>, index) ->
    (memref<?x?xf32>, index, index, memref<1xvector<128xf32>>, index)
   %12 = load %10[%c0] : memref<1xvector<128xf32>>
}
```

The above is subject to evolution.

PiperOrigin-RevId: 219629745
2019-03-29 13:46:58 -07:00
River Riddle
4c465a181d Implement value type abstraction for types.
This is done by changing Type to be a POD interface around an underlying pointer storage and adding in-class support for isa/dyn_cast/cast.
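
As a rough sketch of the usage this enables (illustrative only; the header path and the helper below are assumptions, not code from this CL):

```cpp
#include <cstdint>
#include "mlir/IR/Types.h"  // assumed header; MemRefType's header omitted

using namespace mlir;

// Type is now copied and passed by value: its only member is a pointer to
// storage uniqued in the MLIRContext, so copies are trivially cheap.
int64_t rankOrMinusOne(Type type) {
  // In-class isa/dyn_cast/cast replace the pointer-based LLVM casts.
  if (auto memRefType = type.dyn_cast<MemRefType>())
    return memRefType.getRank();
  return -1;
}
```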

PiperOrigin-RevId: 219372163
2019-03-29 13:45:54 -07:00
Nicolas Vasilache
af7f56fdf8 [MLIR] Implement 1-D vectorization for fastest varying load/stores
This CL is a first in a series that implements early vectorization of
increasingly complex patterns. In particular, early vectorization will support
arbitrary loop nesting patterns (both perfectly and imperfectly nested), at
arbitrary depths in the loop tree.

This first CL builds the minimal support for applying 1-D patterns.
It relies on an unaligned load/store op abstraction that can be implemented
differently on different HW.
Future CLs will support higher-dimensional patterns, but 1-D patterns already
exhibit interesting properties.
In particular, we want to separate pattern matching (i.e. legality, based on
both structural and dependency analysis) from profitability analysis and from
application of the transformation.
As a consequence, patterns may intersect and we need to verify that a pattern
can still apply by the time we get to applying it.

A non-greedy analysis on profitability that takes into account pattern
intersection is left for future work.

Additionally the CL makes the following cleanups:
1. the matches method now returns a value, not a reference;
2. added comments about the MLFunctionMatcher and MLFunctionMatches usage by
value;
3. added size and empty methods to matches;
4. added a negative vectorization test with a conditional; this exhibited a
bug in the iterators. Iterators now return nullptr if the underlying storage
is nullptr.

PiperOrigin-RevId: 219299489
2019-03-29 13:44:26 -07:00
Uday Bondhugula
bdfd6193b8 Add getMemRefType() accessors to LoadOp/StoreOp.
- There are several places where we are casting the type of the memref obtained
  from the load/store op to a memref type, and this will become even more
  common (some upcoming CLs this week). Add a getMemRefType and use it at
  several places where the cast was being used.
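
A hypothetical before/after at a use site (accessor spellings other than getMemRefType(), and the missing includes for LoadOp/MemRefType, are assumptions for illustration):

```cpp
// Before: every call site re-casts the memref operand's type.
MemRefType getMemRefTypeOld(LoadOp loadOp) {
  return loadOp.getMemRef().getType().cast<MemRefType>();
}

// After: the op exposes the already-cast type directly.
MemRefType getMemRefTypeNew(LoadOp loadOp) {
  return loadOp.getMemRefType();
}
```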

PiperOrigin-RevId: 219164326
2019-03-29 13:43:58 -07:00
Feng Liu
34927e2474 Rename Operation::getAs to Operation::dyn_cast
Also rename Operation::is to Operation::isa
Introduce Operation::cast

All of these are for consistency with global dyn_cast/cast/isa operators.
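
For illustration, the renamed members read like this (the op kinds used here are placeholders):

```cpp
void inspect(Operation *op) {
  if (op->isa<LoadOp>()) {                  // was: op->is<LoadOp>()
    auto load = op->cast<LoadOp>();         // new asserting form
    (void)load;
  }
  if (auto store = op->dyn_cast<StoreOp>()) // was: op->getAs<StoreOp>()
    (void)store;
}
```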

PiperOrigin-RevId: 217878786
2019-03-29 13:33:41 -07:00
Uday Bondhugula
18e666702c Generalize / improve DMA transfer overlap; nested and multiple DMA support; resolve
multiple TODOs.

- replace the fake test pass (that worked on just the first loop in the
  MLFunction) with one that performs DMA pipelining on all suitable loops.
- nested DMAs work now (DMAs in an outer loop, more DMAs in nested inner loops)
- fix bugs / assumptions: correctly copy memory space and elemental type of source
  memref for double buffering.
- correctly identify matching start/finish statements, handle multiple DMAs per
  loop.
- introduce dominates/properlyDominates utilities for MLFunction statements.
- move checkDominancePreservationOnShifts to LoopAnalysis.h; rename it
  getShiftValidity
- refactor getContainingStmtPos -> findAncestorStmtInBlock - move into
  Analysis/Utils.h; has two users.
- other improvements / cleanup for related API/utilities
- add a size argument to dma_wait - for nested DMAs or in general, it makes it
  easy to obtain the size to use when lowering the dma_wait, since we wouldn't
  want to have to identify the matching dma_start; more importantly, in
  general/in the future, there may not always be a dma_start dominating the
  dma_wait.
- add debug information in the pass

PiperOrigin-RevId: 217734892
2019-03-29 13:32:28 -07:00
Nicolas Vasilache
3013dadb7c [MLIR] Basic infrastructure for vectorization test
This CL implements a very simple loop vectorization **test** and the basic
infrastructure to support it.

The test simply consists of:
1. matching the loops in the MLFunction and all the Load/Store operations
nested under the loop;
2. testing whether all the Load/Stores are contiguous along the innermost
memory dimension along that particular loop. If any reference is
non-contiguous (i.e. the ForStmt SSAValue appears in the expression), then
the loop is not vectorizable.

The simple test above can gradually be extended with more interesting
behaviors to account for the fact that a layout permutation may exist that
enables contiguity etc. All these will come in due time, but it is worthwhile
noting that the test already supports detection of outer-vectorizable loops.

In implementing this test, I also added a recursive MLFunctionMatcher and some
sugar that can capture patterns
such as `auto gemmLike = Doall(Doall(Red(LoadStore())))` and allows iterating
on the matched IR structures. For now it just uses in-order traversal, but
post-order DFS will be useful in the future once IR rewrites start occurring.

One may note that the memory management design decision follows a different
pattern from MLIR. After evaluating different designs and how they quickly
increase cognitive overhead, I decided to opt for the simplest solution in my
view: a class-wide (threadsafe) RAII context.

This way, a pass that needs MLFunctionMatcher can just have its own locally
scoped BumpPtrAllocator and everything is cleaned up when the pass is destroyed.
If passes are expected to have a longer lifetime, then the contexts can easily
be scoped inside the runOnMLFunction call and storage lifetime reduced.
Lastly, whatever the scope of threading (module, function, pass), this is
expected to also be future-proof wrt concurrency (but this is a detail atm).
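
A minimal sketch of that RAII-context pattern, with hypothetical names (this is not the actual MLFunctionMatcher code):

```cpp
#include "llvm/Support/Allocator.h"

// Hypothetical class-wide, thread-safe allocation context.
class MatcherContext {
public:
  explicit MatcherContext(llvm::BumpPtrAllocator &alloc) { current() = &alloc; }
  ~MatcherContext() { current() = nullptr; }

  // Matcher internals allocate through whichever allocator is in scope.
  static llvm::BumpPtrAllocator &get() { return *current(); }

private:
  // One slot per thread keeps the scheme thread-safe.
  static llvm::BumpPtrAllocator *&current() {
    static thread_local llvm::BumpPtrAllocator *ctx = nullptr;
    return ctx;
  }
};

// Usage in a pass: matcher storage lives exactly as long as this invocation.
void runOnMLFunctionSketch() {
  llvm::BumpPtrAllocator allocator;
  MatcherContext scope(allocator);
  // ... build MLFunctionMatcher patterns, iterate over matches ...
}  // allocator and all matcher storage are released here
```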

PiperOrigin-RevId: 217622889
2019-03-29 13:32:13 -07:00
Nicolas Vasilache
1d3e7e2616 [MLIR] AffineMap value type
This CL applies the same pattern as AffineExpr to AffineMap: a simple struct
that acts as the storage is allocated in the bump pointer. The AffineMap is
immutable and accessed everywhere by value.

PiperOrigin-RevId: 216445930
2019-03-29 13:26:24 -07:00
Nicolas Vasilache
8ebb6ff171 [MLIR] Sketch AffineExpr value type
This CL sketches what it takes for AffineExpr to fully have by-value semantics
and not be a not-so-smart pointer anymore.

This essentially makes the underlying class a simple storage struct and
implements the operations on the value type directly. Since there is no
forwarding of operations anymore, we can fully isolate the storage class and
make a hard visibility barrier by moving detail::AffineExpr into
AffineExprDetail.h.

AffineExprDetail.h is only included where storage-related information is
needed.
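
Schematically, the layering ends up looking something like this (field and method names are illustrative, not the actual ones):

```cpp
// AffineExprDetail.h: storage struct, included only by the few .cpp files
// that need to allocate or inspect it.
namespace detail {
struct AffineExprStorage { /* kind, operands, ... */ };
} // namespace detail

// AffineExpr.h: the public value type; operations are implemented directly on
// it, and its only state is a pointer to immutable, uniqued storage.
class AffineExpr {
public:
  bool operator==(AffineExpr other) const { return expr == other.expr; }
  AffineExpr operator+(AffineExpr rhs) const; // defined in AffineExpr.cpp

private:
  detail::AffineExprStorage *expr = nullptr;
};
```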

PiperOrigin-RevId: 216385459
2019-03-29 13:25:42 -07:00
Nicolas Vasilache
6707c7bea1 [MLIR] AffineExpr final cleanups
This CL:
1. performs the global codemod AffineXExpr->AffineXExprClass and
AffineXExprRef -> AffineXExpr;
2. simplifies function calls by removing the redundant MLIRContext parameter;
3. adds the missing binary operator overloads between scalars and AffineExpr
where it makes sense.
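
For example, with the scalar overloads in place, expressions compose directly on the value type (a sketch; it assumes the free builder functions introduced in the preceding AffineExpr cleanup CL, shown further down this log):

```cpp
AffineExpr buildExpr(MLIRContext *context) {
  AffineExpr d0 = getAffineDimExpr(0, context);
  AffineExpr s0 = getAffineSymbolExpr(0, context);
  // Scalar operands no longer need an explicit getAffineConstantExpr wrapper.
  return d0 * 128 + s0 - 1;
}
```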

PiperOrigin-RevId: 216242674
2019-03-29 13:25:14 -07:00
Nicolas Vasilache
ce2edea135 [MLIR] Cleanup AffineExpr
This CL introduces a series of cleanups for AffineExpr value types:
1. to make it clear that the value types should be used, the pointer
AffineExpr types are put in the detail namespace. Unfortunately, since the
value type operator-> only forwards to the underlying pointer type, we still
need to expose this in the include file for now;
2. AffineExprKind is fine to use, so it comes out of detail and thus out of
AffineExpr;
3. getAffineDimExpr, getAffineSymbolExpr, getAffineConstantExpr are similarly
extracted as free functions and their naming is made consistent across
Builder, MLIRContext and AffineExpr;
4. the AffineBinaryOpExpr::simplify functions are made into static free
functions. In particular, they are moved away from AffineMap.cpp, where they
do not belong;
5. operator AffineExprType is made explicit;
6. uses the binary operators everywhere possible;
7. drops the pointer usage everywhere outside of AffineExpr.cpp,
MLIRContext.cpp and AsmPrinter.cpp.

PiperOrigin-RevId: 216207212
2019-03-29 13:24:45 -07:00
Nicolas Vasilache
4911978f7e [MLIR] Value types for AffineXXXExpr
This CL makes AffineExprRef into a value type.

Notably:
1. drops llvm isa, cast, dyn_cast on the pointer type and uses member functions
on the value type. It may be possible to still use classof (in a follow-up CL);
2. AffineBaseExprRef aggressively casts constness away: if we mean the type is
immutable then let's jump in with both feet;
3. drops implicit casts to the underlying pointer type because that always
results in surprising behavior and is not needed in practice once enough
cleanup has been applied.

The remaining negative I see is that we still need to mix operator. and
operator->. There is an ugly solution that forwards the methods, but that ends
up duplicating the class hierarchy, which I tried to avoid as much as
possible. But maybe it's not that bad anymore, since AffineExpr.h would still
contain a single class hierarchy (the duplication would be an impl detail in
the .cpp).

PiperOrigin-RevId: 216188003
2019-03-29 13:24:31 -07:00
Nicolas Vasilache
5b8017db18 [MLIR] Templated AffineExprBaseRef
This CL implements AffineExprBaseRef as a templated type to allow LLVM-style
casts to work properly. This also allows making AffineExprBaseRef::expr
private.

To achieve this, it is necessary to use llvm::simplify_type and make
AffineConstExpr derive from both AffineExpr and llvm::simplify<AffineExprRef>.
Note that llvm::simplify_type is just an interface to enable the proper
template resolution of isa/cast/dyn_cast but it otherwise holds no value.
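
A self-contained sketch of the simplify_type hook applied to a by-value wrapper (placeholder names; not the actual AffineExprBaseRef code):

```cpp
#include "llvm/Support/Casting.h"

// Placeholder pointer hierarchy standing in for the AffineExpr classes.
struct ExprStorage {
  enum class Kind { Constant, Dim } kind;
};
struct ConstExprStorage : ExprStorage {
  static bool classof(const ExprStorage *e) { return e->kind == Kind::Constant; }
};

// The by-value wrapper: its only state is the underlying pointer.
class ExprRef {
public:
  explicit ExprRef(ExprStorage *e) : expr(e) {}
  ExprStorage *expr;
};

namespace llvm {
// Teach isa/cast/dyn_cast to look through the wrapper to the pointer.
template <> struct simplify_type<ExprRef> {
  using SimpleType = ExprStorage *;
  static SimpleType getSimplifiedValue(ExprRef &ref) { return ref.expr; }
};
} // namespace llvm

// llvm::isa<ConstExprStorage>(ref) and llvm::dyn_cast<ConstExprStorage>(ref)
// now resolve through the hook even though ExprRef itself is not a pointer.
```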

Lastly, note that certain dyn_cast operations wanted the const AffineExpr* form
of AffineExprBaseRef, so I made the implicit constructor take that by default
and documented the immutable behavior. I think this is consistent with the
decision to make uniqued types immutable by convention and never use const on
them.

PiperOrigin-RevId: 215642247
2019-03-29 13:22:49 -07:00
Nicolas Vasilache
544f5e7a9b [MLIR] Remove uses of AffineExpr* outside of IR
This CL makes the uses of AffineExprWrap outside of IR uniform.
The public API of the AffineExpr builder is modified to only use AffineExprWrap.
A few places still access AffineExprWrap.expr; this is only while the API is in
transition, to easily keep track (i.e. make expr private and let the compiler
track the errors).

Parser.cpp exhibits patterns that are dependent on nullptr values so
converting it is left for another CL.

PiperOrigin-RevId: 215642005
2019-03-29 13:22:35 -07:00
Uday Bondhugula
0ebc927f2f Fix MLIR's floordiv, ceildiv, and mod for constant inputs (for negative lhs's)
- introduce mlir::{floorDiv, ceilDiv, mod} for constant inputs in
  mlir/Support/MathExtras.h
- consistently use these everywhere in IR, Analysis, and Transforms.
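
For reference, a generic reimplementation of the intended rounding behavior (a sketch; not the actual MathExtras.h code):

```cpp
#include <cassert>
#include <cstdint>

// Floor division: round toward negative infinity rather than toward zero.
int64_t floorDiv(int64_t lhs, int64_t rhs) {
  int64_t q = lhs / rhs, r = lhs % rhs;
  return (r != 0 && (r < 0) != (rhs < 0)) ? q - 1 : q;
}

// Ceil division: round toward positive infinity.
int64_t ceilDiv(int64_t lhs, int64_t rhs) {
  int64_t q = lhs / rhs, r = lhs % rhs;
  return (r != 0 && (r < 0) == (rhs < 0)) ? q + 1 : q;
}

// Modulo whose result takes the sign of rhs (non-negative for positive rhs).
int64_t mod(int64_t lhs, int64_t rhs) {
  int64_t r = lhs % rhs;
  return (r != 0 && (r < 0) != (rhs < 0)) ? r + rhs : r;
}

int main() {
  // The cases this CL fixes: negative lhs.
  assert(floorDiv(-7, 2) == -4 && ceilDiv(-7, 2) == -3 && mod(-7, 2) == 1);
  assert(floorDiv(7, 2) == 3 && ceilDiv(7, 2) == 4 && mod(7, 2) == 1);
  return 0;
}
```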

PiperOrigin-RevId: 215580677
2019-03-29 13:21:53 -07:00
Uday Bondhugula
ec35e51f6d Change loop step to be a positive integral constant
Changing this per discussion on mlir-team. Spec updated.

PiperOrigin-RevId: 214868483
2019-03-29 13:21:13 -07:00
Uday Bondhugula
ab4797229c Extend loop unroll/unroll-and-jam to affine bounds + refactor related code.
- extend loop unroll-jam, similarly to loop unroll, to affine bounds
- extend both loop unroll/unroll-jam to deal with the cleanup loop when the
  trip count is not a multiple of the unroll factor.
- extend promotion of single iteration loops to work with affine bounds
- fix typo bugs in loop unroll
- refactor common code b/w loop unroll and loop unroll-jam
- move prototypes of non-pass transforms to LoopUtils.h
- add additional builder methods.
- introduce loopUnrollUpTo(factor) to unroll by either factor or trip count,
  whichever is less.
- remove Statement::isInnermost (not used for now - will come back at the right
  place/in right form later)

PiperOrigin-RevId: 213471227
2019-03-29 13:15:06 -07:00
Uday Bondhugula
64812a56c7 Extend getConstantTripCount to deal with a larger subset of loop bounds; make loop
unroll/unroll-and-jam more powerful; add additional affine expr builder methods

- use previously added analysis/simplification to infer multiple of unroll
  factor trip counts, making loop unroll/unroll-and-jam more general.

- for loop unroll, support bounds that are single-result affine maps with the
  same set of operands. For unknown loop bounds, loop unroll will now work as
  long as the trip count can be determined to be a multiple of the unroll factor.

- extend getConstantTripCount to deal with single-result affine maps with the
  same operands; move it to mlir/Analysis/LoopAnalysis.cpp

- add additional builder utility methods for affine expr arithmetic
  (difference, mod/floordiv/ceildiv w.r.t. a positive constant). Simplify code
  to use the utility methods.

- move affine analysis routines to AffineAnalysis.cpp/.h from
  AffineStructures.cpp/.h.

- Rename LoopUnrollJam to LoopUnrollAndJam to match class name.

- add an additional simplification for simplifyFloorDiv, simplifyCeilDiv

- Rename AffineMap::getNumOperands() to getNumInputs(): an affine map by itself
  does not have operands. Operands are passed to it through affine_apply, from
  loop bounds/if conditions, etc.; the operands are stored in the latter.

This should be sufficiently powerful for now as far as unroll/unroll-and-jam go
for TPU code generation, and we can move on to other analyses/transformations.

Loop nests like these are now unrolled without any cleanup loop being generated.

  for %i = 1 to 100 {
    // unroll factor 4: no cleanup loop will be generated.
    for %j = (d0) -> (d0) (%i) to (d0) -> (5*d0 + 3) (%i) {
      %x = "foo"(%j) : (affineint) -> i32
    }
  }

  for %i = 1 to 100 {
    // unroll factor 4: no cleanup loop will be generated.
    for %j = (d0) -> (d0) (%i) to (d0) -> (d0 - d0 mod 4 - 1) (%i) {
      %y = "foo"(%j) : (affineint) -> i32
    }
  }

  for %i = 1 to 100 {
    for %j = (d0) -> (d0) (%i) to (d0) -> (d0 + 128) (%i) {
      %x = "foo"() : () -> i32
    }
  }

TODO(bondhugula): extend this to LoopUnrollAndJam as well in the next CL (with minor
changes).

PiperOrigin-RevId: 212661212
2019-03-29 13:13:00 -07:00