//===- GPUOpsLowering.cpp - GPU FuncOp / ReturnOp lowering ----------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#include "GPUOpsLowering.h"

#include "mlir/Conversion/GPUCommon/GPUCommonPass.h"
#include "mlir/Conversion/LLVMCommon/VectorPattern.h"
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

#include "llvm/ADT/SmallVectorExtras.h"
#include "llvm/ADT/StringSet.h"
#include "llvm/Support/FormatVariadic.h"

using namespace mlir;
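
/// Returns the LLVM function `name` from `moduleOp`, creating an external
/// declaration with the given `type` at the start of the module if it does not
/// exist yet.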
LLVM::LLVMFuncOp mlir::getOrDefineFunction(gpu::GPUModuleOp moduleOp,
                                           Location loc, OpBuilder &b,
                                           StringRef name,
                                           LLVM::LLVMFunctionType type) {
  LLVM::LLVMFuncOp ret;
  if (!(ret = moduleOp.template lookupSymbol<LLVM::LLVMFuncOp>(name))) {
    OpBuilder::InsertionGuard guard(b);
    b.setInsertionPointToStart(moduleOp.getBody());
    ret = b.create<LLVM::LLVMFuncOp>(loc, name, type, LLVM::Linkage::External);
  }
  return ret;
}
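
/// Returns a symbol name of the form `<prefix><N>` that is not already defined
/// in `moduleOp`.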
static SmallString<16> getUniqueSymbolName(gpu::GPUModuleOp moduleOp,
                                           StringRef prefix) {
  // Get a unique global name.
  unsigned stringNumber = 0;
  SmallString<16> stringConstName;
  do {
    stringConstName.clear();
    (prefix + Twine(stringNumber++)).toStringRef(stringConstName);
  } while (moduleOp.lookupSymbol(stringConstName));
  return stringConstName;
}
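
/// Returns an existing constant global holding the null-terminated copy of
/// `str` with matching type, alignment, and address space, or creates a new
/// uniquely named one at the start of `moduleOp`.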
LLVM::GlobalOp
mlir::getOrCreateStringConstant(OpBuilder &b, Location loc,
                                gpu::GPUModuleOp moduleOp, Type llvmI8,
                                StringRef namePrefix, StringRef str,
                                uint64_t alignment, unsigned addrSpace) {
  llvm::SmallString<20> nullTermStr(str);
  nullTermStr.push_back('\0'); // Null terminate for C
  auto globalType =
      LLVM::LLVMArrayType::get(llvmI8, nullTermStr.size_in_bytes());
  StringAttr attr = b.getStringAttr(nullTermStr);

  // Try to find existing global.
  for (auto globalOp : moduleOp.getOps<LLVM::GlobalOp>())
    if (globalOp.getGlobalType() == globalType && globalOp.getConstant() &&
        globalOp.getValueAttr() == attr &&
        globalOp.getAlignment().value_or(0) == alignment &&
        globalOp.getAddrSpace() == addrSpace)
      return globalOp;

  // Not found: create new global.
  OpBuilder::InsertionGuard guard(b);
  b.setInsertionPointToStart(moduleOp.getBody());
  SmallString<16> name = getUniqueSymbolName(moduleOp, namePrefix);
  return b.create<LLVM::GlobalOp>(loc, globalType,
                                  /*isConstant=*/true, LLVM::Linkage::Internal,
                                  name, attr, alignment, addrSpace);
}
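
/// Lowers gpu.func to llvm.func: workgroup attributions become module-level
/// globals (or extra `llvm.ptr` arguments), private attributions become
/// `llvm.alloca`s, and the signature and argument attributes are converted.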
LogicalResult
GPUFuncOpLowering::matchAndRewrite(gpu::GPUFuncOp gpuFuncOp, OpAdaptor adaptor,
                                   ConversionPatternRewriter &rewriter) const {
  Location loc = gpuFuncOp.getLoc();

  SmallVector<LLVM::GlobalOp, 3> workgroupBuffers;
  if (encodeWorkgroupAttributionsAsArguments) {
    // Append an `llvm.ptr` argument to the function signature to encode
    // workgroup attributions.

    ArrayRef<BlockArgument> workgroupAttributions =
        gpuFuncOp.getWorkgroupAttributions();
    size_t numAttributions = workgroupAttributions.size();

    // Insert all arguments at the end.
    unsigned index = gpuFuncOp.getNumArguments();
    SmallVector<unsigned> argIndices(numAttributions, index);

    // New arguments will simply be `llvm.ptr` with the correct address space
    Type workgroupPtrType =
        rewriter.getType<LLVM::LLVMPointerType>(workgroupAddrSpace);
    SmallVector<Type> argTypes(numAttributions, workgroupPtrType);

    // Attributes: noalias, llvm.mlir.workgroup_attribution(<size>, <type>)
    std::array attrs{
        rewriter.getNamedAttr(LLVM::LLVMDialect::getNoAliasAttrName(),
                              rewriter.getUnitAttr()),
        rewriter.getNamedAttr(
            getDialect().getWorkgroupAttributionAttrHelper().getName(),
            rewriter.getUnitAttr()),
    };
    SmallVector<DictionaryAttr> argAttrs;
    for (BlockArgument attribution : workgroupAttributions) {
      auto attributionType = cast<MemRefType>(attribution.getType());
      IntegerAttr numElements =
          rewriter.getI64IntegerAttr(attributionType.getNumElements());
      Type llvmElementType =
          getTypeConverter()->convertType(attributionType.getElementType());
      if (!llvmElementType)
        return failure();
      TypeAttr type = TypeAttr::get(llvmElementType);
      attrs.back().setValue(
          rewriter.getAttr<LLVM::WorkgroupAttributionAttr>(numElements, type));
      argAttrs.push_back(rewriter.getDictionaryAttr(attrs));
    }

    // Argument locations match the function location.
    SmallVector<Location> argLocs(numAttributions, gpuFuncOp.getLoc());

    // Perform the signature modification.
    rewriter.modifyOpInPlace(
        gpuFuncOp, [gpuFuncOp, &argIndices, &argTypes, &argAttrs, &argLocs]() {
          LogicalResult inserted =
              static_cast<FunctionOpInterface>(gpuFuncOp).insertArguments(
                  argIndices, argTypes, argAttrs, argLocs);
          (void)inserted;
          assert(succeeded(inserted) &&
                 "expected GPU funcs to support inserting any argument");
        });
  } else {
    workgroupBuffers.reserve(gpuFuncOp.getNumWorkgroupAttributions());
    for (auto [idx, attribution] :
         llvm::enumerate(gpuFuncOp.getWorkgroupAttributions())) {
      auto type = dyn_cast<MemRefType>(attribution.getType());
      assert(type && type.hasStaticShape() && "unexpected type in attribution");

      uint64_t numElements = type.getNumElements();

      auto elementType =
          cast<Type>(typeConverter->convertType(type.getElementType()));
      auto arrayType = LLVM::LLVMArrayType::get(elementType, numElements);
      std::string name =
          std::string(llvm::formatv("__wg_{0}_{1}", gpuFuncOp.getName(), idx));
      uint64_t alignment = 0;
      if (auto alignAttr = dyn_cast_or_null<IntegerAttr>(
              gpuFuncOp.getWorkgroupAttributionAttr(
                  idx, LLVM::LLVMDialect::getAlignAttrName())))
        alignment = alignAttr.getInt();
      auto globalOp = rewriter.create<LLVM::GlobalOp>(
          gpuFuncOp.getLoc(), arrayType, /*isConstant=*/false,
          LLVM::Linkage::Internal, name, /*value=*/Attribute(), alignment,
          workgroupAddrSpace);
      workgroupBuffers.push_back(globalOp);
    }
  }

  // Remap proper input types.
  TypeConverter::SignatureConversion signatureConversion(
      gpuFuncOp.front().getNumArguments());

  Type funcType = getTypeConverter()->convertFunctionSignature(
      gpuFuncOp.getFunctionType(), /*isVariadic=*/false,
      getTypeConverter()->getOptions().useBarePtrCallConv, signatureConversion);
  if (!funcType) {
    return rewriter.notifyMatchFailure(gpuFuncOp, [&](Diagnostic &diag) {
      diag << "failed to convert function signature type for: "
           << gpuFuncOp.getFunctionType();
    });
  }

  // Create the new function operation. Only copy those attributes that are
  // not specific to function modeling.
  SmallVector<NamedAttribute, 4> attributes;
  ArrayAttr argAttrs;
  for (const auto &attr : gpuFuncOp->getAttrs()) {
    if (attr.getName() == SymbolTable::getSymbolAttrName() ||
        attr.getName() == gpuFuncOp.getFunctionTypeAttrName() ||
        attr.getName() ==
            gpu::GPUFuncOp::getNumWorkgroupAttributionsAttrName() ||
        attr.getName() == gpuFuncOp.getWorkgroupAttribAttrsAttrName() ||
        attr.getName() == gpuFuncOp.getPrivateAttribAttrsAttrName() ||
        attr.getName() == gpuFuncOp.getKnownBlockSizeAttrName() ||
        attr.getName() == gpuFuncOp.getKnownGridSizeAttrName())
      continue;
    if (attr.getName() == gpuFuncOp.getArgAttrsAttrName()) {
      argAttrs = gpuFuncOp.getArgAttrsAttr();
      continue;
    }
    attributes.push_back(attr);
  }

  DenseI32ArrayAttr knownBlockSize = gpuFuncOp.getKnownBlockSizeAttr();
  DenseI32ArrayAttr knownGridSize = gpuFuncOp.getKnownGridSizeAttr();
  // Ensure we don't lose information if the function is lowered before its
  // surrounding context.
  auto *gpuDialect = cast<gpu::GPUDialect>(gpuFuncOp->getDialect());
  if (knownBlockSize)
    attributes.emplace_back(gpuDialect->getKnownBlockSizeAttrHelper().getName(),
                            knownBlockSize);
  if (knownGridSize)
    attributes.emplace_back(gpuDialect->getKnownGridSizeAttrHelper().getName(),
                            knownGridSize);

  // Add a dialect specific kernel attribute in addition to GPU kernel
  // attribute. The former is necessary for further translation while the
  // latter is expected by gpu.launch_func.
  if (gpuFuncOp.isKernel()) {
    if (kernelAttributeName)
      attributes.emplace_back(kernelAttributeName, rewriter.getUnitAttr());
    // Set the dialect-specific block size attribute if there is one.
    if (kernelBlockSizeAttributeName && knownBlockSize) {
      attributes.emplace_back(kernelBlockSizeAttributeName, knownBlockSize);
    }
  }
  LLVM::CConv callingConvention = gpuFuncOp.isKernel()
                                      ? kernelCallingConvention
                                      : nonKernelCallingConvention;
  auto llvmFuncOp = rewriter.create<LLVM::LLVMFuncOp>(
      gpuFuncOp.getLoc(), gpuFuncOp.getName(), funcType,
      LLVM::Linkage::External, /*dsoLocal=*/false, callingConvention,
      /*comdat=*/nullptr, attributes);

  {
    // Insert operations that correspond to converted workgroup and private
    // memory attributions to the body of the function. This must operate on
    // the original function, before the body region is inlined in the new
    // function to maintain the relation between block arguments and the
    // parent operation that assigns their semantics.
    OpBuilder::InsertionGuard guard(rewriter);

    // Rewrite workgroup memory attributions to addresses of global buffers.
    rewriter.setInsertionPointToStart(&gpuFuncOp.front());
    unsigned numProperArguments = gpuFuncOp.getNumArguments();

    if (encodeWorkgroupAttributionsAsArguments) {
      // Build a MemRefDescriptor with each of the arguments added above.

      unsigned numAttributions = gpuFuncOp.getNumWorkgroupAttributions();
      assert(numProperArguments >= numAttributions &&
             "Expecting attributions to be encoded as arguments already");

      // Arguments encoding workgroup attributions will be in positions
      // [numProperArguments, numProperArguments+numAttributions)
      ArrayRef<BlockArgument> attributionArguments =
          gpuFuncOp.getArguments().slice(numProperArguments - numAttributions,
                                         numAttributions);
      for (auto [idx, vals] : llvm::enumerate(llvm::zip_equal(
               gpuFuncOp.getWorkgroupAttributions(), attributionArguments))) {
        auto [attribution, arg] = vals;
        auto type = cast<MemRefType>(attribution.getType());

        // Arguments are of llvm.ptr type and attributions are of memref type:
        // we need to wrap them in memref descriptors.
        Value descr = MemRefDescriptor::fromStaticShape(
            rewriter, loc, *getTypeConverter(), type, arg);

        // And remap the arguments
        signatureConversion.remapInput(numProperArguments + idx, descr);
      }
    } else {
      for (const auto [idx, global] : llvm::enumerate(workgroupBuffers)) {
        auto ptrType = LLVM::LLVMPointerType::get(rewriter.getContext(),
                                                  global.getAddrSpace());
        Value address = rewriter.create<LLVM::AddressOfOp>(
            loc, ptrType, global.getSymNameAttr());
        Value memory =
            rewriter.create<LLVM::GEPOp>(loc, ptrType, global.getType(),
                                         address, ArrayRef<LLVM::GEPArg>{0, 0});

        // Build a memref descriptor pointing to the buffer to plug with the
        // existing memref infrastructure. This may use more registers than
        // otherwise necessary given that memref sizes are fixed, but we can try
        // and canonicalize that away later.
        Value attribution = gpuFuncOp.getWorkgroupAttributions()[idx];
        auto type = cast<MemRefType>(attribution.getType());
        Value descr = MemRefDescriptor::fromStaticShape(
            rewriter, loc, *getTypeConverter(), type, memory);
        signatureConversion.remapInput(numProperArguments + idx, descr);
      }
    }

    // Rewrite private memory attributions to alloca'ed buffers.
    unsigned numWorkgroupAttributions = gpuFuncOp.getNumWorkgroupAttributions();
    auto int64Ty = IntegerType::get(rewriter.getContext(), 64);
    for (const auto [idx, attribution] :
         llvm::enumerate(gpuFuncOp.getPrivateAttributions())) {
      auto type = cast<MemRefType>(attribution.getType());
      assert(type && type.hasStaticShape() && "unexpected type in attribution");

      // Explicitly drop memory space when lowering private memory
      // attributions since NVVM models it as `alloca`s in the default
      // memory space and does not support `alloca`s with addrspace(5).
      Type elementType = typeConverter->convertType(type.getElementType());
      auto ptrType =
          LLVM::LLVMPointerType::get(rewriter.getContext(), allocaAddrSpace);
      Value numElements = rewriter.create<LLVM::ConstantOp>(
          gpuFuncOp.getLoc(), int64Ty, type.getNumElements());
      uint64_t alignment = 0;
      if (auto alignAttr =
              dyn_cast_or_null<IntegerAttr>(gpuFuncOp.getPrivateAttributionAttr(
                  idx, LLVM::LLVMDialect::getAlignAttrName())))
        alignment = alignAttr.getInt();
      Value allocated = rewriter.create<LLVM::AllocaOp>(
          gpuFuncOp.getLoc(), ptrType, elementType, numElements, alignment);
      Value descr = MemRefDescriptor::fromStaticShape(
          rewriter, loc, *getTypeConverter(), type, allocated);
      signatureConversion.remapInput(
          numProperArguments + numWorkgroupAttributions + idx, descr);
    }
  }

  // Move the region to the new function, update the entry block signature.
  rewriter.inlineRegionBefore(gpuFuncOp.getBody(), llvmFuncOp.getBody(),
                              llvmFuncOp.end());
  if (failed(rewriter.convertRegionTypes(&llvmFuncOp.getBody(), *typeConverter,
                                         &signatureConversion)))
    return failure();

  // Copy argument attributes from the original function onto the converted
  // arguments; pointer-specific attributes are only applied to arguments that
  // lower to LLVM pointers.
  for (const auto [idx, argTy] :
       llvm::enumerate(gpuFuncOp.getArgumentTypes())) {
    auto remapping = signatureConversion.getInputMapping(idx);
    NamedAttrList argAttr =
        argAttrs ? cast<DictionaryAttr>(argAttrs[idx]) : NamedAttrList();
    auto copyAttribute = [&](StringRef attrName) {
      Attribute attr = argAttr.erase(attrName);
      if (!attr)
        return;
      for (size_t i = 0, e = remapping->size; i < e; ++i)
        llvmFuncOp.setArgAttr(remapping->inputNo + i, attrName, attr);
    };
    auto copyPointerAttribute = [&](StringRef attrName) {
      Attribute attr = argAttr.erase(attrName);

      if (!attr)
        return;
      if (remapping->size > 1 &&
          attrName == LLVM::LLVMDialect::getNoAliasAttrName()) {
        emitWarning(llvmFuncOp.getLoc(),
                    "Cannot copy noalias with non-bare pointers.\n");
        return;
      }
      for (size_t i = 0, e = remapping->size; i < e; ++i) {
        if (isa<LLVM::LLVMPointerType>(
                llvmFuncOp.getArgument(remapping->inputNo + i).getType())) {
          llvmFuncOp.setArgAttr(remapping->inputNo + i, attrName, attr);
        }
      }
    };

    if (argAttr.empty())
      continue;

    copyAttribute(LLVM::LLVMDialect::getReturnedAttrName());
    copyAttribute(LLVM::LLVMDialect::getNoUndefAttrName());
    copyAttribute(LLVM::LLVMDialect::getInRegAttrName());
    bool lowersToPointer = false;
    for (size_t i = 0, e = remapping->size; i < e; ++i) {
      lowersToPointer |= isa<LLVM::LLVMPointerType>(
          llvmFuncOp.getArgument(remapping->inputNo + i).getType());
    }

    if (lowersToPointer) {
      copyPointerAttribute(LLVM::LLVMDialect::getNoAliasAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getNoCaptureAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getNoFreeAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getAlignAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getReadonlyAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getWriteOnlyAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getReadnoneAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getNonNullAttrName());
      copyPointerAttribute(LLVM::LLVMDialect::getDereferenceableAttrName());
      copyPointerAttribute(
          LLVM::LLVMDialect::getDereferenceableOrNullAttrName());
      copyPointerAttribute(
          LLVM::LLVMDialect::WorkgroupAttributionAttrHelper::getNameStr());
    }
  }
  rewriter.eraseOp(gpuFuncOp);
  return success();
}
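
/// Lowers gpu.printf to a sequence of the AMD device-library hostcall helpers
/// (__ockl_printf_begin / __ockl_printf_append_string_n /
/// __ockl_printf_append_args), appending at most seven packed i64 values per
/// call.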
LogicalResult GPUPrintfOpToHIPLowering::matchAndRewrite(
    gpu::PrintfOp gpuPrintfOp, gpu::PrintfOpAdaptor adaptor,
    ConversionPatternRewriter &rewriter) const {
  Location loc = gpuPrintfOp->getLoc();

  mlir::Type llvmI8 = typeConverter->convertType(rewriter.getI8Type());
  auto ptrType = LLVM::LLVMPointerType::get(rewriter.getContext());
  mlir::Type llvmI32 = typeConverter->convertType(rewriter.getI32Type());
  mlir::Type llvmI64 = typeConverter->convertType(rewriter.getI64Type());
  // Note: this is the GPUModule op, not the ModuleOp that surrounds it
  // This ensures that global constants and declarations are placed within
  // the device code, not the host code
  auto moduleOp = gpuPrintfOp->getParentOfType<gpu::GPUModuleOp>();

  auto ocklBegin =
      getOrDefineFunction(moduleOp, loc, rewriter, "__ockl_printf_begin",
                          LLVM::LLVMFunctionType::get(llvmI64, {llvmI64}));
  LLVM::LLVMFuncOp ocklAppendArgs;
  if (!adaptor.getArgs().empty()) {
    ocklAppendArgs = getOrDefineFunction(
        moduleOp, loc, rewriter, "__ockl_printf_append_args",
        LLVM::LLVMFunctionType::get(
            llvmI64, {llvmI64, /*numArgs*/ llvmI32, llvmI64, llvmI64, llvmI64,
                      llvmI64, llvmI64, llvmI64, llvmI64, /*isLast*/ llvmI32}));
  }
  auto ocklAppendStringN = getOrDefineFunction(
      moduleOp, loc, rewriter, "__ockl_printf_append_string_n",
      LLVM::LLVMFunctionType::get(
          llvmI64,
          {llvmI64, ptrType, /*length (bytes)*/ llvmI64, /*isLast*/ llvmI32}));

  // Start the printf hostcall
  Value zeroI64 = rewriter.create<LLVM::ConstantOp>(loc, llvmI64, 0);
  auto printfBeginCall = rewriter.create<LLVM::CallOp>(loc, ocklBegin, zeroI64);
  Value printfDesc = printfBeginCall.getResult();

  // Create the global op or find an existing one.
  LLVM::GlobalOp global = getOrCreateStringConstant(
      rewriter, loc, moduleOp, llvmI8, "printfFormat_", adaptor.getFormat());

  // Get a pointer to the format string's first element and compute its length.
  Value globalPtr = rewriter.create<LLVM::AddressOfOp>(
      loc,
      LLVM::LLVMPointerType::get(rewriter.getContext(), global.getAddrSpace()),
      global.getSymNameAttr());
  Value stringStart =
      rewriter.create<LLVM::GEPOp>(loc, ptrType, global.getGlobalType(),
                                   globalPtr, ArrayRef<LLVM::GEPArg>{0, 0});
  Value stringLen = rewriter.create<LLVM::ConstantOp>(
      loc, llvmI64, cast<StringAttr>(global.getValueAttr()).size());

  Value oneI32 = rewriter.create<LLVM::ConstantOp>(loc, llvmI32, 1);
  Value zeroI32 = rewriter.create<LLVM::ConstantOp>(loc, llvmI32, 0);

  auto appendFormatCall = rewriter.create<LLVM::CallOp>(
      loc, ocklAppendStringN,
      ValueRange{printfDesc, stringStart, stringLen,
                 adaptor.getArgs().empty() ? oneI32 : zeroI32});
  printfDesc = appendFormatCall.getResult();

  // __ockl_printf_append_args takes 7 values per append call
  constexpr size_t argsPerAppend = 7;
  size_t nArgs = adaptor.getArgs().size();
  for (size_t group = 0; group < nArgs; group += argsPerAppend) {
    size_t bound = std::min(group + argsPerAppend, nArgs);
    size_t numArgsThisCall = bound - group;

    SmallVector<mlir::Value, 2 + argsPerAppend + 1> arguments;
    arguments.push_back(printfDesc);
    arguments.push_back(
        rewriter.create<LLVM::ConstantOp>(loc, llvmI32, numArgsThisCall));
    for (size_t i = group; i < bound; ++i) {
      Value arg = adaptor.getArgs()[i];
      if (auto floatType = dyn_cast<FloatType>(arg.getType())) {
        if (!floatType.isF64())
          arg = rewriter.create<LLVM::FPExtOp>(
              loc, typeConverter->convertType(rewriter.getF64Type()), arg);
        arg = rewriter.create<LLVM::BitcastOp>(loc, llvmI64, arg);
      }
      if (arg.getType().getIntOrFloatBitWidth() != 64)
        arg = rewriter.create<LLVM::ZExtOp>(loc, llvmI64, arg);

      arguments.push_back(arg);
    }
    // Pad out to 7 arguments since the hostcall always needs 7
    for (size_t extra = numArgsThisCall; extra < argsPerAppend; ++extra) {
      arguments.push_back(zeroI64);
    }

    auto isLast = (bound == nArgs) ? oneI32 : zeroI32;
    arguments.push_back(isLast);
    auto call = rewriter.create<LLVM::CallOp>(loc, ocklAppendArgs, arguments);
    printfDesc = call.getResult();
  }
  rewriter.eraseOp(gpuPrintfOp);
  return success();
}
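
/// Lowers gpu.printf to a call to a variadic `printf` declaration, for targets
/// that expose a device-side printf symbol.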
LogicalResult GPUPrintfOpToLLVMCallLowering::matchAndRewrite(
    gpu::PrintfOp gpuPrintfOp, gpu::PrintfOpAdaptor adaptor,
    ConversionPatternRewriter &rewriter) const {
  Location loc = gpuPrintfOp->getLoc();

  mlir::Type llvmI8 = typeConverter->convertType(rewriter.getIntegerType(8));
  mlir::Type ptrType =
      LLVM::LLVMPointerType::get(rewriter.getContext(), addressSpace);

  // Note: this is the GPUModule op, not the ModuleOp that surrounds it
  // This ensures that global constants and declarations are placed within
  // the device code, not the host code
  auto moduleOp = gpuPrintfOp->getParentOfType<gpu::GPUModuleOp>();

  auto printfType =
      LLVM::LLVMFunctionType::get(rewriter.getI32Type(), {ptrType},
                                  /*isVarArg=*/true);
  LLVM::LLVMFuncOp printfDecl =
      getOrDefineFunction(moduleOp, loc, rewriter, "printf", printfType);

  // Create the global op or find an existing one.
  LLVM::GlobalOp global = getOrCreateStringConstant(
      rewriter, loc, moduleOp, llvmI8, "printfFormat_", adaptor.getFormat(),
      /*alignment=*/0, addressSpace);

  // Get a pointer to the format string's first element
  Value globalPtr = rewriter.create<LLVM::AddressOfOp>(
      loc,
      LLVM::LLVMPointerType::get(rewriter.getContext(), global.getAddrSpace()),
      global.getSymNameAttr());
  Value stringStart =
      rewriter.create<LLVM::GEPOp>(loc, ptrType, global.getGlobalType(),
                                   globalPtr, ArrayRef<LLVM::GEPArg>{0, 0});

  // Construct arguments and function call
  auto argsRange = adaptor.getArgs();
  SmallVector<Value, 4> printfArgs;
  printfArgs.reserve(argsRange.size() + 1);
  printfArgs.push_back(stringStart);
  printfArgs.append(argsRange.begin(), argsRange.end());

  rewriter.create<LLVM::CallOp>(loc, printfDecl, printfArgs);
  rewriter.eraseOp(gpuPrintfOp);
  return success();
}
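
/// Lowers gpu.printf to the `vprintf(format, packedArgs)` interface: arguments
/// are promoted and packed into a stack-allocated struct whose address is
/// passed as the second parameter.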
LogicalResult GPUPrintfOpToVPrintfLowering::matchAndRewrite(
    gpu::PrintfOp gpuPrintfOp, gpu::PrintfOpAdaptor adaptor,
    ConversionPatternRewriter &rewriter) const {
  Location loc = gpuPrintfOp->getLoc();

  mlir::Type llvmI8 = typeConverter->convertType(rewriter.getIntegerType(8));
  mlir::Type ptrType = LLVM::LLVMPointerType::get(rewriter.getContext());

  // Note: this is the GPUModule op, not the ModuleOp that surrounds it
  // This ensures that global constants and declarations are placed within
  // the device code, not the host code
  auto moduleOp = gpuPrintfOp->getParentOfType<gpu::GPUModuleOp>();

  auto vprintfType =
      LLVM::LLVMFunctionType::get(rewriter.getI32Type(), {ptrType, ptrType});
  LLVM::LLVMFuncOp vprintfDecl =
      getOrDefineFunction(moduleOp, loc, rewriter, "vprintf", vprintfType);

  // Create the global op or find an existing one.
  LLVM::GlobalOp global = getOrCreateStringConstant(
      rewriter, loc, moduleOp, llvmI8, "printfFormat_", adaptor.getFormat());

  // Get a pointer to the format string's first element
  Value globalPtr = rewriter.create<LLVM::AddressOfOp>(loc, global);
  Value stringStart =
      rewriter.create<LLVM::GEPOp>(loc, ptrType, global.getGlobalType(),
                                   globalPtr, ArrayRef<LLVM::GEPArg>{0, 0});
  SmallVector<Type> types;
  SmallVector<Value> args;
  // Promote and pack the arguments into a stack allocation.
  for (Value arg : adaptor.getArgs()) {
    Type type = arg.getType();
    Value promotedArg = arg;
    assert(type.isIntOrFloat());
    if (isa<FloatType>(type)) {
      type = rewriter.getF64Type();
      promotedArg = rewriter.create<LLVM::FPExtOp>(loc, type, arg);
    }
    types.push_back(type);
    args.push_back(promotedArg);
  }
  Type structType =
      LLVM::LLVMStructType::getLiteral(gpuPrintfOp.getContext(), types);
  Value one = rewriter.create<LLVM::ConstantOp>(loc, rewriter.getI64Type(),
                                                rewriter.getIndexAttr(1));
  Value tempAlloc =
      rewriter.create<LLVM::AllocaOp>(loc, ptrType, structType, one,
                                      /*alignment=*/0);
  for (auto [index, arg] : llvm::enumerate(args)) {
    Value ptr = rewriter.create<LLVM::GEPOp>(
        loc, ptrType, structType, tempAlloc,
        ArrayRef<LLVM::GEPArg>{0, static_cast<int32_t>(index)});
    rewriter.create<LLVM::StoreOp>(loc, arg, ptr);
  }
  std::array<Value, 2> printfArgs = {stringStart, tempAlloc};

  rewriter.create<LLVM::CallOp>(loc, vprintfDecl, printfArgs);
  rewriter.eraseOp(gpuPrintfOp);
  return success();
}

/// Helper for impl::scalarizeVectorOp. Scalarizes vectors to elements.
/// Used either directly (for ops on 1D vectors) or as the callback passed to
/// detail::handleMultidimensionalVectors (for ops on higher-rank vectors).
static Value scalarizeVectorOpHelper(Operation *op, ValueRange operands,
                                     Type llvm1DVectorTy,
                                     ConversionPatternRewriter &rewriter,
                                     const LLVMTypeConverter &converter) {
  TypeRange operandTypes(operands);
  VectorType vectorType = cast<VectorType>(llvm1DVectorTy);
  Location loc = op->getLoc();
  Value result = rewriter.create<LLVM::PoisonOp>(loc, vectorType);
  Type indexType = converter.convertType(rewriter.getIndexType());
  StringAttr name = op->getName().getIdentifier();
  Type elementType = vectorType.getElementType();

  for (int64_t i = 0; i < vectorType.getNumElements(); ++i) {
    Value index = rewriter.create<LLVM::ConstantOp>(loc, indexType, i);
    auto extractElement = [&](Value operand) -> Value {
      if (!isa<VectorType>(operand.getType()))
        return operand;
      return rewriter.create<LLVM::ExtractElementOp>(loc, operand, index);
    };
    auto scalarOperands = llvm::map_to_vector(operands, extractElement);
    Operation *scalarOp =
        rewriter.create(loc, name, scalarOperands, elementType, op->getAttrs());
    result = rewriter.create<LLVM::InsertElementOp>(
        loc, result, scalarOp->getResult(0), index);
  }
  return result;
}

/// Unrolls op to array/vector elements.
LogicalResult impl::scalarizeVectorOp(Operation *op, ValueRange operands,
                                      ConversionPatternRewriter &rewriter,
                                      const LLVMTypeConverter &converter) {
  TypeRange operandTypes(operands);
  if (llvm::any_of(operandTypes, llvm::IsaPred<VectorType>)) {
    VectorType vectorType =
        cast<VectorType>(converter.convertType(op->getResultTypes()[0]));
    rewriter.replaceOp(op, scalarizeVectorOpHelper(op, operands, vectorType,
                                                   rewriter, converter));
    return success();
  }

  if (llvm::any_of(operandTypes, llvm::IsaPred<LLVM::LLVMArrayType>)) {
    return LLVM::detail::handleMultidimensionalVectors(
        op, operands, converter,
        [&](Type llvm1DVectorTy, ValueRange operands) -> Value {
          return scalarizeVectorOpHelper(op, operands, llvm1DVectorTy, rewriter,
                                         converter);
        },
        rewriter);
  }

  return rewriter.notifyMatchFailure(op, "no llvm.array or vector to unroll");
}
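
/// Wraps a numeric memory space value in an i64 IntegerAttr.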
static IntegerAttr wrapNumericMemorySpace(MLIRContext *ctx, unsigned space) {
  return IntegerAttr::get(IntegerType::get(ctx, 64), space);
}
|
|
|
|
|
|
[mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (#71546)
While the `gpu.launch` Op allows setting the size via the
`dynamic_shared_memory_size` argument, accessing the dynamic shared
memory is very convoluted. This PR implements the proposed Op,
`gpu.dynamic_shared_memory` that aims to simplify the utilization of
dynamic shared memory.
RFC:
https://discourse.llvm.org/t/rfc-simplifying-dynamic-shared-memory-access-in-gpu/
**Proposal from RFC**
This PR `gpu.dynamic.shared.memory` Op to use dynamic shared memory
feature efficiently. It is is a powerful feature that enables the
allocation of shared memory at runtime with the kernel launch on the
host. Afterwards, the memory can be accessed directly from the device. I
believe similar story exists for AMDGPU.
**Current way Using Dynamic Shared Memory with MLIR**
Let me illustrate the challenges of using dynamic shared memory in MLIR
with an example below. The process involves several steps:
- memref.global 0-sized array LLVM's NVPTX backend expects
- dynamic_shared_memory_size Set the size of dynamic shared memory
- memref.get_global Access the global symbol
- reinterpret_cast and subview Many OPs for pointer arithmetic
```
// Step 1. Create 0-sized global symbol. Manually set the alignment
memref.global "private" @dynamicShmem : memref<0xf16, 3> { alignment = 16 }
func.func @main() {
// Step 2. Allocate shared memory
gpu.launch blocks(...) threads(...)
dynamic_shared_memory_size %c10000 {
// Step 3. Access the global object
%shmem = memref.get_global @dynamicShmem : memref<0xf16, 3>
// Step 4. A sequence of `memref.reinterpret_cast` and `memref.subview` operations.
%4 = memref.reinterpret_cast %shmem to offset: [0], sizes: [14, 64, 128], strides: [8192,128,1] : memref<0xf16, 3> to memref<14x64x128xf16,3>
%5 = memref.subview %4[7, 0, 0][7, 64, 128][1,1,1] : memref<14x64x128xf16,3> to memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3>
%6 = memref.subview %5[2, 0, 0][1, 64, 128][1,1,1] : memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3> to memref<64x128xf16, strided<[128, 1], offset: 73728>, 3>
%7 = memref.subview %6[0, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>
%8 = memref.subview %6[32, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>
// Step.5 Use
"test.use.shared.memory"(%7) : (memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>) -> (index)
gpu.terminator
}
```
Let’s write the program above with that:
```
func.func @main() {
gpu.launch blocks(...) threads(...) dynamic_shared_memory_size %c10000 {
%i = arith.constant 18 : index
// Step 1: Obtain shared memory directly
%shmem = gpu.dynamic_shared_memory : memref<?xi8, 3>
%c147456 = arith.constant 147456 : index
%c155648 = arith.constant 155648 : index
%7 = memref.view %shmem[%c147456][] : memref<?xi8, 3> to memref<64x64xf16, 3>
%8 = memref.view %shmem[%c155648][] : memref<?xi8, 3> to memref<64x64xf16, 3>
// Step 2: Utilize the shared memory
"test.use.shared.memory"(%7) : (memref<64x64xf16, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, 3>) -> (index)
}
}
```
This PR resolves #72513
2023-11-16 14:42:17 +01:00
|
|
|
/// Generates a symbol with 0-sized array type for dynamic shared memory usage,
|
|
|
|
|
/// or uses existing symbol.
|
2024-09-30 21:20:48 +02:00
|
|
|
LLVM::GlobalOp getDynamicSharedMemorySymbol(
|
|
|
|
|
ConversionPatternRewriter &rewriter, gpu::GPUModuleOp moduleOp,
|
|
|
|
|
gpu::DynamicSharedMemoryOp op, const LLVMTypeConverter *typeConverter,
|
|
|
|
|
MemRefType memrefType, unsigned alignmentBit) {
|
[mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (#71546)
While the `gpu.launch` Op allows setting the size via the
`dynamic_shared_memory_size` argument, accessing the dynamic shared
memory is very convoluted. This PR implements the proposed Op,
`gpu.dynamic_shared_memory` that aims to simplify the utilization of
dynamic shared memory.
RFC:
https://discourse.llvm.org/t/rfc-simplifying-dynamic-shared-memory-access-in-gpu/
**Proposal from RFC**
This PR `gpu.dynamic.shared.memory` Op to use dynamic shared memory
feature efficiently. It is is a powerful feature that enables the
allocation of shared memory at runtime with the kernel launch on the
host. Afterwards, the memory can be accessed directly from the device. I
believe similar story exists for AMDGPU.
**Current way Using Dynamic Shared Memory with MLIR**
Let me illustrate the challenges of using dynamic shared memory in MLIR
with an example below. The process involves several steps:
- memref.global 0-sized array LLVM's NVPTX backend expects
- dynamic_shared_memory_size Set the size of dynamic shared memory
- memref.get_global Access the global symbol
- reinterpret_cast and subview Many OPs for pointer arithmetic
```
// Step 1. Create 0-sized global symbol. Manually set the alignment
memref.global "private" @dynamicShmem : memref<0xf16, 3> { alignment = 16 }
func.func @main() {
// Step 2. Allocate shared memory
gpu.launch blocks(...) threads(...)
dynamic_shared_memory_size %c10000 {
// Step 3. Access the global object
%shmem = memref.get_global @dynamicShmem : memref<0xf16, 3>
// Step 4. A sequence of `memref.reinterpret_cast` and `memref.subview` operations.
%4 = memref.reinterpret_cast %shmem to offset: [0], sizes: [14, 64, 128], strides: [8192,128,1] : memref<0xf16, 3> to memref<14x64x128xf16,3>
%5 = memref.subview %4[7, 0, 0][7, 64, 128][1,1,1] : memref<14x64x128xf16,3> to memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3>
%6 = memref.subview %5[2, 0, 0][1, 64, 128][1,1,1] : memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3> to memref<64x128xf16, strided<[128, 1], offset: 73728>, 3>
%7 = memref.subview %6[0, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>
%8 = memref.subview %6[32, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>
// Step.5 Use
"test.use.shared.memory"(%7) : (memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>) -> (index)
gpu.terminator
}
```
Let’s write the program above with that:
```
func.func @main() {
gpu.launch blocks(...) threads(...) dynamic_shared_memory_size %c10000 {
%i = arith.constant 18 : index
// Step 1: Obtain shared memory directly
%shmem = gpu.dynamic_shared_memory : memref<?xi8, 3>
%c147456 = arith.constant 147456 : index
%c155648 = arith.constant 155648 : index
%7 = memref.view %shmem[%c147456][] : memref<?xi8, 3> to memref<64x64xf16, 3>
%8 = memref.view %shmem[%c155648][] : memref<?xi8, 3> to memref<64x64xf16, 3>
// Step 2: Utilize the shared memory
"test.use.shared.memory"(%7) : (memref<64x64xf16, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, 3>) -> (index)
}
}
```
This PR resolves #72513
2023-11-16 14:42:17 +01:00
|
|
|
uint64_t alignmentByte = alignmentBit / memrefType.getElementTypeBitWidth();
|
|
|
|
|
|
|
|
|
|
FailureOr<unsigned> addressSpace =
|
|
|
|
|
typeConverter->getMemRefAddressSpace(memrefType);
|
|
|
|
|
if (failed(addressSpace)) {
|
|
|
|
|
op->emitError() << "conversion of memref memory space "
|
|
|
|
|
<< memrefType.getMemorySpace()
|
|
|
|
|
<< " to integer address space "
|
|
|
|
|
"failed. Consider adding memory space conversions.";
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Step 1. Collect symbol names of LLVM::GlobalOp Ops. Also if any of
|
|
|
|
|
// LLVM::GlobalOp is suitable for shared memory, return it.
|
|
|
|
|
llvm::StringSet<> existingGlobalNames;
|
2024-09-30 21:20:48 +02:00
|
|
|
for (auto globalOp : moduleOp.getBody()->getOps<LLVM::GlobalOp>()) {
|
[mlir][gpu] Introduce `gpu.dynamic_shared_memory` Op (#71546)
While the `gpu.launch` Op allows setting the size via the
`dynamic_shared_memory_size` argument, accessing the dynamic shared
memory is very convoluted. This PR implements the proposed Op,
`gpu.dynamic_shared_memory` that aims to simplify the utilization of
dynamic shared memory.
RFC:
https://discourse.llvm.org/t/rfc-simplifying-dynamic-shared-memory-access-in-gpu/
**Proposal from RFC**
This PR `gpu.dynamic.shared.memory` Op to use dynamic shared memory
feature efficiently. It is is a powerful feature that enables the
allocation of shared memory at runtime with the kernel launch on the
host. Afterwards, the memory can be accessed directly from the device. I
believe similar story exists for AMDGPU.
**Current way Using Dynamic Shared Memory with MLIR**
Let me illustrate the challenges of using dynamic shared memory in MLIR
with an example below. The process involves several steps:
- memref.global 0-sized array LLVM's NVPTX backend expects
- dynamic_shared_memory_size Set the size of dynamic shared memory
- memref.get_global Access the global symbol
- reinterpret_cast and subview Many OPs for pointer arithmetic
```
// Step 1. Create 0-sized global symbol. Manually set the alignment
memref.global "private" @dynamicShmem : memref<0xf16, 3> { alignment = 16 }
func.func @main() {
// Step 2. Allocate shared memory
gpu.launch blocks(...) threads(...)
dynamic_shared_memory_size %c10000 {
// Step 3. Access the global object
%shmem = memref.get_global @dynamicShmem : memref<0xf16, 3>
// Step 4. A sequence of `memref.reinterpret_cast` and `memref.subview` operations.
%4 = memref.reinterpret_cast %shmem to offset: [0], sizes: [14, 64, 128], strides: [8192,128,1] : memref<0xf16, 3> to memref<14x64x128xf16,3>
%5 = memref.subview %4[7, 0, 0][7, 64, 128][1,1,1] : memref<14x64x128xf16,3> to memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3>
%6 = memref.subview %5[2, 0, 0][1, 64, 128][1,1,1] : memref<7x64x128xf16, strided<[8192, 128, 1], offset: 57344>, 3> to memref<64x128xf16, strided<[128, 1], offset: 73728>, 3>
%7 = memref.subview %6[0, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>
%8 = memref.subview %6[32, 0][64, 64][1,1] : memref<64x128xf16, strided<[128, 1], offset: 73728>, 3> to memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>
// Step.5 Use
"test.use.shared.memory"(%7) : (memref<64x64xf16, strided<[128, 1], offset: 73728>, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, strided<[128, 1], offset: 77824>, 3>) -> (index)
gpu.terminator
}
```
Let’s write the program above with that:
```
func.func @main() {
gpu.launch blocks(...) threads(...) dynamic_shared_memory_size %c10000 {
%i = arith.constant 18 : index
// Step 1: Obtain shared memory directly
%shmem = gpu.dynamic_shared_memory : memref<?xi8, 3>
%c147456 = arith.constant 147456 : index
%c155648 = arith.constant 155648 : index
%7 = memref.view %shmem[%c147456][] : memref<?xi8, 3> to memref<64x64xf16, 3>
%8 = memref.view %shmem[%c155648][] : memref<?xi8, 3> to memref<64x64xf16, 3>
// Step 2: Utilize the shared memory
"test.use.shared.memory"(%7) : (memref<64x64xf16, 3>) -> (index)
"test.use.shared.memory"(%8) : (memref<64x64xf16, 3>) -> (index)
}
}
```
This PR resolves #72513
2023-11-16 14:42:17 +01:00
|
|
|
existingGlobalNames.insert(globalOp.getSymName());
|
|
|
|
|
if (auto arrayType = dyn_cast<LLVM::LLVMArrayType>(globalOp.getType())) {
|
|
|
|
|
if (globalOp.getAddrSpace() == addressSpace.value() &&
|
|
|
|
|
arrayType.getNumElements() == 0 &&
|
|
|
|
|
globalOp.getAlignment().value_or(0) == alignmentByte) {
|
|
|
|
|
return globalOp;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Step 2. Find a unique symbol name
|
|
|
|
|
unsigned uniquingCounter = 0;
|
|
|
|
|
SmallString<128> symName = SymbolTable::generateSymbolName<128>(
|
|
|
|
|
"__dynamic_shmem_",
|
|
|
|
|
[&](StringRef candidate) {
|
|
|
|
|
return existingGlobalNames.contains(candidate);
|
|
|
|
|
},
|
|
|
|
|
uniquingCounter);
|
|
|
|
|
|
|
|
|
|
// Step 3. Generate a global op
|
|
|
|
|
OpBuilder::InsertionGuard guard(rewriter);
|
2024-09-30 21:20:48 +02:00
|
|
|
rewriter.setInsertionPointToStart(moduleOp.getBody());
  auto zeroSizedArrayType = LLVM::LLVMArrayType::get(
      typeConverter->convertType(memrefType.getElementType()), 0);

  return rewriter.create<LLVM::GlobalOp>(
      op->getLoc(), zeroSizedArrayType, /*isConstant=*/false,
      LLVM::Linkage::Internal, symName, /*value=*/Attribute(), alignmentByte,
      addressSpace.value());
}
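The uniquing in Step 2 amounts to probing candidate names built from the `__dynamic_shmem_` prefix and a counter until the callback reports no collision with an existing global; the exact suffix format is up to `SymbolTable::generateSymbolName`. A standalone sketch of the same idea, deliberately not using the MLIR API:
```
// Standalone illustration of counter-based name uniquing (not MLIR's
// implementation; std:: containers stand in for the symbol table).
#include <set>
#include <string>

std::string uniqueSymbolName(const std::string &prefix,
                             const std::set<std::string> &existing,
                             unsigned &counter) {
  std::string candidate;
  do {
    candidate = prefix + std::to_string(counter++);
  } while (existing.count(candidate)); // probe until the name is unused
  return candidate;
}
```
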
LogicalResult GPUDynamicSharedMemoryOpLowering::matchAndRewrite(
    gpu::DynamicSharedMemoryOp op, OpAdaptor adaptor,
    ConversionPatternRewriter &rewriter) const {
  Location loc = op.getLoc();
  MemRefType memrefType = op.getResultMemref().getType();
  Type elementType = typeConverter->convertType(memrefType.getElementType());

  // Step 1. Generate a memref<0xi8> type
  MemRefLayoutAttrInterface layout = {};
  auto memrefType0sz =
      MemRefType::get({0}, elementType, layout, memrefType.getMemorySpace());

  // Step 2. Generate a global symbol, or reuse an existing one, for the
  // dynamic shared memory with memref<0xi8> type
  auto moduleOp = op->getParentOfType<gpu::GPUModuleOp>();
  LLVM::GlobalOp shmemOp = getDynamicSharedMemorySymbol(
      rewriter, moduleOp, op, getTypeConverter(), memrefType0sz, alignmentBit);

  // Step 3. Get the address of the global symbol
  OpBuilder::InsertionGuard guard(rewriter);
  rewriter.setInsertionPoint(op);
  auto basePtr = rewriter.create<LLVM::AddressOfOp>(loc, shmemOp);
  Type baseType = basePtr->getResultTypes().front();

  // Step 4. Generate a GEP using offsets
  SmallVector<LLVM::GEPArg> gepArgs = {0};
  Value shmemPtr = rewriter.create<LLVM::GEPOp>(loc, baseType, elementType,
                                                basePtr, gepArgs);

  // Step 5. Create a memref descriptor
  SmallVector<Value> shape, strides;
  Value sizeBytes;
  getMemRefDescriptorSizes(loc, memrefType0sz, {}, rewriter, shape, strides,
                           sizeBytes);
  auto memRefDescriptor = this->createMemRefDescriptor(
      loc, memrefType0sz, shmemPtr, shmemPtr, shape, strides, rewriter);

  // Step 6. Replace the op with the memref descriptor
  rewriter.replaceOp(op, {memRefDescriptor});
  return success();
}
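How this pattern is wired into a conversion pipeline is outside this file. The sketch below shows one plausible registration site; the constructor arguments (type converter plus shared-memory alignment in bits, mirroring the `alignmentBit` member used above) are assumptions, not this file's API:
```
// Hypothetical registration sketch; constructor arguments are assumptions.
static void addDynamicSharedMemoryLowering(mlir::LLVMTypeConverter &converter,
                                           mlir::RewritePatternSet &patterns) {
  // A 128-bit aligned base is a common choice for dynamic shared memory; the
  // actual value is picked by the caller.
  constexpr unsigned kSharedMemoryAlignmentBit = 128;
  patterns.add<GPUDynamicSharedMemoryOpLowering>(converter,
                                                 kSharedMemoryAlignmentBit);
}
```
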
LogicalResult GPUReturnOpLowering::matchAndRewrite(
    gpu::ReturnOp op, OpAdaptor adaptor,
    ConversionPatternRewriter &rewriter) const {
  Location loc = op.getLoc();
  unsigned numArguments = op.getNumOperands();
  SmallVector<Value, 4> updatedOperands;

  bool useBarePtrCallConv = getTypeConverter()->getOptions().useBarePtrCallConv;
  if (useBarePtrCallConv) {
    // For the bare-ptr calling convention, extract the aligned pointer to
    // be returned from the memref descriptor.
    for (auto it : llvm::zip(op->getOperands(), adaptor.getOperands())) {
      Type oldTy = std::get<0>(it).getType();
      Value newOperand = std::get<1>(it);
      if (isa<MemRefType>(oldTy) && getTypeConverter()->canConvertToBarePtr(
                                        cast<BaseMemRefType>(oldTy))) {
        MemRefDescriptor memrefDesc(newOperand);
        newOperand = memrefDesc.allocatedPtr(rewriter, loc);
      } else if (isa<UnrankedMemRefType>(oldTy)) {
        // Unranked memref is not supported in the bare pointer calling
        // convention.
        return failure();
      }
      updatedOperands.push_back(newOperand);
    }
  } else {
    updatedOperands = llvm::to_vector<4>(adaptor.getOperands());
    (void)copyUnrankedDescriptors(rewriter, loc, op.getOperands().getTypes(),
                                  updatedOperands,
                                  /*toDynamic=*/true);
  }

  // If ReturnOp has 0 or 1 operand, create it and return immediately.
  if (numArguments <= 1) {
    rewriter.replaceOpWithNewOp<LLVM::ReturnOp>(
        op, TypeRange(), updatedOperands, op->getAttrs());
    return success();
  }

  // Otherwise, we need to pack the arguments into an LLVM struct type before
  // returning.
  auto packedType = getTypeConverter()->packFunctionResults(
      op.getOperandTypes(), useBarePtrCallConv);
  if (!packedType) {
    return rewriter.notifyMatchFailure(op, "could not convert result types");
  }

  Value packed = rewriter.create<LLVM::PoisonOp>(loc, packedType);
  for (auto [idx, operand] : llvm::enumerate(updatedOperands)) {
    packed = rewriter.create<LLVM::InsertValueOp>(loc, packed, operand, idx);
  }
  rewriter.replaceOpWithNewOp<LLVM::ReturnOp>(op, TypeRange(), packed,
                                              op->getAttrs());
  return success();
}
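The `useBarePtrCallConv` branch above is driven by the lowering options attached to the type converter rather than by anything local to this pattern. A minimal sketch of how a caller might enable it, assuming the standard `LowerToLLVMOptions` knob:
```
// Sketch: enabling the bare-pointer calling convention on the type converter.
#include "mlir/Conversion/LLVMCommon/LoweringOptions.h"
#include "mlir/Conversion/LLVMCommon/TypeConverter.h"

static void buildBarePtrConverter(mlir::MLIRContext *ctx) {
  mlir::LowerToLLVMOptions options(ctx);
  options.useBarePtrCallConv = true; // memrefs become raw pointers at the ABI
  mlir::LLVMTypeConverter converter(ctx, options);
  // ... populate patterns with `converter` and run the conversion ...
}
```
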
void mlir::populateGpuMemorySpaceAttributeConversions(
    TypeConverter &typeConverter, const MemorySpaceMapping &mapping) {
  typeConverter.addTypeAttributeConversion(
      [mapping](BaseMemRefType type, gpu::AddressSpaceAttr memorySpaceAttr) {
        gpu::AddressSpace memorySpace = memorySpaceAttr.getValue();
        unsigned addressSpace = mapping(memorySpace);
        return wrapNumericMemorySpace(memorySpaceAttr.getContext(),
                                      addressSpace);
      });
}
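A typical caller hands this helper a numeric mapping for each `gpu::AddressSpace` value. The numbers in the sketch below (1 = global, 3 = workgroup/shared, 5 = private) follow common NVVM/ROCDL conventions and are an assumption here, not something this file defines:
```
// Sketch of a caller-provided memory-space mapping; the numeric address
// spaces are assumptions matching common NVVM/ROCDL conventions.
static void addGpuMemorySpaceConversions(mlir::TypeConverter &converter) {
  mlir::populateGpuMemorySpaceAttributeConversions(
      converter, [](mlir::gpu::AddressSpace space) -> unsigned {
        switch (space) {
        case mlir::gpu::AddressSpace::Global:
          return 1; // global memory
        case mlir::gpu::AddressSpace::Workgroup:
          return 3; // shared / LDS memory
        case mlir::gpu::AddressSpace::Private:
          return 5; // per-thread private memory
        }
        llvm_unreachable("unknown gpu::AddressSpace value");
      });
}
```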