[mlir] Add support for TF32 as a Builtin FloatType

This diff adds support for TF32 as a Builtin floating point type. It
supplements the recent addition of TF32 semantics to the LLVM APFloat class
by extending their use to MLIR.

https://reviews.llvm.org/D151923

More information on the TF32 type can be found here:

https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/
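
For orientation, here is a minimal usage sketch (not part of this commit) of how
the new type might be exercised through the MLIR Python bindings. It assumes
FloatTF32Type mirrors the existing F16Type/F32Type helpers shown in the stub
diff below and that the builtin type prints as "tf32":

# Minimal sketch; FloatTF32Type.get() with the current context and the
# "tf32" spelling are assumptions based on the other builtin float types.
from mlir.ir import Context, FloatTF32Type, RankedTensorType

with Context():
    tf32 = FloatTF32Type.get()                # analogous to F32Type.get()
    tensor_ty = RankedTensorType.get([4, 4], tf32)
    print(tensor_ty)                          # expected: tensor<4x4xtf32>
    print(FloatTF32Type.isinstance(tf32))     # expected: True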

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D153705
Commit: 6685fd8239 (parent 8f7e41d040)
Author: Jeremy Furtek
Date: 2023-07-06 08:56:05 -07:00
Committed by: Mehdi Amini
20 changed files with 93 additions and 3 deletions

@@ -56,6 +56,7 @@ __all__ = [
     "Float8E4M3B11FNUZType",
     "Float8E5M2FNUZType",
     "F16Type",
+    "FloatTF32Type",
     "F32Type",
     "F64Type",
     "FlatSymbolRefAttr",
@@ -627,6 +628,14 @@ class F16Type(Type):
     @staticmethod
     def isinstance(arg: Any) -> bool: ...
+# TODO: Auto-generated. Audit and fix.
+class FloatTF32Type(Type):
+    def __init__(self, cast_from_type: Type) -> None: ...
+    @staticmethod
+    def get(*args, **kwargs) -> FloatTF32Type: ...
+    @staticmethod
+    def isinstance(arg: Any) -> bool: ...
 # TODO: Auto-generated. Audit and fix.
 class F32Type(Type):
     def __init__(self, cast_from_type: Type) -> None: ...
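
As a follow-up illustration (hypothetical, not part of the diff), downstream
type-annotated code could consume the isinstance() helper declared above in
the same way as for the other builtin float types:

from mlir.ir import FloatTF32Type, Type

def is_tf32(t: Type) -> bool:
    # Uses the static isinstance() helper declared in the stub above.
    return FloatTF32Type.isinstance(t)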