[APFloat] Add E4M3B11FNUZ

X. Sun et al. (https://dl.acm.org/doi/10.5555/3454287.3454728) published
a paper showing that a floating-point format with 4 bits of exponent,
3 bits of significand, and an exponent bias of 11 works quite well for
ML applications.

Google hardware supports a variant of this format where 0x80 is used to
represent NaN, as in the Float8E4M3FNUZ format. Just like
Float8E4M3FNUZ, this format does not support -0; values that would
otherwise map to -0 become +0.
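To make the semantics concrete, here is a small pure-Python decoder for
the format. This is only a sketch of the encoding; the function name is
invented for illustration and is unrelated to APFloat's implementation.

def decode_e4m3b11fnuz(byte: int) -> float:
    """Decode one E4M3B11FNUZ byte: 1 sign, 4 exponent, 3 mantissa bits."""
    assert 0 <= byte <= 0xFF
    if byte == 0x80:             # the single NaN encoding (the would-be -0)
        return float("nan")
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF      # 4-bit biased exponent field
    man = byte & 0x7             # 3-bit mantissa field
    if exp == 0:                 # subnormal: no implicit leading 1
        return sign * (man / 8.0) * 2.0 ** (1 - 11)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 11)

With bias 11 the largest finite value is decode_e4m3b11fnuz(0x7F) ==
30.0, and no encodings are reserved for infinity; compared to the
bias-7 Float8E4M3FN format, the larger bias shifts the representable
range toward zero.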

This format is proposed for inclusion in OpenXLA's StableHLO dialect: https://github.com/openxla/stablehlo/pull/1308

As part of its inclusion in that dialect, APFloat needs to know how to
handle this format.

Differential Revision: https://reviews.llvm.org/D146441
Author: David Majnemer
Date: 2023-03-09 23:10:57 +00:00
Commit: 2f086f265b (parent: 5a9bad171b)
23 changed files with 365 additions and 149 deletions

@@ -53,6 +53,7 @@ __all__ = [
     "Float8E4M3FNType",
     "Float8E5M2Type",
     "Float8E4M3FNUZType",
+    "Float8E4M3B11FNUZType",
     "Float8E5M2FNUZType",
     "F16Type",
     "F32Type",

@@ -602,6 +603,13 @@ class Float8E4M3FNUZType(Type):
     @staticmethod
     def isinstance(arg: Any) -> bool: ...
 
+class Float8E4M3B11FNUZType(Type):
+    def __init__(self, cast_from_type: Type) -> None: ...
+    @staticmethod
+    def get(*args, **kwargs) -> Float8E4M3B11FNUZType: ...
+    @staticmethod
+    def isinstance(arg: Any) -> bool: ...
+
 class Float8E5M2FNUZType(Type):
     def __init__(self, cast_from_type: Type) -> None: ...
     @staticmethod
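
Once the bindings are built, the new type class can be exercised from
Python. A minimal sketch, assuming the bindings are importable as
mlir.ir (the exact package path depends on how the Python bindings are
packaged):

from mlir.ir import Context, Float8E4M3B11FNUZType

with Context():
    # Type getters use the context that is current in this scope.
    t = Float8E4M3B11FNUZType.get()
    assert Float8E4M3B11FNUZType.isinstance(t)
    print(t)  # expected to print the type's MLIR spelling, f8E4M3B11FNUZ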