A torch.DType is an object that represents the data type of a torch.Tensor. PyTorch has twelve different data types:
Data type | dtype
---|---
32-bit floating point | torch.float32 or torch.float |
64-bit floating point | torch.float64 or torch.double |
64-bit complex | torch.complex64 or torch.cfloat |
128-bit complex | torch.complex128 or torch.cdouble |
16-bit floating point[1] | torch.float16 or torch.half |
16-bit floating point[2] | torch.bfloat16 |
8-bit integer (unsigned) | torch.uint8 |
8-bit integer (signed) | torch.int8 |
16-bit integer (signed) | torch.int16 or torch.short |
32-bit integer (signed) | torch.int32 or torch.int |
64-bit integer (signed) | torch.int64 or torch.long |
Boolean | torch.bool |
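For example, a dtype can be passed explicitly when constructing a tensor and read back from its dtype attribute. The snippet below is a minimal illustration using the PyTorch Python API, whose dtype names the table above lists:

```python
import torch

# Construct tensors with an explicit dtype
x = torch.zeros(3, dtype=torch.float64)
y = torch.tensor([1, 2, 3], dtype=torch.int16)

print(x.dtype)  # torch.float64
print(y.dtype)  # torch.int16
```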
To find out if a torch.dtype is a floating point data type, use the property is_floating_point, which returns True for floating point types. Likewise, to find out if a torch.dtype is a complex data type, use the property is_complex, which returns True for complex types.
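For instance, querying these properties on a few dtypes from the table above (again via the PyTorch Python API):

```python
import torch

print(torch.float32.is_floating_point)  # True
print(torch.int64.is_floating_point)    # False
print(torch.complex128.is_complex)      # True
print(torch.bfloat16.is_complex)        # False
```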
When the dtypes of inputs to an arithmetic operation (add, sub, div, mul) differ, we promote by finding the minimum dtype that satisfies the following rules:
- If the type of a scalar operand is of a higher category than tensor operands (where complex > floating > integral > boolean), we promote to a type with sufficient size to hold all scalar operands of that category.
- If a zero-dimension tensor operand has a higher category than dimensioned operands, we promote to a type with sufficient size and category to hold all zero-dim tensor operands of that category.
- If there are no higher-category zero-dim operands, we promote to a type with sufficient size and category to hold all dimensioned operands.
A floating point scalar operand has dtype torch.get_default_dtype(), and an integral non-boolean scalar operand has dtype torch.int64. Unlike NumPy, we do not inspect values when determining the minimum dtype of an operand. Quantized and complex types are not yet supported.
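The sketch below illustrates each rule in turn with the PyTorch Python API; torch.result_type reports the promoted dtype without materializing a result:

```python
import torch

int_tensor = torch.ones(2, dtype=torch.int64)

# A float scalar outranks integer tensor operands, so the result
# takes the default floating point dtype (float32 unless changed)
print((int_tensor + 1.0).dtype)                # torch.float32

# A zero-dim float tensor outranks a dimensioned int tensor
print((torch.tensor(1.0) + int_tensor).dtype)  # torch.float32

# Two dimensioned tensors of the same category promote to the
# larger size: int32 and int64 give int64
print(torch.result_type(torch.ones(2, dtype=torch.int32), int_tensor))  # torch.int64
```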
[1] Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.
[2] Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
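The trade-off described in [1] and [2] shows up directly in the largest finite value each format can represent, which torch.finfo reports:

```python
import torch

print(torch.finfo(torch.float16).max)   # 65504.0
print(torch.finfo(torch.bfloat16).max)  # ~3.39e+38, close to float32
print(torch.finfo(torch.float32).max)   # ~3.40e+38
```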
Attributes
- Companion: object
- Source: DType.scala
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes: class BFloat16, object bfloat16.type, class Bits16, object bits16.type, class Bits1x8, object bits1x8.type, class Bits2x4, object bits2x4.type, class Bits4x2, object bits4x2.type, class Bits8, object bits8.type, class Bool, object bool.type, class Complex128, object complex128.type, class Complex32, object complex32.type, class Complex64, object complex64.type, class Float16, object float16.type, class Float32, object float32.type, class Float64, object float64.type, class Float8_e4m3fn, object float8_e4m3fn.type, class Float8_e5m2, object float8_e5m2.type, class Int16, object int16.type, class Int32, object int32.type, class Int64, object int64.type, class Int8, object int8.type, class NumOptions, object numoptions.type, class QInt32, object qint32.type, class QInt8, object qint8.type, class QUInt2x4, object quint2x4.type, class QUInt4x2, object quint4x2.type, class QUInt8, object quint8.type, class UInt8, object uint8.type, class Undefined, object undefined.type