Numerical Precision
Sources of errors in a computer:
Integers:
- exact, but can overflow
- NumPy int32/int64: with $n$ bits, values lie in $[-2^{n-1},\, 2^{n-1}-1]$
- Python 3's native int grows automatically to avoid overflow, but NumPy integers are fixed-width (at most int64), for efficiency (see the sketch below)
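A minimal sketch of the difference, assuming a recent NumPy; the wraparound value is standard two's-complement behaviour, and whether a warning is emitted depends on the NumPy version:

```python
import numpy as np

# Python's native int grows as needed, so this is exact:
print(2**63 + 1)   # 9223372036854775809

# NumPy integers are fixed-width and wrap around on overflow
# (newer NumPy versions may also emit a RuntimeWarning):
a = np.array([2**31 - 1], dtype=np.int32)   # largest int32
print(a + 1)       # wraps to [-2147483648]
```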
Floats:
- use some of their bits for the exponent and some for the significand
- come with errors and approximations
- aside: in Python 3, underscores in numeric literals are ignored, so 1_000_000 is the same as 1000000
- errors can accumulate (see the sketch below)
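A small illustration (generic Python, not specific to these notes) of round-off error and of how it accumulates over many operations:

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False

# Accumulation: adding 0.1 ten million times drifts away from 1_000_000
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)                           # roughly 999999.9998, not 1000000.0
print(math.fsum([0.1] * 10_000_000))   # compensated summation stays much closer
```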
IEEE floating points
- layout: [sign] [exponent] [fraction]
- for normal numbers the leading bit of the significand is always 1 by convention, so it is not stored; this saves memory
- 0 is represented by both the exponent and the fraction fields being all zeros
- exponents are stored with a bias: the actual exponent is the stored exponent minus the bias, and the smallest stored exponent is not available to normal numbers because it is already taken by 0 (and the subnormals)
- machine precision $\epsilon$: the gap between $1.0$ and the next representable float, about $2.2 \times 10^{-16}$ for float64 (see the sketch below)
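A short sketch of how to peek at these fields and at $\epsilon$ from Python; the bit slicing assumes the standard 64-bit IEEE 754 layout (1 sign bit, 11 exponent bits, 52 fraction bits):

```python
import struct
import numpy as np

def float64_fields(x):
    """Return the (sign, exponent, fraction) bit strings of a float64."""
    bits = format(struct.unpack('>Q', struct.pack('>d', x))[0], '064b')
    return bits[0], bits[1:12], bits[12:]

print(float64_fields(1.0))   # exponent field 01111111111 = 1023 = bias, fraction all zeros
print(float64_fields(0.0))   # sign, exponent and fraction all zeros

eps = np.finfo(np.float64).eps
print(eps)                   # ~2.220446049250313e-16
print(1.0 + eps == 1.0)      # False: eps is the spacing just above 1.0
```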
Special values and defaults:
- 0.0/0.0 returns NaN (NumPy warns; plain Python instead raises ZeroDivisionError)
- overflow returns inf, e.g. np.float64(1.0)/np.float64(0.0), with a warning
- underflow returns 0.0, with no warning
- NumPy uses 64-bit types by default on most platforms (int64 for integer arrays, float64 for floats, matching Python's own 64-bit float), and we almost always want to stick with these (see the sketch after this list)
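A minimal sketch of these behaviours with NumPy; the exact warning text and the default integer dtype can vary with the NumPy version and platform:

```python
import numpy as np

zero = np.float64(0.0)
print(zero / zero)                             # nan, with an "invalid value" RuntimeWarning
print(np.float64(1.0) / zero)                  # inf, with a "divide by zero" RuntimeWarning
print(np.float64(1e-310) * np.float64(1e-20))  # underflows silently to 0.0

# Default dtypes (on most 64-bit platforms):
print(np.array([1, 2, 3]).dtype)        # int64
print(np.array([1.0, 2.0, 3.0]).dtype)  # float64
```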