Precision (computer science)
In computer science, the precision of a numerical quantity is a measure of the detail in which the quantity is expressed. This is usually measured in bits, but sometimes in decimal digits. It is related to precision in mathematics, which describes the number of digits that are used to express a value.
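As a rough illustration of how precision in bits corresponds to decimal digits, the following sketch (an illustrative example, not taken from this article) compares the significand sizes of the two most common IEEE 754 binary formats:

```python
import math

# Significand sizes in bits (including the implicit leading bit)
# for the two most common IEEE 754 binary formats.
formats = {
    "single precision (binary32)": 24,
    "double precision (binary64)": 53,
}

for name, bits in formats.items():
    # p significand bits correspond to roughly p * log10(2)
    # decimal digits of precision.
    decimal_digits = bits * math.log10(2)
    print(f"{name}: {bits} bits ~ {decimal_digits:.1f} decimal digits")
```

Running this prints roughly 7.2 decimal digits for single precision and about 16 for double precision.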
Rounding error
Precision is often the source of rounding errors in computation. Because only a finite number of bits is available to store a number, some loss of accuracy is usually unavoidable. For example, storing the value of sin(0.1) in the IEEE single-precision floating-point format introduces a small rounding error. This error is then often magnified as subsequent computations are made using the stored value (although it can also be reduced).
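A minimal Python sketch of this effect (not from the article) uses the standard library's struct module to round a value to single precision; the to_single helper is introduced here purely for illustration:

```python
import math
import struct

def to_single(x: float) -> float:
    # Round a Python float (IEEE double precision) to the nearest
    # single-precision value by packing and unpacking it.
    return struct.unpack("f", struct.pack("f", x))[0]

exact = math.sin(0.1)          # computed in double precision
stored = to_single(exact)      # rounded to single precision
error = stored - exact

print(f"sin(0.1)        ~ {exact:.17g}")
print(f"stored (single) = {stored:.17g}")
print(f"rounding error  ~ {error:.3g}")

# The error can be magnified by later computations: multiplying the
# stored value by a large constant scales the absolute error by the
# same factor.
scale = 1e6
print(f"magnified error ~ {scale * stored - scale * exact:.3g}")
```

The initial rounding error is on the order of 1e-10, but after multiplication by one million it grows to roughly 1e-4, illustrating how an error can be magnified by subsequent computation.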
See also
- Arbitrary-precision arithmetic
- IEEE 754 (IEEE floating-point standard)
- Integer (computer science)
- Significant figures
- Truncation