Examining these examples in the IEEE-754 binary16 (half-precision) format, we start with:
Code:
3 = 0 10000 1000000000
7 = 0 10001 1100000000
8 = 0 10010 0000000000
10 = 0 10010 0100000000
11 = 0 10010 0110000000
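If you want to reproduce these bit patterns yourself, here is a minimal Python sketch (assuming Python 3.6+, whose struct module supports the binary16 'e' format; the helper half_fields is just my own illustration, not a standard function):
Code:
import struct

def half_fields(x: float) -> str:
    """Return the sign, exponent and mantissa fields of x as IEEE-754 binary16."""
    (bits,) = struct.unpack('<H', struct.pack('<e', x))  # 'e' packs binary16
    sign     = bits >> 15
    exponent = (bits >> 10) & 0x1F   # 5-bit biased exponent (bias 15)
    mantissa = bits & 0x3FF          # 10-bit stored mantissa
    return f"{sign} {exponent:05b} {mantissa:010b}"

for v in (3, 7, 8, 10, 11):
    print(f"{v:2} = {half_fields(float(v))}")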
Dividing each of the four dividends by 3 gives the following, shown with two guard bits and a sticky bit before rounding:
Code:
7/3 = 0 10000 0010101010 10 1
8/3 = 0 10000 0101010101 01 1
10/3 = 0 10000 1010101010 10 1
11/3 = 0 10000 1101010101 01 1
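These guard and sticky bits can be derived with exact integer long division; here is a sketch (divide_with_guard_bits is a hypothetical helper of my own, and it assumes the quotient is at least 1):
Code:
def divide_with_guard_bits(n: int, d: int):
    """Compute n/d as a binary16-style significand with 2 guard bits and a sticky bit."""
    # Normalise so the quotient lies in [1, 2): find e with 2^e <= n/d < 2^(e+1).
    e = 0
    while n >= 2 * d:
        d *= 2
        e += 1
    # Generate 12 quotient bits after the leading 1: 10 mantissa bits + 2 guard bits.
    q, r = divmod(n << 12, d)
    sticky = int(r != 0)                     # any non-zero remainder sets the sticky bit
    mantissa, guard = divmod(q & 0xFFF, 4)   # drop the leading 1, split off the guard bits
    return e, mantissa, guard, sticky

for n in (7, 8, 10, 11):
    e, m, g, s = divide_with_guard_bits(n, 3)
    print(f"{n}/3 = 0 {e + 15:05b} {m:010b} {g:02b} {s}")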
In each case the final sticky bit is set, because the remainder is still non-zero after all the mantissa and guard bits have been produced. Rounding to nearest, ties-to-even (the IEEE-754 default mode) gives:
Code:
7/3 = 0 10000 0010101011
8/3 = 0 10000 0101010101
10/3 = 0 10000 1010101011
11/3 = 0 10000 1101010101
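The rounding rule itself is short enough to write out; a sketch of round-to-nearest-even over a 10-bit mantissa, two guard bits and a sticky bit (round_nearest_even is again my own illustrative helper):
Code:
def round_nearest_even(mantissa: int, guard: int, sticky: int) -> int:
    """Round a 10-bit mantissa, given 2 guard bits and a sticky bit."""
    tie = guard == 0b10 and sticky == 0           # exactly halfway between values
    round_up = (guard == 0b11                     # clearly above halfway
                or (guard == 0b10 and sticky == 1)  # just above halfway
                or (tie and mantissa % 2 == 1))   # tie: round to the even mantissa
    return mantissa + round_up   # a result of 1024 means a carry into the exponent

print(f"{round_nearest_even(0b0010101010, 0b10, 1):010b}")  # 7/3 -> 0010101011
print(f"{round_nearest_even(0b0101010101, 0b01, 1):010b}")  # 8/3 -> 0101010101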
If we now re-multiply these results by 3, we get:
Code:
7/3*3 = 0 10001 1100000000 10 0 <-- a tie case for rounding, resolved in favour of the even mantissa, giving exactly 7.0.
8/3*3 = 0 10001 1111111111 10 0 <-- another tie case, resolved by rounding up, which overflows the mantissa *and* carries into the exponent, giving exactly 8.0.
10/3*3 = 0 10010 0100000000 01 0 <-- this unambiguously rounds down to exactly 10.0.
11/3*3 = 0 10010 0101111111 11 0 <-- this unambiguously rounds up to exactly 11.0.
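As an end-to-end check (assuming NumPy is available; its float16 type implements binary16 with IEEE rounding), the round trip is exact for all four values:
Code:
import numpy as np

three = np.float16(3)
for x in (7, 8, 10, 11):
    x16 = np.float16(x)
    # With correctly rounded binary16 arithmetic, x / 3 * 3 recovers x exactly.
    assert (x16 / three) * three == x16
print("all four round trips are exact")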
So, with just 16 bits, IEEE-standard binary floating-point arithmetic passes an accuracy test that many pre-IEEE and decimal floating-point implementations fail with 32, 40 or even more bits.