Quote:
Isn't this a bit strange...? I always thought that a 32-bit dividend should result in a 32-bit quotient, since the divisor could be a small number, like 1, and you would need the full 32 bits to represent the result.
The typical use is in 16-bit integers, often in fixed-point or scaled-integer work. It is quite useful to have 32-bit intermediate results. Suppose, for example, you have the number 17,481 and want to multiply it by pi. Using straight integers, there's no way to represent 3.14159; but instead of settling for multiplying by 3, you can get excellent accuracy by multiplying by 355 and dividing by 113. The final result (54,918) still fits in 16 bits, but the intermediate result (6,205,755) needs 23 bits in this case (and fits comfortably in 32). The final result here is off by less than 0.1 ppm (<0.00001%), which is of course better resolution than 16 bits can give you anyway. If you want to round correctly after the division portion of the calculation instead of just truncating (which was adequate this time), you'll need the remainder that comes out of the routine that divides a 32-bit number by a 16-bit number and gives a 16-bit quotient and remainder.
If you want to multiply a larger number like 41,000 by PI, the final result won't fit in 16 bits. If that's not ok, you can still resort to double precision, or change the scaling of the numbers so the representations don't run you out of the 16-bit range. The same goes for negative numbers.
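One hypothetical way to rescale, sketched in C: divide by 226 (113 x 2) instead of 113, so the 16-bit result is in units of 2 and the caller keeps track of the scale factor:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 41,000 * pi = 128,805+, too big for 16 bits; dividing by
           226 (113 * 2) instead of 113 gives a result in units of 2,
           which does fit. */
        uint32_t product = 41000UL * 355UL;              /* 14,555,000 */
        uint16_t halves  = (uint16_t)(product / 226u);   /* 64,402     */
        printf("%u half-units, about %lu\n", halves, 2UL * halves);
        return 0;
    }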
Sometimes overflow is not a problem. Consider representing a full 360° circle with the 16-bit range. Each step then corresponds to about 0.0055°, which is plenty of resolution for many applications. (1 radian is represented by $28BE with less than 0.004% error.) What happens if you add 210° to 210°? You get 420°; but since the addition overflowed at 360°, you keep only the low 16 bits, which represent 60°, which is correct anyway. Signed or unsigned arithmetic works out equally well here too.
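A small C sketch of this binary-angle idea, using a 16-bit unsigned type so the overflow is exactly the modulo-360° wrap (the conversion helpers are just for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* The full 16-bit range represents one 360-degree turn, so unsigned
       overflow is exactly the modulo-360 wrap you want. */
    static uint16_t deg_to_bin(double deg) { return (uint16_t)(deg * 65536.0 / 360.0); }
    static double   bin_to_deg(uint16_t b) { return b * 360.0 / 65536.0; }

    int main(void)
    {
        uint16_t a   = deg_to_bin(210.0);     /* $9555 */
        uint16_t sum = (uint16_t)(a + a);     /* wraps past 360 degrees */
        printf("%.2f\n", bin_to_deg(sum));    /* prints 60.00 */
        return 0;
    }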
With a simple set of arithmetic tools like this, a 16-bit multiply that gives a 16- or 32-bit answer, and a divide that takes a 32-bit dividend and a 16-bit divisor, you can piece together virtually anything you need in other precisions as well. Since a high percentage of the processor time gets spent in these iterative routines, you don't lose much efficiency when you use them as subroutines in, for example, a 64-bit-precision routine.
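As a rough sketch of piecing wider precision together, here is a 32x32 -> 64-bit multiply in C built only from a 16x16 -> 32 primitive, the same way a wider-precision routine on a small CPU would call the 16-bit multiply as a subroutine (the function names are just for the example):

    #include <stdint.h>
    #include <stdio.h>

    /* A 32 x 32 -> 64-bit multiply built only from 16 x 16 -> 32
       multiplies, combining the partial products with shifts and adds. */
    static uint32_t mul16(uint16_t a, uint16_t b) { return (uint32_t)a * b; }

    static uint64_t mul32(uint32_t a, uint32_t b)
    {
        uint16_t al = (uint16_t)a, ah = (uint16_t)(a >> 16);
        uint16_t bl = (uint16_t)b, bh = (uint16_t)(b >> 16);

        uint64_t lo  = mul16(al, bl);                        /* partial products */
        uint64_t mid = (uint64_t)mul16(al, bh) + mul16(ah, bl);
        uint64_t hi  = mul16(ah, bh);

        return lo + (mid << 16) + (hi << 32);                /* combine */
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)mul32(100000u, 100000u));  /* 10000000000 */
        return 0;
    }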
It is common to think that certain kinds of work require floating point and would not be practical at all without it. I myself was skeptical when I was first presented with the idea of using fixed-point and scaled-integer arithmetic to dramatically cut the overhead the computer has to contend with and improve speed. Now, however, after getting some good experience with it, I'm a convert.
There's more about this kind of thing at
http://wilsonminesco.com/16bitMathTables/ .