Quote:
How many cycles does an average operation take?
I'm not sure how to calculate an average, but I can give a rough estimate of how many cycles the various ops take. Most operations end with a normalization step, and the number of clock cycles required for normalization varies with the data.
Normalize: max 160 clock cycles (2 per bit), minimum 1 clock cycle.
ADD: minimum of 8 clocks under ideal conditions (the exponents are the same, so there's no shifting of operands). If conditions aren't ideal, add 3x a max of 80 clocks to align the operands, plus normalization.
SUB: 3 + number of clock cycles to do an ADD. (A SUB is just an ADD with a complemented FAC).
MUL: about 10 + 4x 80 (about 350 clocks) + normalization
DIV: about 10 + 2x 80 (about 200 clocks) + normalization (Divide is faster than MUL because it has a dedicated shifter)
It does take a ton of clock cycles because it's shifting an 80-bit mantissa one bit at a time, so an operation can take hundreds of clock cycles to complete. It's still much faster than software for the same operation. *I counted up the clock cycles manually by looking at the state machine, so I could be a little off. Exactly how long a particular operation takes can be determined by running simulations.
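To make the cost model above concrete, here's a minimal C sketch of the estimates for MUL, DIV, and normalization. The function names, the clamping, and the idea of driving normalization cost from the shift distance are my own illustration of the figures quoted above, not the core's actual state machine; real counts should come from simulation.

#include <stdio.h>

/* Illustrative cycle-cost model using the rough figures above.
   These are estimates, not simulated counts. */

#define MANT_BITS 80u   /* 80-bit mantissa, shifted one bit per step */

/* Normalize: about 2 clocks per bit of shift, min 1, max 160. */
static unsigned norm_cycles(unsigned shift_bits)
{
    unsigned c = 2u * shift_bits;
    if (c < 1u)   c = 1u;
    if (c > 160u) c = 160u;
    return c;
}

/* MUL: roughly 10 + 4 x 80 clocks, plus normalization. */
static unsigned mul_cycles(unsigned norm_shift)
{
    return 10u + 4u * MANT_BITS + norm_cycles(norm_shift);
}

/* DIV: roughly 10 + 2 x 80 clocks, plus normalization
   (faster than MUL thanks to the dedicated shifter). */
static unsigned div_cycles(unsigned norm_shift)
{
    return 10u + 2u * MANT_BITS + norm_cycles(norm_shift);
}

int main(void)
{
    /* Worst case: a full 80-bit normalization shift. */
    printf("MUL worst case: %u clocks\n", mul_cycles(MANT_BITS)); /* 490 */
    printf("DIV worst case: %u clocks\n", div_cycles(MANT_BITS)); /* 330 */
    return 0;
}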
Quote:
And what happens if an interrupt hits during that time?
The core is organized as a memory-mapped peripheral. It accepts a command, then processes the command independently of the CPU, so the CPU can go off and perform other processing, like servicing interrupts, while it waits for an operation to complete. The FPU doesn't delay the CPU any more than a normal memory or I/O cycle would. Software must poll the busy bit in the FPU's status register to find out when an operation is complete. The status has to be checked before a command is sent to the FPU; otherwise, if the FPU is busy, it will ignore the command.
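Here's what that polling protocol might look like from the CPU side, as a C sketch. The base address, register offsets, and busy-bit position are all made up for illustration; the real register map depends on how the core is wired into the system.

#include <stdint.h>

/* Hypothetical register map for the memory-mapped FPU. */
#define FPU_BASE    0xFFDC0000u
#define FPU_STATUS  (*(volatile uint32_t *)(FPU_BASE + 0x00))
#define FPU_CMD     (*(volatile uint32_t *)(FPU_BASE + 0x04))
#define FPU_BUSY    0x01u   /* assumed: busy bit in the status register */

/* Issue a command only when the FPU is idle; a command written
   while the busy bit is set would simply be ignored. */
static void fpu_command(uint32_t cmd)
{
    while (FPU_STATUS & FPU_BUSY)
        ;   /* the CPU is free to do other work here instead */
    FPU_CMD = cmd;
}

/* Wait for the current operation to finish before reading results. */
static void fpu_wait(void)
{
    while (FPU_STATUS & FPU_BUSY)
        ;
}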
I've been thinking of adding an interrupt signal to indicate the completion of an FP operation.
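If that signal were added, the busy-wait above could be replaced with a completion flag set from an interrupt handler. A hypothetical sketch, assuming the new line is wired to the CPU's interrupt controller (handler registration is platform-specific and omitted):

#include <stdint.h>

static volatile int fpu_done;   /* cleared before each command */

/* Hypothetical handler for the proposed FP-complete interrupt. */
void fpu_complete_isr(void)
{
    fpu_done = 1;   /* main code can sleep or poll on this flag */
}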