Quote:
table is less reliable because it could be a "wrong" table. That's simply not true,
It's always possible for people to choose a correct implementation of a table-driven algorithm. The difficulty is that you can't control how people use the resources available to them. For example, a lazy programmer is not going to hunt down the 'best' GitHub source to generate the tables; you can't even expect them to generate the tables at all - they will simply copy the first thing they see.
I have seen programmers do this on more than one occasion. They are just minimizing effort. I'm afraid it just is true.
So the reason tables can be less reliable is that they're easier to get wrong: if you're copying a whole pile of data, how will you notice an error in 512 entries? Even testing on input data sets will only reveal it if you cover all the possible inputs.
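To make that concrete, here's a minimal sketch of a table-driven update, assuming the common CRC-16/CCITT polynomial 0x1021 (MSB-first) purely for illustration. Each received byte indexes exactly one of the 256 entries, so a single bad entry is only exercised by data that happens to contain that byte value - and generating the table from the polynomial at start-up, rather than pasting 512 bytes of literals, is one way to keep it honest.

Code:

#include <stdint.h>

static uint16_t crc_table[256];

/* Build the table from the polynomial at start-up instead of pasting
   literals in - one bad pasted entry only shows up for inputs that
   contain that particular byte value. */
static void crc16_build_table(void)
{
    for (int i = 0; i < 256; i++) {
        uint16_t r = (uint16_t)(i << 8);
        for (int b = 0; b < 8; b++)
            r = (r & 0x8000) ? (uint16_t)((r << 1) ^ 0x1021)
                             : (uint16_t)(r << 1);
        crc_table[i] = r;
    }
}

/* One table lookup, one shift, one XOR per received byte. */
static uint16_t crc16_table_update(uint16_t crc, uint8_t data)
{
    return (uint16_t)((crc << 8) ^ crc_table[((crc >> 8) ^ data) & 0xFF]);
}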
And unfortunately, the reasoning that leads people to say things like "well, it might be a 512b table, but hey, memory is plentiful" is the same reasoning that leads people to choose quick and easy solutions in every domain.
The same is potentially true of the bit-serial and byte-parallel computational algorithms: programmers aren't idealists. However, in these cases there is less that can go wrong without you noticing it. It's easier to compare a correct version with a faulty one, which means these algorithms are inherently more reliable, from a psychological perspective.
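For comparison, the bit-serial version of that same assumed 0x1021 polynomial is short enough to check against the generator polynomial by eye, which is exactly why a faulty copy is easy to spot:

Code:

#include <stdint.h>

/* Bit-serial CRC-16/CCITT update, assumed polynomial 0x1021, MSB-first.
   Eight shifts per byte - slow, but the conditional XOR with 0x1021 maps
   directly onto the generator polynomial, so it's easy to review. */
uint16_t crc16_bit_update(uint16_t crc, uint8_t data)
{
    crc ^= (uint16_t)data << 8;
    for (int i = 0; i < 8; i++)
        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                             : (uint16_t)(crc << 1);
    return crc;
}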
The byte-parallel computational version is nevertheless a hard sell. That's because the code looks nothing like the CRC polynomial it's supposed to implement, and therefore, without some persuasion of its merits, programmers are likely to play safe. That's why, when I faced that issue in some development last year, the programmer referred me to his version and the web page & JavaScript CRC calculator he got it from, rather than convert the C algorithm I was using into C#. The onus was on me to prove correctness, because his code "just worked". In the end, I spent some time validating my algorithm and we found an error in his tables.
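To show what I mean about it not resembling the polynomial, here is one well-known table-free, byte-at-a-time formulation for the same assumed 0x1021 polynomial - a sketch of the kind of routine I'm describing, not the exact code from that project:

Code:

#include <stdint.h>

/* Table-free byte-at-a-time CRC-16/CCITT update (assumed polynomial
   0x1021, MSB-first).  A handful of shifts and XORs replace the
   eight-iteration loop, and nothing in it visibly resembles
   x^16 + x^12 + x^5 + 1. */
uint16_t crc16_byte_update(uint16_t crc, uint8_t data)
{
    uint8_t x = (uint8_t)((crc >> 8) ^ data);
    x ^= (uint8_t)(x >> 4);
    return (uint16_t)((crc << 8)
                      ^ ((uint16_t)x << 12)
                      ^ ((uint16_t)x << 5)
                      ^ x);
}

You can satisfy yourself it's correct by checking it byte-for-byte against the bit-serial version above - which is exactly the kind of validation effort the table copier never had to argue for.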
This, I have a hunch, is why the engineers who first introduced me to the byte-parallel computational version were so keen to do so before I started using the first implementation I saw (which would probably have been a bit-serial version at the time).
As I said earlier, it really is all about trade-offs. There are cases where a correct table-driven version is best. There are cases where bit-serial is best (a hardware implementation springs to mind, where the algorithm is just a pair of shift registers and a few XOR gates).
Anyway, on to the next point:
Quote:
minimize the time between the reception of the last byte of a frame and the creation of the ACK/NAK. Therefore the best solution is to incrementally calculate the CRC for every byte received
Agreed, and in a 4MHz system at 230.4Kbaud you have about 170 cycles per byte to compute the CRC (78 cycles spare), which the bit-serial algorithm cannot manage, but the byte-parallel one can. Thus using it we'd improve a 340+200 = 540 cycle/byte algorithm into a 170 cycle/byte version for the cost of 16 bytes of code - over 3x faster. The table-driven one would of course be faster still; it could probably cope with 460.8Kbaud (85-34 = 51 cycles spare).
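For anyone checking the arithmetic, the numbers work out if you assume 10-bit UART frames (start + 8 data + stop):

4,000,000 cycles/s / (230,400 baud / 10 bits per byte) ≈ 173 cycles per byte at 230.4Kbaud
4,000,000 cycles/s / (460,800 baud / 10 bits per byte) ≈ 87 cycles per byte at 460.8Kbaud

which is roughly where the 170- and 85-cycle budgets above come from.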
Cheers julz