I doubt that hardcoding is the way to go. Calculating the next Fibonacci number is a single add of CPU registers. Looking things up in a table seems like it would actually take longer.
Incorrect.
You've omitted all the steps necessary to get the next Fibonacci number loaded into a register. (ironically, this is what hardcoding accomplishes, btw)
Your way: the math operation requires loading value 1, a math op to calculate the next Fibonacci value, saving value 2, and a compare operation against the index number.
My way: a hardcoded table gets turned into memory-resident space by the compiler, never gets swapped, and requires only a lookup and a compare. The pointer just advances one entry to the next stored number, then a compare of the two registers.
It's not just a little faster; it's probably 4x faster.
Admittedly, I'm a little rusty on the inner workings of the CPU, but the table would be in a cache and not in registers, correct? So the lookup operation you refer to means the value would have to be fetched from the cache into a register before the compare, which AFAIK is slower than a simple add and store of the previous two Fibonacci numbers happening entirely within registers.
https://www.gktoday.in/topic/which-is-the-fastest-computer-memory-register-or-cache-3/
Negative.
Table would be paged in at first operation, and remain paged in, since it would be sequentially referenced by the compare op.
Static lookups are always faster than any math operation.
In low-level languages (like C, for example) there are system calls like mlock() to keep memory from paging, and the inline keyword https://www.geeksforgeeks.org/inline-function-in-c/ to hint that a function's body be expanded at the call site. By never allowing them to swap you can try to approach the efficiency of hardcoding, but you still will not quite get there. Hardcoding is faster as long as the table is referenced frequently enough to prevent swap, which by definition it would be.