Admittedly, I'm a little rusty on the inner workings of the CPU, but the table would be in a cache and not in registers, correct? So the lookup operation you refer to means the value would have to be fetched from the cache into a register before the compare, which AFAIK is slower than a simple add-and-store of the previous two Fibonacci numbers happening entirely within registers.
https://www.gktoday.in/topic/which-is-the-fastest-computer-memory-register-or-cache-3/
Negative.
Table would be paged in at first operation, and remain paged in, since it would be sequentially referenced by the compare op.
Static lookups are always faster than any math operation.
In low-level languages (like C, for example) there are system calls like mlock() to keep memory from paging, and the inline keyword https://www.geeksforgeeks.org/inline-function-in-c/ to remove call overhead for compiled functions. By never allowing the table to swap out you can try to approach the efficiency of hardcoding, though you still won't quite get there. Hardcoding is faster as long as the table is referenced frequently enough to prevent swap, which by definition it would be.
Paged into where? What are you talking about? Do you know what a register is?
Obviously I do. You're completely missing the point.
Ask yourself this question:
How do I get a Fibonacci number to load into a register?
i.e., list all the steps, at the assembler level, needed to calculate a Fibonacci number before you can load it into the register.
Compare that list of steps to a lookup into a resident, indexed, hard-coded table which already contains the number.