
©2025 Poal.co


(post is archived)

[–] 1 pt (edited )

Negative.

The table would be paged in at the first access and remain resident, since it would be referenced repeatedly by the compare op.

Static lookups are always faster than any math operation.

In low-level languages (like C, for example) there are system calls like mlock() to keep memory from being paged out, and the inline keyword (https://www.geeksforgeeks.org/inline-function-in-c/) to avoid call overhead for compiled functions. By never allowing the table to be swapped out you can try to achieve the same efficiency as hardcoding, but you still won't quite get there. Hardcoding is faster as long as the table is referenced frequently enough to stay resident, which by definition it would be.

[–] 0 pt

Paged into where? What are you talking about? Do you know what a register is?

[–] 1 pt (edited )

Obviously I do. You're completely missing the point.

Ask yourself this question:

How do I get a Fibonacci number loaded into a register?

i.e., list all the steps, at the assembler level, needed to calculate a Fibonacci number before you can load it into the register.

Compare that list of steps to a lookup into a resident, indexed, hard-coded table which already contains the number.

[–] 0 pt

During initialization, you load 1 and 1 as immediates into two registers to represent the last two Fibonacci values.

Then you'd just do an ADD, storing the result in the first register, a SUB to subtract the two (recovering the previous value into the second register), and an ADD to advance the index register holding the memory location of the current word. Then a check to see if index > end. For the next value, the second register already holds the previous Fibonacci number, so you just repeat.

Pulling data from memory, regardless of whether it's in a cache or not, just to eliminate that one ADD operation is going to be slower. Or are you saying that pulling data from a cache in one load is the same speed as, or faster than, two ops using registers?