On a computer, a variable is like a container that holds a value. When working with numbers, we need to choose between two types: integers (whole numbers) and floating point numbers (numbers with decimal points). This matters because computers handle these types differently.
Integer Variables:
An integer is a whole number like 1, 42, or -10. Integers are well suited to counting things such as people or items: they are simple, use less memory, and are faster for the computer to process.
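As a minimal sketch (in Python, chosen here only for illustration), counting with an integer variable stays exact no matter how many times you increment it:

```python
# Counting with an integer variable: every increment is exact.
vote_count = 0
for _ in range(100):
    vote_count += 1  # integer addition never loses precision
print(vote_count)  # 100, exactly
```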
Floating Point Variables:
A floating point number includes decimals, like 1.5, 3.14, or 0.0001. These are used when precision is important—such as in measurements or scientific calculations. But they take more memory and are a bit slower for the computer to handle.
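The imprecision mentioned above is easy to demonstrate. A classic example (shown here in Python, though the same behavior appears in any language using standard IEEE 754 floats) is that 0.1 and 0.2 cannot be represented exactly in binary, so their sum is not exactly 0.3:

```python
# Decimal fractions like 0.1 have no exact binary representation,
# so small rounding errors creep into arithmetic.
result = 0.1 + 0.2
print(result)         # 0.30000000000000004
print(result == 0.3)  # False
```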
Why the Difference Matters:
Having both types lets computers be efficient. Use integers when you only need whole numbers, such as counting votes, users, or items, and use floating point numbers when you need decimals.
Why Not Use Floating Point for Counting votes:
Using floating point numbers to count things like votes is a bad idea. They can store numbers imprecisely, sometimes showing values like 99.999999 instead of 100. This tiny error can cause big problems in databases, where accuracy matters. That’s why counts should always use integers—they’re not just faster, they’re more reliable.
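A short sketch (again in Python, for illustration) shows how those tiny errors accumulate: adding 0.1 ten thousand times with floats drifts away from the true total, while the equivalent integer count stays exact.

```python
# Floating point: repeated addition of 0.1 accumulates rounding error.
total_float = 0.0
for _ in range(10000):
    total_float += 0.1
print(total_float)            # close to 1000, but not exactly 1000.0
print(total_float == 1000.0)  # False

# Integer: counting the same events in whole units stays exact.
total_int = 0
for _ in range(10000):
    total_int += 1
print(total_int)  # 10000, exactly
```

This is why databases and election tallies use integer types for counts: an integer total is either right or wrong, never "almost right."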
Ok, thanks for explaining it to me.
You don't use floating point to count integers unless you're planning to alter the counts by percentages.
First, I would like to know where you heard/read Diebold/Premier Election Solutions/Dominion Voting was using floating points.
Secondly, would there be any legitimate reason to use floating points as opposed to integers when calculating whole numbers, like votes during an election?
Thanks for explaining shit to my retarded ass
Legit reason : Those cracka ass Republicans only get 0.6 for their racist vote, while the remainder goes to the next D (dei) vote to give them 1.4 votes to level the playing field. Take the total and report.