Removing floats seems like a way to avoid the work of understanding and handling them. A lot of OS interfaces don't use floats anyway. I'd think 3D would suffer the most from the lack of floats in function calls.
You'd certainly lose in tasks that require precise results.
Advantages and Disadvantages of Floating-Point Numbers
Floating-point numbers have two advantages over integers. First, they can represent values between integers. Second, because of the scaling factor, they can represent a much greater range of values. On the other hand, floating point operations usually are slightly slower than integer operations, and you can lose precision.
https://www.oreilly.com/library/view/c-primer-plus/9780132781145/ch03lev2sec13.html
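A tiny C illustration of both points from that excerpt (values between integers, and a far greater range than a 32-bit int); just a sketch, nothing Zenith-specific:

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void) {
        /* floats can represent values between integers */
        float ratio = 2.0f / 3.0f;
        printf("%f\n", ratio);       /* ~0.666667 */

        /* and they cover a much wider range than a 32-bit int */
        printf("%d\n", INT_MAX);     /* 2147483647, about 2.1e9 */
        printf("%e\n", FLT_MAX);     /* about 3.4e38 */
        return 0;
    }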
precise results
By their nature, floating-point numbers are approximations.
a Float is still more precise than an Int.
A float has a greater magnitude, but beyond the maximum significant digits of any specific float type it's only an approximation; an int is always exact within its limits.
Doubles are more precise than ints; floats are not.
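Here's a small C sketch of where that kicks in: a 32-bit float stops being exact for whole numbers past 2^24, while a 64-bit int stays exact all the way to its limit:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 32-bit float has a 24-bit significand: not every integer
           above 2^24 can be represented exactly. */
        float f = 16777216.0f;          /* 2^24 */
        printf("%.1f\n", f + 1.0f);     /* prints 16777216.0 -- the +1 is lost */

        /* a 64-bit int is exact up to 2^63 - 1 */
        int64_t i = 16777216;
        printf("%lld\n", (long long)(i + 1));   /* prints 16777217 */
        return 0;
    }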
Usually when you want the max precision and your values stay in a bounded range, you use fixed point. Gives you the biggest bang for your bits.
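For example, a minimal Q16.16 fixed-point sketch in C; the type and helper names are made up just to show the idea:

    #include <stdio.h>
    #include <stdint.h>

    /* Q16.16 fixed point: 16 integer bits, 16 fractional bits.
       A value is stored as value * 65536 in a plain int32_t. */
    typedef int32_t q16_16;

    #define Q_ONE (1 << 16)

    static q16_16 q_from_double(double d) { return (q16_16)(d * Q_ONE); }
    static double q_to_double(q16_16 q)   { return (double)q / Q_ONE; }

    static q16_16 q_mul(q16_16 a, q16_16 b) {
        /* widen to 64 bits so the intermediate product can't overflow */
        return (q16_16)(((int64_t)a * b) >> 16);
    }

    int main(void) {
        q16_16 price = q_from_double(19.99);
        q16_16 qty   = q_from_double(3.0);
        printf("total ~ %f\n", q_to_double(q_mul(price, qty)));
        return 0;
    }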
It looks like some people are converting code from using floats to ints for optimization: http://justinparrtech.com/JustinParr-Tech/programming-tip-turn-floating-point-operations-in-to-integer-operations/
But to me that seems like something you'd do after the fact, once all the code is complete and you find it needs speeding up.
I'd hate to write something floating point heavy and have to do that kind of twiddling for every bit of it, with no exception to fall back on.
Fair enough. I think there are good chunks of code that don't require precision and could be sped up that way.
On another topic, what's your opinion on the new crApple M1 architecture?
I had to look it up, but I'd say I'm positive about it. I have very little assembly experience, but from what I've seen I really like ARM's instruction set. Having cores of different speeds opens up more options for developers. The neural network core is kind of cool, if a bit scary. When I say I'm for AI, what I mean is I'm for democratizing AI: I want the average person to understand it and use it. On a highly locked-down device (any Apple device), I'm not 100% certain it will be used for good.
So yeah, I think their architecture is interesting; I just wish it were running on an open system.
It creates job security.
Floating point is more efficient. It has accuracy issues with very large scale differences (e.g. a space sim calculating the physics of a flying toaster vis-à-vis Jupiter), but it's fine for objects of similar size (e.g. calculating the physics of an X-Wing vs a TIE Fighter).
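A quick C illustration of that scale problem with single precision (the numbers are just for demonstration):

    #include <stdio.h>

    int main(void) {
        /* Jupiter-scale position vs a toaster-scale offset */
        float jupiter_orbit_m = 7.785e11f;   /* roughly Jupiter's distance from the Sun, metres */
        float toaster_move_m  = 0.25f;       /* a 25 cm nudge */

        /* the nudge is far below float's ~7 significant digits at this
           magnitude, so it vanishes entirely */
        printf("%.1f\n", jupiter_orbit_m + toaster_move_m);  /* same as jupiter_orbit_m */

        /* two fighter-sized values of similar scale behave fine */
        float xwing_m = 12.5f, tie_m = 6.3f;
        printf("%.1f\n", xwing_m + tie_m);   /* 18.8 */
        return 0;
    }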
The only one I can think of is this: since the only floating-point type in Zenith is 64-bit, and 64-bit int is the standard int as well, you could dedicate more bits to precision and ask the programmer to keep track of scale, either by context or with another variable. That would make the highest-precision floating-point math of any system Zenith's default methodology. It seems like a stretch, but it's the best rational reason I've got for doing it.
The other is that by having the programmer do it themselves, they get control over the number of bits for precision and exponent through their choice of I8/I16/I32/I64/U8/U16/U32/U64 pairings. That would give you 64 different equivalent floating-point types, each with its own advantages.
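Here's a rough C sketch of what one such pairing might look like; the struct and names are hypothetical, just to show the idea of an I64 mantissa with an I16 exponent that the programmer tracks explicitly:

    #include <stdio.h>
    #include <stdint.h>

    /* One of the 64 possible pairings: I64 mantissa + I16 exponent.
       The value represented is mant * 2^exp. */
    typedef struct {
        int64_t mant;   /* full 64 bits of precision */
        int16_t exp;    /* scale, tracked by the programmer */
    } ManualFloat;

    static ManualFloat mf_mul(ManualFloat a, ManualFloat b) {
        /* real code would renormalize the mantissa to avoid overflow;
           this sketch just multiplies mantissas and adds exponents */
        ManualFloat r = { a.mant * b.mant, (int16_t)(a.exp + b.exp) };
        return r;
    }

    int main(void) {
        ManualFloat half   = { 1, -1 };  /* 1 * 2^-1 = 0.5  */
        ManualFloat twelve = { 3,  2 };  /* 3 * 2^2  = 12.0 */
        ManualFloat p = mf_mul(half, twelve);   /* 3 * 2^1 = 6.0 */
        printf("mant=%lld exp=%d\n", (long long)p.mant, p.exp);
        return 0;
    }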
Absolutely. On a lot of chips you don't have to save and restore the floating point registers if you never use them. This saves resources on every context switch, which becomes huge since you do it so much.
Of course, I assume you/they mean getting rid of floating point ops in the kernel, which it shouldn't need anyways. And possibly disallowing them in kernel threads. Disabling it for user space would just limit what you can do in apps.
So in the context of TempleOS/ZenithOS there is no kernel vs userland. Everything is kernel. It's basically as if your kernel had JIT compiling and you could execute C in the shell, at the kernel level, line by line.
Well, in that case you have to stop using floating point entirely to get a performance benefit. Some CPUs support lazy floating point context save/restore, wherein the CPU detects whether a thread has used the floating point registers and sets a flag. When the OS goes to context switch, it can decide whether to save/restore FPU registers based on whether or not the previous/next thread used them. If most of your threads use them, say because shared libraries use them, you may reap little benefit. If you can't detect usage, and you have the FPU enabled, you have to assume they are used. If they are not usable, then you just ignore the FPU entirely.
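For what it's worth, here's a hedged sketch of the lazy scheme; the names and structure are placeholders rather than any real kernel's API (on x86, "disable the FPU" would mean setting CR0.TS and catching the device-not-available trap):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct thread {
        unsigned char fpu_state[512];   /* e.g. an FXSAVE area */
        bool used_fpu;                  /* set the first time this thread traps */
    } thread_t;

    static thread_t *fpu_owner;         /* thread whose registers are live in the FPU */

    /* Stubs for the hardware-specific pieces (hypothetical helpers). */
    static void fpu_save(void *area)          { (void)area; /* FXSAVE on x86 */ }
    static void fpu_restore(const void *area) { (void)area; /* FXRSTOR on x86 */ }
    static void fpu_disable(void)             { /* set CR0.TS so the next FPU insn traps */ }
    static void fpu_enable(void)              { /* clear CR0.TS */ }

    void context_switch(thread_t *prev, thread_t *next)
    {
        (void)prev; (void)next;
        /* Don't touch the FPU here at all: just arrange for the next FPU
           instruction to trap, so the save/restore cost is only paid if
           the incoming thread actually uses floating point. */
        fpu_disable();
        /* ...switch general-purpose registers, stack, address space... */
    }

    /* Invoked by the "device not available" trap when a thread touches
       the FPU while it is disabled. */
    void fpu_trap_handler(thread_t *current)
    {
        if (fpu_owner != current) {
            if (fpu_owner != NULL)
                fpu_save(fpu_owner->fpu_state);   /* stash the previous owner */
            fpu_restore(current->fpu_state);      /* bring in this thread's registers */
            fpu_owner = current;
        }
        current->used_fpu = true;
        fpu_enable();   /* the faulting instruction is retried and now succeeds */
    }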
Btw an option for C programs that depend on float support is FPU emulation. It's costly in terms of performance, but mechanisms for it are well established.