Even a dumbass Calculator is like 10 MB+. And why would it ever need an update, eh?

(post is archived)

[–] 2 pts

Why isn't it instinctual to produce optimized code the first time, rather than it being a process undertaken at the end?

[–] 4 pts

In programming there is a rule followed by all skilled programmers: "Premature optimization is the root of all evil."

Optimizing code is difficult, frequently makes the code much harder to understand, and is itself an extremely common source of bugs. It also takes extra time. You can't optimize everything, and developers usually don't know what actually requires optimization. The 80/20 rule almost universally applies: 80% of your runtime will be spent in 20% of your code, 20% of your code will take 80% of your time to develop, and so on.

The question becomes: which 20% of your code will you optimize? If you pick poorly, you may spend 80% of your time on code which only affects 5% of the runtime overhead. Which means we need to measure rather than guess, because our gut is frequently wrong.

Code optimization, concurrency, and parallelism are part of my core background. This is where profiling comes into play. You have to have the code written before you can profile it. If you can exercise it accurately, you can determine which 20% is taking the time, which tells you where to start digging.
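
For anyone curious what that looks like in practice, here's a minimal sketch using Python's built-in cProfile. slow_part() and fast_part() are made-up stand-ins for real code; the point is that the report shows where the time actually goes instead of you guessing.

```python
import cProfile
import pstats

def slow_part():
    # Stand-in for the 20% of code that eats 80% of the runtime.
    return sum(i * i for i in range(2_000_000))

def fast_part():
    # Stand-in for code that looks suspicious but is actually cheap.
    return [i for i in range(10_000)]

def main():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.runcall(main)

# Print the top 10 entries sorted by cumulative time spent in each function.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```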

Sadly, optimization doesn't always mean tweaking code. It frequently means rewriting or replacing the algorithms used in the current implementation, which in turn can ripple out to every bit of code that depends on the code being optimized.
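
A toy illustration of what "changing the algorithm" means, as opposed to tweaking code (Python, numbers made up for the demo): swapping a list for a set turns each membership test from O(n) into roughly O(1), and no amount of micro-tweaking the list version gets you that.

```python
import timeit

haystack_list = list(range(50_000))
haystack_set = set(haystack_list)       # same data, different data structure
needles = range(0, 50_000, 25)          # 2,000 lookups

# Each "n in haystack_list" scans the list until it finds a match: O(n) per lookup.
list_time = timeit.timeit(lambda: [n in haystack_list for n in needles], number=1)

# Each "n in haystack_set" is a hash lookup: roughly O(1) per lookup.
set_time = timeit.timeit(lambda: [n in haystack_set for n in needles], number=1)

print(f"list membership: {list_time:.4f}s   set membership: {set_time:.6f}s")
```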

Optimization can be expensive. While there is usually a lot of low-hanging fruit, most coders these days don't really know anything about optimization, because it frequently requires low-level knowledge of CPUs, caches, compilers, assembly language, and so on. Only a small fraction of developers knows this stuff anymore. Which means you frequently pull your most experienced developers off other tasks to optimize, further increasing costs.
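
As one example of the kind of low-hanging fruit an experienced developer spots (a sketch, assuming NumPy is installed; exact numbers will vary by machine): a plain Python loop pays interpreter overhead on every element, while a vectorized call does the same arithmetic in compiled code.

```python
import timeit
import numpy as np

data = list(range(1_000_000))
arr = np.arange(1_000_000, dtype=np.int64)

# Same computation (sum of squares), two implementations:
# an interpreted generator loop vs. a single compiled vector operation.
loop_time = timeit.timeit(lambda: sum(x * x for x in data), number=5)
vect_time = timeit.timeit(lambda: int(np.dot(arr, arr)), number=5)

print(f"python loop: {loop_time:.3f}s   numpy: {vect_time:.4f}s")
```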

Which feeds into my previous snarky comment about project management. Historically, your experienced developers had broad control over what they worked on. These days, with agile-for-idiots, everything must be accounted for. As a result, experienced developers spend much of their time fixing other people's fuck ups and bugs, and pulling things together just so the product will run without crashing. Which is why languages like C#, Java, and Python have become so popular: they generally make it harder to crash, though far from impossible.

[–] 1 pt

Great background. I appreciate it.

[–] 2 pts

IMO the biggest factor in optimization in modern software is libraries.

Modern software is often larger in part because of its heavy use of libraries and frameworks, which have become an integral part of software development. Libraries provide pre-written code, functions, and modules that developers can leverage to add functionality, save time, and get more stable and reliable code. The trade-off is that these libraries can be quite extensive, containing many features and resources that a particular program never uses. This bloat accumulates as developers pull in library after library, each possibly carrying superfluous code and data, inflating the overall size of the software.

Moreover, to prioritize rapid development, ease of use, and robustness, modern software tends to bundle diverse functionality through these libraries even when only a subset is required, further expanding file sizes and resource demands. You can see this across everything from desktop programs to mobile apps, where developers prioritize feature-rich, user-friendly environments that pull in comprehensive libraries and frameworks.

So, if it really matters for a specific use case (perhaps the application needs to fit into a very small footprint), work can be done to reduce reliance on libraries and ship only the bare minimum of code; however, this requires more time investment.
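
If you want to see that bloat for yourself, here's a rough sketch (Python 3.8+, standard library only; the output obviously depends on what happens to be installed in your environment) that totals the on-disk size of each installed package and prints the ten biggest:

```python
from importlib.metadata import distributions
from pathlib import Path

sizes = {}
for dist in distributions():
    total = 0
    # dist.files can be None if the installer didn't record file lists.
    for f in (dist.files or []):
        p = Path(dist.locate_file(f))
        if p.is_file():
            total += p.stat().st_size
    sizes[dist.metadata["Name"]] = total

# Ten largest packages, in megabytes.
for name, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{name:30s} {size / 1_048_576:7.1f} MB")
```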

[–] 0 pt

If so many of today's programs rely on libraries, why not share them among programs? I.e. you'd install all the libraries upfront on a new computer so that programs need not ship with their own copies?

[–] 3 pts

While the idea of sharing libraries among programs to reduce individual software size is sound in principle and does get implemented in certain contexts (e.g., shared DLLs in Windows, shared libraries in Unix/Linux), numerous challenges and issues arise with a universal application of this concept.

Version Conflict: Different applications might require different versions of a library due to dependency on particular features or configurations. Having a single shared version could introduce compatibility issues.

Security and Stability: If a shared library gets updated, it might introduce bugs or alter functionality in a way that affects all software relying on it. This can introduce unexpected behavior or vulnerabilities into applications that were stable with a prior version of the library.

Customization: Developers often use customized versions of libraries, which might be modified to suit the specific needs of their application. Sharing such customized libraries centrally could create conflicts and complicate software development and deployment.

Dependency Management: Managing shared libraries, ensuring all are up-to-date and compatible with all installed software, would require a robust and complex dependency management system, which could be challenging to create and maintain.

User Experience: For end users, having to manage, update, and troubleshoot shared libraries can be complex and frustrating, especially for those not versed in technical troubleshooting.

Isolation and Sandboxing: Modern software design often prioritizes isolating applications to enhance security and stability. This means ensuring that the failure or compromise of one component (e.g., a library) does not impact others. Sharing libraries could undermine this isolation.

Despite these challenges, some operating systems and development environments do utilize shared libraries to a certain extent, managing some of these challenges to leverage the benefits of reduced redundancy and resource usage. However, striking the right balance between shared and bundled resources continues to be a complex and nuanced aspect of software design and deployment.
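
To make the version-conflict point concrete, here's a minimal Python sketch (the package name and version numbers are purely illustrative): two applications pin incompatible versions of the same library, so a single shared copy can satisfy at most one of them.

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical requirements recorded by two different applications.
REQUIREMENTS = {
    "app_a": {"somelib": "1.4"},   # app A was tested against the 1.4.x API
    "app_b": {"somelib": "2.0"},   # app B needs the 2.0 API
}

def check(app: str) -> None:
    for pkg, wanted in REQUIREMENTS[app].items():
        try:
            installed = version(pkg)   # version of the single shared copy
        except PackageNotFoundError:
            print(f"{app}: {pkg} is not installed at all")
            continue
        if not installed.startswith(wanted):
            print(f"{app}: wants {pkg} {wanted}.x but {installed} is installed")

check("app_a")
check("app_b")   # with one shared copy, at least one of these checks must fail
```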

[–] 1 pt

Because most computers have at least 8 GB of RAM, and even the suckers stuck on 32-bit and its 4 GB RAM ceiling have an SSD fast enough to swap to. The CPU and the data paths across the motherboard are also fast enough that coders can get lazy and not optimize their code.

The fact that 64-bit systems remove any practical limit on RAM means code optimization doesn't need to happen. The fact that storage is cheap ($14 for a 128GB SSD - https://www.microcenter.com/product/659866/inland-professional-128gb-ssd-3d-tlc-nand-sata-30-6-gbps-25-inch-7mm-internal-solid-state-drive) means code optimization doesn't need to happen. (After all, it's cheaper to have the end user of your software upgrade their system; it's time for a new one anyways, that thing's an antique.)

[–] 1 pt

Computer components can't continue improving forever, barring quantum computing... It stands to reason that once physical limits are reached, programs will be forced to be efficient again, like in the early days.

[–] 1 pt

And they will be so used to not having to optimize that they won't be able to. It's stupid... I hate and loathe bloated, poorly written software. My level of coding doesn't extend far beyond some PowerShell scripts, but I try to make even that optimized.