
©2026 Poal.co

Here is an interesting thought: does this mean that you could probably create things like this with the right model? As in, models that have been trained on code that is hidden behind NDAs and such?

Archive: https://archive.today/nyv2V

From the post:

>My old 2016 MacBook Pro has been collecting dust in a cabinet for some time now. The laptop suffers from a “flexgate” problem, and I don’t have any practical use for it. For quite some time, I’ve been thinking about repurposing it as a guinea pig, to play with FreeBSD — an OS that I’d aspired to play with for a long while, but had never had a real reason to. During the recent holiday season, right after the FreeBSD 15 release, I finally found time to set the laptop up. When doing that, I didn’t plan, or even think, that this might turn into a story about AI coding.

[–] 1 pt

>Does this mean that you could probably create things like this with the right model? As in, models that have been trained on code that is hidden behind NDAs and such?

None of the publicly available LLMs were trained on proprietary source code. Even Microsoft’s own Copilot was not trained on any of its vast collection of internal, working, production code. They know full well these things will leak their training data—that’s what they are designed to do.

[–] 1 pt

I've been looking for info about AI writing drivers. Thanks for posting this!

It seems to me this would be a great way to get away from operating systems that are globohomo junk like windows or Linux distros that get taken over by SJWs & other bad actors.

[–] 1 pt

It would be funny to have it un-woke Linux distros. Have it read a Code of Conduct that says "don't use words like master and slave," then have it go through the entire code base and rename things BACK to master/slave, then submit endless PRs.

[–] 1 pt

But what is the price of convenience? These companies are not benevolent entities. The shortcut here is your data monetized, filed, blacklisted, and documented: a for-profit metadata hell.

[–] 1 pt

Not suggesting it's a good thing. Just that it IS a thing.

The box is open and the curse is released. No going back now.

[–] 1 pt

Aye, fucking cracks me up that an 'AI safety officer' ran the model on her local machine and it deleted all her emails... It's called a VM... we've had 'em for decades now.

[–] 1 pt

Oh right, yeah, I read that one earlier. Apparently she tried to run it against a "test" inbox and it worked fine, so she decided "just go fuck my shit up" and... it did.