
©2025 Poal.co


I was on a ham radio net recently and the topic of AI came up. The other operator I was talking with convinced me to download Ollama (which runs Meta's Llama models, yuck) to at least play with it and see how it works.

Supposedly there are versions of Meta's Llama models that aren't woke, or at least are more politically incorrect, but I didn't have the 500GB of space on the VM I was working with, yet...

But before I move into building and deploying some massive AI into a VM, are there any recommendations for a better non-Facebook AI?


(post is archived)

[–] 3 pts

Ollama with the Dolphin-llama3 model; it's faster and better than llama2-uncensored.
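For anyone who wants to try it, it's two commands. A sketch assuming Ollama is already installed from ollama.com; `dolphin-llama3` is the model tag named above:

```shell
# Fetch and chat with the Dolphin-Llama3 model via Ollama.
MODEL="dolphin-llama3"
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"   # downloads the weights (a few GB at the default quantization)
  ollama run "$MODEL"    # opens an interactive chat; type /bye to exit
else
  echo "ollama not installed; see https://ollama.com"
fi
```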

[–] 1 pt

uncensored

Most of these models (for example, Alpaca, Vicuna, WizardLM, MPT-7B-Chat, Wizard-Vicuna, GPT4-X-Vicuna) have some sort of embedded alignment. For general purposes, this is a good thing. This is what stops the model from doing bad things, like teaching you how to cook meth and make bombs. But what is the nature of this alignment? And, why is it so?

this is a good thing

If cooking meth is your metric for censorship, you're on the wrong path. (That's not aimed at AOU; he's not the developer instituting such things, I think.) That is as easy as using jewgle. I probably have instructions somewhere.

[–] 1 pt

(That's not aimed at AOU; he's not the developer instituting such things, I think.)

That's correct. That's the name they gave the model because it supposedly can tell you things that could be considered illegal depending on the country or state you're living in.

[–] 0 pt

It seems like you've set this up on a machine you have? How easy is it to do? Is it more fun than useful or more useful than fun?

[–] 1 pt

Excellent, thanks!

[–] 1 pt

Depending on how much RAM you have, choose wisely:

8b needs 8–16GB of RAM.

70b needs 64GB or more.
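Those tiers line up with a back-of-the-envelope calculation: at the 4-bit quantization Ollama typically ships, weights take roughly half a byte per parameter, plus headroom for the KV cache and runtime. A rough sketch; the 20% overhead factor is an assumption, not a measured figure:

```python
def estimate_ram_gb(params_billion: float, bits_per_weight: float = 4.0,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate: quantized weights plus ~20% for KV cache/runtime."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"8b  @ 4-bit: ~{estimate_ram_gb(8):.1f} GB")   # ~4.8 GB, fits in 8-16GB with OS headroom
print(f"70b @ 4-bit: ~{estimate_ram_gb(70):.1f} GB")  # ~42.0 GB, hence 64GB or more
```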

[–] 1 pt

Is it all CPU-based or does it offload to the GPU? There are a couple of older workstations I could get my hands on with still-decent CPUs that go up to 128GB of RAM, but sourcing a decent GPU for them would be expensive.
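As I understand it, Ollama uses a supported GPU automatically and falls back to CPU otherwise; `ollama ps` reports where a loaded model actually ended up. A guarded sketch, safe to run even where Ollama isn't installed:

```shell
# Check whether a loaded model runs on CPU or GPU: the PROCESSOR column
# in `ollama ps` shows something like "100% GPU" or "100% CPU".
CHECK_CMD="ollama ps"
if command -v ollama >/dev/null 2>&1; then
  $CHECK_CMD
else
  echo "ollama not installed; cannot check CPU/GPU placement"
fi
```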

[–] 0 pt

How do I know which model was installed if I simply ran the "ollama run dolphin-llama3" command and it fetched the Dolphin model?

Am I even using the right vernacular? I need to skill up on all this, LOL.
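Since `ollama run` pulls a model automatically if it isn't already local, the answer is in the local model list. A sketch assuming the stock Ollama CLI:

```shell
# List locally installed models (name:tag, size, modified date),
# then print details for one of them.
MODEL="dolphin-llama3"
if command -v ollama >/dev/null 2>&1; then
  ollama list            # everything `ollama run`/`ollama pull` has fetched
  ollama show "$MODEL"   # parameter count, quantization, prompt template
else
  echo "ollama not installed"
fi
```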