The $6 million figure should have clued everyone in on the lie about DeepSeek. The model sucks. Doesn't matter if it's the hyped-up small one, deepseek-r1:1.5b, or deepseek-r1:32b; they're both bad. Better to train your own LoRA (I plan to, if I can get hold of some Nvidia 32 GB 5090 Founders Editions), which can be done across multiple GPUs even without memory sharing, since a model's layers can be split between cards. The catch is that training often needs 2-3x as much VRAM as the finished model's size, so quantization becomes necessary, which lowers quality a little, or a lot, depending.
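
To put rough numbers on that VRAM point, here's a back-of-envelope sketch. The assumptions are mine, not from the post: roughly 2 GB per billion parameters in fp16, a 2-3x overhead while training, and 4-bit quantization cutting the inference footprint to about a quarter; the 32B size is just an example.

```bash
# Back-of-envelope VRAM estimate (assumed ratios, not measured numbers).
PARAMS_B=32                                 # example: a 32B-parameter model
WEIGHTS_GB=$((PARAMS_B * 2))                # fp16 weights: ~2 GB per billion params
echo "fp16 weights:                ~${WEIGHTS_GB} GB"
echo "training headroom (2-3x):    ~$((WEIGHTS_GB * 2))-$((WEIGHTS_GB * 3)) GB"
echo "4-bit quantized (inference): ~$((WEIGHTS_GB / 4)) GB"
nvidia-smi --query-gpu=name,memory.total --format=csv   # what the cards actually have
```

On those numbers, even a couple of 32 GB cards only cover the quantized case comfortably, which is why the post treats quantization as unavoidable.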

THE NEW CHINESE MODEL IS TRASH AND A LIE. /rant

Oh, I fixed my problem. Want to know the funny thing? It was a bash shell function I'd forgotten about that was missing the -it flags to set interactivity. Docker is still a pain; this wouldn't have been an issue if Docker didn't force me to write docker exec -it ollama ollama "$@" rather than just ollama "$@".
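
For reference, this is the kind of wrapper that caused it (a minimal sketch; the function name and ~/.bashrc location are assumptions, the -it fix is the point from the post):

```bash
# Hypothetical wrapper in ~/.bashrc. Without -it, docker exec attaches no TTY
# and no stdin, so ollama's interactive prompt never shows up.
ollama() {
    docker exec -it ollama ollama "$@"
}
```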

(post is archived)

[–] 2 pts

Note to your future self: Don't try to run AI models in Docker.
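
If you want the native route instead (a sketch, assuming Linux and ollama's official install script; the model tag is just the one from the post above):

```bash
# Install ollama natively and run a model directly, no docker exec wrapper needed.
curl -fsSL https://ollama.com/install.sh | sh
ollama run deepseek-r1:32b
```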

[–] 0 pt

Standardize it! REEEEEEE!