Standardize it! REEEEEEE!
The $6 million figure should have clued all of you in on the lie about DeepSeek. The model sucks. Doesn't matter if it's the HOLY YAS COOL SMALL MODEL!!! deepseek-r1:1.5b or deepseek-r1:32b, they're both terrible. Train your own LoRA instead (I plan to, if I can get some NVIDIA 32 GB 5090 Founders Editions). That can be done across multiple GPUs even without memory sharing, because a model's layers can be split between cards. The catch is that training often needs 2-3x as much VRAM as the finished model actually ends up being in size, so quantization becomes necessary, which lowers the quality just a bit, or a lot, depending.
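As a rough illustration of that 2-3x rule of thumb, here is a back-of-the-envelope estimate (my own ballpark arithmetic, not a measured benchmark) of what a 32B-parameter model's weights need at different precisions:

```python
def model_vram_gb(params_billions, bytes_per_param):
    """Approximate memory for model weights alone, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Weights for a 32B-parameter model:
print(model_vram_gb(32, 2))    # fp16 (2 bytes/param): 64.0 GB
print(model_vram_gb(32, 0.5))  # 4-bit quantized (0.5 bytes/param): 16.0 GB

# Gradients, optimizer state, and activations are what drive the rough
# 2-3x training multiplier on top of the weights themselves:
print(model_vram_gb(32, 2) * 3)  # fp16 training ballpark: 192.0 GB
```

The quantized figure is why a 16 GB consumer card can run inference on a model whose fp16 weights alone would never fit, and the 3x figure is why training the same model is a different hardware class entirely.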
THE NEW MODEL IS TRASH AND A LIE. /rant
Oh, I fixed my problem. Want to know the funny thing? It was because of a bash shell function I had forgotten about, and it was missing the -it flags that enable interactivity. This wouldn't have been an issue if Docker didn't force me to write:
docker exec -it ollama ollama "$@"
rather than:
ollama "$@"
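For anyone hitting the same thing, a sketch of the corrected wrapper function (assuming the container is named ollama, as in the commands above):

```shell
# Wrapper so `ollama ...` on the host runs inside the container.
# -i keeps stdin open and -t allocates a TTY; without them,
# interactive commands like `ollama run` appear to hang.
ollama() {
    docker exec -it ollama ollama "$@"
}
```

Dropping this in ~/.bashrc is exactly the kind of thing you forget about six months later, which is how the original bug happened.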