
©2026 Poal.co


Archive: https://archive.today/ScjY7

From the post:

>I’m running Proxmox 9 on an Intel i5-12400 with an RTX 3050 8GB. As AI/LLM rigs go, it’s not that impressive, but it’s enough to play with smaller models for basic text generation, rudimentary code completion, and many embedding models, without sending my data to third-party APIs. And running vLLM in an LXC allows me to share that one GPU with other services (e.g. Immich, Jellyfin, Frigate).
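Sharing one GPU across several LXC containers, as the post describes, is usually done by bind-mounting the host's NVIDIA device nodes into each container rather than passing the PCI device through exclusively. A minimal sketch of the relevant Proxmox container config, assuming the NVIDIA driver is installed on the host and that the device major numbers (195, 509 here) match your system — check with `ls -l /dev/nvidia*`:

```
# /etc/pve/lxc/<CTID>.conf — illustrative fragment; device majors vary by host/driver
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```

Because the containers share the host kernel and driver, every container sees the same physical GPU, so vLLM, Jellyfin, and Frigate can all use it concurrently (subject to VRAM; on an 8GB card, capping vLLM with `--gpu-memory-utilization` leaves headroom for the others).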

