

I have been working on something similar. Nothing good enough to show off at this point, but if I get it where I want it, maybe I will write up a DIY post in the future.

Archive: https://archive.today/JGMYN

From the post:

>Over the last two years, I’ve experimented with various Cloud AI models (ChatGPT, Claude, Gemini). The results were satisfactory across the board, partly because the task’s complexity is relatively low. The prompt and tool definitions total about 2,300 tokens. My switching between models was primarily driven by the search for better pricing. The more useful the assistant becomes, the more we use it—and obviously, higher usage leads to higher costs. In some months, API expenses exceeded €12. This is largely because the bot is autonomous; it doesn’t just wait for user input—it acts proactively. While this proactivity is a crucial feature, it translates into significantly higher token consumption.
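As a rough sanity check on the cost reasoning in the quote: only the ~2,300-token prompt/tool overhead comes from the post itself; the call rate, average output length, and per-million-token prices below are hypothetical assumptions (real prices vary widely by model and provider). A back-of-envelope estimate might look like:

```python
# Rough monthly-cost estimate for a proactive assistant, illustrating why
# autonomous polling (rather than waiting for user input) drives token spend.
# Only PROMPT_TOKENS is taken from the post; everything else is assumed.

PROMPT_TOKENS = 2300        # prompt + tool definitions (figure from the post)
OUTPUT_TOKENS = 300         # assumption: average response length
CALLS_PER_DAY = 96          # assumption: proactive bot checks in every 15 min
DAYS = 30

# Assumed prices in EUR per million tokens; output tokens usually cost more.
INPUT_PRICE_PER_M = 1.00
OUTPUT_PRICE_PER_M = 4.00

def monthly_cost_eur() -> float:
    calls = CALLS_PER_DAY * DAYS
    input_cost = calls * PROMPT_TOKENS / 1e6 * INPUT_PRICE_PER_M
    output_cost = calls * OUTPUT_TOKENS / 1e6 * OUTPUT_PRICE_PER_M
    return input_cost + output_cost

print(f"~\u20ac{monthly_cost_eur():.2f}/month")
```

Under these made-up assumptions the estimate lands around €10/month, the same order of magnitude as the >€12 the author reports, and it shows how the fixed 2,300-token prompt resent on every proactive call dominates the bill.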
