Two and a half years ago I was one of the earliest users of ChatGPT. Like most others, I was amazed at what it could do, but right away I started dreaming up potential use cases that it was in no way ready to support. As I started testing ideas, the top of my wishlist read, “Run an LLM at home.” Not for any particular reason at the time, other than my love of self-hosting anything I could reasonably run locally.
Fast-forward to today, and much has changed. Dozens of open-source models have been developed and released to meet a variety of needs, some rivaling all but the best commercial offerings on the market. Over the past year I have tested various models on local hardware, learning their limitations as well as their advantages. I’ve also had a ChatGPT Plus subscription for about two years, and while my LLM use has centered on OpenAI models, I’ve recently found that their generation quality can mostly be matched with the right combination of open-source models (and some smart system prompting), at least for my needs. It also doesn’t hurt that locally run models don’t call home to report on your conversations.
The only problem was that the turnkey options for running models locally are fairly limited, and even more so for interacting with them. Ollama is a great way to easily download and run open-source models, but its UI is nonexistent, requiring users to summon a model from the command line. That’s fine when you’re working in a terminal, but for everyday use a suitable replacement for a commercial LLM experience would have to include a GUI with similar features.
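For anyone curious what summoning a model from the command line looks like, `ollama run llama3` drops you into a bare chat prompt, and the same daemon also exposes a small HTTP API that any GUI could build on. Here’s a minimal sketch of querying it from Python, assuming Ollama’s default endpoint on localhost:11434 and an already-pulled model (the `llama3` tag is just an example, not a recommendation):

```python
import json
import urllib.request

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a locally running Ollama daemon."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Why would someone self-host an LLM?"))
```

That API is exactly what makes a third-party GUI feasible: the frontend just needs to manage conversations and forward them to the local daemon.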
At first, what I was hoping to find didn’t exist. The small number of projects in development that even vaguely resembled such an interface were either stagnant or very light on features, so I set the idea aside for a while to focus on more important work. It wasn’t until just after this past Christmas that I searched again and found a new project that didn’t just meet all of my requirements; it greatly exceeded my expectations. A couple of hours of testing was all it took to decide I was ready to move to a fully self-hosted solution for everyday LLM use.
And so, with a slightly heavier wallet in my pocket, I bid my OpenAI subscription farewell, and so far I haven’t regretted it one bit. I’ll follow up soon on the tech stack I’m using, once I’ve had time to flesh out more of its features and determine how well they work.