Playing with Local LLMs

After experimenting with the OpenAI, Claude, and Gemini Pro APIs, I am now deep into playing with locally hosted large language models like Llama 2 and Mistral. Initially, I thought this wouldn’t be an exciting experiment, but I was wrong… very wrong, actually. For the past few days, I have been running these LLMs on my local machine and trying to figure out what they are best suited for.

In fact, I was able to pipe outputs from local LLMs straight into a Google Sheets file. I have even recorded a detailed video about it that you can watch here:

I am running these LLMs via Ollama because it makes it extremely easy to install and manage multiple popular models at the same time. I have put all the code from my experiment into this GitHub repository (which, of course, you can access for free).
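
To give you an idea of how simple the pipeline is, here is a rough sketch of querying a local model through Ollama’s REST API and appending the answer to a Google Sheet. The sheet name, credentials file, and prompt are just placeholders, and this is a simplified version, not the exact code from the repository:

```python
import gspread
import requests

# Ask a local model through Ollama's REST API (it listens on port 11434 by default)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "Summarize why running LLMs locally is useful.",
        "stream": False,
    },
)
answer = resp.json()["response"]

# Append the model's output as a new row in a Google Sheet
# (assumes a service-account credentials file and a sheet named "LLM Outputs")
gc = gspread.service_account(filename="credentials.json")
sheet = gc.open("LLM Outputs").sheet1
sheet.append_row(["mistral", answer])
```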

Apart from this, I am still looking for better use cases for these large language models. I will also be experimenting with local speech-to-text transcription models like Insanely Fast Whisper. Am I starting to transcribe podcasts? No, but I will definitely experiment with some interesting use cases.
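
From what I have seen, Insanely Fast Whisper builds on the Hugging Face transformers Whisper pipeline, so a bare-bones transcription script looks roughly like this (the audio file name is a placeholder, and you will want to adjust the model and device for your hardware):

```python
import torch
from transformers import pipeline

# Whisper large-v3 via the standard transformers ASR pipeline;
# swap in a smaller checkpoint (e.g. "openai/whisper-small") for CPU-only machines
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch.float16,  # half precision assumes a GPU
    device="cuda:0",            # use device="cpu" (and full precision) otherwise
)

# "podcast.mp3" is a placeholder audio file; chunking long audio and
# batching the chunks is where most of the speed-up comes from
result = pipe("podcast.mp3", chunk_length_s=30, batch_size=8, return_timestamps=True)
print(result["text"])
```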

Let’s see!

