Using Large Language Models Locally with R

When it comes to Large Language Models (LLMs), there are moments when I ask myself: Do I really need to send my data halfway across the globe to OpenAI just to get a summary? With Ollama, there is now a de facto standard for running models like Llama 3 or Mistral locally. And the best part: these models can be controlled directly from R. This not only saves money but also solves the data privacy problem quite elegantly. This article looks at how best to do this in R. Spoiler: there isn't just one way, but (at least) two very good packages with completely different philosophies.