Ollama: running Large Language Models locally

Ollama is a tool for running Large Language Models locally, without needing a cloud service. Its usage is similar to Docker's, but it is designed specifically for LLMs. You can use it as an interactive shell, through its REST API, or from a Python library.
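As a quick sketch of the REST API route, the snippet below builds the JSON payload that Ollama's `POST /api/generate` endpoint expects on its default port (11434). The model name `llama2` and the prompt are placeholders; sending the request requires a running Ollama server, so that part is left commented out.

```python
import json

# Payload for Ollama's generate endpoint:
#   POST http://localhost:11434/api/generate
# "stream": False asks for a single JSON response instead of a token stream.
payload = {
    "model": "llama2",            # placeholder: any model you have pulled
    "prompt": "Why is the sky blue?",
    "stream": False,
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The interactive shell is even simpler: `ollama run llama2` starts a chat session in the terminal with the same model.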