iA Writer recently introduced Authorship to help writers keep track of external contributors. The feature is brilliant, but its implementation creates issues for static site generators. I implemented a workaround and wrote some feedback for iA.
Being able to run a Large Language Model locally also means being able to use existing models (fine-tuned for coding) to build a self-hosted replacement for GitHub Copilot. In this post I will talk about my personal experience.
Ollama is a tool to run Large Language Models locally, without needing a cloud service. Its usage is similar to Docker, but it's specifically designed for LLMs. You can use it as an interactive shell, through its REST API, or from its Python library.
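As a minimal sketch of the REST API route, the snippet below sends a single prompt to a local Ollama server. It assumes Ollama is running on its default port (11434) and that a coding model, here `codellama` as an example, has already been pulled:

```python
# Minimal sketch: querying a local Ollama server via its REST API.
# Assumes Ollama is running on the default port (11434) and that
# a model has been pulled beforehand, e.g. `ollama pull codellama`.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",   # any locally pulled model works here
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,        # ask for one complete JSON response
    },
)
response.raise_for_status()
print(response.json()["response"])  # the generated completion
```

The same model can be used interactively with `ollama run codellama`, which is handy for quick experiments before wiring the API into an editor integration.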