Customize, run and save LLMs using Ollama and the Modelfile
Towards Data Science
In this article, I'll show you how to use the Modelfile in Ollama to change how an existing LLM (Llama2) behaves when you interact with it. I'll also show you how to save your newly customized model to your personal namespace on the Ollama server.
I know it can get a bit confusing with all the different "llamas" flying around. Just remember: Ollama is the company that lets you download and locally run many different LLMs, while Llama2 is a specific LLM created by Meta, the owner of Facebook. Apart from this relationship, they are not connected in any other way.
If you've never heard of Ollama before, I recommend that you check out my article below, where I go into depth on what Ollama is and how to install it on your system.
What is a Modelfile?
In Ollama, a Modelfile is a configuration file that defines the blueprint used to create and share models with Ollama.
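As a minimal sketch of what such a blueprint looks like (assuming the Llama2 model has already been pulled locally; the persona and parameter values here are just illustrative choices), a Modelfile might contain:

```
# Build on top of the base Llama2 model
FROM llama2

# Lower temperature for more focused, less creative answers
PARAMETER temperature 0.3

# Give the model a persistent persona via a system prompt
SYSTEM You are a concise assistant that answers in plain English.
```

You would then build and run a model from this file with `ollama create my-model -f Modelfile` followed by `ollama run my-model`, where `my-model` is whatever name you choose for your customized model.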