Ollama
How to Integrate Ollama with Jan
Ollama provides large language models that you can run locally. There are two methods to integrate Ollama with Jan:
- Integrate the Ollama server with Jan.
- Migrate the downloaded model from Ollama to Jan.
To integrate Ollama with Jan, follow the steps below:
note
In this tutorial, we'll show how to integrate Ollama with Jan using the first method. We will use the llama2 model as an example.
Step 1: Start the Ollama Server
- Choose your model from the Ollama library.
- Run your model with this command:
ollama run <model-name>
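For the llama2 model used in this tutorial, the command is:
ollama run llama2
If the model is not yet on your machine, Ollama downloads it first; subsequent runs start it immediately.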
- According to the Ollama documentation on OpenAI compatibility, you can connect to the Ollama server at http://localhost:11434/v1/chat/completions. To do this, modify the openai.json file in the ~/jan/engines folder to include the Ollama server's full web address:
~/jan/engines/openai.json
{
  "full_url": "http://localhost:11434/v1/chat/completions"
}
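Before configuring Jan, you can verify that the Ollama server responds on its OpenAI-compatible endpoint. This is an optional sanity check; the prompt text is just an example:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'
A JSON response in the OpenAI chat-completions format confirms the server is running.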
Step 2: Model Configuration
- Navigate to the ~/jan/models folder.
- Create a folder named (ollama-modelname), for example, llama2.
- Create a model.json file inside the folder with the following configurations:
  - Set the id property to the Ollama model name.
  - Set the format property to api.
  - Set the engine property to openai.
  - Set the state property to ready.
~/jan/models/llama2/model.json
{
  "sources": [
    {
      "filename": "llama2",
      "url": "https://ollama.com/library/llama2"
    }
  ],
  "id": "llama2",
  "object": "model",
  "name": "Ollama - Llama2",
  "version": "1.0",
  "description": "Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters.",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Meta",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}
note
For more details regarding the model.json settings and parameters fields, please see here.
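The id in model.json must match the model name that Ollama reports. If you are unsure of the exact name, you can list the models installed locally:
ollama list
The names shown (which may include a tag such as llama2:latest) are the identifiers Ollama recognizes.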
Step 3: Start the Model
- Restart Jan and navigate to the Hub.
- Locate your model and click the Use button.