Remote Server Integration

This guide shows you how to configure Jan as a client and point it at any remote or local (self-hosted) API server.

OpenAI Platform Configuration

1. Create a Model JSON

  1. In ~/jan/models, create a folder named gpt-3.5-turbo-16k.

  2. In this folder, add a model.json file with the following configuration: id matching the folder name, format set to api, engine set to openai, and state set to ready.

~/jan/models/gpt-3.5-turbo-16k/model.json
{
  "sources": [
    {
      "filename": "openai",
      "url": "https://openai.com"
    }
  ],
  "id": "gpt-3.5-turbo-16k",
  "object": "model",
  "name": "OpenAI GPT 3.5 Turbo 16k",
  "version": "1.0",
  "description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}

model.json

The model.json file is used to set up your local models.

note
  • If you've set up your model's configuration in nitro.json, note that the settings in model.json can override it.
  • When using OpenAI models such as GPT-3.5 and GPT-4, you can keep the default settings in the model.json file.

There are two important fields in model.json that you need to set up:

Settings

This field is where you set your engine configurations. There are two important fields to define for your local models:

Term            | Description
ctx_len         | Defined based on the model's context size.
prompt_template | Defined based on the model's trained template (e.g., ChatML, Alpaca).

To set up the prompt_template based on your model, follow the steps below:

  1. Visit Hugging Face, an open-source machine learning platform.
  2. Find the current model that you're using (e.g., Gemma 7b it).
  3. Review the model card text and identify the prompt template (see the sketch below).
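
For example, here is a minimal sketch of a filled-in settings field for a local model trained on the ChatML template. It assumes Jan's {system_message} and {prompt} placeholders; the ctx_len value is illustrative and should match your model's context window:

"settings": {
  "ctx_len": 4096,
  "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
}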

Parameters

The parameters field holds the adjustable settings that affect how your model operates or processes data. The fields in parameters are typically general and can be the same across models. An example is provided below:

"parameters":{
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"max_tokens": 4096,
"frequency_penalty": 0,
"presence_penalty": 0
}
tip
  • You can find the list of available models in the OpenAI Platform.
  • The id property needs to match the model name in the list.
    • For example, if you want to use GPT-4 Turbo, you must set the id property to gpt-4-1106-preview, as in the sketch below.
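
Putting the tip above together with the earlier example, here is a sketch of a model.json for GPT-4 Turbo. Only the id is required to match OpenAI's model name; the name, description, and metadata values are illustrative assumptions:

~/jan/models/gpt-4-1106-preview/model.json
{
  "sources": [
    {
      "filename": "openai",
      "url": "https://openai.com"
    }
  ],
  "id": "gpt-4-1106-preview",
  "object": "model",
  "name": "OpenAI GPT-4 Turbo",
  "version": "1.0",
  "description": "OpenAI GPT-4 Turbo accessed via the OpenAI API",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}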

2. Configure OpenAI API Keys

  1. Find your API keys in the OpenAI Platform.
  2. Set the OpenAI API keys in the ~/jan/engines/openai.json file.
~/jan/engines/openai.json
{
  "full_url": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-<your key here>"
}

3. Start the Model

Restart Jan and navigate to the Hub. Then select your configured model and start it.

Engines with OAI Compatible Configuration

This section shows you how to configure a client connection to a remote or local server, using Jan's API server running the model mistral-ins-7b-q4 as an example.

note

Currently, you can only connect to one OpenAI-compatible endpoint at a time.

1. Configure a Client Connection

  1. Navigate to the ~/jan/engines folder.
  2. Modify the openai.json file.
note

Please note that currently, the code that supports OpenAI-compatible endpoints only reads the ~/jan/engines/openai.json file; it will not read any other files in this directory.

  3. Configure the full_url property with the endpoint of the server you want to connect to. For example, if you're communicating with Jan's API server, you can configure it as follows:
~/jan/engines/openai.json
{
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions"
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
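
Once connected, Jan sends OpenAI-style chat completion requests to the configured full_url. As a rough sketch of the OpenAI-compatible request body (the field values here are illustrative), a request to the example server above would look like:

{
  "model": "mistral-ins-7b-q4",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true
}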

2. Create a Model JSON

  1. In ~/jan/models, create a folder named mistral-ins-7b-q4.

  2. In this folder, add a model.json file and ensure the following configuration:

  • id matching the folder name.
  • format set to api.
  • engine set to openai.
  • state set to ready.
~/jan/models/mistral-ins-7b-q4/model.json
{
  "sources": [
    {
      "filename": "janai",
      "url": "https://jan.ai"
    }
  ],
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["remote", "awesome"]
  },
  "engine": "openai"
}

3. Start the Model

  1. Restart Jan and navigate to the Hub.
  2. Locate your model and click the Use button.

Assistance and Support

If you have questions or want more preconfigured GGUF models, please join our Discord community for support, updates, and discussions.