Ollama
Overview
Ollama is an open-source platform for running large language models locally or on your own infrastructure. AI Gateway can be configured to use Ollama as a language model provider.
Configuration
To use Ollama, configure it in the Plugin settings page under the AI Gateway section. Ollama requires a base URL pointing to your Ollama instance:
| Field | Description | Example |
|---|---|---|
| Base URL | Required. The URL of your Ollama instance. | `http://your-ollama-host:11434` |
| Headers | Optional. Custom headers, e.g. for authentication with a gateway. | Key: `X-Custom-Header`, Value: `custom-value` |
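Independent of the plugin, you can confirm that the base URL and a custom header are correct by sending a request yourself. The sketch below targets Ollama's `/api/tags` endpoint (which lists installed models) with the example header from the table; the host name is a placeholder, and the gateway plugin constructs its own requests internally.

```python
import urllib.request

# Placeholder base URL; replace with your actual Ollama host.
BASE_URL = "http://your-ollama-host:11434"

# Build a request to Ollama's list-models endpoint, attaching the
# same custom header configured in the plugin settings above.
request = urllib.request.Request(
    f"{BASE_URL}/api/tags",
    headers={"X-Custom-Header": "custom-value"},
)

# Sending it requires a running, reachable instance:
# with urllib.request.urlopen(request) as response:
#     print(response.read())
```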
Then set the default language model with your Ollama model name:
Language Model: `ollama|llama2`
Replace `llama2` with the name of the Ollama model you wish to use (e.g., `mistral` or `codellama`).
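The setting presumably separates the provider id from the model name at the `|` character. A minimal sketch of that split, assuming a simple single-delimiter format (the helper name is illustrative, not part of the plugin):

```python
def parse_model_setting(setting: str) -> tuple[str, str]:
    """Split a 'provider|model' setting into its two parts.

    Hypothetical helper for illustration only; the plugin's actual
    parsing logic may differ.
    """
    provider, _, model = setting.partition("|")
    if not provider or not model:
        raise ValueError(f"expected 'provider|model', got {setting!r}")
    return provider, model
```

For example, `parse_model_setting("ollama|mistral")` yields the provider `ollama` and the model name `mistral`.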
Example
In Plugin Settings:

- Base URL: `http://localhost:11434`

In Default Models:

- Language Model: `ollama|mistral`
Ensure your Ollama instance is running and accessible. For more details on available models and setup, refer to the Ollama documentation.
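One way to check this from the machine running the plugin, assuming the default localhost setup from the example above: pull the model referenced in the settings, then query Ollama's `/api/tags` endpoint, which lists the models currently installed.

```shell
# Download the model referenced in the settings (if not already present).
ollama pull mistral

# List installed models; a JSON response confirms the instance is reachable.
curl http://localhost:11434/api/tags
```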