Ollama

Overview

Ollama is an open-source platform for running large language models locally or on your own infrastructure. AI Gateway can be configured to use Ollama as a language model provider.

Configuration

To use Ollama, configure it in the Config Manager under the AI Gateway section. Ollama requires a base URL pointing to your Ollama instance:

```yaml
ai:
  providers:
    ollama:
      baseURL: http://your-ollama-host:11434 # Required: URL to your Ollama instance
      headers: # Optional, e.g. for authentication with a gateway
        - key: X-Custom-Header
          value: custom-value
```
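Each configured header is attached to every request the gateway sends to Ollama. The sketch below builds (but does not send) such a request with Python's standard library, just to show where the header ends up; the host, header name, and value are the placeholders from the example above:

```python
import urllib.request

# Build a request to Ollama's model-listing endpoint (/api/tags),
# carrying the custom header from the configuration above.
# Host and header values are placeholders, not real credentials.
req = urllib.request.Request(
    "http://your-ollama-host:11434/api/tags",
    headers={"X-Custom-Header": "custom-value"},
)
```

Sending the request is left out here because the placeholder host does not resolve; with a real base URL, `urllib.request.urlopen(req)` would perform the call.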

Then set the default language model with your Ollama model name:

```yaml
ai:
  models:
    languageModel: ollama|llama2 # Replace llama2 with your model name
```

Replace llama2 with the name of the Ollama model you wish to use (e.g., mistral or codellama).
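The languageModel value follows a provider|model convention: the provider key before the pipe, the model name after it. A minimal sketch of splitting such an identifier (parse_model_id is a hypothetical helper for illustration, not part of AI Gateway):

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider|model' identifier into its two parts.

    Hypothetical helper illustrating the naming convention; not an
    AI Gateway API. Model names themselves may contain colons
    (e.g. llama2:13b) but not pipes, so a single partition suffices.
    """
    provider, _, model = model_id.partition("|")
    if not model:
        raise ValueError(f"expected 'provider|model', got {model_id!r}")
    return provider, model
```

For example, parse_model_id("ollama|mistral") yields ("ollama", "mistral").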

Example

```yaml
ai:
  providers:
    ollama:
      baseURL: http://localhost:11434
  models:
    languageModel: ollama|mistral
```

Ensure your Ollama instance is running and accessible. For more details on available models and setup, refer to the Ollama documentation.
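One way to confirm the instance is reachable before wiring it into AI Gateway is to query Ollama's /api/tags endpoint, which returns the locally installed models as JSON. A minimal sketch using only Python's standard library (the function name is an illustration, not part of either tool):

```python
import json
import urllib.request
import urllib.error

def ollama_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an Ollama instance answers on /api/tags with valid JSON.

    Illustrative helper; /api/tags is Ollama's model-listing endpoint.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            json.load(resp)  # response body lists installed models
            return True
    except (urllib.error.URLError, ValueError, OSError):
        # Connection refused, DNS failure, timeout, or non-JSON response.
        return False
```

With a running local instance, ollama_reachable("http://localhost:11434") should return True; against a stopped instance it returns False rather than raising.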