How to use Ollama with Promptrak
This guide shows how to connect Promptrak to Ollama so you can run locally hosted Ollama models while managing your prompt workflows from Promptrak.
Prerequisites
- Ollama installed and running (`ollama serve` or the Ollama app)
- At least one model pulled (e.g. `ollama pull llama3.2`); a quick way to verify this is shown below
- Promptrak installed
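
To sanity-check these prerequisites from a terminal, the commands below use the standard Ollama CLI. `llama3.2` is just the example model from the list above; substitute whichever model you plan to use.

```bash
# Confirm the Ollama CLI is installed and on your PATH
ollama --version

# Pull the example model if you don't have it yet
ollama pull llama3.2

# List the models available locally; the model you plan to select
# in Promptrak should appear in this list
ollama list
```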
Steps
- Start Ollama so the API is available (by default at `http://localhost:11434`).
- In Promptrak, add or select a model runner of type "Ollama."
- Configure the Ollama runner in Promptrak with the correct base URL (e.g. `http://localhost:11434`) and any options your setup needs.
- Choose the model in Promptrak that matches an Ollama model you have pulled (e.g. `llama3.2`). The name in Promptrak should match the model name you would use with `ollama run <model>`.
- Run prompts from Promptrak; requests go to Ollama and responses are shown in Promptrak (a sketch of the underlying API call follows this list).
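
Under the hood, a runner like Promptrak talks to Ollama's HTTP API. The exact requests Promptrak sends are not documented here, but you can confirm the Ollama side works on its own with a direct call to Ollama's standard `/api/generate` endpoint; the model name below is the running example, so use one that appears in `ollama list`.

```bash
# Send a one-off, non-streaming generation request straight to Ollama.
# If this returns a completion, the API that Promptrak needs is reachable.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

A JSON reply containing a `response` field means Ollama is serving the model correctly; any remaining problems are then on the Promptrak configuration side.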
For the exact configuration fields and UI, check the Promptrak documentation. If something fails, confirm Ollama is running, the URL/port is correct, and the model name in Promptrak matches an installed Ollama model.
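
Each of those failure points can be checked from a terminal. These commands assume Ollama's default host and port; adjust the URL if you have set `OLLAMA_HOST` or run Ollama on another machine.

```bash
# 1. Is Ollama running? The root endpoint replies "Ollama is running".
curl http://localhost:11434

# 2. Does the model name match? List what Ollama actually has installed
#    and compare it against the name configured in Promptrak.
curl http://localhost:11434/api/tags
# or equivalently:
ollama list
```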