How to use Ollama with Promptrak

Published: 2026-02-01 · #Promptrak

This guide shows how to connect Promptrak to a locally running Ollama instance so you can run Ollama-hosted models while managing your prompt workflows in Promptrak.

Prerequisites

  • Ollama installed and running (ollama serve or the Ollama app)
  • At least one model pulled (e.g. ollama pull llama3.2); you can confirm what is installed with the check after this list
  • Promptrak installed
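
To confirm a model is actually available, list what Ollama has installed locally:

    # Lists locally installed models; the NAME column shows the exact
    # names (e.g. llama3.2:latest) you will reference from Promptrak.
    ollama list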

Steps

  1. Start Ollama so its API is available (it listens on http://localhost:11434 by default); the first check after this list shows how to confirm it is reachable.
  2. In Promptrak, add or select a model runner of type “Ollama.”
  3. Configure the Ollama runner in Promptrak with the correct base URL (e.g. http://localhost:11434) and any options your setup needs.
  4. Choose the model in Promptrak that matches an Ollama model you have pulled (e.g. llama3.2). The name must be exactly the one you would use with ollama run <model>; the second check after this list shows how to list the installed names.
  5. Run prompts from Promptrak; each request goes to Ollama's API and the response is shown in Promptrak. The last sketch after this list shows what such a request looks like.
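
To verify the base URL from steps 1 and 3, request the server root; a running Ollama instance answers with a short status message:

    # A running Ollama server replies "Ollama is running" on its root path.
    curl http://localhost:11434/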
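
For step 4, the names Promptrak must match are the ones Ollama reports. Besides ollama list, you can ask the API directly:

    # Returns JSON describing every installed model; the "name" fields
    # (e.g. "llama3.2:latest") are the values Promptrak needs to match.
    curl http://localhost:11434/api/tags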
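
Step 5 is easier to debug if you know what a request looks like on the wire. Promptrak's internals aren't documented here, but a client of this kind would typically call Ollama's standard generate endpoint, and issuing the same call by hand is a useful sanity check:

    # Send one prompt to the llama3.2 model without streaming,
    # much as an HTTP client such as Promptrak might.
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Say hello in one sentence.",
      "stream": false
    }'

If this call succeeds but Promptrak still fails, the problem is in the Promptrak configuration rather than in Ollama itself.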

For the exact configuration fields and UI, check the Promptrak documentation. If something fails, work through the checks above: confirm Ollama is running, that the base URL and port are correct, and that the model name in Promptrak exactly matches an installed Ollama model.