How to use LM Studio with Promptrak

Published: 2026-01-31 · #Promptrak

This guide walks you through using LM Studio as a model backend for Promptrak, so you can run local models and manage prompt workflows from one place.

Prerequisites

  • LM Studio installed and running on your machine
  • At least one model downloaded and loaded in LM Studio (e.g. via the in-app model discovery)
  • Promptrak installed

Steps

  1. Start LM Studio and ensure the local server is running (LM Studio exposes an OpenAI-compatible local API while the server is enabled).
  2. Open Promptrak and add or select a model runner of type “LM Studio.”
  3. Point Promptrak at LM Studio’s endpoint (typically http://localhost:1234, or the host and port shown in LM Studio’s server view; the OpenAI-compatible routes live under /v1). Use Promptrak’s runner settings to set the base URL and any required options.
  4. Select the model in Promptrak that corresponds to the model you have loaded in LM Studio (model name should match what LM Studio reports).
  5. Run a prompt from Promptrak; it will send the request to LM Studio and stream or return the response.
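Because LM Studio's local server speaks an OpenAI-compatible API, the request Promptrak sends in step 5 is essentially a chat-completion POST. The sketch below builds that request so you can see what travels over the wire; the base URL, port, and model name (`http://localhost:1234`, `my-local-model`) are assumptions that must match what your LM Studio instance actually reports.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-compatible
    chat-completion request, as a client like Promptrak would send it."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,  # must match the model name LM Studio reports
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # set True to stream tokens instead
    }
    return url, json.dumps(payload)

# Example (assumed port and model name; adjust to your setup):
url, body = build_chat_request("http://localhost:1234", "my-local-model", "Hello!")
# Actually sending it requires the LM Studio server to be running, e.g.:
#   req = urllib.request.Request(url, body.encode(),
#       headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```

If a request built like this succeeds against LM Studio directly but Promptrak's runner fails, the problem is almost certainly in the runner configuration rather than the server.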

For exact field names and UI steps, see the Promptrak docs. If you hit connection or model-mismatch errors, double-check the port, confirm the model is actually loaded in LM Studio, and verify that the model name configured in Promptrak matches the name LM Studio reports.
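A quick way to debug a model-mismatch error is to ask LM Studio which models it currently serves: the local server exposes a GET /v1/models endpoint that returns an OpenAI-style model list. The sketch below parses such a response offline; the sample payload and the model id in it are placeholders, not values from a real server.

```python
import json

def loaded_model_ids(models_response: str) -> list:
    """Extract model ids from a GET /v1/models response body
    (OpenAI-style shape: {"data": [{"id": ...}, ...]})."""
    return [entry["id"] for entry in json.loads(models_response)["data"]]

# Sample response shape (the model id here is a placeholder):
sample = '{"object": "list", "data": [{"id": "my-local-model", "object": "model"}]}'
print(loaded_model_ids(sample))  # → ['my-local-model']

# In practice, fetch the JSON from http://localhost:1234/v1/models while
# the LM Studio server is running, and confirm that the id you entered
# in Promptrak appears in this list.
```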