How to use LM Studio with Promptrak
This guide walks you through using LM Studio as a model backend for Promptrak, so you can run local models and manage prompt workflows from one place.
Prerequisites
- LM Studio installed and running on your machine
- At least one model loaded in LM Studio (e.g. via the in-app discovery)
- Promptrak installed
Steps
- Start LM Studio and ensure the local server is running (when the server is on, LM Studio exposes an OpenAI-compatible API on a local port).
- Open Promptrak and add or select a model runner of type “LM Studio.”
- Point Promptrak at LM Studio’s endpoint (typically http://localhost:1234, or the port shown in LM Studio). Use Promptrak’s runner settings to set the base URL and any required options.
- In Promptrak, select the model that corresponds to the one you have loaded in LM Studio (the model name should match what LM Studio reports).
- Run a prompt from Promptrak; it will send the request to LM Studio and stream or return the response.
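To see what Promptrak is doing under the hood, the steps above can be sketched as a short script that talks to LM Studio directly. This is a minimal sketch, assuming LM Studio's server is on its default port 1234 and exposes the OpenAI-compatible `/v1/chat/completions` endpoint; `your-model-name` is a placeholder for whatever model name LM Studio reports.

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # default LM Studio server address; adjust if yours differs

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for LM Studio's local server."""
    payload = {
        "model": model,  # must match the model name LM Studio reports
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("your-model-name", "Hello!")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If this script works but Promptrak does not, the problem is likely in the runner configuration rather than in LM Studio itself.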
For exact field names and UI steps, see the Promptrak docs. If you hit connection or model-mismatch errors, double-check the port, confirm the model is loaded in LM Studio, and verify that the model name in Promptrak matches what LM Studio reports.
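For model-mismatch errors in particular, it helps to ask LM Studio directly which model names it reports. A minimal sketch, assuming the server listens on the default port 1234 and exposes the OpenAI-compatible `/v1/models` endpoint:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # default LM Studio server address; adjust if yours differs

def extract_model_ids(payload: dict) -> list[str]:
    """Pull model IDs out of an OpenAI-style /models response body."""
    return [m["id"] for m in payload.get("data", [])]

def list_loaded_models(base_url: str = BASE_URL) -> list[str]:
    """Ask the LM Studio server which models it currently reports."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return extract_model_ids(json.loads(resp.read()))
```

Run `list_loaded_models()` while LM Studio's server is on; the model name you enter in Promptrak should be one of the IDs it returns.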