LM Studio Adapter
Connect to LM Studio's local OpenAI-compatible server. LM Studio's strict JSON schema mode gives the best structured output quality with local models.
Configuration
```ts
const llmConfig: LocalLLMConfig = {
  provider: "lmstudio",
  baseUrl: "http://localhost:1234",
  model: "llama-3-8b-instruct", // model name as shown in LM Studio
  timeoutMs: 60_000,
};
```
Setup

1. Download LM Studio. Available for Mac, Windows, and Linux at lmstudio.ai.
2. Load a model and start the local server. In LM Studio, go to the Local Server tab, load a model, and click Start Server. The default port is 1234.
3. Copy the model identifier. The model name shown in LM Studio's UI is what you pass as model; to confirm it without opening the UI, see the sketch after this list.
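Because the server is OpenAI-compatible, its /v1/models endpoint (the same one the adapter pings for connectivity, as noted below) lists every loaded model. A minimal sketch, assuming the default port; the listModels helper is illustrative and not part of the adapter:

```ts
// Illustrative helper (not part of the adapter): list model identifiers
// from a running LM Studio server via its OpenAI-compatible API.
async function listModels(baseUrl = "http://localhost:1234"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/v1/models`);
  if (!res.ok) throw new Error(`LM Studio server not reachable: ${res.status}`);
  // Assumes the standard OpenAI response shape: { "data": [ { "id": "..." } ] }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}

// Each returned id is a valid value for the config's `model` field.
listModels().then((ids) => console.log(ids));
```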
How it works under the hood
LM Studio exposes an OpenAI-compatible API. The adapter requests strict JSON schema mode, which instructs the server to produce output conforming to the supplied schema:
Request body
```json
{
  "model": "llama-3-8b-instruct",
  "messages": [...],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "strict": true,
      "schema": { "type": "object" }
    }
  }
}
```

The adapter pings /v1/models for connectivity checks.
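For reference, here is roughly what that request looks like as a plain fetch call against the standard OpenAI-compatible chat completions path. This is a sketch of the wire format shown above, not the adapter's actual code; the completeJson name and the placeholder object schema are illustrative:

```ts
// Sketch of the request above as a plain fetch call (illustrative,
// not the adapter's implementation).
async function completeJson(
  baseUrl: string,
  model: string,
  prompt: string,
): Promise<unknown> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      response_format: {
        type: "json_schema",
        json_schema: {
          strict: true,
          schema: { type: "object" }, // placeholder; the adapter supplies the real schema
        },
      },
    }),
  });
  if (!res.ok) throw new Error(`LM Studio request failed: ${res.status}`);
  const data = await res.json();
  // In strict mode the message content is schema-conforming JSON text.
  return JSON.parse(data.choices[0].message.content);
}
```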
Best JSON quality among local models
LM Studio's strict JSON schema mode consistently outperforms Ollama's format mode for structured output. If you're seeing JSON parsing errors with Ollama, try the same model through LM Studio for better reliability; the switch is a small config change, as sketched below.
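A minimal sketch of that switch. It assumes the Ollama adapter accepts the same LocalLLMConfig shape with an "ollama" provider string and Ollama's default port; both are assumptions here, so check the Ollama adapter's own docs:

```ts
// Assumed starting point: an Ollama config (the "ollama" provider string
// and default port 11434 are assumptions, not confirmed by this page).
const ollamaConfig: LocalLLMConfig = {
  provider: "ollama",
  baseUrl: "http://localhost:11434",
  model: "llama-3-8b-instruct",
  timeoutMs: 60_000,
};

// The same model served through LM Studio instead.
const lmStudioConfig: LocalLLMConfig = {
  ...ollamaConfig,
  provider: "lmstudio",
  baseUrl: "http://localhost:1234", // LM Studio's default server port
};
```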