LM Studio Embeddor
Local embeddings with LM Studio
Overview
Generate embeddings using LM Studio's local model server. This is well suited to running embeddings entirely on your own machine, with a user-friendly interface and support for multiple open-source models.
Key Features
- Local model serving
- Temperature control
- OpenAI-compatible API (see the request sketch after this list)
- Multiple model support
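Because the server speaks the OpenAI API, you can also call the embeddings endpoint directly, without the embedder wrapper. A minimal sketch using fetch; the model id and API key are placeholders for whatever you have loaded and configured in LM Studio:

```javascript
// Sketch: calling LM Studio's OpenAI-compatible embeddings endpoint directly.
const response = await fetch("http://localhost:1234/v1/embeddings", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer your-api-key" // omit if no key is configured
  },
  body: JSON.stringify({
    model: "text-embedding-nomic-embed-text-v1.5", // placeholder model id
    input: "Your text to embed"
  })
});

const data = await response.json();
// OpenAI-style payload: data.data[0].embedding holds the vector
console.log(data.data[0].embedding.length);
```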
Requirements
- LM Studio application
- Running local server (verify as shown after this list)
- Compatible embedding model
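Before embedding, it is worth confirming the server is actually reachable. A minimal sketch that queries the OpenAI-compatible /v1/models endpoint; the port assumes LM Studio's default of 1234:

```javascript
// Sketch: check that the LM Studio server is up and see which models are loaded.
try {
  const res = await fetch("http://localhost:1234/v1/models");
  const { data } = await res.json();
  console.log("Server is up. Loaded models:", data.map((m) => m.id));
} catch (err) {
  console.error("LM Studio server is not reachable:", err.message);
}
```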
Configuration
Required Parameters
lmStudioBaseUrl
LM Studio server URL

lmStudioApiKey
API key (if configured)
Optional Parameters
temperature
Model temperature (Default: 0.0)
Example Usage
```javascript
// Basic configuration
const embedder = new LMStudioEmbeddor({
  lmStudioBaseUrl: "http://localhost:1234/v1",
  lmStudioApiKey: "your-api-key"
});

// Configuration with temperature
const customEmbedder = new LMStudioEmbeddor({
  lmStudioBaseUrl: "http://localhost:1234/v1",
  lmStudioApiKey: "your-api-key",
  temperature: 0.7
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed"
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: [
    "First text to embed",
    "Second text to embed"
  ]
});
```
Best Practices
- Verify server is running
- Monitor resource usage
- Cache frequent embeddings
- Handle connection errors (the sketch after this list covers both of these)
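A sketch combining the last two points: a Map-based cache in front of embed() with basic error handling. It assumes the embed() call and response shape documented on this page; adapt the field access to your actual embedder:

```javascript
// Sketch: cache embeddings per input string and surface connection failures.
const cache = new Map();

async function embedCached(embedder, text) {
  if (cache.has(text)) return cache.get(text);
  try {
    const result = await embedder.embed({ input: text });
    if (!result.status.success) {
      throw new Error(result.status.error ?? "Embedding failed");
    }
    const vector = result.embeddings.vectors[0];
    cache.set(text, vector);
    return vector;
  } catch (err) {
    // Typical causes: server not running, wrong lmStudioBaseUrl, no model loaded
    console.error("Embedding request failed:", err.message);
    throw err;
  }
}
```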
Performance Tips
- Use appropriate batch sizes (see the chunking sketch after this list)
- Optimize temperature settings
- Pre-load models in LM Studio
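For batch sizing, a simple approach is to chunk the input list before calling embedBatch. A sketch, with an illustrative batch size of 32 that you should tune to your machine's memory and the model loaded in LM Studio:

```javascript
// Sketch: embed a large input list in fixed-size batches.
async function embedInBatches(embedder, texts, batchSize = 32) {
  const vectors = [];
  for (let i = 0; i < texts.length; i += batchSize) {
    const batch = texts.slice(i, i + batchSize);
    const result = await embedder.embedBatch({ inputs: batch });
    vectors.push(...result.embeddings.vectors);
  }
  return vectors;
}
```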
Response Format
{ "embeddings": { "vectors": number[][], "dimensions": number }, "metadata": { "model_info": { "name": string, "temperature": number }, "processing_time": number }, "status": { "success": boolean, "error": string | null } }