# Ollama Embeddor

Local Embeddings

## Overview
Generate embeddings using Ollama's local models. Everything runs on your own machine with no cloud dependencies, which makes the embedder well suited to privacy-focused and offline applications.
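Under the hood, the embedder talks to a locally running Ollama server over HTTP. As a rough sketch of what that call looks like (assuming a default local install and Ollama's `/api/embeddings` endpoint; the embedder's actual internals may differ):

```typescript
// Rough sketch of the underlying HTTP call to a local Ollama server.
// Assumes Ollama's /api/embeddings endpoint and a default install;
// the embedder's actual internals may differ.
async function rawOllamaEmbedding(text: string): Promise<number[]> {
  const response = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  if (!response.ok) {
    throw new Error(`Ollama request failed: ${response.status}`);
  }
  const data = await response.json();
  return data.embedding; // the embedding vector for the prompt
}
```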
## Popular Models
- llama2
- nomic-embed-text
- mistral
- codellama

Of these, nomic-embed-text is purpose-built for embeddings; the others are general-purpose LLMs from which Ollama can also produce embeddings.
## Key Features
- Local processing
- No API keys needed
- Custom model support
- Offline capability
## Configuration

### Required Parameters

- `ollamaModel`: name of the Ollama model to use.

### Optional Parameters

- `ollamaBaseUrl`: base URL of the Ollama server. Default: `http://localhost:11434`.
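Taken together, the options object has roughly the following shape (a sketch inferred from the parameters above, not the library's published type):

```typescript
// Sketch of the constructor options, inferred from the documented
// parameters; the library's actual type definition may differ.
interface OllamaEmbeddorOptions {
  /** Name of the Ollama model to use (required). */
  ollamaModel: string;
  /** Base URL of the Ollama server. Defaults to "http://localhost:11434". */
  ollamaBaseUrl?: string;
}
```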
## Example Usage

```typescript
// Basic configuration with default base URL
const embedder = new OllamaEmbeddor({
  ollamaModel: "nomic-embed-text"
});

// Custom base URL configuration
const customEmbedder = new OllamaEmbeddor({
  ollamaModel: "llama2",
  ollamaBaseUrl: "http://your-ollama-server:11434"
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed"
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: [
    "First text to embed",
    "Second text to embed"
  ]
});
```
## Best Practices

- Choose a model size appropriate to your hardware
- Monitor system resources while embedding large workloads
- Cache frequently requested embeddings
- Implement error handling (the sketch after this list combines this with caching)
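The last two points combine naturally into a small wrapper. The sketch below is illustrative: the in-memory `Map` cache and the `cachedEmbed` helper are not part of the library, but the `embed()` call and result fields follow the Example Usage and Response Format sections on this page.

```typescript
// Illustrative wrapper combining caching and error handling.
// cachedEmbed and the Map cache are hypothetical helpers; the
// embed() call and result fields follow this page's documentation.
const cache = new Map<string, number[]>();

async function cachedEmbed(
  embedder: OllamaEmbeddor,
  input: string
): Promise<number[]> {
  const hit = cache.get(input);
  if (hit !== undefined) return hit;

  const result = await embedder.embed({ input });
  if (!result.status.success) {
    // Surface the embedder's own error instead of failing silently.
    throw new Error(result.status.error ?? "Embedding failed");
  }

  const vector = result.embeddings.vectors[0];
  cache.set(input, vector);
  return vector;
}
```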
## Performance Tips

- Use GPU acceleration if available
- Batch texts of similar length together (see the sketch after this list)
- Pre-download models (e.g. with `ollama pull`) so the first request does not stall on a download
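For the batching tip, one simple approach is to sort texts by length and split them into fixed-size batches, so each `embedBatch` call processes comparably sized inputs. A sketch (the `embedByLength` helper and the batch size of 32 are illustrative, not part of the library):

```typescript
// Illustrative helper: sort texts by length, then embed in fixed-size
// batches so each embedBatch call sees comparably sized inputs.
async function embedByLength(
  embedder: OllamaEmbeddor,
  texts: string[],
  batchSize = 32 // arbitrary illustrative choice
): Promise<number[][]> {
  const sorted = [...texts].sort((a, b) => a.length - b.length);
  const vectors: number[][] = [];

  for (let i = 0; i < sorted.length; i += batchSize) {
    const batch = sorted.slice(i, i + batchSize);
    const result = await embedder.embedBatch({ inputs: batch });
    vectors.push(...result.embeddings.vectors);
  }
  // Note: results follow the sorted order, not the caller's input order.
  return vectors;
}
```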
## Response Format

```json
{
  "embeddings": {
    "vectors": number[][],
    "dimensions": number,
    "model": string
  },
  "metadata": {
    "model_name": string,
    "processing_time": number,
    "total_tokens": number
  },
  "status": {
    "success": boolean,
    "error": string | null
  }
}
```