Mistral AI Embeddor
State-of-the-Art Embeddings

Overview
Generate high-quality embeddings using Mistral AI's advanced language models. Features robust rate limiting, automatic retries, and configurable API settings.

Available Models
- mistral-embed
- mistral-embed-light
Key Features
- Concurrent request handling
- Automatic retry mechanism
- Configurable timeouts
- Custom API endpoints
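
The automatic retry mechanism listed above typically follows a retry-with-exponential-backoff pattern. The helper below is a hypothetical sketch of that pattern, not the library's actual internals; the `withRetry` name and signature are illustrative.

```typescript
// Hypothetical sketch of retry with exponential backoff (not the library's internals).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

With `maxRetries: 3`, a transient failure is attempted up to four times in total before the error is surfaced to the caller.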
Configuration
Required Parameters
- model: Mistral AI model name
- mistralApiKey: API authentication key
Optional Parameters
- maxConcurrentRequests: maximum number of parallel API requests (default: 5)
- maxRetries: number of retry attempts for failed requests (default: 3)
- requestTimeout: request timeout in milliseconds (default: 30000)
- apiEndpoint: base URL for the API (default: https://api.mistral.ai/v1)
Example Usage
```typescript
// Basic configuration
const embedder = new MistralAIEmbeddor({
  model: "mistral-embed",
  mistralApiKey: "your-api-key",
});

// Advanced configuration
const advancedEmbedder = new MistralAIEmbeddor({
  model: "mistral-embed-light",
  mistralApiKey: "your-api-key",
  maxConcurrentRequests: 10,
  maxRetries: 5,
  requestTimeout: 60000,
  apiEndpoint: "https://custom-endpoint.com/v1",
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed",
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: ["First text to embed", "Second text to embed"],
});
```
Best Practices
- Use environment variables for API keys
- Implement proper error handling
- Monitor rate limits
- Cache frequently used embeddings
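
Caching frequently used embeddings can be as simple as memoizing by input text. The wrapper below is a minimal sketch; `embedFn` stands in for a call such as `embedder.embed()`, and the function name is illustrative.

```typescript
// Minimal in-memory cache for embeddings, keyed by input text.
// embedFn is a placeholder for the real embedding call.
function cachedEmbed(
  embedFn: (input: string) => Promise<number[]>,
): (input: string) => Promise<number[]> {
  const cache = new Map<string, Promise<number[]>>();
  return (input: string) => {
    // Cache the promise itself so concurrent calls for the
    // same input share a single in-flight request.
    let pending = cache.get(input);
    if (!pending) {
      pending = embedFn(input);
      cache.set(input, pending);
    }
    return pending;
  };
}
```

For production use, consider bounding the cache size (e.g. with an LRU eviction policy) so memory does not grow without limit.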
Performance Tips
- Adjust concurrent requests based on usage
- Set appropriate timeout values
- Use batch processing for multiple inputs
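
To illustrate how a setting like maxConcurrentRequests bounds parallelism, here is a generic worker-pool sketch. It is an assumption about the pattern involved, not the library's implementation; `mapWithLimit` is a hypothetical helper.

```typescript
// Runs fn over items with at most `limit` calls in flight at once.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unprocessed index.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

Because JavaScript is single-threaded, claiming `next++` before the first `await` is race-free; each index is processed exactly once.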
Response Format
```
{
  "embeddings": {
    "vectors": number[][],
    "dimensions": number,
    "model": string
  },
  "usage": {
    "prompt_tokens": number,
    "total_tokens": number
  },
  "metadata": {
    "processing_time": number,
    "retries": number,
    "api_version": string
  },
  "status": {
    "success": boolean,
    "error": string | null
  }
}
```
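
The response shape above can be transcribed into TypeScript types for safer handling. This is an illustrative sketch, not generated client definitions; the `unwrapEmbeddings` helper is hypothetical.

```typescript
// Types transcribed from the Response Format above (illustrative only).
interface EmbedResponse {
  embeddings: { vectors: number[][]; dimensions: number; model: string };
  usage: { prompt_tokens: number; total_tokens: number };
  metadata: { processing_time: number; retries: number; api_version: string };
  status: { success: boolean; error: string | null };
}

// Returns the vectors on success, throws the reported error otherwise.
function unwrapEmbeddings(res: EmbedResponse): number[][] {
  if (!res.status.success) {
    throw new Error(res.status.error ?? "embedding request failed");
  }
  return res.embeddings.vectors;
}
```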