Mistral AI Embeddor

State-of-the-Art Embeddings

Overview

Generate high-quality embeddings using Mistral AI's advanced language models. Features robust rate limiting, automatic retries, and configurable API settings.

[Diagram: Mistral AI Embeddor]

Available Models

  • mistral-embed
  • mistral-embed-light

Key Features

  • Concurrent request handling
  • Automatic retry mechanism
  • Configurable timeouts
  • Custom API endpoints
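To illustrate how an automatic retry mechanism like the one listed above typically behaves, here is a minimal, self-contained sketch of a retry wrapper with exponential backoff. The function name `withRetries` and the backoff schedule are illustrative, not the library's internals.

```javascript
// Illustrative sketch: retry an async operation with exponential backoff.
// `maxRetries` mirrors the embedder's option of the same name.
async function withRetries(fn, maxRetries = 3, baseDelayMs = 100) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Backoff doubles each attempt: 100 ms, 200 ms, 400 ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

A transient network error would then be retried up to `maxRetries` times before the final error is surfaced to the caller.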

Configuration

Required Parameters

  • model: Mistral AI model name
  • mistralApiKey: API authentication key

Optional Parameters

  • maxConcurrentRequests (default: 5)
  • maxRetries (default: 3)
  • requestTimeout (default: 30000 ms)
  • apiEndpoint (default: https://api.mistral.ai/v1)

Example Usage

// Basic configuration
const embedder = new MistralAIEmbeddor({
  model: "mistral-embed",
  mistralApiKey: "your-api-key",
});

// Advanced configuration
const advancedEmbedder = new MistralAIEmbeddor({
  model: "mistral-embed-light",
  mistralApiKey: "your-api-key",
  maxConcurrentRequests: 10,
  maxRetries: 5,
  requestTimeout: 60000,
  apiEndpoint: "https://custom-endpoint.com/v1"
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed"
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: [
    "First text to embed",
    "Second text to embed"
  ]
});

Best Practices

  • Use environment variables for API keys
  • Implement proper error handling
  • Monitor rate limits
  • Cache frequently used embeddings
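One way to cache frequently used embeddings, as suggested above, is to memoize results keyed by input text. The sketch below is illustrative: `embedFn` stands in for a call like `embedder.embed(...)` from the examples, and an in-memory `Map` is the simplest possible cache (a production setup might use an LRU or external store).

```javascript
// Illustrative sketch: wrap an embedding function so repeated inputs
// are served from an in-memory cache instead of hitting the API again.
function cachedEmbedder(embedFn) {
  const cache = new Map();
  return async function embed(input) {
    if (cache.has(input)) return cache.get(input);
    const result = await embedFn(input);
    cache.set(input, result);
    return result;
  };
}
```

This keeps repeated lookups off the rate-limited API path entirely.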

Performance Tips

  • Adjust concurrent requests based on usage
  • Set appropriate timeout values
  • Use batch processing for multiple inputs
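To make the "adjust concurrent requests" tip concrete, here is a rough sketch of how a `maxConcurrentRequests` cap could be enforced with a small promise-based semaphore. This is an assumption about the general technique, not the library's actual implementation.

```javascript
// Illustrative sketch: limit how many async tasks run at once.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active < maxConcurrent && queue.length > 0) {
      active += 1;
      queue.shift()(); // release the oldest waiting task
    }
  };
  return async function run(task) {
    // Wait for a free slot, run the task, then free the slot.
    await new Promise((resolve) => { queue.push(resolve); next(); });
    try {
      return await task();
    } finally {
      active -= 1;
      next();
    }
  };
}
```

Embedding calls wrapped with `run(...)` would then never exceed the configured concurrency, regardless of how many are issued at once.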

Response Format

{
  "embeddings": {
    "vectors": number[][],
    "dimensions": number,
    "model": string
  },
  "usage": {
    "prompt_tokens": number,
    "total_tokens": number
  },
  "metadata": {
    "processing_time": number,
    "retries": number,
    "api_version": string
  },
  "status": {
    "success": boolean,
    "error": string | null
  }
}
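Given the response shape above, a small runtime guard can verify that a parsed response actually matches it before downstream code reads the vectors. The helper name `isEmbedResponse` is hypothetical; the field names come from the format shown.

```javascript
// Illustrative sketch: validate that an object matches the response format.
function isEmbedResponse(obj) {
  return Boolean(
    obj &&
    obj.embeddings && Array.isArray(obj.embeddings.vectors) &&
    typeof obj.embeddings.dimensions === 'number' &&
    typeof obj.embeddings.model === 'string' &&
    obj.usage && typeof obj.usage.total_tokens === 'number' &&
    obj.status && typeof obj.status.success === 'boolean'
  );
}
```

Checking `status.success` (and `status.error` when it is non-null) before touching `embeddings.vectors` is a cheap way to implement the error-handling practice recommended earlier.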