Mistral AI Embeddings

Generate high-quality embeddings using Mistral AI's advanced language models. Features robust rate limiting, automatic retries, and configurable API settings for enterprise deployments.

Mistral AI Embeddings Component

Mistral AI Embeddings component interface and configuration

API Key Notice: A valid Mistral AI API key is required to use this component. Ensure your API key has sufficient quota and appropriate rate limits for your embedding needs.

Component Inputs

  • Model: The Mistral AI embedding model to use

    Example: "mistral-embed", "mistral-embed-light"

  • Mistral API Key: Your Mistral AI API authentication key

    Example: "m-abc123xyz456..."

  • Max Concurrent Requests: Maximum number of parallel requests

    Example: 5 (Default)

  • Max Retries: Number of retry attempts for failed requests

    Example: 3 (Default)

  • Request Timeout: Timeout in milliseconds for API requests

    Example: 30000 (Default, 30 seconds)

  • API Endpoint: Custom API endpoint URL

    Example: "https://api.mistral.ai/v1" (Default)
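The inputs above can be collected into a single configuration object. This is a sketch using the documented defaults; the field names mirror the implementation example later on this page, so adjust them if your component version differs.

```javascript
// Configuration sketch covering the component inputs, with documented defaults.
const mistralEmbeddingConfig = {
  model: "mistral-embed",                      // or "mistral-embed-light"
  mistralApiKey: process.env.MISTRAL_API_KEY,  // keep keys in environment variables
  maxConcurrentRequests: 5,                    // default
  maxRetries: 3,                               // default
  requestTimeout: 30000,                       // default: 30 seconds, in milliseconds
  apiEndpoint: "https://api.mistral.ai/v1",    // default endpoint
};
```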

Component Outputs

  • Embeddings: Vector representation of the input text

    Example: [0.018, -0.032, 0.067, ...]

  • Dimensions: The dimension size of the embedding vector

    Example: 1024 for mistral-embed, 512 for mistral-embed-light

  • Usage: Token usage information for billing purposes

    Example: { "prompt_tokens": 12, "total_tokens": 12 }
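Taken together, the outputs form a result object like the sketch below. The exact field names are an assumption based on this page's output list, not a guaranteed response schema; the consistency check simply relates the three outputs to each other.

```javascript
// Illustrative shape of a single embedding result (field names assumed
// from the Component Outputs list above; vector truncated for brevity).
const sampleResult = {
  embeddings: [0.018, -0.032, 0.067],
  dimensions: 3,
  usage: { prompt_tokens: 12, total_tokens: 12 },
};

// Sanity check: the reported dimension should match the vector length,
// and the billed total should cover at least the prompt tokens.
function isConsistent(result) {
  return (
    result.embeddings.length === result.dimensions &&
    result.usage.total_tokens >= result.usage.prompt_tokens
  );
}
```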

Model Comparison

mistral-embed

Mistral AI's high-performance embedding model with superior semantic understanding

  • Dimensions: 1024
  • Performance: High-quality semantic representations
  • Language Support: Multilingual
  • Ideal for: Production systems requiring top-tier semantic search and retrieval

mistral-embed-light

Lightweight and efficient embedding model with faster processing

  • Dimensions: 512
  • Performance: Faster processing with good quality
  • Language Support: Multilingual with focus on major languages
  • Ideal for: Cost-effective embeddings at scale or latency-sensitive applications
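The comparison above reduces to a simple decision rule. The helper below is hypothetical (not part of the component); the model names and dimensions are taken from this page.

```javascript
// Hypothetical model selector based on the comparison above: prefer
// mistral-embed-light when latency or volume dominates, mistral-embed
// when retrieval quality matters most.
function chooseEmbeddingModel({ latencySensitive = false, highVolume = false } = {}) {
  if (latencySensitive || highVolume) {
    return { model: "mistral-embed-light", dimensions: 512 };
  }
  return { model: "mistral-embed", dimensions: 1024 };
}
```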

Implementation Example

```javascript
// Basic configuration
const embedder = new MistralAIEmbeddor({
  model: "mistral-embed",
  mistralApiKey: process.env.MISTRAL_API_KEY,
});

// Advanced configuration
const advancedEmbedder = new MistralAIEmbeddor({
  model: "mistral-embed-light",
  mistralApiKey: process.env.MISTRAL_API_KEY,
  maxConcurrentRequests: 10,
  maxRetries: 5,
  requestTimeout: 60000,
  apiEndpoint: "https://custom-endpoint.com/v1",
});

// Generate embeddings
const result = await embedder.embed({
  input: "Your text to embed",
});

// Batch processing
const batchResult = await embedder.embedBatch({
  inputs: [
    "First text to embed",
    "Second text to embed",
  ],
});

console.log(result.embeddings);
```
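The component handles retries for you via the Max Retries input. For readers who want to understand (or replicate) that behavior, this is a sketch of how a maxRetries policy with exponential backoff could look; withRetries is a hypothetical helper, not part of the component's API.

```javascript
// Retry a failing async operation up to maxRetries additional times,
// doubling the delay between attempts (500ms, 1000ms, 2000ms, ...).
async function withRetries(fn, { maxRetries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A timeout budget matters here: with the default 30000ms request timeout and backoff delays, a fully retried call can take noticeably longer than a single request, so size maxRetries with your latency requirements in mind.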

Use Cases

  • Enterprise Search: Build high-quality semantic search for document retrieval
  • Retrieval-Augmented Generation: Create advanced RAG systems with precise context retrieval
  • Content Recommendation: Develop intelligent content recommendation engines
  • Multilingual Applications: Support global content with strong multilingual capabilities
  • Hybrid Search Solutions: Combine with keyword search for comprehensive retrieval systems

Best Practices

  • Use environment variables for API keys in production environments
  • Implement caching for frequently used embeddings to reduce API costs
  • Monitor rate limits and adjust concurrent requests accordingly
  • Choose mistral-embed-light for high-volume or latency-sensitive applications
  • Set appropriate timeout values based on your application's requirements
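The caching practice above can be as simple as memoizing results by input text. This sketch wraps any embed function with an in-memory Map cache; makeCachedEmbed is a hypothetical helper for illustration (in production, prefer a persistent store and bound the cache size).

```javascript
// Wrap an embed function so repeated inputs are served from cache
// instead of triggering another billable API call.
function makeCachedEmbed(embedFn) {
  const cache = new Map();
  return async function cachedEmbed(text) {
    if (cache.has(text)) return cache.get(text); // cache hit: no API call
    const result = await embedFn(text);
    cache.set(text, result);
    return result;
  };
}
```

Usage: `const embed = makeCachedEmbed((text) => embedder.embed({ input: text }));` then call `embed(text)` everywhere; identical inputs only hit the API once.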