
Mistral AI Models

A drag-and-drop component for integrating Mistral AI models into your workflow. Configure model parameters and connect inputs/outputs to other components.

[Image: Mistral AI component interface and configuration]

API Key Notice: A valid Mistral AI API key is required to use this component. Ensure your API key has sufficient quota and appropriate rate limits for your application's needs.

Component Inputs

  • Input: Text input for the model

    Example: "Write a summary of deep learning techniques for computer vision."

  • System Message: System prompt to guide model behavior

    Example: "You are a technical expert specializing in machine learning concepts."

  • Stream: Toggle for streaming responses

    Example: true (for real-time token streaming) or false (for complete response)

  • Model Name: The Mistral AI model to use

    Example: "codestral-latest", "mistral-medium", "mistral-small-latest"

  • Mistral API Base: API endpoint URL

    Example: "https://api.mistral.ai/v1"

  • Mistral API Key: Your API authentication key

    Example: "m-xzy123abcdef789..."

Component Outputs

  • Text: Generated text output

    Example: "Deep learning techniques for computer vision include convolutional neural networks (CNNs), which..."

  • Language Model: Model information and metadata

    Example: {model: "mistral-medium", usage: {prompt_tokens: 45, completion_tokens: 150, total_tokens: 195}}
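
A sketch of consuming both outputs together; the property names text and languageModel mirror the outputs above but are assumptions about the response shape, not a documented API.

// Hypothetical sketch: log the Text output and the usage metadata
// from the Language Model output.
async function runAndLogUsage(config) {
  const response = await mistralComponent.generate(config);
  console.log(response.text);                       // Text output
  const { model, usage } = response.languageModel;  // Language Model output
  console.log(`${model}: ${usage.total_tokens} tokens ` +
              `(${usage.prompt_tokens} prompt + ${usage.completion_tokens} completion)`);
  return response;
}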

Model Parameters

Max Tokens

Maximum number of tokens to generate

Default: Model-dependent
Range: 1 to model maximum
Recommendation: Set based on expected response length

Temperature

Controls randomness in the output - higher values increase creativity

Default: 0.5
Range: 0.0 to 1.0
Recommendation: Lower (0.1-0.3) for factual responses, higher (0.7-0.9) for creative tasks
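
For instance, the same component might run with a low temperature for factual answers and a high one for creative work. A minimal sketch, assuming the generate interface from the Implementation Example:

// (Inside an async function.)
// Low temperature: consistent, fact-oriented phrasing.
const factual = await mistralComponent.generate({
  input: "List the main layers in a typical CNN.",
  temperature: 0.2
});

// High temperature: more varied, exploratory output.
const creative = await mistralComponent.generate({
  input: "Write a short story about a neural network learning to paint.",
  temperature: 0.8
});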

Top P

Nucleus sampling parameter - controls randomness along with temperature

Default: 1.0
Range: 0.0 to 1.0
Recommendation: Lower values (e.g., 0.9) for more focused text generation

Random Seed

For reproducible outputs across multiple runs

Default: 1
Range: Integer values
Recommendation: Set a specific seed when reproducibility is important
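
One way to sanity-check reproducibility is to issue the same request twice with a fixed seed. A sketch under the same generate-interface assumption; token-for-token identical output is the intent, though determinism across model versions is not guaranteed:

// (Inside an async function; `mistralComponent` as in the Implementation Example.)
const config = {
  input: "Summarize gradient descent in one sentence.",
  modelName: "mistral-small-latest",
  temperature: 0.5,
  randomSeed: 42    // fixed seed for repeatable sampling
};
const first = await mistralComponent.generate(config);
const second = await mistralComponent.generate(config);
console.log(first.text === second.text);  // expected: true with a fixed seed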

Safe Mode

Controls content filtering for safer outputs

Options: true/false
Recommendation: Enable for public-facing applications

API Configuration

Max Retries

Number of retry attempts for failed requests

Default: 5
Range: 0 or higher (0 disables retries)
Recommendation: Increase for critical applications

Timeout

Request timeout in seconds

Default: 60
Range: Any positive number
Recommendation: Increase for longer generations, decrease for time-sensitive applications

Max Concurrent Requests

Limit on concurrent API calls

Default: 3
Range: 1 or higher
Recommendation: Adjust based on Mistral AI rate limits and your application's needs
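
These settings interact: with retries enabled, a single logical request can take up to roughly maxRetries × timeout seconds before failing, and the concurrency cap bounds how many such requests run at once. The limiter below is a client-side sketch of the same idea; the component presumably enforces maxConcurrentRequests internally, so this is illustration, not required code.

// Minimal concurrency limiter, illustrating what a cap like
// maxConcurrentRequests does.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active < maxConcurrent && queue.length > 0) {
      active++;
      queue.shift()();   // release the oldest waiting task
    }
  };
  return async (task) => {
    await new Promise((resolve) => { queue.push(resolve); next(); });
    try {
      return await task();
    } finally {
      active--;
      next();
    }
  };
}

// Usage (inside an async function); `prompts` is an assumed string array.
const limit = createLimiter(3);  // mirrors the default of 3
const results = await Promise.all(
  prompts.map((p) => limit(() => mistralComponent.generate({ input: p })))
);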

Implementation Example

// Basic configuration
const mistralAI = {
  modelName: "mistral-medium",
  mistralApiKey: process.env.MISTRAL_API_KEY,
  systemMessage: "You are a helpful assistant."
};

// Advanced configuration
const advancedMistralAI = {
  modelName: "codestral-latest",
  mistralApiKey: process.env.MISTRAL_API_KEY,
  mistralApiBase: "https://api.mistral.ai/v1",
  maxTokens: 2000,
  temperature: 0.7,
  topP: 0.95,
  randomSeed: 42,
  safeMode: true,
  maxRetries: 8,
  timeout: 120,
  maxConcurrentRequests: 5,
  stream: true
};

// Usage example
async function generateCode(input) {
  const response = await mistralComponent.generate({
    input: input,
    systemMessage: "You are an expert programmer. Write clean, well-documented code.",
    modelName: "codestral-latest",
    temperature: 0.2
  });
  return response.text;
}
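
With stream: true, as in the advanced configuration above, output arrives token by token rather than as a single block. The exact streaming interface depends on the host environment; the sketch below assumes the component returns an async iterable of chunks, a common pattern but an assumption here.

// Hypothetical streaming consumption; the async-iterable shape is assumed.
async function streamResponse(input) {
  const stream = await mistralComponent.generate({
    input,
    modelName: "mistral-medium",
    stream: true
  });
  let full = "";
  for await (const chunk of stream) {
    process.stdout.write(chunk.text);  // render tokens as they arrive
    full += chunk.text;
  }
  return full;
}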

Use Cases

  • Code Generation: Use Codestral models for programming assistance and code completion
  • Content Creation: Generate articles, blog posts, and creative writing with medium or large models
  • Conversational Agents: Build chatbots and virtual assistants with context awareness
  • Text Summarization: Condense long documents into concise summaries
  • Knowledge-Based Applications: Create applications that require access to general knowledge

Best Practices

  • Secure API keys using environment variables (see the sketch after this list)
  • Monitor rate limits and concurrent requests
  • Start with default temperature (0.5) and adjust based on needs
  • Use system messages for consistent outputs
  • Enable streaming for real-time responses in interactive applications
  • Set appropriate timeout values based on expected generation length
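
For the first practice, a minimal guard fails fast when the key is missing. An illustrative sketch, not part of the component:

// Load and validate the key from the environment instead of source code.
const apiKey = process.env.MISTRAL_API_KEY;
if (!apiKey) {
  throw new Error("MISTRAL_API_KEY is not set; export it before starting the app.");
}
const mistralConfig = { modelName: "mistral-small-latest", mistralApiKey: apiKey };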