Mistral AI Component
Drag & Drop LLM Component
Overview
A drag-and-drop component for integrating Mistral AI models into your workflow. Configure model parameters and connect inputs/outputs to other components.
Component Configuration
Basic Parameters
Input
Text input for the model
System Message
System prompt to guide model behavior
Stream
Toggle for streaming responses
Model Name
Model identifier (e.g., codestral-latest)
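As a sketch of how these basic parameters could map onto a Mistral chat request: the field names below follow the public Mistral chat-completions REST API, but the helper function itself is illustrative, not the component's actual implementation.

```python
# Sketch: mapping the basic parameters onto a chat-completions
# request body. Field names follow Mistral's REST API; the helper
# itself is hypothetical.

def build_basic_payload(user_input, system_message="", stream=False,
                        model_name="codestral-latest"):
    """Assemble the message list and core request fields."""
    messages = []
    if system_message:
        # System Message: guides model behavior for every turn
        messages.append({"role": "system", "content": system_message})
    # Input: the user's text
    messages.append({"role": "user", "content": user_input})
    return {"model": model_name, "messages": messages, "stream": stream}

payload = build_basic_payload("Write a haiku about code.",
                              system_message="You are a concise poet.")
```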
Model Settings
Max Tokens
Maximum number of tokens to generate
Temperature
Creativity control (default: 0.5)
Top P
Nucleus sampling parameter (default: 1)
Random Seed
Seed for reproducible outputs (default: 1)
Safe Mode
Toggle content filtering
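The model settings correspond to sampling fields on the same request. A minimal sketch using the listed defaults (`safe_prompt` is the REST-API name for the Safe Mode toggle; the helper is illustrative):

```python
# Sketch: model settings with the defaults listed above.
# "safe_prompt" is the REST-API field behind the Safe Mode toggle.

def build_sampling_settings(max_tokens=None, temperature=0.5,
                            top_p=1.0, random_seed=1, safe_mode=False):
    settings = {
        "temperature": temperature,   # creativity control
        "top_p": top_p,               # nucleus sampling
        "random_seed": random_seed,   # reproducible outputs
        "safe_prompt": safe_mode,     # content filtering
    }
    if max_tokens is not None:
        settings["max_tokens"] = max_tokens  # cap on generated tokens
    return settings
```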
API Configuration
Mistral API Base
API endpoint URL
Mistral API Key
Your API authentication key
Max Retries
Number of retry attempts (default: 5)
Timeout
Request timeout in seconds (default: 60)
Max Concurrent Requests
Concurrent request limit (default: 3)
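The retry and concurrency parameters can be sketched as a small wrapper, assuming a simple retry loop gated by a semaphore; `send` stands in for the real HTTP call (which would also carry the 60-second timeout), and the component's actual retry behavior may differ.

```python
import threading
import time

# Cap on simultaneous in-flight requests (Max Concurrent Requests = 3).
_concurrency = threading.Semaphore(3)

def call_with_retries(send, max_retries=5, backoff_s=0.0):
    """Retry a request callable up to max_retries times.

    `send` is a hypothetical stand-in for the real HTTP call,
    which would also apply the request timeout (default 60 s).
    """
    last_error = None
    for attempt in range(max_retries):
        with _concurrency:          # respect the concurrency limit
            try:
                return send()
            except Exception as err:
                last_error = err
                time.sleep(backoff_s * attempt)  # simple linear backoff
    raise last_error
```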
Output Connections
Text
Generated text output
Language Model
Model information and metadata
Usage Tips
- Start with default temperature (0.5)
- Use system messages for consistent outputs
- Enable streaming for real-time responses
- Set appropriate timeout values
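When streaming is enabled, responses arrive as server-sent events whose `data:` lines each carry a JSON chunk. A hedged sketch of assembling the final text, assuming the OpenAI-style delta chunk shape that Mistral's streaming endpoint uses (verify the exact schema against the API docs):

```python
import json

def collect_stream(sse_lines):
    """Join the text deltas from an iterable of SSE `data:` lines.

    Assumes OpenAI-style chunks: {"choices": [{"delta": {"content": ...}}]}
    terminated by a "data: [DONE]" sentinel. Illustrative only.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        body = line[len("data: "):]
        if body.strip() == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(body)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)
```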
Best Practices
- Secure API keys using environment variables
- Monitor rate limits and concurrent requests
- Implement proper error handling
- Test with small inputs first
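Reading the key from the environment rather than hard-coding it might look like the sketch below (`MISTRAL_API_KEY` is an assumed variable name, not mandated by the component):

```python
import os

def load_api_key(var_name="MISTRAL_API_KEY"):
    """Fetch the API key from the environment; fail fast if unset.

    var_name is an assumed convention, not a requirement of the component.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before running the component")
    return key
```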