Hugging Face Component
Drag & Drop LLM Component
Overview
A drag-and-drop component for integrating Hugging Face's inference API. Configure model parameters and connect inputs/outputs to access thousands of open-source models.
Component Configuration
Basic Parameters
- Input: Text input for the model
- System Message: System prompt to guide model behavior
- Stream: Toggle for streaming responses
- Model ID: Model identifier, e.g., openai-community/gpt2
API Configuration
- API Token: Your Hugging Face API token
- Inference Endpoint: API endpoint URL
- Task: Specific task for the model
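To make the API parameters above concrete, here is a hedged sketch of the HTTP request a component like this could send to a Hugging Face Inference endpoint. The URL pattern and the `inputs`/`parameters` payload shape follow the public Inference API conventions, but the exact wire format the component uses is an assumption, not guaranteed.

```python
def build_request(model_id, token, prompt, **params):
    """Assemble the URL, auth header, and JSON payload for a text-generation call."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": prompt, "parameters": params}
    return url, headers, payload

url, headers, payload = build_request(
    "openai-community/gpt2",
    "hf_xxx",  # placeholder token; never hard-code a real one
    "Once upon a time",
    max_new_tokens=512,
    temperature=0.8,
)
```

The generation parameters below map directly into the `parameters` dictionary, which is why the component exposes them individually.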
Generation Parameters
- Max New Tokens: Maximum tokens to generate (default: 512)
- Temperature: Creativity control (default: 0.8)
- Top K: Top-k sampling parameter
- Top P: Nucleus sampling (default: 0.95)
- Typical P: Typical probability (default: 0.95)
- Repetition Penalty: Penalty for repeated tokens
- Retry Attempts: Number of retry attempts (default: 1)
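Top K and Top P interact: Top K keeps only the k most likely next tokens, then Top P trims that pool to the smallest set whose cumulative probability reaches p. The sketch below illustrates the idea on a made-up toy distribution; the real filtering happens inside the model server, not in the component.

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Keep the top_k most likely tokens, then trim to the smallest set
    whose cumulative probability reaches top_p; renormalize what remains."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Toy next-token distribution (illustrative values only)
vocab = {"the": 0.45, "a": 0.25, "cat": 0.15, "dog": 0.10, "xylophone": 0.05}
pool = filter_top_k_top_p(vocab, top_k=4, top_p=0.95)
```

With a low Temperature the model already concentrates on the top tokens, so aggressive Top K/Top P values mostly matter at higher temperatures; tune them together rather than in isolation.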
Output Connections
- Text: Generated text output
- Language Model: Model information and metadata
Usage Tips
- Choose a model suited to the task (e.g., a text-generation model for chat-style prompts)
- Balance sampling parameters (Temperature, Top K, Top P) rather than tuning one in isolation
- Use retry attempts to ride out transient endpoint failures
- Monitor token usage against your account's limits
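The Retry Attempts parameter exists because hosted endpoints occasionally return transient errors (for example, while a model is still loading). A minimal retry loop with exponential backoff could look like the sketch below; `flaky_call` is a stand-in for the real inference request, not part of the component.

```python
import time

def call_with_retries(fn, attempts=1, backoff=0.01):
    """Try fn once, then retry up to `attempts` more times with exponential backoff."""
    last_error = None
    for attempt in range(attempts + 1):  # initial try + `attempts` retries
        try:
            return fn()
        except Exception as err:
            last_error = err
            time.sleep(backoff * (2 ** attempt))
    raise last_error

# Stand-in for an inference request that fails once, then succeeds
state = {"calls": 0}
def flaky_call():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transient failure")
    return "generated text"

result = call_with_retries(flaky_call, attempts=1)
```

Keep the retry count small: each retry re-bills the request, and persistent failures (bad token, missing model) will not be fixed by retrying.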
Best Practices
- Handle the API token securely (environment variables or a secrets manager, never hard-coded)
- Test with smaller token limits before scaling up
- Implement proper error handling for timeouts and transient endpoint errors
- Consider model-specific parameters, since defaults and supported options vary between models
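For the token-handling practice above, one common pattern is to read the token from the environment and fail fast when it is missing. `HF_API_TOKEN` is an assumed variable name for illustration, not one the component mandates.

```python
import os

def load_token(env=os.environ, var="HF_API_TOKEN"):
    """Read the API token from the environment; fail loudly if it is unset."""
    token = env.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set; export it before running")
    return token

# Passing an explicit mapping makes the function easy to test without
# touching the real environment.
token = load_token(env={"HF_API_TOKEN": "hf_example"})
```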