Ollama Component
Drag & Drop Local LLM Component
Overview
A drag-and-drop component for running local LLMs through Ollama. Configure model parameters and connect inputs/outputs to other components while keeping all processing on your machine.
Component Configuration
Basic Parameters
- Base URL: Ollama server address (default: http://localhost:11434)
- Template: Custom prompt template
- Format: Response format specification
- System: System prompt
- Input: User input text
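Under the hood these parameters feed an HTTP request to the Ollama server at Base URL. A minimal sketch of the equivalent raw call, assuming the server is running and a model has been pulled (the model name llama3 is a placeholder):

```python
import requests

BASE_URL = "http://localhost:11434"  # the Base URL parameter

payload = {
    "model": "llama3",                         # placeholder; use any pulled model
    "system": "You are a concise assistant.",  # the System parameter
    "prompt": "Summarize what Ollama does.",   # the Input parameter
    "stream": False,                           # return one JSON object, not a stream
}

resp = requests.post(f"{BASE_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```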
Model Parameters
- Temperature: Creativity control (default: 0.7)
- Context Window Size: Maximum context length
- Number of GPU: GPUs to use for inference
- Number of Threads: CPU threads to utilize
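These knobs correspond to keys in the `options` object of an Ollama request (names follow Ollama's parameter reference; the values below are illustrative, and llama3 is again a placeholder):

```python
import requests

options = {
    "temperature": 0.7,  # Temperature: higher values produce more varied output
    "num_ctx": 4096,     # Context Window Size, in tokens
    "num_gpu": 1,        # Number of GPU (0 forces CPU-only inference)
    "num_thread": 8,     # Number of Threads for CPU inference
}

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False, "options": options},
    timeout=120,
)
print(resp.json()["response"])
```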
Advanced Settings
- Mirostat: Sampling algorithm (Disabled/Enabled)
- Mirostat Eta: Learning rate for mirostat
- Mirostat Tau: Target entropy for mirostat
- Repeat Penalty: Penalty for repeated tokens
- Top K: Top-k sampling parameter
- Top P: Nucleus sampling parameter
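The advanced settings travel in the same `options` object as the model parameters above. A sketch of their Ollama option names (the values are commonly cited defaults, not recommendations):

```python
# Merge this into the "options" dict of the request shown earlier.
advanced = {
    "mirostat": 0,          # 0 = disabled; 1 or 2 selects Mirostat v1/v2
    "mirostat_eta": 0.1,    # Mirostat Eta: learning rate
    "mirostat_tau": 5.0,    # Mirostat Tau: target entropy
    "repeat_penalty": 1.1,  # Repeat Penalty for repeated tokens
    "top_k": 40,            # Top K sampling cutoff
    "top_p": 0.9,           # Top P (nucleus) sampling mass
}
```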
Output Connections
- Text: Generated text output
- Language Model: Model information and metadata
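How these outputs map onto a raw Ollama response is an assumption here, since the component abstracts the call, but a non-streaming `/api/generate` reply does carry both the text and run metadata:

```python
import requests

reply = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Name one planet.", "stream": False},
    timeout=120,
).json()

text = reply["response"]  # roughly what the Text connection carries
metadata = {              # roughly what Language Model exposes (assumed mapping)
    "model": reply.get("model"),
    "eval_count": reply.get("eval_count"),          # tokens generated
    "total_duration": reply.get("total_duration"),  # wall time, in nanoseconds
}
print(text)
print(metadata)
```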
Usage Tips
- Ensure Ollama is running locally before executing the flow (a health-check sketch follows this list)
- Adjust thread count to match your available CPU cores
- Set GPU usage to match your hardware (0 for CPU-only inference)
- Test with different sampling methods to find the best fit for your task
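A quick way to verify the first tip, assuming the default Base URL: `/api/tags` lists locally pulled models, so a successful call confirms the server is up.

```python
import requests

def ollama_available(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        requests.get(f"{base_url}/api/tags", timeout=5).raise_for_status()
        return True
    except requests.RequestException:
        return False

if not ollama_available():
    print("No server found; run `ollama serve` and pull a model first.")
```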
Best Practices
- Monitor system resources (RAM, VRAM, CPU) while models are loaded
- Use a context window sized to your task; larger windows consume more memory
- Balance speed and quality when choosing model size and sampling settings
- Implement proper error handling around requests (a retry sketch follows this list)
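One shape such error handling could take: a request timeout plus simple exponential backoff around the generate call. The function name and retry policy are illustrative, not part of the component.

```python
import time
import requests

def generate_with_retry(prompt: str, model: str = "llama3", retries: int = 3,
                        base_url: str = "http://localhost:11434") -> str:
    """Call /api/generate with a timeout and exponential backoff between attempts."""
    for attempt in range(retries):
        try:
            resp = requests.post(
                f"{base_url}/api/generate",
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["response"]
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
```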