Ollama Component

Drag & Drop Local LLM Component

Overview

A drag-and-drop component for running local LLMs through Ollama. Configure model parameters and connect inputs/outputs to other components while keeping all processing on your machine.
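A quick way to verify your local setup independently of the component is a direct request to Ollama's /api/generate endpoint. A minimal sketch follows; the model name llama3 is a placeholder for whatever model you have pulled locally:

```python
import requests

# Minimal non-streaming generation request against a local Ollama server.
# Assumes Ollama is running on the default port and "llama3" has been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",      # placeholder; use any locally pulled model
        "prompt": "Why is the sky blue?",
        "stream": False,        # return a single JSON object, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```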

Component Configuration

Basic Parameters

  • Base URL: Address of the Ollama server (default: http://localhost:11434)
  • Template: Custom prompt template
  • Format: Response format specification (e.g., JSON)
  • System: System prompt
  • Input: User input text
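Assuming these fields correspond to the like-named fields of Ollama's /api/generate request body (system, template, format, and the prompt itself), here is a hedged sketch of the equivalent raw request; all values are illustrative:

```python
import requests

# Illustrative mapping of the component's basic parameters onto
# Ollama's /api/generate request fields.
payload = {
    "model": "llama3",                                  # placeholder model
    "prompt": "Summarize: Ollama runs LLMs locally.",   # Input
    "system": "You are a concise assistant.",           # System
    "format": "json",                   # Format: Ollama accepts "json" here
    # "template": "...",                # Template: overrides the model's own
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/generate",  # Base URL + path
                     json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```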

Model Parameters

  • Temperature: Creativity control (default: 0.7)
  • Context Window Size: Maximum context length, in tokens
  • Number of GPU: GPUs to use for inference
  • Number of Threads: CPU threads to utilize
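In Ollama's API these parameters travel as keys of the request's options object (temperature, num_ctx, num_gpu, num_thread). A sketch with illustrative values:

```python
import requests

# Model-level parameters go in the "options" object of the request.
options = {
    "temperature": 0.7,  # creativity control; 0.7 is the component default
    "num_ctx": 4096,     # context window size, in tokens
    "num_gpu": 1,        # GPU offload (in Ollama, layers sent to the GPU)
    "num_thread": 8,     # CPU threads; often set to the physical core count
}
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False,
          "options": options},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```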

Advanced Settings

  • Mirostat: Sampling algorithm (Disabled/Enabled)
  • Mirostat Eta: Learning rate for Mirostat
  • Mirostat Tau: Target entropy for Mirostat
  • Repeat Penalty: Penalty for repeated tokens
  • Top K: Top-k sampling parameter
  • Top P: Nucleus (top-p) sampling parameter
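These correspond to Ollama's advanced sampling options. The values below are Ollama's documented defaults (with Mirostat switched on) and are shown only as a starting point; merge them into the options object from the previous sketch:

```python
# Advanced sampling settings, again as Ollama "options" keys.
advanced = {
    "mirostat": 1,          # 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
    "mirostat_eta": 0.1,    # learning rate: how quickly the sampler adapts
    "mirostat_tau": 5.0,    # target entropy: lower = more focused output
    "repeat_penalty": 1.1,  # values > 1.0 discourage repeated tokens
    "top_k": 40,            # sample only from the 40 most likely tokens
    "top_p": 0.9,           # nucleus sampling: cumulative probability cutoff
}
options = {"temperature": 0.7, **advanced}  # combine with the basic options
```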

Output Connections

  • Text: Generated text output
  • Language Model: Model information and metadata

Usage Tips

  • Ensure the Ollama server is running locally before executing the flow (see the check below)
  • Adjust the thread count to match your CPU's physical core count
  • Configure GPU usage according to the hardware actually available
  • Test different sampling methods (Mirostat, top-k, top-p) to find what suits your task
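One way to run the check mentioned above: Ollama's /api/tags endpoint lists locally pulled models and doubles as a health check. A small sketch:

```python
import requests

def ollama_is_up(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama server responds to a model listing."""
    try:
        resp = requests.get(f"{base_url}/api/tags", timeout=5)
        resp.raise_for_status()
    except requests.RequestException:
        return False
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Available models:", models)
    return True

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_up())
```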

Best Practices

  • Monitor system resources (CPU, RAM, and VRAM) during inference
  • Use a context window sized to the task; larger windows consume more memory
  • Balance generation speed against output quality when tuning parameters
  • Implement proper error handling for an unreachable server or a missing model (a sketch follows)
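For the error-handling point, a sketch of wrapping the generate call so an unreachable server or a missing model produces a clear failure instead of an unhandled exception; the generate helper here is illustrative, not part of the component:

```python
import requests

def generate(prompt: str, model: str = "llama3",
             base_url: str = "http://localhost:11434") -> str:
    """Call /api/generate, surfacing connection and HTTP errors cleanly."""
    try:
        resp = requests.post(
            f"{base_url}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
    except requests.ConnectionError:
        raise RuntimeError(f"Ollama server unreachable at {base_url}; "
                           "is it running?")
    except requests.HTTPError as err:
        # e.g. 404 if the requested model has not been pulled locally
        raise RuntimeError(f"Ollama returned an error: {err}")
    return resp.json()["response"]
```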