
Off-Topic Evaluator

The Off-Topic Evaluator is a specialized component that assesses whether generated responses stay within specified topic boundaries. It helps ensure content relevance and prevents topic drift in language model outputs.

Off-Topic Evaluator Component

[Image: Off-Topic Evaluator interface and configuration]

Usage Note: Define allowed topics clearly and comprehensively. Consider edge cases and related topics when configuring topic boundaries.

Component Inputs

  • Input Text: The text to evaluate

    Example: "Content to check for topic relevance"

  • Allowed Topics: List of permitted topics

    Example: ["technology", "science", "education"]

  • LLM Model: The language model used for evaluation

    Examples: "gpt-4", "claude-2"

Component Outputs

  • Is On Topic: Boolean result of topic evaluation

    Example: true/false

  • Detected Topics: Topics found in the content

    List of identified topics

  • Confidence Score: Confidence in the evaluation

    Example: 0.95 (95% confidence)

  • Explanation: Reasoning for the evaluation

    Detailed explanation of topic assessment
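
Together, these outputs form a result object. The interface below is an illustrative sketch using the field names from the implementation example later on this page.

// Illustrative shape of an evaluation result; names follow the example output below.
interface OffTopicEvaluationResult {
  isOnTopic: boolean;       // whether the content stays within the allowed topics
  detectedTopics: string[]; // topics identified in the content
  confidence: number;       // confidence in the evaluation, from 0.0 to 1.0
  explanation: string;      // reasoning behind the assessment
}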

How It Works

The Off-Topic Evaluator uses the configured language model to perform topic detection and semantic analysis, determining whether content stays within the specified topic boundaries. It takes context, related concepts, and topic hierarchies into account during the evaluation.
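
Concretely, an LLM-backed check can be phrased as a structured prompt. The template below is only a sketch of that idea; the prompt the component actually uses is not documented here.

// Hypothetical prompt builder showing how an LLM could be asked to judge
// topic relevance; the component's real prompt may differ.
function buildEvaluationPrompt(inputText: string, allowedTopics: string[]): string {
  return [
    "You are a topic-relevance evaluator.",
    `Allowed topics: ${allowedTopics.join(", ")}.`,
    "Consider context, related concepts, and topic hierarchies.",
    "Text to evaluate:",
    inputText,
    'Respond with JSON: {"isOnTopic": boolean, "detectedTopics": string[], "confidence": number, "explanation": string}.',
  ].join("\n");
}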

Evaluation Process

  1. Topic boundary definition
  2. Content analysis
  3. Topic detection
  4. Relevance assessment
  5. Confidence calculation
  6. Explanation generation
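
The sketch below maps these steps onto code. It substitutes a crude keyword-overlap heuristic for the component's LLM-based analysis, so it only illustrates the flow, not the quality of the actual evaluation.

// Self-contained sketch of the evaluation flow; keyword matching stands in
// for the component's LLM-based topic detection and semantic analysis.
interface EvaluationResult {
  isOnTopic: boolean;
  detectedTopics: string[];
  confidence: number;
  explanation: string;
}

// Step 1 (topic boundary definition) is captured by the allowedTopics parameter.
function evaluateOffTopic(inputText: string, allowedTopics: string[]): EvaluationResult {
  // Steps 2-3: content analysis and topic detection (naive keyword matching here).
  const text = inputText.toLowerCase();
  const detectedTopics = allowedTopics.filter((topic) => text.includes(topic.toLowerCase()));

  // Step 4: relevance assessment - on topic if any allowed topic was detected.
  const isOnTopic = detectedTopics.length > 0;

  // Step 5: confidence calculation (placeholder: share of allowed topics detected).
  const confidence = allowedTopics.length > 0 ? detectedTopics.length / allowedTopics.length : 0;

  // Step 6: explanation generation.
  const explanation = isOnTopic
    ? `Content mentions allowed topics: ${detectedTopics.join(", ")}.`
    : "Content does not mention any of the allowed topics.";

  return { isOnTopic, detectedTopics, confidence, explanation };
}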

Use Cases

  • Content Moderation: Ensure topic compliance
  • Discussion Control: Maintain conversation focus
  • Quality Assurance: Verify content relevance
  • Topic Filtering: Filter off-topic content (see the sketch after this list)
  • Content Organization: Categorize by topic relevance
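
For the topic-filtering use case, the evaluator can sit inside a simple loop that keeps only on-topic items. The sketch below takes the evaluation call as a callback so it stays independent of how the evaluator itself is constructed (see the implementation example below).

// Sketch of topic filtering: keep only the items an evaluator judges on-topic.
// The evaluate callback is a stand-in for a call to the Off-Topic Evaluator.
async function filterOnTopic(
  items: string[],
  allowedTopics: string[],
  evaluate: (inputText: string, allowedTopics: string[]) => Promise<{ isOnTopic: boolean }>,
): Promise<string[]> {
  const kept: string[] = [];
  for (const item of items) {
    const result = await evaluate(item, allowedTopics);
    if (result.isOnTopic) {
      kept.push(item); // retain on-topic content; drop everything else
    }
  }
  return kept;
}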

Implementation Example

const offTopicEvaluator = new OffTopicEvaluator({
  inputText: "The latest developments in quantum computing...",
  allowedTopics: ["technology", "science", "computing"],
  llmModel: "gpt-4"
});

const result = await offTopicEvaluator.evaluate();

// Output:
// {
//   isOnTopic: true,
//   detectedTopics: ["computing", "technology", "quantum physics"],
//   confidence: 0.95,
//   explanation: "Content primarily discusses quantum computing, which falls
//                 under the allowed topics of technology and computing"
// }
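
In practice you will usually gate on both the boolean result and the confidence score. Continuing from the result above, the 0.8 threshold is an arbitrary example value, not a recommended default.

// Act on the result from the example above; 0.8 is an arbitrary example threshold.
if (result.isOnTopic && result.confidence >= 0.8) {
  // accept the content
} else {
  // flag the content for review or reject it
}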

Best Practices

  • Define clear topic boundaries
  • Include related topics in the allowed list
  • Set appropriate confidence thresholds
  • Update the topic list regularly
  • Monitor false positives and negatives (see the sketch below)
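
One lightweight way to follow the last practice is to compare the evaluator's verdicts against a small set of human-labelled samples. The sketch below treats "on topic" as the positive class; the data structure is an assumption for illustration, not part of the component.

// Sketch: count false positives/negatives against human-labelled samples.
// Convention: "on topic" is the positive class.
interface LabelledSample {
  evaluatorSaysOnTopic: boolean; // the evaluator's verdict
  humanSaysOnTopic: boolean;     // the ground-truth label
}

function countErrors(samples: LabelledSample[]): { falsePositives: number; falseNegatives: number } {
  let falsePositives = 0; // evaluator: on topic, human: off topic
  let falseNegatives = 0; // evaluator: off topic, human: on topic
  for (const s of samples) {
    if (s.evaluatorSaysOnTopic && !s.humanSaysOnTopic) falsePositives++;
    if (!s.evaluatorSaysOnTopic && s.humanSaysOnTopic) falseNegatives++;
  }
  return { falsePositives, falseNegatives };
}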