Query Resolution Evaluator
The Query Resolution Evaluator is a specialized component that assesses how effectively queries are resolved in conversations. It analyzes the completeness, accuracy, and relevance of responses to user queries across conversation entries.

Query Resolution Evaluator interface and configuration
Usage Note: Provide complete conversation context for accurate evaluation. The evaluator's effectiveness depends on having sufficient context about the query and its resolution process.
Component Inputs
- Conversation Entries: The conversation history to evaluate
Example: Array of conversation messages
- LLM Model: The language model for evaluation
Example: "gpt-4", "claude-2"
- Max Tokens: Maximum token budget for the evaluation call (see the type sketch after this list)
Example: 2000
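A minimal TypeScript sketch of how these inputs could be typed. The interface and field names mirror the implementation example further down; the exact shape is an assumption, not the component's published API.

// Hypothetical input types, inferred from the example on this page.
interface ConversationEntry {
  role: "user" | "assistant" | "system"; // speaker of the message
  content: string;                       // message text
}

interface QueryResolutionEvaluatorConfig {
  conversationEntries: ConversationEntry[]; // full conversation to evaluate
  llmModel: string;                         // e.g. "gpt-4" or "claude-2"
  maxTokens: number;                        // token budget for the evaluation call
}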
Component Outputs
- Resolution Score: Overall query resolution effectiveness
Example: 0.95 (95% effective)
- Resolution Details: Specific aspects of query resolution
Detailed breakdown of resolution quality
- Improvement Suggestions: Areas for better resolution
Recommendations for enhancement
- Unresolved Points: Aspects of the query not fully addressed (see the type sketch after this list)
List of pending or unclear points
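Likewise, the outputs could be represented roughly as below. Field names follow the example output shown later; the three-part sub-score breakdown is an assumption based on that example.

// Hypothetical result shape, inferred from the example output below.
interface QueryResolutionResult {
  resolutionScore: number;   // overall effectiveness, 0..1
  details: {
    completeness: number;    // how fully the query was answered
    clarity: number;         // how clearly it was answered
    accuracy: number;        // how correct the answer was
  };
  suggestions: string[];     // recommendations for better resolution
  unresolvedPoints: string[]; // aspects not fully addressed
}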
How It Works
The Query Resolution Evaluator analyzes conversation flow and response quality to determine how effectively queries are being addressed. It considers factors like completeness, accuracy, and clarity of responses.
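One plausible way the overall score could be derived is a weighted combination of those factors. The weights below are illustrative assumptions; the component does not document its exact formula.

// Illustrative only: combine sub-scores into a single resolution score.
// In the real component, the sub-scores themselves come from the configured LLM.
function combineScores(
  details: { completeness: number; clarity: number; accuracy: number },
  weights = { completeness: 0.4, accuracy: 0.4, clarity: 0.2 } // assumed weights
): number {
  return (
    details.completeness * weights.completeness +
    details.accuracy * weights.accuracy +
    details.clarity * weights.clarity
  );
}

// combineScores({ completeness: 0.95, clarity: 0.90, accuracy: 0.92 }) ≈ 0.93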
Evaluation Process
- Conversation analysis
- Query identification
- Response evaluation
- Resolution assessment
- Gap identification
- Recommendation generation (see the pipeline sketch after this list)
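The sketch below chains these steps into a single function, reusing the ConversationEntry type and combineScores helper from the sketches above. The helper bodies are deliberately simplified placeholders; in the real component each step is driven by prompts to the configured LLM.

// Illustrative pipeline mirroring the steps above; placeholder logic only.
async function evaluateResolution(entries: ConversationEntry[]) {
  // 1-2. Conversation analysis and query identification
  const queries = entries.filter((e) => e.role === "user");
  // 3. Response evaluation: collect the assistant replies to those queries
  const responses = entries.filter((e) => e.role === "assistant");
  // 4. Resolution assessment (in practice, sub-scores are produced by the LLM)
  const details = { completeness: 0.95, clarity: 0.9, accuracy: 0.92 };
  // 5. Gap identification
  const unresolvedPoints = queries.length > responses.length
    ? ["One or more queries received no reply"]
    : [];
  // 6. Recommendation generation
  const suggestions = unresolvedPoints.map((p) => `Follow up: ${p}`);
  return { resolutionScore: combineScores(details), details, suggestions, unresolvedPoints };
}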
Use Cases
- Support Quality: Evaluate support response effectiveness
- Conversation Analysis: Assess query handling quality
- Training Improvement: Identify areas for model enhancement
- User Satisfaction: Monitor query resolution success
- Process Optimization: Improve query handling workflows
Implementation Example
const queryResolutionEvaluator = new QueryResolutionEvaluator({
  conversationEntries: [
    { role: "user", content: "How do I reset my password?" },
    { role: "assistant", content: "Here are the steps..." }
  ],
  llmModel: "gpt-4",
  maxTokens: 2000
});

const result = await queryResolutionEvaluator.evaluate();
// Output:
// {
//   resolutionScore: 0.92,
//   details: {
//     completeness: 0.95,
//     clarity: 0.90,
//     accuracy: 0.92
//   },
//   suggestions: ["Add verification step explanation"],
//   unresolvedPoints: ["Account recovery options"]
// }
Best Practices
- Provide complete conversation context
- Set appropriate evaluation thresholds
- Monitor resolution trends (see the monitoring sketch after this list)
- Address identified gaps promptly
- Maintain consistent evaluation criteria
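As a concrete example of the threshold and trend practices above, a monitoring wrapper might look like the following. The 0.8 threshold and the 50-evaluation rolling window are illustrative assumptions, not recommended defaults.

// Illustrative monitoring helper: flag low-scoring evaluations and track a trend.
const RESOLUTION_THRESHOLD = 0.8;  // assumed acceptance threshold
const recentScores: number[] = []; // rolling history of resolution scores

function recordResolution(result: { resolutionScore: number; unresolvedPoints: string[] }) {
  recentScores.push(result.resolutionScore);
  if (recentScores.length > 50) recentScores.shift(); // keep a 50-evaluation window

  if (result.resolutionScore < RESOLUTION_THRESHOLD) {
    // Address identified gaps promptly, e.g. route the conversation to a human reviewer.
    console.warn("Low resolution score; unresolved points:", result.unresolvedPoints);
  }

  const trend = recentScores.reduce((sum, s) => sum + s, 0) / recentScores.length;
  console.log(`Rolling average resolution score: ${trend.toFixed(2)}`);
}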