RAG Quality Evaluation and Failure Pattern Analysis: Diagnosing and Improving Retrieval-Augmented Generation

Introduction: Why RAG Evaluation Is Challenging

RAG (Retrieval-Augmented Generation), developed to address the limitations of large language models (LLMs), has become a core architecture in enterprise AI systems. However, systematically answering the question "Why is the answer quality poor?" after deploying a RAG system to production is far from straightforward.

Quality issues in RAG can arise across the entire pipeline, not from a single source. The retrieval stage may have fetched the wrong documents, or the LLM may have ignored correctly retrieved content and produced hallucinations instead.

This article analyzes failure modes for each RAG pipeline component and introduces systematic evaluation methodologies and debugging strategies.

Understanding RAG Pipeline Components

A RAG system consists of three core components.

1. Retriever

The retriever takes a user query and fetches relevant document chunks from a vector database or search engine.

  • Dense Retrieval: Semantic similarity-based search using embedding models (e.g., OpenAI text-embedding-3-small, Cohere embed-v3)
  • Sparse Retrieval: Keyword-based search such as BM25
  • Hybrid Retrieval: Combining Dense + Sparse approaches
# Dense Retrieval example
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Chroma.from_documents(documents, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

# Retrieve relevant documents for a query
results = retriever.invoke("How to evaluate RAG?")
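The hybrid approach is commonly implemented by fusing the dense and sparse rankings with Reciprocal Rank Fusion (RRF). A minimal, framework-free sketch (the doc IDs are illustrative; k=60 is the constant commonly used in the RRF literature):

```python
# Hybrid retrieval via Reciprocal Rank Fusion (RRF): a document's fused
# score is the sum of 1 / (k + rank) over every ranked list it appears in.
def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Fuse a dense ranking and a BM25 ranking (toy document IDs)
dense = ["doc_a", "doc_b", "doc_c"]
sparse = ["doc_b", "doc_d", "doc_a"]
fused = reciprocal_rank_fusion([dense, sparse])
```

Documents that appear high in both lists win, which is why RRF is a robust default when dense and sparse scores are not directly comparable.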

2. Reranker

The reranker refines the initial search results by re-ranking them with higher precision. Cross-encoder models are the most common approach.

# Cohere Reranker example
from cohere import Client

co = Client(api_key="...")
reranked = co.rerank(
    model="rerank-v3.5",
    query="RAG evaluation methods",
    documents=retrieved_docs,
    top_n=3
)

3. Generator

The LLM that produces the final answer based on the retrieved context.

# Context-based answer generation
prompt = f"""Answer the question based on the following context.

Context:
{context}

Question: {query}

Answer:"""

response = llm.generate(prompt)

Key Evaluation Metrics

Metrics for measuring RAG system quality fall into two categories: retrieval performance metrics and generation performance metrics.

Retrieval Performance Metrics

| Metric | Description | Formula/Concept | When to Use |
|---|---|---|---|
| Recall@K | Proportion of relevant documents found in top K results | Retrieved relevant / Total relevant | Diagnosing retrieval misses |
| Precision@K | Proportion of relevant documents among top K results | Relevant docs / K | Diagnosing noisy results |
| MRR (Mean Reciprocal Rank) | Average reciprocal rank of the first relevant document | 1 / rank of first correct | Measuring ranking quality |
| NDCG (Normalized DCG) | Rank-aware retrieval quality score | DCG / Ideal DCG | Overall ranking quality |
| Hit Rate | Proportion of queries where at least one relevant doc is retrieved | Successful queries / Total queries | Overall retrieval success rate |

# Recall@K calculation example
def recall_at_k(retrieved_ids, relevant_ids, k):
    retrieved_set = set(retrieved_ids[:k])
    relevant_set = set(relevant_ids)
    return len(retrieved_set & relevant_set) / len(relevant_set)

# MRR calculation example
def mrr(retrieved_ids, relevant_ids):
    for i, doc_id in enumerate(retrieved_ids):
        if doc_id in relevant_ids:
            return 1.0 / (i + 1)
    return 0.0
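Precision@K and NDCG@K from the table can be computed the same way; a minimal sketch assuming binary relevance (relevant or not, rather than graded scores):

```python
import math

# Precision@K: fraction of the top-K results that are relevant
def precision_at_k(retrieved_ids, relevant_ids, k):
    top_k = retrieved_ids[:k]
    return sum(1 for doc_id in top_k if doc_id in relevant_ids) / k

# NDCG@K with binary relevance: DCG of the actual ranking divided by
# the DCG of an ideal ranking (all relevant documents ranked first)
def ndcg_at_k(retrieved_ids, relevant_ids, k):
    dcg = sum(
        1.0 / math.log2(i + 2)
        for i, doc_id in enumerate(retrieved_ids[:k])
        if doc_id in relevant_ids
    )
    ideal_hits = min(len(relevant_ids), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```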

Generation Performance Metrics

| Metric | Description | Evaluation Method |
|---|---|---|
| Faithfulness | How well the answer is grounded in the context | LLM-as-judge verifies evidence for each sentence |
| Answer Relevancy | How appropriate the answer is to the question | Generate questions from answer and compare similarity |
| Context Relevancy | How relevant the retrieved context is to the question | Proportion of relevant sentences in context |
| Answer Correctness | How well the answer matches the ground truth | Comparison with ground truth |
| Hallucination Rate | Proportion of information generated without context support | Detection of unsupported information in answer |
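Hallucination rate can be operationalized as the share of answer sentences a judge cannot ground in the context. The sketch below uses a hypothetical `is_supported` callable, with a toy word-overlap judge standing in for what would be an LLM-as-judge or NLI model in practice:

```python
# Hallucination rate: fraction of answer sentences the judge flags
# as unsupported by the retrieved context.
def hallucination_rate(answer_sentences, context, is_supported):
    if not answer_sentences:
        return 0.0
    unsupported = sum(
        1 for sentence in answer_sentences
        if not is_supported(sentence, context)
    )
    return unsupported / len(answer_sentences)

# Toy judge: a sentence is "supported" if every word occurs in the context
# (real systems use an LLM-as-judge or NLI model here)
def naive_judge(sentence, context):
    return all(
        word.strip(".,").lower() in context.lower()
        for word in sentence.split()
    )

context = "RAG retrieves documents before generation."
sentences = ["RAG retrieves documents.", "RAG was invented in 1950."]
rate = hallucination_rate(sentences, context, naive_judge)
```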

Evaluation Framework Comparison

RAGAS (Retrieval Augmented Generation Assessment)

RAGAS is an open-source framework specialized for RAG system evaluation, supporting automated evaluation using LLMs.

from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
)
from datasets import Dataset

# Prepare evaluation data
eval_data = {
    "question": ["What is RAG?"],
    "answer": ["RAG is Retrieval-Augmented Generation that..."],
    "contexts": [["RAG (Retrieval-Augmented Generation) is..."]],
    "ground_truth": ["RAG retrieves external knowledge to..."]
}

dataset = Dataset.from_dict(eval_data)
results = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall]
)
print(results)
# {'faithfulness': 0.92, 'answer_relevancy': 0.87, ...}

DeepEval

DeepEval is a framework that enables unit-test-style evaluation of LLM applications.

from deepeval import evaluate
from deepeval.metrics import (
    FaithfulnessMetric,
    AnswerRelevancyMetric,
    ContextualRelevancyMetric,
    HallucinationMetric,
)
from deepeval.test_case import LLMTestCase

test_case = LLMTestCase(
    input="What are the key RAG evaluation metrics?",
    actual_output="Key RAG evaluation metrics include faithfulness, relevancy...",
    expected_output="Faithfulness, Answer Relevancy, Context Precision...",
    retrieval_context=["Various metrics are used for RAG evaluation..."]
)

faithfulness_metric = FaithfulnessMetric(threshold=0.7)
relevancy_metric = AnswerRelevancyMetric(threshold=0.7)

evaluate([test_case], [faithfulness_metric, relevancy_metric])

LlamaIndex Evaluation

LlamaIndex provides its own evaluation module, tightly integrated with the RAG pipeline.

from llama_index.core.evaluation import (
    FaithfulnessEvaluator,
    RelevancyEvaluator,
    CorrectnessEvaluator,
    BatchEvalRunner,
)
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o")
faithfulness_evaluator = FaithfulnessEvaluator(llm=llm)
relevancy_evaluator = RelevancyEvaluator(llm=llm)

# Batch evaluation (await below must run inside an async function or a notebook)
runner = BatchEvalRunner(
    {"faithfulness": faithfulness_evaluator, "relevancy": relevancy_evaluator},
    workers=4,
)
eval_results = await runner.aevaluate_queries(query_engine, queries=queries)

Custom LLM-as-Judge

When applying domain-specific evaluation criteria, use an LLM as the judge.

JUDGE_PROMPT = """Evaluate the following RAG system response.

[Question]: {question}
[Context]: {context}
[Answer]: {answer}

Evaluation Criteria:
1. Faithfulness (1-5): Is the answer grounded in the context?
2. Completeness (1-5): Does it address all aspects of the question?
3. Conciseness (1-5): Does it deliver key information without unnecessary details?

Output scores and reasoning in JSON format."""

import json

def evaluate_with_judge(question, context, answer, judge_llm):
    prompt = JUDGE_PROMPT.format(
        question=question, context=context, answer=answer
    )
    result = judge_llm.generate(prompt)
    return json.loads(result)

Framework Comparison Table

| Feature | RAGAS | DeepEval | LlamaIndex Eval | Custom LLM-as-Judge |
|---|---|---|---|---|
| Ease of Setup | High | High | Medium (requires LlamaIndex) | Manual implementation |
| Supported Metrics | 6+ | 10+ | 5+ | Unlimited (custom) |
| CI/CD Integration | Possible | Excellent (pytest-style) | Possible | Manual implementation |
| Cost | LLM API cost | LLM API cost | LLM API cost | LLM API cost |
| Domain Customization | Medium | High | Medium | Best |
| Dashboard | Confident AI integration | DeepEval Cloud | None | Manual implementation |
| Open Source | Yes | Yes (core) | Yes | N/A |
| Ground Truth Required | Optional | Optional | Optional | Depends on design |

Major Failure Pattern Analysis

Failure Pattern 1: Wrong Chunks Retrieved (Retrieval Failure)

The most fundamental and most common failure. Irrelevant document chunks are retrieved for the user's question.

Root Cause Analysis:

  • Chunking strategy issues: context is split when using fixed-length instead of semantic boundaries
  • Domain mismatch in embedding model
  • Lack of metadata filtering

Example:

Question: "What was the Q4 2024 revenue?"

Retrieved chunk: "Q4 2023 revenue recorded $150 million..."
Document from wrong year retrieved (missing metadata filter)

Expected chunk: "Q4 2024 revenue was $200 million, 33% YoY growth..."

Solution:

# Retrieval with metadata filters
results = vectorstore.similarity_search(
    query="Q4 revenue",
    filter={"year": 2024, "quarter": "Q4"},
    k=5
)

# Apply Semantic Chunking (lives in langchain_experimental)
from langchain_experimental.text_splitter import SemanticChunker

splitter = SemanticChunker(
    embeddings=embeddings,
    breakpoint_threshold_type="percentile",
    breakpoint_threshold_amount=90,
)
chunks = splitter.split_documents(documents)

Failure Pattern 2: Context Window Overflow

Too many documents are retrieved, exceeding the context window, or conversely, too few are retrieved, resulting in insufficient information.

Too many documents:

  • Token limit exceeded, causing truncation
  • Noisy documents distract the LLM's attention
  • Increased cost

Too few documents:

  • Insufficient information for answering
  • Incomplete answers generated
# Adaptive K selection strategy
def adaptive_retrieval(query, vectorstore, min_k=3, max_k=10, threshold=0.7):
    """Dynamically adjust K based on a similarity threshold.

    Assumes scores are similarities (higher is better); some stores
    return distances (lower is better), so check your backend's convention.
    """
    results = vectorstore.similarity_search_with_score(query, k=max_k)

    # Keep only documents that clear the similarity threshold
    filtered = [
        (doc, score) for doc, score in results
        if score >= threshold
    ]

    # Fall back to the top min_k results if too few clear the threshold
    if len(filtered) < min_k:
        return [doc for doc, _ in results[:min_k]]

    return [doc for doc, _ in filtered]
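The overflow side can be handled analogously: greedily pack the highest-ranked documents until a token budget is exhausted. A minimal sketch, using whitespace word counts as a stand-in for a real tokenizer:

```python
# Greedy context packing: take documents in relevance order until the
# token budget is exhausted; skip any single doc that would overflow it.
def pack_context(ranked_docs, max_tokens, count_tokens=lambda t: len(t.split())):
    packed, used = [], 0
    for doc in ranked_docs:
        n = count_tokens(doc)
        if used + n <= max_tokens:
            packed.append(doc)
            used += n
    return packed

docs = ["one two three", "four five", "six seven eight nine"]
selected = pack_context(docs, max_tokens=6)
```

In production, replace the default `count_tokens` with the tokenizer of the generation model so the budget matches real token limits.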

Failure Pattern 3: Hallucination Despite Correct Retrieval

The LLM generates information not present in the context even when accurate documents were retrieved. This is one of the most dangerous failure patterns in RAG.

Root Cause Analysis:

  • LLM's pre-trained knowledge conflicts with the context
  • Insufficient instruction to "use only the context" in the prompt
  • Context contains only partial information, and the LLM fills in the rest
# Enhanced prompt to prevent hallucination
ANTI_HALLUCINATION_PROMPT = """You are an assistant that answers based ONLY on the given context.

Rules:
1. Use ONLY information present in the context.
2. If information is not in the context, respond: "This information is not available in the provided documents."
3. Do not speculate or use prior knowledge.
4. Cite sources at the end of each statement as [Source: Doc N].

Context:
{context}

Question: {question}

Answer:"""
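Prompt instructions alone are not a guarantee, so a post-generation verification pass is a useful backstop. The sketch below flags answer sentences with low word overlap against the context; this crude heuristic stands in for what would be an NLI model or LLM judge in practice:

```python
# Post-generation grounding check: flag answer sentences whose word
# overlap with the context falls below a threshold.
def flag_ungrounded_sentences(answer, context, min_overlap=0.5):
    context_words = {w.strip(".,").lower() for w in context.split()}
    flagged = []
    for sentence in answer.split(". "):
        words = [w.strip(".,").lower() for w in sentence.split()]
        if not words:
            continue
        overlap = sum(1 for w in words if w in context_words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

context = "Q4 2024 revenue was 200 million dollars"
answer = "Revenue was 200 million dollars. The CEO resigned in March"
flagged = flag_ungrounded_sentences(answer, context)
```

Flagged sentences can be removed, rewritten, or surfaced to the user with a warning before the answer is returned.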

Failure Pattern 4: Lost-in-the-Middle Problem

Identified in a 2023 Stanford study ("Lost in the Middle", Liu et al.), this phenomenon shows that LLMs fail to effectively utilize information located in the middle of long contexts.

Symptoms:

  • Information at the beginning and end of the context is well-utilized
  • Information in the middle is ignored or missed
  • Worsens with more retrieved documents
# Lost-in-the-Middle mitigation: place important documents at the edges
def reorder_for_lost_in_middle(documents, scores):
    """Place the most relevant documents at the beginning and end."""
    # Sort ascending so the least relevant documents are handled first
    # and pushed toward the middle as later items wrap around them.
    sorted_docs = sorted(zip(documents, scores), key=lambda x: x[1])

    reordered = []
    for i, (doc, score) in enumerate(sorted_docs):
        if i % 2 == 0:
            reordered.insert(0, doc)  # Push toward the front
        else:
            reordered.append(doc)     # Push toward the end

    return reordered

Failure Pattern 5: Embedding Model Mismatch

The query distribution and document distribution differ, causing semantic similarity to not be properly reflected in the embedding space.

Root Cause Analysis:

  • General-purpose embedding model used for specialized domain documents
  • Difference between query style (short questions) and document style (long explanations)
  • Using English-only embedding model for multilingual documents
# Mitigate mismatch by adding instruction prefix to queries
# (Using Instructor-family embedding models)
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-xl")

# Query embedding
query_embedding = model.encode(
    [["Represent the question for retrieving supporting documents:", query]]
)

# Document embedding
doc_embedding = model.encode(
    [["Represent the technical document for retrieval:", document]]
)

Failure Pattern 6: Stale Knowledge Base

The knowledge base documents do not reflect the latest information, causing answers to be out of touch with reality.

Solution Strategy:

# Knowledge base freshness management system
# (the vectorstore query/get/delete calls below are illustrative;
#  adapt them to your store's actual API)
from datetime import datetime, timedelta

class KnowledgeBaseFreshnessManager:
    def __init__(self, vectorstore, max_age_days=30):
        self.vectorstore = vectorstore
        self.max_age_days = max_age_days

    def check_staleness(self):
        """Detect stale documents"""
        cutoff = datetime.now() - timedelta(days=self.max_age_days)
        stale_docs = self.vectorstore.query(
            filter={"updated_at": {"$lt": cutoff.isoformat()}}
        )
        return stale_docs

    def incremental_update(self, new_documents):
        """Incremental update: re-embed only changed documents"""
        for doc in new_documents:
            existing = self.vectorstore.get(
                filter={"source_id": doc.metadata["source_id"]}
            )
            if existing and self._content_changed(existing, doc):
                self.vectorstore.delete(ids=[existing.id])
                self.vectorstore.add_documents([doc])
            elif not existing:
                self.vectorstore.add_documents([doc])

    def add_temporal_boost(self, results, recency_weight=0.1):
        """Give bonus score to recent documents"""
        now = datetime.now()
        boosted = []
        for doc, score in results:
            age_days = (now - doc.metadata["updated_at"]).days
            recency_score = max(0, 1 - age_days / 365)
            final_score = score + recency_weight * recency_score
            boosted.append((doc, final_score))
        return sorted(boosted, key=lambda x: x[1], reverse=True)

Failure Pattern Frequency and Severity

| Failure Pattern | Frequency | User Impact | Diagnostic Difficulty | Fix Difficulty |
|---|---|---|---|---|
| Wrong chunk retrieval | Very High | High | Medium | Medium |
| Context window overflow | High | Medium | Low | Low |
| Correct retrieval + hallucination | Medium | Very High | High | High |
| Lost-in-the-Middle | Medium | Medium | High | Medium |
| Embedding mismatch | Medium | High | High | High |
| Stale knowledge base | High | High | Low | Medium |

Systematic Debugging Workflow

Follow this workflow to efficiently diagnose quality issues in RAG systems.

Step 1: Reproduce and Classify the Problem

def classify_failure(question, retrieved_docs, generated_answer,
                     ground_truth, ground_truth_docs):
    """Systematically classify RAG failures.

    The calculate_recall and evaluate_* helpers are assumed to be
    defined elsewhere and to return scores in [0, 1].
    """

    # Step 1: Check retrieval quality
    retrieval_recall = calculate_recall(retrieved_docs, ground_truth_docs)

    if retrieval_recall < 0.5:
        return "RETRIEVAL_FAILURE"

    # Step 2: Check context relevancy
    context_relevancy = evaluate_context_relevancy(question, retrieved_docs)

    if context_relevancy < 0.5:
        return "CONTEXT_NOISE"

    # Step 3: Check answer faithfulness
    faithfulness = evaluate_faithfulness(generated_answer, retrieved_docs)

    if faithfulness < 0.7:
        return "HALLUCINATION"

    # Step 4: Check answer correctness
    correctness = evaluate_correctness(generated_answer, ground_truth)

    if correctness < 0.7:
        return "GENERATION_QUALITY"

    return "ACCEPTABLE"

Step 2: Deep Dive into Each Component

# Retrieval stage debugging
import numpy as np

def debug_retrieval(query, vectorstore, embeddings, k=10):
    results = vectorstore.similarity_search_with_score(query, k=k)

    print(f"Query: {query}")
    print(f"{'='*60}")
    for i, (doc, score) in enumerate(results):
        print(f"\n[{i+1}] Score: {score:.4f}")
        print(f"Source: {doc.metadata.get('source', 'unknown')}")
        print(f"Content: {doc.page_content[:200]}...")
        print(f"Metadata: {doc.metadata}")

    # Query embedding analysis
    query_embedding = embeddings.embed_query(query)
    print(f"\nQuery embedding norm: {np.linalg.norm(query_embedding):.4f}")
    print(f"Query embedding dim: {len(query_embedding)}")

    return results

Step 3: A/B Testing and Iterative Improvement

# RAG configuration A/B testing framework
class RAGABTest:
    def __init__(self, test_queries, ground_truths):
        self.test_queries = test_queries
        self.ground_truths = ground_truths

    def run_experiment(self, config_a, config_b, metrics):
        # _evaluate_config (not shown) runs the pipeline with the given
        # config over self.test_queries and returns per-metric score lists
        results_a = self._evaluate_config(config_a, metrics)
        results_b = self._evaluate_config(config_b, metrics)

        comparison = {}
        for metric_name in metrics:
            score_a = np.mean(results_a[metric_name])
            score_b = np.mean(results_b[metric_name])
            improvement = (score_b - score_a) / score_a * 100

            comparison[metric_name] = {
                "config_a": score_a,
                "config_b": score_b,
                "improvement_pct": improvement,
            }

        return comparison

# Usage example
ab_test = RAGABTest(test_queries, ground_truths)
result = ab_test.run_experiment(
    config_a={"chunk_size": 512, "k": 5, "model": "gpt-4o-mini"},
    config_b={"chunk_size": 1024, "k": 3, "model": "gpt-4o"},
    metrics=["faithfulness", "answer_relevancy", "recall"]
)

Practical Recommendations

Chunking Strategy Selection Guide

Chunking strategies by document type:

1. Technical documentation / API docs
Markdown header-based splitting + small chunks (256-512 tokens)

2. Legal/regulatory documents
Article/clause-based splitting + hierarchical indexing

3. Conversation logs / FAQ
Question-answer pair-based splitting

4. Academic papers
Section-based splitting + separate indexing for abstract/conclusion

5. General text
Semantic Chunking (meaning-based splitting)
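Whichever strategy applies, overlap between adjacent chunks helps preserve context across split boundaries. A minimal word-based sketch (a real implementation would count tokens with the embedding model's tokenizer):

```python
# Fixed-size chunking with overlap: each chunk shares `overlap` tokens
# with its predecessor, so sentences cut at a boundary stay recoverable.
def split_with_overlap(text, chunk_size=512, overlap=64):
    tokens = text.split()  # stand-in for a real tokenizer
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks

# Tiny illustration: 8 words, chunks of 4 with overlap of 2
chunks = split_with_overlap("a b c d e f g h", chunk_size=4, overlap=2)
```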

Production Monitoring Checklist

  1. Daily Monitoring: Retrieval hit rate, average similarity scores, answer length distribution
  2. Weekly Monitoring: User feedback (thumbs up/down) trends, hallucination rate sampling
  3. Monthly Monitoring: Full RAGAS evaluation against test set, embedding drift analysis
# Production monitoring dashboard metrics
monitoring_metrics = {
    "retrieval": {
        "avg_similarity_score": 0.82,
        "hit_rate": 0.94,
        "avg_retrieved_docs": 4.2,
        "empty_retrieval_rate": 0.02,
    },
    "generation": {
        "avg_faithfulness": 0.89,
        "avg_answer_length": 245,
        "refusal_rate": 0.05,
        "avg_latency_ms": 1200,
    },
    "user_feedback": {
        "thumbs_up_rate": 0.78,
        "escalation_rate": 0.08,
    }
}
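The embedding drift analysis in the monthly checklist can be approximated by comparing the centroid of recent query embeddings against a baseline centroid; a minimal pure-Python sketch (the 0.9 threshold is an illustrative assumption to tune per system):

```python
import math

# Embedding drift check: cosine similarity between the centroid of
# recent query embeddings and a baseline centroid. A drop below the
# threshold suggests the query distribution has shifted.
def centroid(vectors):
    # Assumes a non-empty list of equal-dimension vectors
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def drift_detected(baseline_embeddings, recent_embeddings, threshold=0.9):
    return cosine(centroid(baseline_embeddings),
                  centroid(recent_embeddings)) < threshold
```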

FAQ

Q1: Is ground truth data always necessary for RAG evaluation?

No. Metrics like RAGAS faithfulness and context relevancy can be measured without ground truth. However, metrics like Answer Correctness and Recall@K do require it. A practical path is to start with ground-truth-free metrics and gradually build up a golden dataset.

Q2: Should the evaluation LLM be the same as the generation LLM?

Generally, using a different model is recommended: judging a model's output with the same model can introduce self-evaluation bias (for example, evaluating GPT-4o-generated answers with GPT-4o as the judge). Cross-evaluating with a different model family, such as Claude, yields more objective results.

Q3: Should I improve retrieval or generation first?

In most cases, improving retrieval first is more effective. The "garbage in, garbage out" principle applies. If retrieval quality is low, even the best LLM will have limited answer quality. Once Retrieval Recall reaches 0.8 or above, it is recommended to shift focus to generation-side optimization.

Q4: What is the appropriate chunk size?

There is no single answer, but general guidelines are:

  • 256-512 tokens: Suitable for short factual QA
  • 512-1024 tokens: Suitable for general questions requiring explanations
  • 1024-2048 tokens: Suitable for questions requiring complex analysis

If chunks are too small, context is lost; if too large, noise increases. The optimal size must be determined experimentally.

Q5: Should a reranker always be used?

Rerankers are highly effective at improving the precision of initial search results, but they add latency and cost. They are especially recommended when:

  • Top-ranked retrieval results frequently contain irrelevant documents
  • Queries are complex or ambiguous
  • Retrieval Precision is low

Q6: What should be specifically considered for multilingual RAG systems?

In multilingual environments, consider the following:

  • Use multilingual embedding models (e.g., multilingual-e5-large)
  • Differentiate chunking strategies by language (morpheme-based for Korean, segmentation-based for Japanese)
  • Test cross-language retrieval (e.g., Korean question retrieving English documents)

Conclusion: Key Takeaways for Practice

Managing RAG system quality cannot be solved by simply swapping out the LLM. The key is to adopt a systematic evaluation framework, understand failure modes for each component, and continuously improve through ongoing monitoring.

The three most important action items:

  1. Start by building an evaluation dataset: Begin with a minimum of 50-100 question-answer pairs and continuously expand.
  2. Optimize retrieval first: Retrieval quality determines the upper bound of the entire pipeline.
  3. Integrate automated evaluation pipelines into CI/CD: Automatically verify that every change does not cause quality regression.

By implementing these three practices, you can make RAG system quality predictable and continuously improvable.