Introduction
Overview
nexa-gauge is a graph-based evaluation system for LLM and LVLM application outputs. It replaces ad-hoc manual checks with a repeatable pipeline that can be run on local or hosted datasets.
At a high level, nexa-gauge:
- Normalizes raw records into a typed evaluation state.
- Executes only the nodes required for the selected target.
- Reuses prior node outputs through deterministic caching.
- Produces a consistent per-case report for downstream tooling.
This architecture supports day-to-day prompt iteration, benchmark runs, and release gating with measurable quality and safety signals.
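The normalization step above can be pictured as mapping raw records into a typed per-case state that downstream nodes read and write. The following sketch is illustrative only; the field names and `CaseState` type are assumptions, not the actual nexa-gauge schema.

```python
from dataclasses import dataclass, field

@dataclass
class CaseState:
    """Hypothetical typed evaluation state; fields are illustrative."""
    question: str
    answer: str
    context: list = field(default_factory=list)
    results: dict = field(default_factory=dict)  # filled in by metric nodes

def normalize(raw: dict) -> CaseState:
    """Map a raw record into the typed state (the role of the scan node)."""
    return CaseState(
        question=raw.get("question", "").strip(),
        answer=raw.get("answer", "").strip(),
        context=list(raw.get("context", [])),
    )

state = normalize({"question": " What is DNS? ", "answer": "A naming system."})
```

Typing the state up front means every later node can rely on the same contract instead of re-parsing raw records.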
Why LLM-As-A-Judge Is Necessary
Exact-match metrics are useful but limited for modern generative systems. In many real tasks, multiple answers can be valid, quality depends on context use, and failure modes are semantic rather than lexical.
LLM-as-a-judge provides scalable semantic evaluation by scoring outputs against explicit criteria. In nexa-gauge, this capability is combined with targeted metrics so teams can evaluate quality from multiple angles:
- `relevance` for question-answer alignment.
- `grounding` for support in provided context.
- `redteam` for safety and risk behavior.
- `geval` for rubric-based judgment.
- `reference` for overlap with known reference answers.
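To make "scoring outputs against explicit criteria" concrete, here is a minimal sketch of how a rubric-based judge prompt might be assembled. The rubric text and the 1-5 scale are assumptions for illustration, not nexa-gauge's actual prompts.

```python
# Illustrative rubrics; not the actual nexa-gauge criteria text.
RUBRIC = {
    "relevance": "Does each claim directly address the question?",
    "grounding": "Is each claim supported by the provided context?",
}

def judge_prompt(metric: str, question: str, answer: str) -> str:
    """Build a judge prompt for one metric (hypothetical format)."""
    return (
        f"Criterion: {RUBRIC[metric]}\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Score 1-5 and give a one-sentence justification."
    )
```

The LLM's scored response would then be parsed and stored on the case state for aggregation.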
Execution Model And Caching
nexa-gauge provides two operational modes:
- `run` executes the selected branch and returns final artifacts.
- `estimate` computes the cost of eligible, uncached work before execution.
Both modes follow the same branch-planning logic, which makes cost estimates actionable before you run full evaluations.
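Branch planning can be sketched as a dependency walk: only the nodes the selected target transitively needs are scheduled, in dependency order. The edge map below is an assumed example wiring, not nexa-gauge's actual graph.

```python
# Assumed dependency edges for illustration only.
DEPS = {
    "report": ["eval"],
    "eval": ["relevance", "grounding"],
    "relevance": ["dedup"],
    "grounding": ["dedup"],
    "dedup": ["claims"],
    "claims": ["chunk"],
    "chunk": ["scan"],
    "scan": [],
}

def plan(target: str) -> list:
    """Return the nodes needed for `target`, dependencies first."""
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in DEPS[node]:
            visit(dep)
        order.append(node)

    visit(target)
    return order
```

Because `estimate` and `run` share this plan, the set of nodes priced by `estimate` is exactly the set `run` would execute.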
Caching is route-aware and deterministic. Reuse occurs only when input content and routing semantics are unchanged. Changes to inputs, prompts, or model routing intentionally invalidate affected steps.
Practical outcome:
- Teams can estimate budget before execution.
- Iterative runs avoid recomputing stable nodes.
- Results remain reproducible under fixed inputs and model routes.
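One plausible way to realize route-aware deterministic caching, sketched below under stated assumptions (the key fields and hashing scheme are illustrative, not nexa-gauge's implementation): derive the cache key from the node, its input content, the prompt version, and the model route, so changing any of them invalidates reuse.

```python
import hashlib
import json

def cache_key(node: str, inputs: dict, prompt_version: str, model_route: str) -> str:
    """Deterministic key: identical content and routing -> identical key."""
    payload = json.dumps(
        {"node": node, "inputs": inputs, "prompt": prompt_version, "route": model_route},
        sort_keys=True,  # canonical ordering keeps the key deterministic
    )
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("grounding", {"answer": "x"}, "v1", "model-large")
k2 = cache_key("grounding", {"answer": "x"}, "v1", "model-large")
k3 = cache_key("grounding", {"answer": "x"}, "v1", "model-small")
```

With this scheme, `k1 == k2` (stable reuse) while `k3` differs because the model route changed.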
Architecture
Node Summary
Input And Orchestration
| Node | Purpose |
|---|---|
| scan | Normalizes record fields and initializes case state. |
| eval | Aggregates metric branches into a unified result. |
| report | Projects final output into a stable report contract. |
Utility Nodes
| Node | Purpose |
|---|---|
| chunk | Splits generated text for downstream extraction. |
| claims | Extracts atomic claims from generated output. |
| dedup | Removes duplicate claims before scoring. |
| geval_steps | Resolves evaluation steps for GEval scoring. |
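The chunk → claims → dedup chain can be sketched as follows. This is illustrative only: real claim extraction would use an LLM, which is replaced here by naive sentence splitting.

```python
import re

def chunk(text: str) -> list:
    """Split generated text into sentence-level pieces (naive stand-in)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def dedup(claims: list) -> list:
    """Drop duplicate claims, ignoring case and trailing periods."""
    seen, out = set(), []
    for c in claims:
        key = c.lower().rstrip(".")
        if key not in seen:
            seen.add(key)
            out.append(c)
    return out

claims = dedup(chunk("Paris is in France. Paris is in France. It is the capital."))
```

Deduplicating before scoring keeps metric nodes from paying twice to judge the same claim.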
Metric Nodes
| Node | Purpose |
|---|---|
| relevance | Measures how directly claims answer the question. |
| grounding | Measures whether claims are supported by context. |
| redteam | Evaluates safety and policy risk using rubrics. |
| geval | Runs final rubric-driven LLM judging. |
| reference | Computes reference-based lexical metrics. |
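As an example of a reference-based lexical metric, here is a token-level F1 sketch, one plausible component of a `reference`-style node; it is not necessarily the exact metric nexa-gauge computes.

```python
def token_f1(candidate: str, reference: str) -> float:
    """Token-level F1 between a candidate answer and a reference answer."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    common, ref_pool = 0, list(ref)
    for tok in cand:
        if tok in ref_pool:  # count each reference token at most once
            ref_pool.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(cand)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Lexical metrics like this are cheap and deterministic, which is why they complement, rather than replace, LLM-as-a-judge scoring.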
Typical Workflow
```bash
# Estimate full evaluation cost for a dataset slice
nexagauge estimate eval --input sample.json --limit 100

# Run full evaluation and write per-case report files
nexagauge run eval --input sample.json --limit 100 --output-dir ./report
```

For iterative development, repeated runs on unchanged inputs and routing should show high cache reuse and lower incremental latency.