# Hugging Face Data

## Overview

nexa-gauge can read datasets from Hugging Face with `hf://<dataset_id>` sources. Rows from the selected split are treated like local records and normalized with the same field aliases.

Install the optional dependency first:

```bash
pip install "nexa-gauge[huggingface]"
```

## Basic Usage

Estimate the cost of a run first:

```bash
nexagauge estimate eval \
  --input hf://<dataset_id> \
  --limit 10
```

Then run the evaluation:

```bash
nexagauge run eval \
  --input hf://<dataset_id> \
  --limit 10 \
  --output-dir ./report
```

The `auto` adapter mode selects the Hugging Face adapter whenever the input starts with `hf://`.
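For example, these two invocations should be equivalent for an `hf://` input; the explicit flag only matters when you want to override detection:

```bash
# Adapter auto-detected from the hf:// scheme:
nexagauge estimate eval --input hf://<dataset_id> --limit 10

# Same source with the adapter forced explicitly:
nexagauge estimate eval --input hf://<dataset_id> --adapter huggingface --limit 10
```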

## Adapter Options

| Option | Purpose |
| --- | --- |
| `--input hf://<dataset_id>` | Hugging Face dataset source. |
| `--adapter huggingface` | Force the Hugging Face adapter instead of auto-detecting. |
| `--hf-config <name>` | Optional dataset config name. |
| `--hf-revision <rev>` | Optional revision (tag, branch, or commit). |
| `--split <name>` | Dataset split for estimate. Default is `train`. |
| `--limit <n>` | Maximum number of rows to process. |
| `--start <n>` / `--end <n>` | Process a deterministic row slice. |

Example with a config and revision:

```bash
nexagauge estimate eval \
  --input hf://<dataset_id> \
  --adapter huggingface \
  --hf-config default \
  --hf-revision main \
  --limit 25
```
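And a sketch combining `--split` with a deterministic slice (the split name and row bounds here are illustrative; check your dataset for its actual splits):

```bash
# Estimate over a deterministic slice of the validation split.
nexagauge estimate eval \
  --input hf://<dataset_id> \
  --split validation \
  --start 100 \
  --end 150
```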

## Row Schema

Hugging Face rows must expose the same fields or aliases as local data.

| Purpose | Accepted field names |
| --- | --- |
| Case ID | `case_id`, `id` |
| Generation | `generation`, `response`, `answer`, `output`, `completion` |
| Question | `question`, `query`, `prompt` |
| Context | `context`, `contexts`, `documents` |
| Reference | `reference`, `ground_truth`, `gold_answer`, `label` |
| GEval config | `geval` |
| Redteam config | `redteam` |

Note: Aliases are normalized to the canonical field name in the output. If your input row uses `answer`, the metrics output will refer to it as `generation`; `query` or `prompt` becomes `question`; `ground_truth`, `gold_answer`, or `label` becomes `reference`; `contexts` or `documents` becomes `context`; `id` becomes `case_id`. The key you supplied may therefore not be the key you see in the output JSON: nexa-gauge always reports the canonical name, which is the first alias listed in each row of the table above.
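As an illustration (the file name, JSONL shape, and row content here are assumptions for the example, not part of nexa-gauge's documented interface), a record using only aliases:

```bash
# Hypothetical record using aliases only. After normalization, the output
# reports "id" as case_id, "query" as question, and "answer" as generation.
cat > rows.jsonl <<'EOF'
{"id": "q-1", "query": "What is the capital of France?", "answer": "Paris."}
EOF
```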

## Custom Column Mappings with `--field`

When a Hugging Face dataset uses column names that aren't in the table above, point nexa-gauge at them with the `--field LOGICAL=COLUMN` flag instead of preprocessing the dataset. The flag is repeatable, so map as many fields as you need in a single invocation:

```bash
nexagauge run relevance \
  --input hf://<dataset_id> \
  --field generation=text \
  --field question=q
```

In this example, the row column `text` is treated as the generation, and `q` is treated as the question. Everything downstream (chunking, claim extraction, refinement, metric scoring, the cache fingerprint, and the JSON output) uses the canonical names (`generation`, `question`, …), so two runs of the same content produce the same cache key whether the dataset uses `text`, `answer`, or `generation`.

Allowed logical keys: `case_id`, `generation`, `question`, `reference`, `context`. Anything else fails fast with a list of valid options.
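A sketch mapping several logical keys in one invocation (every column name on the right-hand side is hypothetical):

```bash
nexagauge run eval \
  --input hf://<dataset_id> \
  --field case_id=example_id \
  --field question=instruction \
  --field context=passages \
  --field reference=gold \
  --field generation=model_output
```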

Precedence: if a row carries both the canonical name and your user-mapped column (e.g. an empty `generation` field plus a populated `text`), the explicit `--field` mapping wins. This is intentional: an explicit mapping expresses clear user intent.

The same `--field` option is mirrored on `nexagauge estimate`, so the mapping doesn't need to change between cost estimation and the full run.
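For instance, the mapping from the run example above carries over to estimation unchanged:

```bash
nexagauge estimate eval \
  --input hf://<dataset_id> \
  --field generation=text \
  --field question=q \
  --limit 10
```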

Validation errors you might see:

- `Invalid --field value 'foo'. Expected 'LOGICAL=COLUMN'.` The `=` is missing.
- `Unknown logical key 'gen' in field mapping. Allowed: …` Typo in the logical key (use `generation`, not `gen`).
- `--field: duplicate mapping for 'generation', last value 'X' wins.` A warning only; the last `--field` for a logical key takes effect (demonstrated below).
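To see the duplicate-mapping behavior concretely (both column names are hypothetical):

```bash
# Emits the duplicate-mapping warning; generation=text_b takes effect.
nexagauge run eval \
  --input hf://<dataset_id> \
  --field generation=text_a \
  --field generation=text_b
```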

If a dataset does not already include generated outputs, precompute model responses into a `generation` field (or one of its aliases, or a column you map with `--field`) before running nexa-gauge.
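One possible shape for that workflow, as a sketch: the generation script below is hypothetical (bring your own), and feeding the resulting local file to `--input` assumes the local-record behavior described in Data Schema.

```bash
# Hypothetical step: generate model responses and write rows that include
# a "generation" field (or any alias from the table above).
python generate_responses.py --dataset <dataset_id> --out responses.jsonl

# Evaluate the precomputed outputs as local records.
nexagauge run eval --input responses.jsonl --limit 10 --output-dir ./report
```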

## Metric Activation

The same activation rules apply to Hugging Face rows:

- `generation` is required for chunking, refinement, claims, redteam, and most metrics.
- `question` activates relevance.
- `context` activates grounding.
- `reference` activates reference.
- `geval` activates `geval_steps` and `geval`.
- `redteam` adds or overrides custom redteam rubrics.

For the complete table, see Data Schema.

## Common Runs

Run relevance on a small slice:

```bash
nexagauge run relevance \
  --input hf://sentence-transformers/natural-questions \
  --limit 2 \
  --output-dir ./data/hg_exp_relevance
```

Run grounding on rows that include `context`:

```bash
nexagauge run grounding \
  --input hf://wandb/RAGTruth-processed \
  --limit 3 \
  --output-dir ./data/hg_exp_grounding
```

Run redteam on a dataset whose generations live in a non-canonical `text` column:

```bash
nexagauge run redteam \
  --input hf://mteb/toxic_conversations_50k \
  --field generation=text \
  --limit 3 \
  --output-dir ./data/hg_exp_toxicity
```