HEDit CLI Reference

HEDit provides a command-line interface for generating and validating HED (Hierarchical Event Descriptor) annotations.

Installation

pip install hedit
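
To confirm the installation, print the version:

hedit --version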

Commands

hedit init

Initialize the HEDit CLI with your API key and preferences.

hedit init [OPTIONS]

Options:

Option         Short  Type   Description
--api-key      -k     TEXT   OpenRouter API key (prompted if not provided)
--api-url             TEXT   API endpoint URL (default: api.annotation.garden/hedit)
--model        -m     TEXT   Default model for annotation
--provider            TEXT   Provider preference (e.g., Cerebras for fast inference)
--temperature  -t     FLOAT  LLM temperature (0.0-1.0)

Example:

hedit init --api-key sk-or-v1-xxx --model openai/gpt-4o-mini

hedit annotate

Generate HED annotation from a text description.

hedit annotate DESCRIPTION [OPTIONS]

Arguments:

Argument     Type  Description
DESCRIPTION  TEXT  Natural language event description

Options:

Option                        Short  Type   Default  Description
--api-key                     -k     TEXT            OpenRouter API key (or use env var)
--api-url                            TEXT            API endpoint URL
--model                       -m     TEXT            Model to use for annotation
--eval-model                         TEXT            Model for evaluation/assessment agents (see below)
--eval-provider                      TEXT            Provider for evaluation model (e.g., Cerebras)
--provider                           TEXT            Provider preference (e.g., Cerebras)
--temperature                 -t     FLOAT           LLM temperature
--schema                      -s     TEXT   8.3.0    HED schema version
--output                      -o     TEXT   text     Output format (text, json)
--max-attempts                       INT    5        Maximum validation attempts
--assessment/--no-assessment         BOOL   False    Run completeness assessment
--standalone                         BOOL   False    Run locally without backend
--api                                BOOL   False    Use API backend (default)
--verbose                     -v     BOOL   False    Show detailed output

About --eval-model and --eval-provider

The --eval-model option specifies a separate model for the evaluation, assessment, and feedback-summarization agents, while --eval-provider selects the provider for that model. This is useful for:

  • Model benchmarking: Use a consistent evaluator (e.g., qwen/qwen3-235b-a22b-2507 via Cerebras) across different annotation models for fair comparison
  • Cost optimization: Use a cheaper model for annotation and a more capable model for quality assessment
  • Speed optimization: Use a fast provider like Cerebras for the evaluation model

When these options are omitted, all agents use the model and provider set by --model and --provider.

Examples:

# Basic usage
hedit annotate "A red circle appears on the left side of the screen"

# With specific schema version
hedit annotate "Participant pressed the spacebar" --schema 8.4.0

# JSON output for piping
hedit annotate "Audio beep plays" -o json > result.json

# With custom model settings
hedit annotate "..." --model gpt-4o-mini --temperature 0.2

# With assessment enabled
hedit annotate "A face image is shown" --assessment -v

# Standalone mode (run locally without backend)
hedit annotate "..." --standalone

# Model benchmarking with consistent evaluator
hedit annotate "A monkey reaches for a reward" \
  --model openai/gpt-4o-mini \
  --eval-model qwen/qwen3-235b-a22b-2507 \
  --eval-provider Cerebras \
  --standalone

hedit annotate-image

Generate HED annotation from an image file.

hedit annotate-image IMAGE [OPTIONS]

Arguments:

Argument  Type  Description
IMAGE     PATH  Path to image file (PNG, JPG, etc.)

Options:

Option                        Short  Type   Default  Description
--prompt                             TEXT            Custom prompt for vision model
--api-key                     -k     TEXT            OpenRouter API key
--model                       -m     TEXT            Model to use for annotation
--eval-model                         TEXT            Model for evaluation/assessment agents
--eval-provider                      TEXT            Provider for evaluation model (e.g., Cerebras)
--provider                           TEXT            Provider preference (e.g., Cerebras)
--temperature                 -t     FLOAT           LLM temperature
--schema                      -s     TEXT   8.4.0    HED schema version
--output                      -o     TEXT   text     Output format
--max-attempts                       INT    5        Maximum validation attempts
--assessment/--no-assessment         BOOL   False    Run completeness assessment
--standalone                         BOOL   False    Run locally without backend
--api                                BOOL   False    Use API backend (default)
--verbose                     -v     BOOL   False    Show detailed output

Examples:

# Basic usage
hedit annotate-image stimulus.png

# With custom vision prompt
hedit annotate-image photo.jpg --prompt "Describe the experimental setup"

# JSON output
hedit annotate-image screen.png -o json > result.json

# Standalone mode with consistent evaluator for benchmarking
hedit annotate-image nsd_image.png \
  --model openai/gpt-4o-mini \
  --eval-model qwen/qwen3-235b-a22b-2507 \
  --eval-provider Cerebras \
  --standalone

hedit validate

Validate an existing HED annotation string.

hedit validate HED_STRING [OPTIONS]

Arguments:

Argument    Type  Description
HED_STRING  TEXT  HED annotation string to validate

Options:

Option     Short  Type  Default  Description
--api-key  -k     TEXT           OpenRouter API key
--api-url         TEXT           API endpoint URL
--schema   -s     TEXT  8.3.0    HED schema version
--output   -o     TEXT  text     Output format

Examples:

# Validate a simple HED string
hedit validate "Sensory-event, Visual-presentation"

# Validate with specific schema
hedit validate "(Red, Circle)" --schema 8.4.0

# JSON output for parsing
hedit validate "Event" -o json

hedit config

Manage CLI configuration.

hedit config show

Show current configuration.

hedit config show [OPTIONS]

Options:

Option      Type  Description
--show-key  BOOL  Show full API key (default: masked)
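
Examples:

# Show configuration with the API key masked (default)
hedit config show

# Show the full API key
hedit config show --show-key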

hedit config set

Set a configuration value.

hedit config set KEY VALUE

Examples:

hedit config set models.default openai/gpt-4o
hedit config set settings.temperature 0.2
hedit config set api.url https://api.example.com/hedit

hedit config path

Show configuration file paths.

hedit config path

hedit config clear-credentials

Remove stored API credentials.

hedit config clear-credentials [--force]

hedit health

Check API health status.

hedit health [OPTIONS]

Options:

Option     Type  Description
--api-url  TEXT  API endpoint URL
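
Examples:

# Check the default endpoint
hedit health

# Check a self-hosted endpoint
hedit health --api-url https://api.example.com/hedit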

hedit --version

Show version and exit.

hedit --version

Configuration

HEDit stores configuration in ~/.config/hedit/:

  • config.yaml: General settings (models, temperature, API URL)
  • credentials.yaml: API keys (stored securely)
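
As an illustration, the dotted keys accepted by hedit config set (models.default, settings.temperature, api.url) suggest a nested layout along these lines; the exact contents of config.yaml may differ between versions:

# ~/.config/hedit/config.yaml -- illustrative sketch, not the authoritative layout
api:
  url: https://api.annotation.garden/hedit
models:
  default: openai/gpt-4o-mini
settings:
  temperature: 0.2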

Environment Variables

Variable            Description
OPENROUTER_API_KEY  Default OpenRouter API key
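
For example, exporting the key in your shell avoids passing --api-key to every command:

export OPENROUTER_API_KEY=sk-or-v1-xxx
hedit annotate "A red circle appears on the left side of the screen"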

Exit Codes

Code  Description
0     Success
1     Error (validation failed, API error, etc.)
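
This makes the CLI straightforward to use in scripts, for example:

# Stop a processing script if an annotation fails to validate
if ! hedit validate "Sensory-event, Visual-presentation"; then
  echo "HED validation failed" >&2
  exit 1
fi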

Output Formats

Text (default)

Human-readable output with colors and formatting.

JSON

Machine-readable output for scripting:

{
  "annotation": "Sensory-event, Visual-presentation, (Red, Circle, (Left-side))",
  "is_valid": true,
  "is_faithful": true,
  "is_complete": true,
  "validation_attempts": 1,
  "validation_errors": [],
  "validation_warnings": [],
  "status": "success"
}
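
Because the JSON is written to stdout, it composes with standard tools; for example, extracting just the annotation string with jq:

hedit annotate "Audio beep plays" -o json | jq -r '.annotation'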