cube-harness banner

cube-harness

Open source harness for building and evaluating AI agents using the CUBE Standard.

CUBE Standard defines the benchmark protocol. cube-harness is the evaluation runtime: it runs agents against any CUBE-compatible benchmark, records trajectories, and scales execution with Ray.

Note

cube-harness is in active development (alpha). Interfaces may change. We welcome early adopters and contributors who want to shape the framework, not just use it. See our Roadmap and Contributing Guide.

Have a benchmark to contribute? Fill out this short form — no commitment required. Want to go deeper? Apply to join the core team.

Quickstart

Installation

```sh
# Clone the repository
git clone https://github.com/The-AI-Alliance/cube-harness.git
cd cube-harness

# Install dependencies
make install
```

API Keys

Set your OpenAI API key:

```sh
export OPENAI_API_KEY=your-key-here
```

Any LiteLLM-supported provider works — just change model_name in the recipe.
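To make the provider switch concrete, here is a small illustrative sketch of how LiteLLM-style model strings encode the provider (the `provider_of` helper is hypothetical, not part of cube-harness; the model names are examples only):

```python
# Hypothetical helper: LiteLLM routes on the "provider/" prefix of the model
# string, so a recipe only needs a different model_name plus that provider's
# API key in the environment.
EXAMPLE_MODELS = {
    "openai": "gpt-5-mini",                               # uses OPENAI_API_KEY
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",  # uses ANTHROPIC_API_KEY
    "ollama": "ollama/llama3",                            # local server, no key
}

def provider_of(model_name: str) -> str:
    """Guess the provider a LiteLLM-style model string routes to ('openai' if unprefixed)."""
    return model_name.split("/", 1)[0] if "/" in model_name else "openai"
```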

Run Tests

```sh
make test
```

Run Hello Example

The hello_miniwob recipe demonstrates running a ReAct agent on the MiniWob benchmark.

Start here — 2 tasks, sequential (fast, no Ray required):

```sh
make debug
```

Full benchmark (all 125 tasks, parallel via Ray):

```sh
make hello
```

This will:

  1. Launch a headless browser environment
  2. Run a ReAct agent powered by gpt-5-mini on MiniWob tasks
  3. Save trajectories and results to ~/cube_harness_results/{YYYYMMDD_HHMMSS}_react_miniwob/
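The results-directory naming above can be sketched as follows (the `results_dir` helper is hypothetical; only the `{YYYYMMDD_HHMMSS}_react_miniwob` pattern comes from this README):

```python
from datetime import datetime
from pathlib import Path

# Illustrative sketch of the results-directory naming scheme described above.
def results_dir(agent: str = "react", benchmark: str = "miniwob") -> Path:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")  # e.g. 20250101_120000
    return Path.home() / "cube_harness_results" / f"{stamp}_{agent}_{benchmark}"
```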

Configuration

Recipes are the configuration. Copy one from recipes/, edit what you need, and run it. Config objects are typed Pydantic models — serialized to disk with every experiment so results are always reproducible.

See docs/configuration.md for the full philosophy, a comparison with Hydra/YAML/CLI approaches, and how to run sweeps.
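The recipe-as-config idea can be sketched with stdlib dataclasses (a stand-in: the real cube-harness configs are Pydantic models, and the class and field names here are illustrative, not the actual API):

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

# Illustrative stand-in for a typed recipe config; real cube-harness
# configs are Pydantic models with their own fields.
@dataclass(frozen=True)
class ExperimentConfig:
    model_name: str = "gpt-5-mini"
    benchmark: str = "miniwob"
    num_workers: int = 1  # 1 = sequential; >1 = parallel

def save_config(cfg: ExperimentConfig, results_dir: Path) -> Path:
    """Serialize the config next to the results so every run is reproducible."""
    path = results_dir / "config.json"
    path.write_text(json.dumps(asdict(cfg), indent=2))
    return path
```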

Experiment Viewer

cube-harness includes a Gradio-based XRay UI for exploring experiment results, trajectories, and OpenTelemetry spans:

```sh
make xray
# or: uv run ch-xray
```

The viewer displays:

  • Trajectory list — all runs with task ID, steps, reward, and duration
  • Visual timeline — color-coded steps (blue=environment, green=agent) with duration-based widths
  • Screenshots — environment state at each step
  • Step details — observations, agent actions, and LLM reasoning
  • Debug data — raw JSON, LLM calls, and tool configurations

cube-harness Viewer Screenshot

Architecture Overview

cube-harness is a universal evaluation platform for agentic benchmarks and an RL data generation framework built on top of the CUBE Standard.

Core Components

cube-harness Overview

  • Agent — LLM-powered decision maker that receives observations and produces actions
  • Environment — Executes actions, provides observations and rewards (tool + task composition)
  • Tool — Modular action provider that exposes an action space, reusable across benchmarks
  • ActionSpace — Defines the set of possible actions a tool can execute
  • Task — Defines goals, validation logic, and action subsets
  • Trajectory — Stores interaction history (observations, actions, rewards)
  • Episode — Single agent-environment loop for one task; records a trajectory
  • Benchmark — Collection of tasks; produces env configs for episodes
  • Experiment — Coordinates execution of multiple episodes across a benchmark
  • ExpRunner — Execution runtime (sequential or parallel via Ray)
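How these pieces fit together can be sketched as a minimal episode loop (protocol and function names here are illustrative, not the real cube-harness interfaces):

```python
from dataclasses import dataclass, field
from typing import Any, Protocol

class Agent(Protocol):
    def act(self, observation: Any) -> Any: ...

class Environment(Protocol):
    def reset(self) -> Any: ...
    def step(self, action: Any) -> tuple[Any, float, bool]: ...  # obs, reward, done

@dataclass
class Trajectory:
    steps: list[tuple[Any, Any, float]] = field(default_factory=list)

def run_episode(agent: Agent, env: Environment, max_steps: int = 50) -> Trajectory:
    """One agent-environment loop for a single task, recording a trajectory."""
    traj = Trajectory()
    obs = env.reset()
    for _ in range(max_steps):
        action = agent.act(obs)
        obs, reward, done = env.step(action)
        traj.steps.append((obs, action, reward))
        if done:
            break
    return traj
```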

Design Goals

  1. Benchmark Agnostic — Plug in any CUBE-standard benchmark (MiniWob, WebArena, OSWorld, …) via the Benchmark interface
  2. Agent Agnostic — Support any agent architecture by implementing the Agent protocol
  3. RL-Ready — Trajectory format designed for training data generation with full LLM call logging
  4. Scalable — Ray integration for parallel episode execution across multiple workers
  5. Observable — Structured trajectory output for analysis and debugging
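Goal 4 can be illustrated with a stdlib stand-in for the parallel runner (cube-harness uses Ray for this; a thread pool is used here only to keep the sketch self-contained, and both functions are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(task_id: int) -> float:
    """Hypothetical per-episode worker: returns the episode's reward."""
    return float(task_id % 2)  # toy rule: pretend odd-numbered tasks succeed

def run_experiment(task_ids: list[int], workers: int = 4) -> list[float]:
    """Fan episodes out across workers; map() preserves task order in the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_task, task_ids))
```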

Development

```sh
make format    # Format code
make lint      # Lint and auto-fix
make help      # Show all commands
make test      # Run tests
make coverage  # Run tests with coverage report
```

Pre-commit hooks

Install once after cloning to get ruff lint/format, trailing-whitespace checks, and DCO sign-off enforcement on every commit:

```sh
pre-commit install --hook-type pre-commit --hook-type commit-msg --hook-type prepare-commit-msg
```

The prepare-commit-msg hook automatically appends Signed-off-by: Your Name <email> to every commit message (required by the DCO). You can also sign off manually with git commit -s.

Project Structure

```
cube-harness/
├── src/cube_harness/   # Source code for the framework
├── tests/              # Test suite
├── recipes/            # Example recipes and configurations
├── docs/               # Project documentation
└── Makefile            # Common task shortcuts
```

Getting Involved

All contributions are welcome — open an issue, submit a PR, or wrap a new benchmark. See CONTRIBUTING.md for the development guide, DCO requirements, and RFC process.

Want deeper involvement? Join the core team, shape the roadmap, and get credit for what you build. Apply here.

For general AI Alliance contribution guidelines, see the community repo.

License

Licensed under Apache-2.0, CC-BY-4.0, and CDLA-2.0; see LICENSE.Apache-2.0, LICENSE.CC-BY-4.0, and LICENSE.CDLA-2.0 for the full texts.