What we capture
Dense contact maps across the hand surface capture forces that cameras can't see. This is what closes the sim-to-real gap for dexterous manipulation.
Head-mounted and wrist-mounted cameras give the model the same viewpoint as the operator, not a detached third-person angle.
Every frame is labeled with 3D hand pose, contact state, object interactions, and spatial context, with all streams time-aligned to a shared clock.
Detailed task descriptions, environmental context, and counterfactual annotations. What happened, and what could have happened instead.
Physical priors distilled from our capture data. Geometry, contact, and persistence constraints that teach world models what's allowed to be true.
What you can build
Designed for direct ingestion into robotics pipelines.
Behavior cloning and action-chunking policies from human demonstrations
Contact-rich training data for dexterous hand control
Multi-modal ground truth for learning physical dynamics
Real-world anchoring data to calibrate and validate simulation
Interested in the data?
We're preparing sample datasets for early partners. Reach out to learn more about data access and format details.
REQUEST ACCESS