[PLATFORM]

What we capture

Tactile sensing

Dense contact maps across the hand surface capture forces that cameras can't see. This is what closes the sim-to-real gap for dexterous manipulation.

Egocentric & wrist perspective

Head-mounted and wrist-mounted cameras give the model the same viewpoint as the operator, not a detached third-person angle.

Per-frame annotation

Every frame is labeled with 3D hand pose, contact state, object interactions, and spatial context. All streams time-aligned to a shared clock.

Task context and counterfactuals

Detailed task descriptions, environmental context, and counterfactual annotations: what happened, and what could have happened.

Teacher models (coming soon)

Physical priors distilled from our capture data. Geometry, contact, and persistence constraints that teach world models what's allowed to be true.
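As an illustration of what a persistence constraint might look like in code, here is a minimal sketch: an object visible in one frame should not vanish in the next unless it is known to be occluded or out of frame. The function name, the track representation, and the "excused" vocabulary are all illustrative assumptions, not the actual teacher-model format.

```python
def persistence_violations(tracks):
    """Flag frames where an object disappears without an excuse.

    tracks: list of (visible_ids, excused_ids) per frame, where
    excused_ids are objects known to be occluded or out of frame.
    Returns a list of (frame_index, missing_ids) violations.
    """
    violations = []
    for i in range(1, len(tracks)):
        prev_visible, _ = tracks[i - 1]
        visible, excused = tracks[i]
        # Objects seen last frame but neither seen nor excused now
        missing = prev_visible - visible - excused
        if missing:
            violations.append((i, missing))
    return violations
```

A real teacher model would learn such constraints from capture data rather than hard-code them; this only shows the kind of rule being distilled.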

DATA STREAMS
VIDEO: Synchronized egocentric + wrist cameras
TACTILE: Contact force and deformation maps
POSE: Full hand articulation, 21 joints per hand
IMU: Wrist orientation and acceleration
SCENE: 3D reconstruction of the workspace
AUDIO: Spatial microphone array
All streams hardware-synced to a shared clock
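To make the stream list concrete, here is a minimal sketch of one hardware-synced frame record and a timestamp-alignment check. All field names, shapes, and the tolerance value are illustrative assumptions, not the published data format.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    t_ns: int                                      # shared-clock timestamp, nanoseconds
    ego_rgb: bytes = b""                           # egocentric camera image buffer
    wrist_rgb: bytes = b""                         # wrist camera image buffer
    tactile: list = field(default_factory=list)    # contact force / deformation map
    hand_pose: list = field(default_factory=list)  # 21 (x, y, z) joints per hand
    imu: tuple = (0.0,) * 7                        # orientation quaternion + acceleration
    in_contact: bool = False                       # per-frame contact state label

def aligned(frames, tol_ns=1_000_000):
    """True if every frame's timestamp agrees within tol_ns (1 ms default)."""
    ts = [f.t_ns for f in frames]
    return max(ts) - min(ts) <= tol_ns
```

In practice a consumer would group one record per stream under a shared timestamp and reject groups that fail the alignment check.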
[USE CASES]

What you can build

Designed for direct ingestion into robotics pipelines.

Imitation learning

Behavior cloning and action-chunking policies from human demonstrations

Grasp and manipulation

Contact-rich training data for dexterous hand control

World models

Multi-modal ground truth for learning physical dynamics

Sim-to-real transfer

Real-world anchoring data to calibrate and validate simulation
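As a toy illustration of the imitation-learning use case above, here is a minimal behavior-cloning sketch: fit a policy to (observation, action) pairs from demonstrations by supervised regression. Real pipelines train neural policies over the multi-modal streams; the linear policy and synthetic demonstrations here are assumptions made purely for illustration.

```python
def behavior_clone(demos, lr=0.01, epochs=1000):
    """Fit a linear policy a = w * obs + b to demonstration pairs via SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for obs, act in demos:
            err = (w * obs + b) - act   # squared-error gradient
            w -= lr * err * obs
            b -= lr * err
    return w, b

# Synthetic demonstrations of a ground-truth policy a = 2 * obs + 1
demos = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = behavior_clone(demos)
```

Action-chunking policies extend the same supervised objective to predict short sequences of actions per observation rather than a single step.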

[ACCESS]

Interested in the data?

We're preparing sample datasets for early partners. Reach out to learn more about data access and format details.

REQUEST ACCESS