
Training data from real humans, at work.
AR glasses on warehouse workers. Every shift generates labeled human demonstrations — the kind of data that doesn't exist anywhere else.
What we capture.
And what's coming next.
Egocentric Video
First-person perspective from AR headsets. Every grasp, reach, and manipulation captured with surgical clarity.
Inertial Motion
Full 6-axis body kinematics synchronized to the frame. Accelerations, rotations, and micro-corrections during physical tasks.
Gaze & Attention
Where humans look before they act. Fixation points, saccades, and attention shifts tied to every manipulation event.
Semantic Labels
Every object tagged with class, pose, and state. Every action labeled — pick, place, inspect, scan, pack — at millisecond resolution.
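To make the four streams above concrete, here is a minimal sketch of what one synchronized, labeled capture record might look like. The schema, field names, and values are all hypothetical, chosen only to illustrate how video, inertial, gaze, and semantic data could line up on a shared millisecond timestamp:

```python
from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    """One synchronized, labeled frame (hypothetical illustrative schema)."""
    timestamp_ms: int    # shared millisecond-resolution timestamp
    video_frame: str     # reference to the egocentric video frame
    imu: dict            # 6-axis kinematics: accelerometer + gyroscope
    gaze: tuple          # (x, y) fixation point in frame coordinates
    action: str          # e.g. "pick", "place", "inspect", "scan", "pack"
    objects: list = field(default_factory=list)  # tagged objects: class, pose, state

# Hypothetical example record
record = FrameRecord(
    timestamp_ms=1_712_003_482_117,
    video_frame="shift_042/cam0/frame_88123.jpg",
    imu={"accel": (0.1, -9.8, 0.3), "gyro": (0.01, 0.00, -0.02)},
    gaze=(612, 344),
    action="pick",
    objects=[{"class": "carton", "pose": "upright", "state": "sealed"}],
)
```

A flat, per-frame record like this keeps every modality addressable by timestamp, which is the property policy-learning pipelines typically rely on.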
Every shift is a labeled dataset.
Workers do their jobs. We capture what they do — structured, timestamped, ready for policy learning. No special setup. No staged environments.
Who we work with.
Robotics labs
Real demonstrations, not simulated ones. Train manipulation policies on data that reflects how humans actually move.
Foundation model teams
Need scale? Our warehouses run every day. The data doesn't stop.
Automation companies
Skip the bootstrapping phase. Start with humans doing the exact task your robot needs to learn.
Interested in a dataset or pilot? Reach out directly.
Get in touch