The Bimanual Actions Dataset is a collection of 540 RGB-D videos showing subjects performing bimanual actions in a kitchen or workshop context. It was compiled to study bimanual human behaviour with the goal of eventually improving the capabilities of humanoid robots.
Some facts about the Bimanual Actions Dataset:
| Fact | Value |
| --- | --- |
| Subjects | 6 subjects (3 female, 3 male; 5 right-handed, 1 left-handed) |
| Tasks | 9 tasks (5 in a kitchen context, 4 in a workshop context) |
| Recordings | 540 recordings in total (6 subjects performed 9 tasks with 10 repetitions) |
| Playtime | 2 hours and 18 minutes, or 221 000 RGB-D image frames |
| Quality | 640 px × 480 px image resolution; 30 fps (83 recordings are at 15 fps due to technical issues) |
| Actions | 14 actions (idle, approach, retreat, lift, place, hold, stir, pour, cut, drink, wipe, hammer, saw, and screw) |
| Objects | 12 objects (cup, bowl, whisk, bottle, banana, cutting board, knife, sponge, hammer, saw, wood, and screwdriver) |
| Annotations | Actions fully labelled for both hands individually; 5 413 frames labelled with object bounding boxes |
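Since actions are labelled for both hands individually, a typical first step when working with the dataset is mapping the action and object vocabulary to class indices. Below is a minimal Python sketch: the class lists come straight from the table above, while the JSON label layout and the `load_hand_labels` helper are hypothetical assumptions for illustration, not the dataset's documented format.

```python
import json
from pathlib import Path

# The 14 action classes, as listed in the facts table above.
ACTIONS = [
    "idle", "approach", "retreat", "lift", "place", "hold", "stir",
    "pour", "cut", "drink", "wipe", "hammer", "saw", "screw",
]

# The 12 object classes, as listed in the facts table above.
OBJECTS = [
    "cup", "bowl", "whisk", "bottle", "banana", "cutting board",
    "knife", "sponge", "hammer", "saw", "wood", "screwdriver",
]

ACTION_TO_ID = {name: i for i, name in enumerate(ACTIONS)}
OBJECT_TO_ID = {name: i for i, name in enumerate(OBJECTS)}


def load_hand_labels(label_file: Path) -> dict[str, list[int]]:
    """Convert per-hand action names to class indices.

    Assumes a hypothetical JSON layout such as
    {"left_hand": ["idle", "approach", ...], "right_hand": [...]}
    with one action name per frame; adapt this to the actual
    annotation format shipped with the dataset.
    """
    raw = json.loads(label_file.read_text())
    return {
        hand: [ACTION_TO_ID[name] for name in names]
        for hand, names in raw.items()
    }


if __name__ == "__main__":
    print(f"{len(ACTIONS)} actions, {len(OBJECTS)} objects")
    print("Class index of 'pour':", ACTION_TO_ID["pour"])
```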
If you use the KIT Bimanual Actions Dataset, please consider citing the corresponding publication:
```bibtex
@article{dreher2020learning,
  author  = {Dreher, Christian R. G. and Wächter, Mirko and Asfour, Tamim},
  title   = {Learning Object-Action Relations from Bimanual Human Demonstration Using Graph Networks},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  year    = {2020},
  volume  = {5},
  number  = {1},
  pages   = {187--194},
  doi     = {10.1109/LRA.2019.2949221},
}
```