MagiClaw systematically integrates our years of development in vision-based tactile sensing and multi-modal embodiment of physical interaction, building on the patented design of the Soft Polyhedral Networks and our years of teaching experience with ME336 on Collaborative Robot Learning.

We redesigned MagiClaw by adding two vision-based Soft Polyhedral Networks at the fingertips of a fully 3D-printed robotic gripper with dimensions matching OnRobot’s RG6. The hand-held section is separated from the gripper section so that users can easily customize the design to whatever gripper system their labs already have. We also introduced active actuation of the gripper so that it can alternatively be mounted on the standard flange of a collaborative robot and used as a conventional robotic gripper.

A major change from the previous version is the introduction of an iPhone with a LiDAR camera as the core sensor for perceiving the scene with high-quality RGB and depth imaging (we use a second-hand unit to reduce cost). The multi-modal data collected by the iPhone and the 6D tactile and shape sensing from the fingers are streamed simultaneously to a web interface via a Raspberry Pi. We developed a website (deepclaw.com) and an iOS app (MagiClaw) to work with the hardware, aiming to streamline data collection, processing, and reuse at a low cost.
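To make the streaming pipeline concrete, below is a minimal sketch (not the released MagiClaw code) of how the Raspberry Pi could bundle per-finger 6D tactile readings with timestamps and push them to a connected web client over a WebSocket. The sensor-readout function, message schema, rate, and port are illustrative assumptions, not the actual implementation.

# Minimal sketch: a Raspberry Pi WebSocket server that streams timestamped
# per-finger 6D tactile frames to a web client. All names and the message
# schema are assumptions for illustration only.

import asyncio
import json
import time

import websockets  # pip install websockets


def read_finger_wrench(finger_id: int) -> list[float]:
    """Placeholder for the vision-based Soft Polyhedral Network readout.

    Returns a 6D wrench [Fx, Fy, Fz, Tx, Ty, Tz]; replace with the real
    sensor driver on the actual hardware.
    """
    return [0.0] * 6


async def stream_tactile(websocket) -> None:
    """Send tactile frames at roughly 30 Hz to a connected client."""
    while True:
        frame = {
            "timestamp": time.time(),  # shared clock for aligning with iPhone RGB-D frames
            "fingers": {
                "left": read_finger_wrench(0),
                "right": read_finger_wrench(1),
            },
        }
        await websocket.send(json.dumps(frame))
        await asyncio.sleep(1 / 30)


async def main() -> None:
    # Serve on the Pi's local network; the web interface connects as a client.
    async with websockets.serve(stream_tactile, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())

A shared timestamp of this kind is one simple way to align the tactile stream offline with the RGB-D frames recorded by the iPhone.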

We are also working with an industrial design firm to improve the hardware’s ergonomics and refine the performance of the website and app before officially launching them to the general public. This is also why we came to IROS 2024 in Abu Dhabi and participated in the workshop: to receive valuable feedback from fellow researchers with similar interests.