Robots Made Out of Branches Use Deep Learning to Walk

These robots figure out how to walk in simulation first, through deep reinforcement learning. The way this is implemented in the paper is by picking up some sticks, weighing and 3D scanning them, simulating the entire robot, and then rewarding gaits that result in the farthest movement. There’s also some hand-tuning involved to avoid behaviors that might (for example) “cause stress and wear in the real robot.” 
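To make the "reward the farthest movement" idea a bit more concrete, here is a minimal sketch (in Python with NumPy) of how a simulated rollout might be scored: forward displacement of the robot's body, minus a hand-tuned penalty on joint torques standing in for the "stress and wear" terms. The function name, penalty form, and weights are illustrative assumptions, not the authors' actual implementation.

import numpy as np

def episode_reward(positions, torques, torque_weight=1e-3):
    # positions: (T, 3) base positions over the rollout; torques: (T, J) joint torques.
    # Distance covered along the x axis over the rollout.
    forward_progress = positions[-1, 0] - positions[0, 0]
    # Illustrative stand-in for the hand-tuned "stress and wear" penalty.
    wear_penalty = torque_weight * np.sum(np.square(torques))
    return forward_progress - wear_penalty

# Example with dummy rollout data (200 timesteps, 4 joints).
positions = np.cumsum(np.random.uniform(0.0, 0.01, size=(200, 3)), axis=0)
torques = np.random.uniform(-1.0, 1.0, size=(200, 4))
print(episode_reward(positions, torques))

A learning algorithm would then search for gait parameters (or a control policy) that maximize a score like this in simulation before anything runs on the physical robot.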

Overall, this probably isn't a strategy you could use for most applications, but we can speculate about how robots like these might become a bit more practical. The idea of being able to build a mobile robot out of whatever is lying around (plus some servos and maybe a sensor or two) is a compelling one, and it seems plausible that a gait could be developed from scratch on the physical robot through trial and error and feedback from a few basic sensors, since similar things have been done on other robotic platforms.

Robots built from found materials like these are unlikely to be as capable as traditional robotic designs, so they'd probably only be useful under special circumstances. Not having to worry about transporting structural materials would be nice, though, as would being able to create a variety of designs as needed from one generalized hardware set. And building a robot out of locally available materials means that anything you put together will be easy to fix, even if you do have to teach it to move all over again.

“Improvised Robotic Design With Found Objects,” by Azumi Maekawa, Ayaka Kume, Hironori Yoshida, Jun Hatori, Jason Naradowsky, and Shunta Saito, from the University of Tokyo and Preferred Networks, Inc., was presented at the Workshop on Machine Learning for Creativity and Design at NeurIPS 2018.

[ Azumi Maekawa ] via [ HY-MA ]

Thanks Fan!

Source: IEEE Spectrum Robotics