Writers: Ryoma Kawajiri, Jethro Tan
Preferred Networks (PFN) attended the 30th IEEE/RSJ IROS conference held in Vancouver, Canada. IROS is known as the second biggest robotics conference in the world after ICRA (see here for our report on this year’s ICRA), with 2,797 registrants and 2,164 submitted papers, of which 970 were accepted for an acceptance rate of 44.82%. With no fewer than 18 sessions being held in parallel, our members had a hard time deciding which ones to attend.
Deep learning continues to become ever more omnipresent
Just as SLAM (Simultaneous Localization And Mapping) dominated IROS some years ago, the presence of deep learning continues to grow in all fields of robotics. While the majority of deep-learning-related papers in the sessions we attended were still tied to some form of computer vision, we also saw more deep reinforcement learning applied to various manipulation tasks, locomotion tasks, and trajectory generation. One such work can be seen in the video below, where a peg-in-hole insertion task is learned so as to cope with small positional as well as angular errors of the workpiece.
(Tadanobu Inoue et al., “Deep Reinforcement Learning for High Precision Assembly Tasks”, IROS 2017) — peg-in-hole task
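To make the reinforcement-learning idea concrete, here is a minimal, purely illustrative tabular Q-learning sketch for a toy one-dimensional peg-alignment problem. This is not the method from the paper above (which uses deep networks on a real robot); the environment, state discretization, actions, and rewards are all our own toy assumptions, chosen only to show the shape of the learning loop.

```python
import random

# Toy 1-D peg-alignment environment (illustrative, not the paper's setup):
# state  = discretized horizontal offset of the peg from the hole (-3..3)
# actions = nudge left, nudge right, or attempt insertion
# Insertion succeeds (reward +10) only when the offset is exactly 0.

ACTIONS = ["left", "right", "insert"]

def step(state, action):
    """Apply an action; return (next_state, reward, episode_done)."""
    if action == "insert":
        if state == 0:
            return state, 10.0, True   # successful insertion
        return state, -5.0, True       # jammed: failed insertion ends episode
    if action == "left":
        state = max(state - 1, -3)
    else:
        state = min(state + 1, 3)
    return state, -1.0, False          # small time penalty per nudge

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(-3, 4) for a in ACTIONS}
    for _ in range(episodes):
        state = rng.randint(-3, 3)     # random initial misalignment
        done = False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

def greedy_rollout(q, state):
    """Follow the learned greedy policy; True if insertion succeeds."""
    for _ in range(10):
        action = max(ACTIONS, key=lambda a: q[(state, a)])
        state, reward, done = step(state, action)
        if done:
            return reward > 0
    return False
```

After training, the greedy policy nudges the peg toward zero offset and only then attempts insertion, which is the same qualitative behavior — recovering from small positional errors — that the deep RL approach learns on real hardware.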
However, the problem of getting any deep-learning-based method to work in a generalized setting still hasn’t been solved, nor does it appear that it will be any time soon. In many presentations involving deep learning, a comment praising the idea or application would be followed by a question of whether the idea would also work slightly beyond the scope of the research presented, or even in a generalized setting — and the answer was almost always the one everybody expected. Because of this limitation of learning-based methods, some researchers are skeptical about abandoning the past decades of model-based research in favor of learning-based approaches. In an excellent plenary session, Dieter Fox suggested that instead of quarreling among ourselves in the robotics community over whether model-based or learning-based methods are more ‘righteous’ for certain applications, we should make an effort to combine the best of both worlds to tackle these challenges.
Humanoid and legged robots
One of the very few areas where deep learning (DL) is hardly used is legged robot control. We believe this to be the result of, e.g., the sample inefficiency of DL, the large gap between simulation and reality, and the requirement for high-speed control. Nevertheless, this does not mean we cannot find incredible and inspiring research based on non-learning methods! Let’s take a look at the following three works.
(Duncan W. Haldane et al., “Repetitive Extreme-Acceleration (14-G) Spatial Jumping with Salto-1P”, IROS 2017.)
Salto-1P, a one-legged robot, locomotes by jumping quickly and stably. The work won the IROS 2017 Best Paper Award — once again, congratulations to the authors on an excellent paper!
(Takahide Yoshiike et al., “Development of Experimental Legged Robot for Inspection and Disaster Response in Plants”, IROS 2017.)
With their latest work, Honda took us by surprise by once again showing new capabilities of their humanoid robot. The robot is being developed for disaster relief in locations such as factories and plants, and distinguishes itself from previous generations with its ability to climb up and down ladders and to pass through narrow spaces.
(Joohyung Kim et al., “Snapbot: A Reconfigurable Legged Robot”, IROS 2017.)
The last video we want to feature is Snapbot — a reconfigurable legged robot able to emulate changes in body configuration and various styles of legged locomotion. The legs of Snapbot can be attached to and removed from the body via magnetic mechanical couplings, so its gait varies depending on how many legs it has. We feel this is extremely valuable, e.g., for applying the work in our recent ICRA 2018 submission to real hardware, as it would let us effectively simulate damage without actually damaging the robot.
The importance of benchmarking in robotics
In one of the workshop sessions, we listened to Prof. Aaron Dollar and Dr. Berk Calli from Yale University’s GRAB Lab talk about the importance of benchmarking in robotics. It is worth noting that while performance benchmarking procedures are available for computer vision tasks (e.g., the KITTI vision benchmark suite), no such procedures exist for the other parts of the robotic manipulation pipeline (motion planning, grasping, manipulation planning, grasp manipulation, place manipulation, in-hand manipulation, etc.). Furthermore, even though full-system integration can be measured to a certain extent through robotic competitions such as the DARPA Robotics Challenge or the Amazon Robotics Challenge, the end results of these competitions are difficult to evaluate and/or do not necessarily indicate which part(s) of the overall pipeline performed flawlessly or turned out to be the bottleneck.
To counter this problem, Prof. Dollar wants to start an initiative in the robotics community to provide a standardized benchmarking suite consisting of community-maintained protocols for each process in the robotic manipulation pipeline. One of the key factors in achieving standardization is having a dataset and its corresponding object set, for which the YCB benchmark has been created as a collaboration between Yale University, Carnegie Mellon University, and the University of California, Berkeley. More information about the workshop can be found here. As robotics researchers tackling real industrial applications, it should come as no surprise that we are interested in, and very much looking forward to, what this initiative will develop into.
Not only would a good showing on a standard benchmark suite lend credibility with potential customers; less impressive results on a particular process in the pipeline would also be very valuable for identifying where critical improvements can be made in our perennial quest to develop the ultimate robotic system.