Publications

Toward Onboard Control System for Mobile Robots via Deep Reinforcement Learning

Deep Reinforcement Learning Workshop at NeurIPS 2018

By: Megumi Miyashita, Shirou Maruyama, Yasuhiro Fujita, Mitsuru Kusumoto, Tobias Pfeiffer, Eiichi Matsumoto, Ryosuke Okuta, Daisuke Okanohara

Abstract

In this paper, we address the development of an autonomous control system for mobile robots using deep reinforcement learning. We consider a wheeled vehicle as the mobile robot and a two-dimensional horizontal LiDAR as the mounted sensor, and propose a deep neural network architecture that computes a safe action from the sensor values in an end-to-end manner. Because the training phase of our control model is performed entirely in a simulated environment, training is efficient and safe and does not risk damaging the actual robot. The trained model is then used to control the real robot. Our mobile robot operates as a standalone system: both processing of the sensor observations and execution of the control model are completed on the robot itself, without relying on external computers or communication modules. Moreover, the system runs efficiently on a limited computing platform such as a Raspberry Pi 3. We show the effectiveness of our simple control model through experimental results in the simulated environment and demonstrate the operation of the real robot in a video.
