Sim-to-Real Deep Reinforcement Learning for Safe End-to-End Planning of Aerial Robots

Halil Ibrahim Ugurlu, Xuan Huy Pham, Erdal Kayacan*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


In this study, a novel end-to-end path planning algorithm based on deep reinforcement learning is proposed for aerial robots deployed in dense environments. The learning agent finds an obstacle-free path around the provided rough global path, relying only on observations from a forward-facing depth camera. A novel deep reinforcement learning framework is proposed to train the end-to-end policy with the capability of safely avoiding obstacles. The Webots open-source robot simulator is utilized for training the policy, introducing highly randomized environmental configurations for better generalization. The training is performed without dynamics calculations, through randomized position updates, to minimize the amount of data processed. The trained policy is first comprehensively evaluated in simulations involving physical dynamics and software-in-the-loop flight control. The proposed method achieves 38% and 50% higher success rates than deep reinforcement learning-based and artificial potential field-based baselines, respectively. The generalization capability of the method is verified by simulation-to-real transfer without further training. Real-time experiments comprising several trials in two different scenarios show a 50% higher success rate for the proposed method compared to the deep reinforcement learning-based baseline.
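The abstract's key training trick, moving the agent by direct position updates (no dynamics integration) in an environment whose obstacle layout is re-randomized every episode, can be illustrated with a toy environment. This is a minimal sketch, not the paper's actual code: the class name, the coarse one-value "depth" observation, the reward shaping, and all thresholds are assumptions chosen for brevity, and a real setup would render full depth images in Webots.

```python
import numpy as np

class RandomizedPlanningEnv:
    """Toy stand-in for the described training setup (illustrative only):
    dynamics-free position updates, per-episode obstacle randomization,
    and a depth-like observation plus the relative goal vector."""

    def __init__(self, n_obstacles=5, world=10.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_obstacles = n_obstacles
        self.world = world
        self.reset()

    def reset(self):
        # Domain randomization: a fresh obstacle layout each episode,
        # mirroring the highly randomized simulator configurations.
        self.obstacles = self.rng.uniform(
            1.0, self.world - 1.0, size=(self.n_obstacles, 2))
        self.pos = np.zeros(2)
        self.goal = np.array([self.world, self.world])
        return self._observe()

    def _observe(self):
        # Stand-in for a depth image: distance to the nearest obstacle,
        # concatenated with the vector to the goal.
        depth = np.linalg.norm(self.obstacles - self.pos, axis=1).min()
        return np.concatenate(([depth], self.goal - self.pos))

    def step(self, action):
        # Dynamics-free training: the action is applied as a bounded
        # position update, with no force/attitude computation.
        self.pos = self.pos + np.clip(action, -0.5, 0.5)
        dist_goal = np.linalg.norm(self.goal - self.pos)
        hit = np.linalg.norm(self.obstacles - self.pos, axis=1).min() < 0.3
        reward = -dist_goal - (10.0 if hit else 0.0)  # progress + safety penalty
        done = hit or dist_goal < 0.5
        return self._observe(), reward, done
```

In this sketch any standard deep reinforcement learning algorithm could consume the `(observation, reward, done)` tuples; the point is only that each `step` is a cheap position update and each `reset` draws a new world, which is what keeps the amount of processed data small during training.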
Original language: English
Article number: 109
Publication status: Published - Oct 2022


Keywords:
  • deep reinforcement learning
  • obstacle avoidance
  • quadrotors
  • sim-to-real transfer


