Deep learning for vision-based navigation in autonomous drone racing

Research output: Contribution to book/anthology/report/proceeding › Book chapter › Research › peer-review

Abstract

Long-term autonomy is of great importance in many real-world applications of aerial robotics, including, but not limited to, search and rescue missions in underground mines, detection and monitoring of victims trapped under collapsed buildings, aerial radiation detection in nuclear power plants after an accident, and extraterrestrial exploration. Due to the limited energy storage capacity and inefficiency of rechargeable batteries, drones must execute their missions as fast as possible, using the most efficient machine learning algorithms for perception, planning, and control. Motivated by this need, agile aerial robots have gained increasing interest in the robotics community. Autonomous drone racing (ADR), one of the most challenging robotics problems, is an appropriate testbed for benchmarking the efficiency of machine learning algorithms as well as the sensors they require. The ADR problem is chosen to demonstrate the value of the developed deep neural network-based algorithms because racing drones demand low-latency processing: they must maneuver and react to changes in the environment rapidly throughout the race. In this chapter, we introduce deep learning methods for the perception and planning of aerial robots as applied to the ADR problem. We present two distinct approaches: system decomposition and end-to-end planning. In the former, we show how convolutional neural networks are used for gate localization and gate pose estimation in perception, either by predicting a gate's center or by building a 3D global gate map. In the latter, we show how to bypass the perceive-plan-act subtasks and use deep learning methods, such as transfer learning and reinforcement learning, to learn directly from raw images and compute the desired robot actions. Furthermore, several useful simulation tools and data sets for ADR are introduced to help develop, validate, and evaluate novel algorithms.
We believe the algorithms developed for ADR can be transferred to other domains in which drones must navigate cluttered environments without a need for external sensing.
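To make the "predict a gate's center" perception idea concrete, here is a minimal sketch of a CNN-style regressor: one convolution layer, global average pooling, and a sigmoid head that outputs the gate center in normalized image coordinates. This is an illustrative assumption, not the chapter's actual architecture, and the weights are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def predict_gate_center(img, kernels, w_head, b_head):
    """Regress the (x, y) gate center in [0, 1] image coordinates."""
    # Convolution + global average pooling gives one scalar feature per kernel.
    feats = np.array([conv2d_valid(img, k).mean() for k in kernels])
    # Linear head + sigmoid keeps the prediction inside the image frame.
    return 1.0 / (1.0 + np.exp(-(w_head @ feats + b_head)))

# Hypothetical untrained parameters (in practice learned from labeled frames).
kernels = rng.standard_normal((4, 3, 3))
w_head = rng.standard_normal((2, 4))
b_head = rng.standard_normal(2)

frame = rng.standard_normal((16, 16))  # stand-in for a grayscale camera frame
center = predict_gate_center(frame, kernels, w_head, b_head)
print(center.shape)  # (2,)
```

In a real racing pipeline this prediction would feed the planner, which steers the drone toward the estimated gate center at each control step.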

Original language: English
Title of host publication: Deep Learning for Robot Perception and Cognition
Editors: Alexandros Iosifidis, Anastasios Tefas
Publisher: Elsevier
Publication date: 2022
Pages: 371-406
Chapter: 15
ISBN (Print): 978-0-323-85787-1
DOIs
Publication status: Published - 2022

Keywords

  • Autonomous drone racing
  • Deep learning
  • End-to-end planning
  • Motion planning
  • Perception for drones
  • Vision-based navigation

