Multimodal obstacle detection in unstructured environments with conditional random fields

Publication: Contribution to journal › Journal article › Research › peer review

Standard

Multimodal obstacle detection in unstructured environments with conditional random fields. / Kragh, Mikkel Fly; Underwood, James.

In: Journal of Field Robotics, Vol. 37, No. 1, 01.2020, pp. 53-72.



Author

Kragh, Mikkel Fly; Underwood, James. / Multimodal obstacle detection in unstructured environments with conditional random fields. In: Journal of Field Robotics. 2020; Vol. 37, No. 1. pp. 53-72.

Bibtex

@article{98067e0765f2439681047ba09672f3d0,
title = "Multimodal obstacle detection in unstructured environments with conditional random fields",
abstract = "Reliable obstacle detection and classification in rough and unstructured terrain such as agricultural fields or orchards remains a challenging problem. These environments involve large variations in both geometry and appearance, challenging perception systems that rely on only a single sensor modality. Geometrically, tall grass, fallen leaves, or terrain roughness can mistakenly be perceived as nontraversable or might even obscure actual obstacles. Likewise, traversable grass or dirt roads and obstacles such as trees and bushes might be visually ambiguous. In this paper, we combine appearance‐ and geometry‐based detection methods by probabilistically fusing lidar and camera sensing with semantic segmentation using a conditional random field. We apply a state‐of‐the‐art multimodal fusion algorithm from the scene analysis domain and adjust it for obstacle detection in agriculture with moving ground vehicles. This involves explicitly handling sparse point cloud data and exploiting both spatial, temporal, and multimodal links between corresponding 2D and 3D regions. The proposed method was evaluated on a diverse data set, comprising a dairy paddock and different orchards gathered with a perception research robot in Australia. Results showed that for a two‐class classification problem (ground and nonground), only the camera leveraged from information provided by the other modality with an increase in the mean classification score of 0.5%. However, as more classes were introduced (ground, sky, vegetation, and object), both modalities complemented each other with improvements of 1.4% in 2D and 7.9% in 3D. Finally, introducing temporal links between successive frames resulted in improvements of 0.2% in 2D and 1.5% in 3D.",
keywords = "agriculture, field robots, obstacle detection, sensor fusion",
author = "Kragh, {Mikkel Fly} and James Underwood",
year = "2020",
month = jan,
doi = "10.1002/rob.21866",
language = "English",
volume = "37",
pages = "53--72",
journal = "Journal of Field Robotics",
issn = "1556-4959",
publisher = "John Wiley {\&} Sons, Inc.",
number = "1",
}

RIS

TY - JOUR

T1 - Multimodal obstacle detection in unstructured environments with conditional random fields

AU - Kragh, Mikkel Fly

AU - Underwood, James

PY - 2020/1

Y1 - 2020/1

N2 - Reliable obstacle detection and classification in rough and unstructured terrain such as agricultural fields or orchards remains a challenging problem. These environments involve large variations in both geometry and appearance, challenging perception systems that rely on only a single sensor modality. Geometrically, tall grass, fallen leaves, or terrain roughness can mistakenly be perceived as nontraversable or might even obscure actual obstacles. Likewise, traversable grass or dirt roads and obstacles such as trees and bushes might be visually ambiguous. In this paper, we combine appearance‐ and geometry‐based detection methods by probabilistically fusing lidar and camera sensing with semantic segmentation using a conditional random field. We apply a state‐of‐the‐art multimodal fusion algorithm from the scene analysis domain and adjust it for obstacle detection in agriculture with moving ground vehicles. This involves explicitly handling sparse point cloud data and exploiting both spatial, temporal, and multimodal links between corresponding 2D and 3D regions. The proposed method was evaluated on a diverse data set, comprising a dairy paddock and different orchards gathered with a perception research robot in Australia. Results showed that for a two‐class classification problem (ground and nonground), only the camera leveraged from information provided by the other modality with an increase in the mean classification score of 0.5%. However, as more classes were introduced (ground, sky, vegetation, and object), both modalities complemented each other with improvements of 1.4% in 2D and 7.9% in 3D. Finally, introducing temporal links between successive frames resulted in improvements of 0.2% in 2D and 1.5% in 3D.

AB - Reliable obstacle detection and classification in rough and unstructured terrain such as agricultural fields or orchards remains a challenging problem. These environments involve large variations in both geometry and appearance, challenging perception systems that rely on only a single sensor modality. Geometrically, tall grass, fallen leaves, or terrain roughness can mistakenly be perceived as nontraversable or might even obscure actual obstacles. Likewise, traversable grass or dirt roads and obstacles such as trees and bushes might be visually ambiguous. In this paper, we combine appearance‐ and geometry‐based detection methods by probabilistically fusing lidar and camera sensing with semantic segmentation using a conditional random field. We apply a state‐of‐the‐art multimodal fusion algorithm from the scene analysis domain and adjust it for obstacle detection in agriculture with moving ground vehicles. This involves explicitly handling sparse point cloud data and exploiting both spatial, temporal, and multimodal links between corresponding 2D and 3D regions. The proposed method was evaluated on a diverse data set, comprising a dairy paddock and different orchards gathered with a perception research robot in Australia. Results showed that for a two‐class classification problem (ground and nonground), only the camera leveraged from information provided by the other modality with an increase in the mean classification score of 0.5%. However, as more classes were introduced (ground, sky, vegetation, and object), both modalities complemented each other with improvements of 1.4% in 2D and 7.9% in 3D. Finally, introducing temporal links between successive frames resulted in improvements of 0.2% in 2D and 1.5% in 3D.

KW - agriculture

KW - field robots

KW - obstacle detection

KW - sensor fusion

UR - http://www.scopus.com/inward/record.url?scp=85062691163&partnerID=8YFLogxK

U2 - 10.1002/rob.21866

DO - 10.1002/rob.21866

M3 - Journal article

VL - 37

SP - 53

EP - 72

JO - Journal of Field Robotics

JF - Journal of Field Robotics

SN - 1556-4959

IS - 1

ER -