Abstract
The integration of drones into shared airspace for beyond visual line of sight (BVLOS) operations presents significant challenges but holds transformative potential for sectors such as transportation, construction, energy and defence. A prerequisite for this integration is equipping drones with enhanced situational awareness to ensure collision avoidance and safe operations. Current approaches mainly target single-object detection or classification, or produce simpler sensing outputs that offer limited perceptual understanding and lack the rapid end-to-end processing needed to convert sensor data into safety-critical insights. In contrast, our study leverages radar technology for novel end-to-end semantic segmentation of aerial point clouds to simultaneously identify multiple collision hazards. By adapting and optimizing the PointNet architecture and integrating aerial domain insights, our framework distinguishes five distinct classes: mobile targets such as drones (DJI M300 and DJI Mini) and airplanes (Ikarus C42), and static returns (ground and infrastructure), thereby enhancing drones' situational awareness. To our knowledge, this is the first approach addressing simultaneous identification of multiple collision threats in an aerial setting, achieving a robust 94% accuracy. This work highlights the potential of radar technology to advance situational awareness in drones, facilitating safe and efficient BVLOS operations.
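To make the segmentation pipeline concrete, the sketch below illustrates the core PointNet idea the abstract refers to: a shared per-point MLP lifts each radar return, a symmetric (max-pool) aggregation builds an order-invariant global descriptor, and the global feature is concatenated back onto every point before a per-point classifier assigns one of the five classes (DJI M300, DJI Mini, Ikarus C42, ground, infrastructure). This is a minimal illustrative sketch with random weights and assumed layer sizes, not the authors' trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(x, w, b):
    """Apply the same dense layer + ReLU to every point independently
    (the 'shared weights' property that makes PointNet permutation-aware)."""
    return np.maximum(x @ w + b, 0.0)

def pointnet_segment(points, num_classes=5):
    """points: (N, 3) xyz radar returns -> (N,) per-point class labels.

    Layer widths (64, 128) are illustrative assumptions; weights are
    random, so the labels here are meaningless beyond shape/structure.
    """
    n = points.shape[0]
    # 1. Per-point local feature extraction with shared weights.
    w1, b1 = rng.standard_normal((3, 64)), np.zeros(64)
    w2, b2 = rng.standard_normal((64, 128)), np.zeros(128)
    local = shared_mlp(shared_mlp(points, w1, b1), w2, b2)      # (N, 128)
    # 2. Symmetric aggregation: max over points is order-invariant.
    global_feat = local.max(axis=0)                             # (128,)
    # 3. Fuse global context back onto each point's local feature.
    fused = np.concatenate([local, np.tile(global_feat, (n, 1))], axis=1)
    # 4. Per-point classification head over the five classes.
    w3, b3 = rng.standard_normal((256, num_classes)), np.zeros(num_classes)
    logits = fused @ w3 + b3                                    # (N, 5)
    return logits.argmax(axis=1)

labels = pointnet_segment(rng.standard_normal((100, 3)))
```

The max-pool in step 2 is the key design choice: because it is a symmetric function, the network's output does not depend on the (arbitrary) ordering of points in a radar scan.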
Original language | English
---|---
Number of pages | 16
Journal | IEEE Transactions on Intelligent Transportation Systems
Early online date | 30 Aug 2024
Publication status | E-pub ahead of print - 30 Aug 2024
Keywords
- BVLOS
- UAV
- drone
- UAM
- radar
- point cloud
- airborne
- AI
- deep neural networks
- semantic segmentation
- aerial scene
- air-to-air
- detect-and-avoid
- sense-and-detect