Accelerating Online Mapping and Behavior Prediction via Direct BEV Feature Attention

Abstract

Understanding road geometry is a critical component of the autonomous vehicle (AV) stack. While high-definition (HD) maps can readily provide such information, they suffer from high labeling and maintenance costs. Accordingly, many recent works have proposed methods for estimating HD maps online from sensor data. The vast majority of recent approaches encode multi-camera observations into an intermediate representation, e.g., a bird’s eye view (BEV) grid, and produce vector map elements via a decoder. While this architecture is performant, it decimates much of the information encoded in the intermediate representation, preventing downstream tasks (e.g., behavior prediction) from leveraging it. In this work, we propose exposing the rich internal features of online map estimation methods and show how they enable a tighter integration of online mapping with trajectory forecasting. In doing so, we find that directly accessing internal BEV features yields up to 73% faster inference speeds and up to 29% more accurate predictions on the real-world nuScenes dataset.
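To make the core idea concrete, the sketch below illustrates one way such direct BEV feature attention could look in PyTorch: per-agent trajectory queries cross-attend to the online mapper's flattened BEV feature grid rather than to decoded vector map elements. This is not the paper's implementation; all module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed shapes and names, not the paper's code): agent queries
# attend directly to intermediate BEV features from the online mapping encoder.
import torch
import torch.nn as nn


class DirectBEVFeatureAttention(nn.Module):
    """Cross-attention from agent (trajectory) queries to flattened BEV features."""

    def __init__(self, agent_dim: int = 256, bev_dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Project BEV channels to the agent embedding width if they differ.
        self.bev_proj = nn.Linear(bev_dim, agent_dim)
        self.cross_attn = nn.MultiheadAttention(agent_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(agent_dim)

    def forward(self, agent_queries: torch.Tensor, bev_features: torch.Tensor) -> torch.Tensor:
        # agent_queries: (B, num_agents, agent_dim) -- one query per modeled agent.
        # bev_features:  (B, C, H, W) -- intermediate grid from the online mapper.
        B, C, H, W = bev_features.shape
        # Flatten the spatial grid into a sequence of H*W tokens for attention.
        bev_tokens = bev_features.flatten(2).transpose(1, 2)   # (B, H*W, C)
        bev_tokens = self.bev_proj(bev_tokens)                  # (B, H*W, agent_dim)
        # Agents attend directly to BEV tokens, bypassing the vector-map decoder.
        attended, _ = self.cross_attn(agent_queries, bev_tokens, bev_tokens)
        return self.norm(agent_queries + attended)               # residual + norm


# Usage with assumed sizes: batch of 2, 6 agents, a 100x100 BEV grid.
if __name__ == "__main__":
    module = DirectBEVFeatureAttention()
    queries = torch.randn(2, 6, 256)
    bev = torch.randn(2, 256, 100, 100)
    out = module(queries, bev)
    print(out.shape)  # torch.Size([2, 6, 256])
```

The design choice this sketch highlights is that the prediction module consumes the mapper's internal representation directly, which is what allows the vectorized map decoding step to be skipped at inference time.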

Publication
European Conference on Computer Vision (ECCV)

Toronto Intelligent Systems Lab Co-authors

Xunjiang Gu
MSc Student

My name is Xunjiang (Alfred) Gu and I am a Master's student supervised by Prof. Igor Gilitschenski. I finished my BASc in Engineering Science at the University of Toronto, majoring in Robotics with a Business minor. My current research interests include Autonomous Driving, Trajectory Prediction, and Agent Modeling tasks more broadly.

Igor Gilitschenski
Assistant Professor