About

The 3rd Autonomous Vehicle Vision (AVVision) Workshop is part of the Advanced Autonomous Driving Workshop (AADW), organized in conjunction with ECCV 2022. The other jointly organized workshop is the 2nd workshop on Self-supervised Learning for Next-Generation Industry-level Autonomous Driving (SSLAD).

The 3rd AVVision workshop aims to bring together industry professionals and academics to brainstorm and exchange ideas on the advancement of computer vision techniques for autonomous driving. In this half-day workshop, we will have four keynote talks and regular paper presentations (oral and poster) to discuss the state of the art and existing challenges in autonomous driving.

Speakers

Andreas Geiger

Full Professor at University of Tübingen

Fisher Yu

Assistant Professor at ETH Zürich

Wei Liu

Head of Machine Learning Research at Nuro

Raquel Urtasun

Founder and CEO of Waabi, Full Professor at University of Toronto


Organizers

Rui (Ranger) Fan

Tongji University

Nemanja Djuric

Aurora Innovation

Wenshuo Wang

McGill University

Peter Ondruska

Toyota Woven Planet

Jie Li

Toyota Research Institute


Program Committee

Qijun Chen, Tongji University
Ming Liu, HKUST
Junhao Xiao, NUDT
Yanjun Huang, Tongji University
Fei Gao, Zhejiang University
Shuai Su, Tongji University
Hengli Wang, HKUST
Jiayuan Du, Tongji University
Jianhao Jiao, HKUST
Peng Yun, HKUST
Hesham Eraqi, AUC
Xinshuo Weng, NVIDIA
Joshua Manela, Waymo
Vladan Radosavljevic, Spotify
Abhishek Mohta, Aurora Innovation
Sebastian Lopez-Cot, Aurora Innovation
Yan Xu, Carnegie Mellon University

Ming Yang, SJTU
Xiang Gao, Idriverplus
Chengju Liu, Tongji University
M. Junaid Bocus, University of Bristol
Yi Zhou, Hunan University
Hong Wei, Reading University
Chaoqun Wang, Shandong University
Yi Feng, Tongji University
Deming Wang, Tongji University
Zhiyuan Wu, Tongji University
Bohuan Xue, HKUST
Dequan Wang, University of California, Berkeley
Peide Cai, HKUST
Nachuan Ma, Tongji University
Huaiyang Huang, HKUST
Meng Fan, Aurora Innovation

Yong Liu, Zhejiang University
Xiaolin Huang, SJTU
Lei Qiao, SJTU
Wei Ye, Tongji University
Sen Jia, Toronto AI Lab, LG Electronics
Sicen Guo, Tongji University
Jingwei Yang, Tongji University
Jiahe Fan, Tongji University
Zhuwen Li, Nuro, Inc.
Zhaoen Su, Aurora Innovation
Yixin Fei, Tongji University
Shivam Gautam, Aurora Innovation
Henggang Cui, Motional
Fang-Chieh Chou, Aurora Innovation
Tanmay Agarwal, Argo AI
Shreyash Pandey, Aurora Innovation


Submission

Call for Papers
With a number of breakthroughs in autonomous system technology over the past decade, the race to commercialize self-driving cars has become fiercer than ever. The integration of advanced sensing, computer vision, signal/image processing, and machine/deep learning into autonomous vehicles enables them to perceive the environment intelligently and navigate safely. Autonomous driving must provide safe, reliable, and efficient automated mobility in complex, uncontrolled real-world environments, with applications ranging from automated transportation and farming to public safety and environmental exploration. Visual perception is a critical component of autonomous driving. Enabling technologies include:

a) affordable sensors that can acquire useful data under varying environmental conditions;
b) reliable simultaneous localization and mapping;
c) machine learning that can effectively handle varying real-world conditions and unforeseen events, as well as “machine-learning friendly” signal processing to enable more effective classification and decision making;
d) hardware and software co-design for efficient real-time performance;
e) resilient and robust platforms that can withstand adversarial attacks and failures;
f) end-to-end system integration of sensing, computer vision, signal/image processing, and machine/deep learning.

The 3rd AVVision workshop will cover all these topics. Research papers are solicited in, but not limited to, the following topics:

Important Dates

Submission Guidelines
Regular papers: Authors are encouraged to submit high-quality, original research (i.e., work that has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, including journals, conferences, or workshops). The paper template is identical to that of the ECCV 2022 main conference. Papers are limited to 14 pages, including figures and tables, in the ECCV style. Additional pages containing only cited references are allowed. Please refer to the following files for detailed formatting instructions:
Papers that are not properly anonymized, do not use the template, or exceed fourteen pages (excluding references) will be rejected without review. The submission site is now open.

Extended abstracts: We encourage participants to submit preliminary ideas that have not been published before as extended abstracts. These submissions would benefit from additional exposure and discussion that can shape a better future publication. We also invite papers that have been published at other venues to spark discussions and foster new collaborations. Submissions may consist of up to 7 pages plus one additional page solely for references (using the template detailed above). The extended abstracts will NOT be published in the workshop proceedings.

Program (Local Tel Aviv Time)

Opening Remarks: 14:00-14:05

Invited Talk I: 14:05-14:40

Invited Talk II: 14:40-15:15

Title: Learning Robust Policies for Self-Driving
Abstract: In this talk I will present three recent results from my lab. First, I will present TransFuser (PAMI 2022), which tackles the question of how representations from complementary sensors should be integrated for autonomous driving. Second, I will discuss PlanT (CoRL 2022), our recent effort towards explainable planning for autonomous driving. Finally, I will present KING (ECCV 2022), an approach for generating novel safety-critical driving scenarios to improve the robustness of imitation-learning-based self-driving agents.

Lightning Talks I: 15:15-15:55

  1. "4D-StOP: Panoptic Segmentation of 4D LiDAR using Spatio-temporal Object Proposal Generation and Aggregation", Lars Kreuzberg, Idil Esen Zulfikar, Sabarinath Mahadevan, Francis Engelmann and Bastian Leibe (full paper, video)
  2. "BlindSpotNet: Seeing Where We Cannot See", Taichi Fukuda, Kotaro Hasegawa, Shinya Ishizaki, Shohei Nobuhara and Ko Nishino (full paper, video)
  3. "Gesture Recognition with Keypoint and Radar Stream Fusion for Automated Vehicles", Adrian Holzbock, Nicolai Kern, Christian Waldschmidt, Klaus Dietmayer and Vasileios Belagiannis (full paper, video)
  4. "An improved lightweight network based on YOLOv5s for object detection in autonomous driving", Guofa Li, Yingjie Zhang, Delin Ouyang and Xingda Qu (full paper, video)
  5. "Plausibility Verification For 3D Object Detectors Using Energy-Based Optimization", Abhishek Vivekanandan, Niels Maier and J. Marius Zöllner (full paper, video)
  6. "Lane Change Classification and Prediction with Action Recognition Networks", Kai Liang, Jun Wang and Abhir Bhalerao (full paper, video)
  7. "Joint Prediction of Amodal and Visible Semantic Segmentation for Automated Driving", Jasmin Breitenstein, Jonas Löhdefink and Tim Fingscheidt (full paper, video)
  8. "Leveraging Geometric Structure for Label-efficient Semi-supervised Scene Segmentation", Ping Hu, Stan Sclaroff and Kate Saenko (extended abstract, video)

Virtual Coffee Break: 15:55-16:05

Invited Talk III: 16:05-16:40

Title: Towards High-Quality 4D Scene Understanding in Autonomous Driving
Abstract: Understanding semantics and motion in dynamic 3D scenes is foundational for autonomous driving. The recent availability of large-scale driving video datasets creates new research possibilities in this broad area. In this talk, I will illustrate the trend through the lens of object tracking, the essential building block for dynamic scene understanding. I will start with our recent findings in multiple object tracking (MOT), after briefly reviewing current works and trends on the topic. Then, I will introduce our new tracking method based on Quasi-Dense Similarity Learning. Our method is conceptually more straightforward yet more effective than previous works, improving accuracy by almost ten percent on the Waymo MOT dataset. I will also talk about how to use the 2D tracking method for monocular 3D object tracking and video instance segmentation. Our quasi-dense 3D tracking pipeline achieves impressive improvements on the nuScenes 3D tracking benchmark and achieves the state of the art by utilizing panoramic views.

Invited Talk IV: 16:40-17:15

Title: Achievements and Open Questions for Autonomous Driving
Abstract: Autonomous driving is both an exciting and challenging problem. In this talk, I will discuss the major achievements of using machine learning to transform how we think about and tackle problems in perception, prediction, and planning at Nuro over the last six years. Despite these achievements, there are still many open questions we need to solve before we can truly scale up autonomous driving to improve everyday life. In particular, I will outline a high-level idea of how we can do joint prediction and planning with model-based reinforcement learning, and call out the key components we need to build for it and the challenges involved.

Lightning Talks II: 17:15-17:55

  1. "Human-vehicle Cooperative Visual Perception for Autonomous Driving under Complex Traffic Environments", Yiyue Zhao, Cailin Lei, Yu Shen, Yuchuan Du and Qijun Chen (full paper, video)
  2. "MCIP: Multi-Stream Network for Pedestrian Crossing Intention Prediction", Je-Seok Ham, Kangmin Bae and Jinyoung Moon (full paper, video)
  3. "SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking", Ziqi Pang, Zhichao Li and Naiyan Wang (full paper, video)
  4. "Ego-Motion Compensation of Range-Beam-Doppler Radar Data for Object Detection", Michael Meyer, Marc Unzueta, Georg Kuschk and Sven Tomforde (full paper, video)
  5. "RPR-Net: A Point Cloud-based Rotation-aware Large Scale Place Recognition Network", Zhaoxin Fan, Zhenbo Song, Wenping Zhang, Hongyan Liu, Jun He and Xiaoyong Du (full paper, video)
  6. "Learning 3D Semantics from Pose-Noisy 2D Images with Hierarchical Full Attention Network", Yuhang He, Lin Chen, Junkun Xie and Long Chen (full paper, video)
  7. "SIM2E: Benchmarking the Group Equivariant Capability of Correspondence Matching Algorithms", Shuai Su, Zhongkai Zhao, Yixin Fei, Shuda Li, Qijun Chen and Rui Fan (full paper, video)
  8. "Number-Adaptive Prototype Learning for 3D Point Cloud Semantic Segmentation", Yangheng Zhao, Jun Wang, Xiaolong Li, Yue Hu, Ce Zhang, Siheng Chen and Yanfeng Wang (extended abstract, video)
  9. "Effectiveness of Function Matching in Driving Scene Recognition", Shingo Yashima (extended abstract, video)

Closing Remarks: 17:55-18:00




Contact

Phone: +1 (412) 710-6868
