Overview
Explore the intricacies of self-driving vehicle technology in this 28-minute lecture from the University of Central Florida. Delve into the research question and motivation behind explainable driving models, understanding their importance and goals. Learn about the main idea of the explainable driving model and its network architecture, including the preprocessing, convolutional feature encoding, and vehicle controller components. Discover the Strongly Aligned Attention (SAA) mechanism and the Textual Explanation Generator with its Explanation LSTM. Examine the Berkeley DeepDrive eXplanation dataset and the training process. Evaluate the vehicle controller, compare its variants, and analyze attention under regularization. Finally, assess the explanation generator through both automated and human evaluation methods.
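For orientation, the sketch below illustrates the general pattern the lecture discusses: CNN features pass through a spatial attention layer to a vehicle-control head, and an explanation LSTM is conditioned on the same attended context so the textual rationale stays tied to what the controller looked at. This is a minimal, illustrative PyTorch example with made-up names and dimensions, not the lecture's or the paper's actual implementation.

```python
# Illustrative sketch only -- not the lecture's exact model. It shows
# CNN features -> spatial attention -> control outputs, plus an explanation
# LSTM that reuses the controller's attended context (a crude stand-in for
# aligning controller and explanation attention).
import torch
import torch.nn as nn


class AttentiveController(nn.Module):
    """Predicts control signals and explanation words from attended CNN features."""

    def __init__(self, feat_dim=64, hidden_dim=128, vocab_size=1000):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)            # per-location attention score
        self.control_head = nn.Linear(feat_dim, 2)    # e.g. acceleration, course change
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.expl_lstm = nn.LSTM(hidden_dim + feat_dim, hidden_dim, batch_first=True)
        self.word_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens):
        # feats:  (B, L, feat_dim) flattened CNN feature map over L spatial locations
        # tokens: (B, T) explanation word indices (teacher forcing)
        alpha = torch.softmax(self.attn(feats), dim=1)          # (B, L, 1) attention map
        context = (alpha * feats).sum(dim=1)                    # (B, feat_dim) attended context
        controls = self.control_head(context)                   # (B, 2) control predictions

        emb = self.embed(tokens)                                # (B, T, hidden_dim)
        ctx = context.unsqueeze(1).expand(-1, emb.size(1), -1)  # repeat context at each step
        out, _ = self.expl_lstm(torch.cat([emb, ctx], dim=-1))
        word_logits = self.word_head(out)                       # (B, T, vocab_size)
        return controls, word_logits, alpha


if __name__ == "__main__":
    model = AttentiveController()
    feats = torch.randn(2, 10 * 20, 64)        # dummy 10x20 feature grid
    tokens = torch.randint(0, 1000, (2, 12))   # dummy explanation tokens
    controls, word_logits, alpha = model(feats, tokens)
    print(controls.shape, word_logits.shape, alpha.shape)
```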
Syllabus
Intro
Research Question and Motivation
Why Is It Important to Know?
Goal of the Work
The Main Idea: Explainable Driving Model
The Network Architecture
Preprocessing
Convolutional Feature Encoder
Vehicle Controller
Strongly Aligned Attention (SAA)
Textual Explanation Generator: Explanation LSTM
Berkeley DeepDrive eXplanation Dataset
Training
Evaluation of Vehicle Controller
Comparing variants of Vehicle Controller
Attention under regularization
Evaluation of Explanation Generator
Human Evaluation
Taught by
UCF CRCV