
University of Central Florida

Textual Explanation for Self-Driving Vehicles

University of Central Florida via YouTube

Overview

Explore the intricacies of self-driving vehicle technology in this 28-minute lecture from the University of Central Florida. Delve into the research question and motivation behind explainable driving models, including why interpretability matters and what the work aims to achieve. Learn the main idea of the explainable driving model and its network architecture, covering the preprocessing, convolutional feature encoding, and vehicle controller components. Discover the Strongly Aligned Attention (SAA) mechanism and the textual explanation generator built around an explanation LSTM. Examine the Berkeley DeepDrive eXplanation (BDD-X) dataset and the training process. Evaluate the vehicle controller, compare its variants, and analyze attention under regularization. Finally, assess the explanation generator through both automated metrics and human evaluation.
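The vehicle controller described above attends over convolutional feature maps to decide where to "look" before predicting a control signal. As a rough illustration of such attention-weighted feature aggregation (a minimal numpy sketch, not the lecture's actual implementation; the array shapes and function name are assumptions for illustration):

```python
import numpy as np

def soft_attention(features, scores):
    """Weight a grid of CNN features by attention scores.

    features: (L, D) array -- L spatial locations, D channels each
    scores:   (L,) array   -- unnormalized attention logits
    Returns the (D,) context vector fed to the controller.
    """
    # Softmax turns the logits into a probability map over locations
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()
    # Context vector: attention-weighted sum of spatial features
    return alpha @ features

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32))   # e.g. an 8x8 grid of 32-d features
logits = rng.standard_normal(64)
ctx = soft_attention(feats, logits)
print(ctx.shape)  # (32,)
```

The attention weights `alpha` form a heat map over the image, which is what makes the controller's decisions visually inspectable; the "strongly aligned" variant discussed in the lecture additionally ties the explanation generator's attention to this same map.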

Syllabus

Intro
Research Question and Motivation
Why is it important to know?
Goal of the work
The Main Idea: Explainable Driving Model
The Network Architecture
Preprocessing
Convolutional Feature Encoder
Vehicle Controller
Strongly Aligned Attention (SAA)
Textual Explanation Generator: Explanation LSTM
Berkeley DeepDrive eXplanation (BDD-X) Dataset
Training
Evaluation of Vehicle Controller
Comparing variants of Vehicle Controller
Attention under regularization
Evaluation of Explanation Generator
Human Evaluation
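The syllabus culminates in the explanation generator, where an LSTM decodes a natural-language justification one word at a time. A toy sketch of greedy LSTM decoding (a minimal numpy illustration with random weights and a hypothetical tiny vocabulary, not the lecture's trained model):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input, forget, output, and candidate gates."""
    z = W @ x + U @ h + b              # all four gate pre-activations, (4H,)
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Greedy decoding: embed the previous word, step the LSTM,
# and pick the argmax word at each step.
rng = np.random.default_rng(1)
V, E, H = 10, 8, 16                    # toy vocab, embedding, hidden sizes
embed = rng.standard_normal((V, E))
W = rng.standard_normal((4 * H, E)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.standard_normal((V, H)) * 0.1  # hidden state -> vocab logits

h, c = np.zeros(H), np.zeros(H)
word = 0                               # hypothetical <start> token id
tokens = []
for _ in range(5):
    h, c = lstm_step(embed[word], h, c, W, U, b)
    word = int(np.argmax(W_out @ h))
    tokens.append(word)
print(tokens)
```

In the full model each step would also consume the controller's attention context, so the generated sentence is grounded in the same visual evidence the controller used; the automated and human evaluations listed above then score these generated explanations.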

Taught by

UCF CRCV
