Overview
Learn to develop an advanced sign language detection system using action recognition and LSTM deep learning models in Python. This tutorial video guides you through leveraging keypoint detection to build sequences of keypoints, which are then processed by an action detection model to decode sign language.

Use TensorFlow and Keras to construct a deep neural network with LSTM layers for handling keypoint sequences. Master techniques for extracting MediaPipe Holistic keypoints, building a sign language model powered by LSTM layers, and predicting sign language in real time from video sequences.

Follow along with step-by-step instructions covering dependency installation, landmark detection, data collection, preprocessing, model building, training, and real-time testing. Gain insights into improving model performance and evaluating results with confusion matrices. Code resources are provided, and you can join the developer community for further discussion and support.
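The keypoint-extraction step described above can be sketched roughly as follows. The `extract_keypoints` helper name is hypothetical, and the 1662-value layout (pose 33×4, face 468×3, and 21×3 per hand) is an assumption based on MediaPipe Holistic's standard landmark output rather than the video's exact code; `results` stands for the object returned by a Holistic model's `process()` call.

```python
import numpy as np

# Assumed MediaPipe Holistic layout:
# pose: 33 landmarks x (x, y, z, visibility); face: 468 x (x, y, z);
# each hand: 21 x (x, y, z)  ->  33*4 + 468*3 + 21*3 + 21*3 = 1662 values
def extract_keypoints(results):
    """Flatten one frame's Holistic landmarks into a 1662-length vector,
    substituting zeros for any body part that was not detected."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])
```

Collecting this vector for every frame of a short clip yields the keypoint sequence that the LSTM model later consumes.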
Syllabus
- Start
- Gameplan
- How it Works
- Tutorial Start
- 1. Install and Import Dependencies
- 2. Detect Face, Hand and Pose Landmarks
- 3. Extract Keypoints
- 4. Setup Folders for Data Collection
- 5. Collect Keypoint Sequences
- 6. Preprocess Data and Create Labels
- 7. Build and Train an LSTM Deep Learning Model
- 8. Make Sign Language Predictions
- 9. Save Model Weights
- 10. Evaluation using a Confusion Matrix
- 11. Test in Real Time
- BONUS: Improving Performance
- Wrap Up
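Steps 7 and 8 of the syllabus (building the LSTM model and making predictions) might be sketched like this; the layer sizes, 30-frame sequence length, and three-class setup are illustrative assumptions, not necessarily the video's exact configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

def build_model(seq_len=30, n_features=1662, n_classes=3):
    """Stacked LSTMs consume a sequence of flattened keypoint vectors;
    a final softmax layer scores one probability per sign."""
    model = Sequential([
        Input(shape=(seq_len, n_features)),
        LSTM(64, return_sequences=True, activation="relu"),
        LSTM(128, return_sequences=True, activation="relu"),
        LSTM(64, return_sequences=False, activation="relu"),
        Dense(64, activation="relu"),
        Dense(32, activation="relu"),
        Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="Adam",
                  loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])
    return model
```

Training would then be a standard `model.fit(X_train, y_train, epochs=...)` call on the collected sequences, and real-time prediction feeds a sliding window of the last `seq_len` frames through `model.predict`.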
Taught by
Nicholas Renotte