Explore the development of an intelligent camera application designed to help visually impaired individuals understand their surroundings in this 28-minute PyCon US talk. Learn about a Visual Question Answering (VQA) app that uses deep learning and the Vision-and-Language Transformer (ViLT) model to answer image-related questions quickly. Discover the advantages of ViLT over traditional vision-language pre-trained models, best practices for modularizing the application, and the steps to deploy this deep learning solution on Google Cloud Platform. Gain insight into how off-the-shelf Python libraries facilitate the implementation and deployment of complex models like ViLT. Access the open-source code and follow a walkthrough to build your own visual question answering application, complete with speech-to-text and text-to-speech capabilities for enhanced accessibility.
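The talk's own pipeline lives in the linked open-source repository; as a rough illustration of the core VQA step, the sketch below runs inference with the publicly available ViLT checkpoint on the Hugging Face Hub. The dandelin/vilt-b32-finetuned-vqa model name, the transformers/Pillow dependencies, and the example inputs are assumptions for illustration, not details taken from the talk.

```python
# Minimal VQA inference sketch using the public ViLT checkpoint on the
# Hugging Face Hub. Assumes: pip install transformers torch pillow
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load the processor (joint image + text preprocessing) and the
# VQA-finetuned ViLT model.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

def answer_question(image_path: str, question: str) -> str:
    """Return the model's best answer for a question about an image."""
    image = Image.open(image_path).convert("RGB")
    # ViLT feeds image patches and question tokens into a single
    # transformer, so one forward pass scores every answer in its
    # fixed answer vocabulary.
    inputs = processor(image, question, return_tensors="pt")
    logits = model(**inputs).logits
    best = logits.argmax(-1).item()
    return model.config.id2label[best]

if __name__ == "__main__":
    # Hypothetical example inputs, not from the talk.
    print(answer_question("photo.jpg", "What is in front of me?"))
```

In the accessible application the talk describes, a function like this would sit between a speech-to-text front end (turning the user's spoken question into the question string) and a text-to-speech back end (reading the predicted answer aloud).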