Learn to implement and control multi-modal transformer agents in this 21-minute tutorial, which demonstrates how to connect transformer applications on the HuggingFace Hub using StarCoder as the central intelligence. Explore the integration of different transformer modalities - audio, visual, and written content - while using StarCoder, OpenAssistant, or OpenAI's text-davinci-003 model as the main switching intelligence. Follow along with real-time coding in a Colab notebook to see how to implement transformer agents, set up prompt templates, and create ordered execution flows. Learn to extend the multimodal toolbox with Gradio tools, and discover how transformer agents link different HuggingFace transformers according to the task at hand. Gain practical insight into this approach, which pairs individual task-specific transformers with a controlling AI/LLM.
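The pattern the video describes - a controlling LLM that selects task-specific models from a toolbox and chains them in order - can be sketched in plain Python. The sketch below is a conceptual illustration only: the tool names and the keyword-based `pick_tools` controller are hypothetical stand-ins, not the HuggingFace Transformers Agents API; in a real agent, an LLM such as StarCoder generates the execution plan.

```python
# Conceptual sketch of the agent pattern: a "switching intelligence"
# picks tools from a multimodal toolbox and runs them in order.
# All tool names here are hypothetical stand-ins for real models.

TOOLBOX = {
    "image_caption": lambda x: f"caption({x})",   # stand-in for an image-to-text model
    "translate": lambda x: f"translated({x})",    # stand-in for a translation model
    "text_to_speech": lambda x: f"speech({x})",   # stand-in for a TTS model
}

def pick_tools(prompt: str) -> list[str]:
    """Toy controller: map keywords in the prompt to an ordered tool plan.
    A real agent would ask an LLM (e.g. StarCoder) to produce this plan."""
    plan = []
    if "caption" in prompt:
        plan.append("image_caption")
    if "translate" in prompt:
        plan.append("translate")
    if "read it aloud" in prompt:
        plan.append("text_to_speech")
    return plan

def run_agent(prompt: str, data: str) -> str:
    """Ordered execution flow: feed each tool's output into the next."""
    result = data
    for name in pick_tools(prompt):
        result = TOOLBOX[name](result)
    return result

print(run_agent("caption the image, then read it aloud", "cat.png"))
# -> speech(caption(cat.png))
```

At the time of the video, the `transformers` library exposed this pattern through its (since-deprecated) Agents API, which could use a remote StarCoder endpoint as the controller; Gradio tools extended the available toolbox in the same plug-in fashion.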
Multi-Modal Transformer Agents Controlled by StarCoder - Building AI Systems Without LangChain
Discover AI via YouTube
Overview
Syllabus
Multi-Modal Transformer AGENTS, controlled by StarCoder (W/o LangChain)
Taught by
Discover AI