On-Device LLMs with Functional Token Fine-Tuning - Octopus v2 Implementation
Discover AI via YouTube
Overview
Learn about cutting-edge developments in on-device Large Language Models (LLMs) through this technical video, which explores functional token fine-tuning and the Octopus v2 framework. Dive into Stanford University's research on implementing efficient function calling for edge devices such as iPhones and Pixels using Gemma 2B. Examine practical code implementations of function calling across major AI platforms, including OpenAI, Anthropic/Claude 3, and Cohere Command R+. Understand how functional tokens improve the energy efficiency of LLM function calls, and explore the broader implications for industry leaders such as NVIDIA and Microsoft. Master the technical aspects of implementing AI agents with improved accuracy and inference speed through hands-on demonstrations and real-world applications.
Syllabus
On-Device LLMs
Octopus v2 Function Calling Apple, Google
CODE Anthropic, Cohere Function Calling
Implications for NVIDIA, Microsoft?
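The core idea behind functional tokens, as described in the video, is that the model is fine-tuned to emit a single special token per on-device function instead of spelling out a full function name, which shortens generation and saves energy. A minimal sketch of the dispatch side of that idea is below; the token names (`<nexa_0>`, `<nexa_1>`) and the stub functions are illustrative assumptions, not the actual Octopus v2 vocabulary or API.

```python
import re

# Hypothetical functional-token registry, in the spirit of Octopus v2:
# each on-device capability is bound to one special token, so the model
# emits a single token rather than a multi-token function name.
FUNCTIONS = {
    "<nexa_0>": lambda location: f"weather({location})",   # stub action
    "<nexa_1>": lambda query: f"web_search({query})",      # stub action
}

def dispatch(model_output: str) -> str:
    """Parse a functional-token call like <nexa_0>('Boston') and run it."""
    match = re.match(r"(<nexa_\d+>)\('([^']*)'\)", model_output.strip())
    if match is None:
        raise ValueError(f"no functional token found in: {model_output!r}")
    token, arg = match.groups()
    return FUNCTIONS[token](arg)

print(dispatch("<nexa_0>('Boston')"))  # routes to the weather stub
```

Because the router only has to recognize one reserved token per function, the decode step stays short regardless of how descriptive the underlying function names are, which is where the latency and energy savings come from.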
Taught by
Discover AI