Overview
Explore parameter-efficient tuning techniques for boosting Large Language Model (LLM) performance in this 25-minute conference talk from the 2023 GAIA Conference. Delve into the adaptation of p-tuning, a prompt-learning method, to low-resource language settings, with a focus on Swedish. Learn about an improved version of p-tuning implemented in NVIDIA NeMo that enables continuous multitask learning of virtual prompts. Gain insights from Zenodia Charpy, a senior deep learning data scientist at NVIDIA, as she shares her experience training and deploying very large language models for non-English and low-resource languages. Discover how these techniques can help solve real-world natural language tasks, improve performance on downstream NLP tasks, and ground LLM responses in factually correct information.
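To give a sense of the idea the talk covers, the sketch below illustrates the core mechanism of p-tuning in plain PyTorch: a small trainable prompt encoder produces continuous "virtual token" embeddings that are prepended to the frozen base model's token embeddings, so only the encoder's parameters are updated. This is a minimal illustrative sketch, not NVIDIA NeMo's actual implementation or API; all names and sizes here are hypothetical.

import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    """Produces trainable soft-prompt ("virtual token") embeddings.

    Following the original p-tuning recipe, the virtual-token embeddings
    are reparameterized through a small LSTM + MLP head; the base LM
    itself stays frozen, so only these parameters are trained.
    """

    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # hidden_size must be even for the bidirectional LSTM below.
        self.embedding = nn.Embedding(num_virtual_tokens, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2,
                            bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )
        self.register_buffer("ids", torch.arange(num_virtual_tokens))

    def forward(self, batch_size: int) -> torch.Tensor:
        # (num_tokens, hidden) -> (batch, num_tokens, hidden)
        x = self.embedding(self.ids).unsqueeze(0)
        x, _ = self.lstm(x)
        x = self.mlp(x)
        return x.expand(batch_size, -1, -1)

def prepend_virtual_prompts(prompt_encoder, word_embeddings, input_ids):
    """Concatenate soft prompts in front of the frozen token embeddings."""
    token_embeds = word_embeddings(input_ids)          # (B, T, H)
    prompt_embeds = prompt_encoder(input_ids.size(0))  # (B, P, H)
    return torch.cat([prompt_embeds, token_embeds], dim=1)

if __name__ == "__main__":
    # Toy demo with made-up sizes (hypothetical, for illustration only).
    vocab, hidden, batch, seq = 100, 16, 2, 5
    word_embeddings = nn.Embedding(vocab, hidden)
    word_embeddings.requires_grad_(False)  # base LM embeddings stay frozen
    encoder = PromptEncoder(num_virtual_tokens=8, hidden_size=hidden)
    ids = torch.randint(0, vocab, (batch, seq))
    out = prepend_virtual_prompts(encoder, word_embeddings, ids)
    print(out.shape)  # torch.Size([2, 13, 16])

Because only the prompt encoder is trained, separate sets of virtual prompts can be learned for different downstream tasks against the same frozen model, which is what makes the multitask setting described in the talk feasible for low-resource languages.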
Syllabus
P-Tuning: A Parameter Efficient Tuning to Boost LLM Performance by Zenodia Charpy
Taught by
GAIA