

Parameter-Efficient Automation of Data Wrangling Tasks with Prefix-Tuning

DSDSD - Dutch Seminar on Data Systems Design via YouTube

Overview

Watch a 17-minute conference talk exploring the potential of prefix-tuning as a lightweight alternative to fine-tuning Large Language Models (LLMs) for data wrangling tasks. Learn how this parameter-efficient approach automatically learns continuous prompts without updating original LLM parameters, allowing for reuse across tasks while maintaining comparable performance. Discover the evaluation results across common data wrangling tasks like entity matching, error detection, and data imputation, where prefix-tuning achieves within 2.3% of fine-tuning performance while using only 0.39% of the parameter updates. Presented by David Vos, an MSc graduate in Artificial Intelligence, at the Dutch Seminar on Data Systems Design (DSDSD), this talk demonstrates how prefix-tuning offers a storage-efficient solution for automating data integration and cleaning tasks with LLMs.
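The core idea described above can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not the talk's actual implementation): a frozen weight matrix stands in for the pretrained LLM, and only a small continuous "prefix" vector is updated by gradient descent, so the original parameters are never touched and can be reused across tasks.

```python
import numpy as np

# Toy sketch of prefix-tuning. The "LLM" is a frozen linear map W;
# the only trainable parameters are in the prefix vector.
rng = np.random.default_rng(0)

d = 8                            # hidden size of the toy model
W = rng.normal(size=(d, d))      # frozen "LLM" weights (never updated)
prefix = np.zeros(d)             # trainable continuous prompt

x = rng.normal(size=d)           # a toy input embedding
target = rng.normal(size=d)      # a toy training target

def forward(prefix, x):
    # Adding the prefix to the input before the frozen transform stands in
    # for prepending learned prompt vectors to the model's input sequence.
    return W @ (x + prefix)

lr = 0.01
W_before = W.copy()
for _ in range(100):
    err = forward(prefix, x) - target   # residual of 0.5 * ||err||^2 loss
    grad = W.T @ err                    # gradient w.r.t. the prefix only
    prefix -= lr * grad                 # update ONLY the prefix

assert np.allclose(W, W_before)         # the frozen weights are untouched
loss = 0.5 * np.sum((forward(prefix, x) - target) ** 2)
print(loss)
```

In a real setting the frozen model is a full transformer and the prefix is a set of learned key/value vectors per layer, but the storage argument is the same: one copy of the frozen LLM plus a tiny task-specific prefix per task.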

Syllabus

We hold bi-weekly talks on Fridays, ending at 5 PM CET, for and by researchers and practitioners designing and implementing data systems. The objective is to establish a new forum for the Dutch Data Systems community to come together, foster collaborations between its members, and bring in high-quality international speakers. We invite all researchers, especially Ph.D. students, working on related topics to join the events. It is an excellent opportunity to receive early feedback from researchers in your field.

Taught by

DSDSD - Dutch Seminar on Data Systems Design
