In this course, you’ll go through the LLMOps pipeline of pre-processing training data for supervised instruction tuning, and you’ll adapt a supervised tuning pipeline to train and deploy a custom LLM. This is useful for building an LLM workflow for your specific application, for example a question-answer chatbot tailored to Python coding questions, which is exactly what you’ll build in this course.
Over the course, you’ll work through the key steps of creating the LLMOps pipeline:
1. Retrieve and transform training data for supervised fine-tuning of an LLM (see the BigQuery sketch after this list).
2. Version your data and tuned models to track your tuning experiments (see the versioning sketch below).
3. Configure an open-source supervised tuning pipeline, then execute it to train and deploy a tuned LLM (see the Kubeflow Pipelines sketch below).
4. Output and study safety scores to responsibly monitor and filter your LLM application’s behavior (see the safety-filtering sketch below).
5. Try out the tuned and deployed LLM yourself in the classroom!
Tools you’ll practice with include the BigQuery data warehouse, the open-source Kubeflow Pipelines, and Google Cloud.
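To make step 1 concrete, here is a minimal sketch of pulling question-answer rows from BigQuery with the google-cloud-bigquery client and reshaping them into instruction-tuning records. The project, dataset, table, and column names are hypothetical placeholders, not the course's actual data.

```python
# Minimal sketch: retrieve Q&A rows from BigQuery and write instruction-tuning
# records as JSONL. Table and column names are hypothetical placeholders.
import json
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT question, answer
FROM `my-project.my_dataset.python_qa`   -- hypothetical table
LIMIT 1000
"""

rows = client.query(QUERY).result()

with open("train.jsonl", "w") as f:
    for row in rows:
        # One JSON object per line: a common format for supervised tuning data.
        record = {
            "input_text": f"Please answer this Python question:\n{row.question}",
            "output_text": row.answer,
        }
        f.write(json.dumps(record) + "\n")
```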
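For step 2, the core idea of data versioning can be as simple as stamping each training file with a timestamp and a content hash so that every tuning experiment points at an immutable snapshot. This is only an illustration of the concept; the course may use different tooling.

```python
# Sketch of lightweight data versioning: copy the training file to a name that
# encodes when it was created and what it contained.
import datetime
import hashlib
import shutil

timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M%S")
with open("train.jsonl", "rb") as f:
    content_hash = hashlib.sha256(f.read()).hexdigest()[:8]

versioned_name = f"train-{timestamp}-{content_hash}.jsonl"
shutil.copy("train.jsonl", versioned_name)
print(f"Versioned training data: {versioned_name}")
```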
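For step 3, a minimal Kubeflow Pipelines (KFP v2) sketch looks like the following: define lightweight components, wire them into a pipeline, and compile the pipeline to a YAML spec that a KFP backend such as Vertex AI Pipelines can execute. The component bodies are stubs and the bucket paths are hypothetical; the course's actual pipeline wraps a supervised tuning job, which is omitted here.

```python
# KFP v2 sketch: two stub components wired into a pipeline, compiled to YAML.
from kfp import dsl, compiler

@dsl.component
def prepare_data(source_uri: str) -> str:
    # In a real pipeline this would transform and stage the training data.
    print(f"Preparing data from {source_uri}")
    return source_uri

@dsl.component
def tune_model(data_uri: str, epochs: int) -> str:
    # Stub standing in for the supervised tuning step.
    print(f"Tuning on {data_uri} for {epochs} epochs")
    return "gs://hypothetical-bucket/tuned-model"  # hypothetical output location

@dsl.pipeline(name="supervised-tuning-pipeline")
def tuning_pipeline(source_uri: str = "gs://hypothetical-bucket/train.jsonl",
                    epochs: int = 3):
    data_task = prepare_data(source_uri=source_uri)
    tune_model(data_uri=data_task.output, epochs=epochs)

# Compile to a pipeline spec that a KFP backend can run.
compiler.Compiler().compile(tuning_pipeline, "tuning_pipeline.yaml")
```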
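For step 4, safety scores are typically used as a filter: given per-category scores returned alongside a generated response, withhold anything that crosses a threshold. The category names, scores, and threshold below are hypothetical and only illustrate the pattern.

```python
# Sketch of threshold-based filtering on safety scores attached to a response.
SAFETY_THRESHOLD = 0.5  # hypothetical cutoff; tune per category in practice

def is_safe(safety_scores: dict[str, float]) -> bool:
    """Return True only if every safety category scores below the threshold."""
    return all(score < SAFETY_THRESHOLD for score in safety_scores.values())

# Example scores, as they might come back with a generated response.
response_text = "You can reverse a list in Python with my_list[::-1]."
safety_scores = {"toxicity": 0.02, "violence": 0.01, "profanity": 0.03}

if is_safe(safety_scores):
    print(response_text)
else:
    print("Response withheld: a safety score exceeded the threshold.")
```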