

Monitor and Evaluate Model Performance During Training

via Pluralsight

Overview

Enhance your machine learning models! This course will teach you the tools and techniques to effectively monitor and evaluate model performance during training.


Ensuring that machine learning models perform optimally during training can be a challenging task, often leading to inefficiencies and inaccuracies in predictive outcomes. In this course, Monitor and Evaluate Model Performance During Training, you’ll gain the ability to effectively assess and enhance your machine learning models. First, you’ll explore the key metrics used for evaluating model performance, such as accuracy, precision, recall, F1 score, and the area under the ROC curve. Next, you’ll discover how to visualize training progress and understand the importance of loss curves, confusion matrices, and the use of ROC and precision-recall curves for binary classification. Finally, you’ll learn how to use real-time monitoring tools like TensorBoard, Weights & Biases, and MLflow to track and improve your model's training process. When you’re finished with this course, you’ll have the skills and knowledge of machine learning model evaluation needed to ensure your models are trained effectively, yielding reliable and robust predictive results.
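
For a flavor of the metrics the overview names, here is a minimal sketch (not taken from the course materials) that computes them with scikit-learn; the y_true labels and y_score probabilities are hypothetical toy values chosen purely for illustration:

    from sklearn.metrics import (
        accuracy_score, precision_score, recall_score,
        f1_score, roc_auc_score, confusion_matrix,
    )

    # Hypothetical toy data: ground-truth labels and predicted probabilities.
    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_score = [0.20, 0.80, 0.40, 0.60, 0.90, 0.30, 0.70, 0.45]
    y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5

    print("accuracy :", accuracy_score(y_true, y_pred))   # 0.625
    print("precision:", precision_score(y_true, y_pred))  # 0.75
    print("recall   :", recall_score(y_true, y_pred))     # 0.60
    print("F1 score :", f1_score(y_true, y_pred))         # ~0.667
    # ROC AUC is computed from the raw scores, not the thresholded labels.
    print("ROC AUC  :", roc_auc_score(y_true, y_score))
    print("confusion matrix:")
    print(confusion_matrix(y_true, y_pred))  # rows = true class, cols = predicted

The per-epoch values of scalars like these are exactly what you would stream to a tool such as TensorBoard, Weights & Biases, or MLflow to watch training progress in real time, alongside the loss curves the course covers.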

Syllabus

  • Course Overview 1min
  • Understanding Key Metrics and Visualizing Training Progress 34mins
  • Real-time Monitoring, Anomaly Detection, and Feedback Integration 13mins

Taught by

Pluralsight

