A predictive exercise is not finished when a model is built. This course equips you with the essential skills, in Python, to apply performance evaluation metrics and determine whether a model is performing adequately.
Specifically, you will learn:
- The appropriate measures for evaluating predictive models
- Procedures that guard against models "cheating", for example by overfitting or by predicting incorrect distributions
- How different evaluation criteria reveal that one model outperforms another, and when to use each criterion
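To give a flavour of the first bullet, here is a minimal sketch, in pure Python, of four common evaluation measures for a binary classifier: accuracy, precision, recall, and F1. The labels below are invented purely for illustration; the course covers these metrics (and others) in depth.

```python
def classification_metrics(y_true, y_pred):
    """Compute basic evaluation metrics for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Invented example labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # all four metrics are 0.75 here
```

Notice that accuracy alone can be misleading on imbalanced data, which is exactly why the course examines several criteria side by side.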
These skills are the foundation of building and optimising successful predictive models. The concepts are brought together in a comprehensive case study on customer churn: you will select suitable variables to predict whether a customer will leave a telecommunications provider based on their behaviour, build several models, and benchmark them using the appropriate evaluation criteria.
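The case-study workflow can be sketched in miniature as follows. This is a hedged illustration only: the data, the churn rule, and both candidate models are invented, whereas the actual case study uses real customer data and richer models. The key idea shown is the holdout split, which prevents a model from "cheating" by being scored on data it was trained on, followed by benchmarking two candidates on the same held-out set.

```python
import random

random.seed(0)

# Invented data: (monthly_calls, churned), where low usage loosely
# correlates with churn. This generating rule is made up for the sketch.
data = [(calls, 1 if calls < 30 and random.random() < 0.8 else 0)
        for calls in (random.randint(0, 100) for _ in range(200))]

# Holdout split: train on 70%, evaluate only on the unseen 30%.
random.shuffle(data)
split = int(0.7 * len(data))
train, test = data[:split], data[split:]

def accuracy(predict, rows):
    return sum(1 for x, y in rows if predict(x) == y) / len(rows)

# Candidate A: always predict the majority class observed in training.
majority = 1 if sum(y for _, y in train) * 2 > len(train) else 0
# Candidate B: a one-feature threshold rule (threshold chosen arbitrarily
# here; in practice it would be tuned on the training set only).
threshold = 30

acc_a = accuracy(lambda x: majority, test)
acc_b = accuracy(lambda x: 1 if x < threshold else 0, test)
print(f"majority baseline: {acc_a:.2f}, threshold rule: {acc_b:.2f}")
```

Benchmarking both models on the same held-out set makes the comparison fair; scoring them on the training data instead would reward memorisation rather than predictive power.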