This course is an in-depth introduction to SageMaker and the support it offers for training and deploying machine learning models in a distributed environment.
SageMaker is a fully managed machine learning (ML) platform on AWS that makes prototyping, building, training, and hosting ML models straightforward. In this course, Deep Learning Using TensorFlow and Apache MXNet on Amazon SageMaker, you'll learn how to use the built-in algorithms, such as the linear learner and PCA, hosted on SageMaker containers; the only code you need to write is the code that prepares your data. You'll then see the three ways to build your own custom model on SageMaker: bringing a pre-trained model and hosting it on SageMaker's first-party containers, building your model using Apache MXNet, and finally bringing a custom container to be trained on SageMaker. When you have finished this course, you will also know how to connect to other AWS services such as S3 and Redshift to access your training data, run training in a distributed manner, and autoscale your model variants.
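To give a concrete taste of the built-in-algorithm workflow described above, here is a minimal sketch (not taken from the course materials) that trains the built-in linear learner with the SageMaker Python SDK. It assumes SDK v2 and execution inside a SageMaker notebook; the bucket names, prefixes, and instance choices are hypothetical placeholders.

```python
# Minimal sketch: train SageMaker's built-in linear learner on CSV data
# that has already been staged in S3. Assumes SageMaker Python SDK v2.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside a SageMaker notebook

# Resolve the region-specific container image for the built-in algorithm.
container = image_uris.retrieve("linear-learner", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=2,                 # train across 2 machines (distributed)
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/linear-learner/output",  # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    predictor_type="binary_classifier",
    mini_batch_size=100,
)

# The only code you write yourself is the data preparation that produces
# these S3 objects; the algorithm container handles the rest.
train_input = TrainingInput(
    "s3://my-bucket/linear-learner/train/",  # hypothetical prefix
    content_type="text/csv",
)
estimator.fit({"train": train_input})
```

Because the algorithm ships as a managed container, scaling training out is just a matter of raising `instance_count`; no training loop or parameter-server code is needed on your side.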
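Hosting and autoscaling follow the same pattern. The sketch below (again an illustration, not course code) deploys the estimator from the previous snippet and registers the endpoint variant with Application Auto Scaling via boto3; the endpoint name and capacity limits are hypothetical, and `AllTraffic` is the default variant name the SDK assigns on deploy.

```python
# Minimal sketch: host the trained model and autoscale its variant.
import boto3

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="linear-learner-demo",  # hypothetical endpoint name
)

# Register the endpoint's production variant as a scalable target and
# attach a target-tracking policy on invocations per instance.
autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/linear-learner-demo/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)
autoscaling.put_scaling_policy(
    PolicyName="invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # target invocations per instance
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```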