What you'll learn:
- Understand the history of BERT and why it changed NLP more than any other recent algorithm
- Understand how BERT differs from other standard algorithms and comes closer to how humans process language
- Use the tokenization tools provided with BERT to preprocess text data efficiently (see the tokenizer sketch after this list)
- Use the BERT layer as an embedding to plug into your own NLP model (see the model sketch after this list)
- Use BERT as a pre-trained model and then fine-tune it to get the most out of it
- Explore the GitHub project from the Google research team to get the tools we need
- Get models from TensorFlow Hub, the platform for already-trained models
- Clean text data
- Create datasets for AI from that data
- Use Google Colab and TensorFlow 2.0 for your AI implementations
- Create custom layers and models in TF 2.0 for specific NLP tasks
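
As a taste of the preprocessing step, here is a minimal tokenizer sketch. It assumes the `tokenization.py` module from the google-research/bert GitHub repo is on your path, and `vocab.txt` is a placeholder path to the vocabulary file shipped with a downloaded BERT model:

```python
import tokenization  # tokenization.py from the google-research/bert repo

# Build a WordPiece tokenizer from the model's vocabulary.
tokenizer = tokenization.FullTokenizer(
    vocab_file="vocab.txt",  # placeholder: vocabulary shipped with the checkpoint
    do_lower_case=True)      # matches the "uncased" English models

tokens = tokenizer.tokenize("BERT reads a sentence in both directions.")
# WordPiece splits rare words into sub-word units before the id lookup.
ids = tokenizer.convert_tokens_to_ids(tokens)
```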
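
And here is a minimal sketch of plugging BERT in as an embedding layer of a custom TF 2.0 model. It assumes the TensorFlow Hub URL below (one example English BERT-Base SavedModel whose call returns a (pooled_output, sequence_output) pair) and a hypothetical binary-classification head:

```python
import tensorflow as tf
import tensorflow_hub as hub

max_len = 128  # illustrative maximum sequence length

# Pre-trained BERT encoder from TensorFlow Hub; trainable=True enables fine-tuning.
bert_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1",
    trainable=True)

# BERT expects three aligned integer tensors per example.
input_word_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="segment_ids")

# pooled_output: one vector per sentence; sequence_output: one vector per token.
pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids])

# A custom head on top of BERT (here a hypothetical binary classifier).
output = tf.keras.layers.Dense(1, activation="sigmoid")(pooled_output)

model = tf.keras.Model(
    inputs=[input_word_ids, input_mask, segment_ids], outputs=output)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),  # small rate, typical for fine-tuning
    loss="binary_crossentropy",
    metrics=["accuracy"])
```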
Dive deep into BERT intuition and applications:
Suitable for everyone: We dive into the history of BERT from its origins, explaining every concept so that anyone can follow along and finish the course having mastered this state-of-the-art NLP algorithm, even if you are new to the subject.
Powerful and disruptive: Learn the concepts behind BERT, which gets rid of RNNs, CNNs and other heavy deep learning models in favor of a more intuitive way to process language that suits a wide range of NLP purposes, including yours!
User-friendly and efficient: We’ve designed the course around the latest technologies, TensorFlow 2.0 and Google Colab, ensuring that you won’t run into local machine, software version, or compatibility issues and that you are always using the most up-to-date tools.