YouTube videos curated by Class Central.
Classroom Contents
Applied Natural Language Processing
- 1 Operations on a Corpus
- 2 Probability and NLP
- 3 Machine Translation
- 4 Statistical Properties of Words - Part 01
- 5 Statistical Properties of Words - Part 02
- 6 Statistical Properties of Words - Part 03
- 7 Vector Space Models for NLP
- 8 Document Similarity - Demo, Inverted index, Exercise
- 9 Contextual understanding of text
- 10 Collocations, Dense word Vectors
- 11 Query Processing
- 12 Topic Modeling
- 13 Introduction
- 14 Sequence Learning
- 15 Vector Representation of words
- 16 Co-occurrence matrix, n-grams
- 17 SVD, Dimensionality reduction, Demo
- 18 Vector Space models
- 19 Preprocessing
- 20 Introduction to Probability in the context of NLP
- 21 Joint and conditional probabilities, independence with examples
- 22 The definition of a probabilistic language model
- 23 Chain rule and Markov assumption
- 24 Out of vocabulary words and curse of dimensionality
- 25 Exercise
- 26 Examples for word prediction
- 27 Generative Models
- 28 Bigram and Trigram Language models - peeking inside the model building
- 29 Naive-Bayes, classification
- 30 Machine learning, perceptron, linearly separable
- 31 Linear Models for Classification
- 32 Biological Neural Network
- 33 Perceptron
- 34 Perceptron Learning
- 35 Logical XOR
- 36 Activation Functions
- 37 Gradient Descent
- 38 Feedforward and Backpropagation Neural Network
- 39 Why Word2Vec?
- 40 What are CBOW and Skip-Gram Models?
- 41 One word learning architecture
- 42 Forward pass for Word2Vec
- 43 Matrix Operations Explained
- 44 CBOW and Skip Gram Models
- 45 Binary tree, Hierarchical softmax
- 46 Updating the weights using hierarchical softmax
- 47 Sequence Learning and its applications
- 48 ANN as an LM and its limitations
- 49 Discussion on the results obtained from word2vec
- 50 Recap and Introduction
- 51 Mapping the output layer to Softmax
- 52 Reduction of complexity - sub-sampling, negative sampling
- 53 Building Skip-gram model using Python
- 54 GRU
- 55 Truncated BPTT
- 56 LSTM
- 57 BPTT - Exploding and vanishing gradient
- 58 BPTT - Derivatives for W, V and U
- 59 BPTT - Forward Pass
- 60 RNN - Based Language Model
- 61 Unrolled RNN
- 62 Introduction to Recurrent Neural Network
- 63 IBM Model 2
- 64 IBM Model 1
- 65 Alignments again!
- 66 Translation Model, Alignment Variables
- 67 Noisy Channel Model, Bayes Rule, Language Model
- 68 What is SMT?
- 69 Introduction and Historical Approaches to Machine Translation
- 70 BLEU Demo using NLTK and other metrics
- 71 BLEU - "A short Discussion of the seminal paper"
- 72 Introduction to evaluation of Machine Translation
- 73 Extraction of Phrases
- 74 Introduction to Phrase-based translation
- 75 Symmetrization of alignments
- 76 Learning/estimating the phrase probabilities using another Symmetrization example
- 77 mod10lec79-Recap and Connecting Bloom's Taxonomy with Machine Learning
- 78 mod10lec80-Introduction to Attention based Translation
- 79 mod10lec81- Neural machine translation by jointly learning to align and translate
- 80 mod10lec82-Typical NMT architecture and models for multi-language translation
- 81 mod10lec77-Encoder-Decoder model for Neural Machine Translation
- 82 mod10lec78-RNN Based Machine Translation
- 83 mod10lec83-Beam Search
- 84 mod10lec84-Variants of Gradient Descent
- 85 mod11lec85-Introduction to Conversation Modeling
- 86 mod11lec86-A few examples in Conversation Modeling
- 87 mod11lec87-Some ideas to Implement IR-based Conversation Modeling
- 88 mod11lec88-Discussion of some ideas in Question Answering
- 89 mod12lec89-Hyperspace Analogue to Language - HAL
- 90 mod12lec90-Correlated Occurrence Analogue to Lexical Semantics - COALS
- 91 mod12lec91-Global Vectors - GloVe
- 92 mod12lec92-Evaluation of Word vectors