Overview
Syllabus
– Welcome to class
– Predictive models
– Multi-output system
– Notation: factor graph
– The energy function F(x, y)
– Inference
– Implicit function
– Conditional EBM
– Unconditional EBM
– EBM vs. probabilistic models
– Do we need a y at inference?
– When inference is hard
– Joint embeddings
– Latent variables
– Inference with latent variables
– Energies E and F
– Preview of the EBM practicum
– From energy to probabilities
– Examples: K-means and sparse coding
– Limiting the information capacity of the latent variable
– Training EBMs
– Maximum likelihood
– How to pick β?
– Problems with maximum likelihood
– Other types of loss functions
– Generalised margin loss
– General group loss
– Contrastive joint embeddings
– Denoising or masked autoencoder
– Summary and final remarks
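The outline above centres on the energy function F(x, y) and on inference as energy minimisation over y. A minimal sketch of that idea, using a hypothetical toy quadratic energy (an illustration only, not the model from the lecture):

```python
# Toy energy-based inference: given x, find the y that minimises
# E(x, y). The quadratic energy below is a hypothetical example
# chosen so the minimiser is known in closed form (y* = 2x).

def energy(x, y):
    """Toy energy: low when y is compatible with x."""
    return (y - 2 * x) ** 2

def infer(x, lr=0.1, steps=100):
    """Inference as gradient descent on y for fixed x."""
    y = 0.0
    for _ in range(steps):
        grad = 2 * (y - 2 * x)  # dE/dy for the toy energy
        y -= lr * grad
    return y

print(round(infer(3.0), 3))  # converges toward y* = 2x = 6.0
```

For the richer settings in the lectures (latent variables, joint embeddings), the same pattern holds, with the minimisation running over the latent variable as well.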
Taught by
Alfredo Canziani