
Training Quantum Neural Networks with an Unbounded Loss Function - IPAM at UCLA

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a novel approach to training quantum neural networks (QNNs) in this 32-minute lecture by Maria Kieferová from the University of Technology Sydney. Delve into the challenge of barren plateaus in QNN training and discover how an unbounded loss function can overcome existing limitations. Learn about a training algorithm that minimizes the maximal Rényi divergence, along with techniques for computing its gradients. Examine closed-form gradients for unitary QNNs and quantum Boltzmann machines, and understand the conditions under which barren plateaus are avoided. See practical applications to thermal state learning and Hamiltonian learning, with numerical experiments demonstrating rapid convergence and high-fidelity results. Gain insights into quantizing feed-forward neural networks, the extended swap test, and strategies for avoiding poor initializations in this exploration of cutting-edge quantum machine learning techniques.
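
The loss highlighted in the description, the maximal (order-infinity) Rényi divergence, can be written as D_max(rho || sigma) = log || sigma^{-1/2} rho sigma^{-1/2} ||_inf, and it diverges whenever the support of rho is not contained in the support of sigma. As rough orientation only (this is not code from the lecture), below is a minimal NumPy sketch that evaluates this quantity for small density matrices; the function name dmax and the eps regularization are illustrative assumptions.

# Maximal Renyi divergence D_max(rho || sigma) = log2 || sigma^{-1/2} rho sigma^{-1/2} ||_inf,
# computed numerically for small density matrices. Illustrative sketch only.
import numpy as np

def dmax(rho: np.ndarray, sigma: np.ndarray, eps: float = 1e-12) -> float:
    """Maximal (order-infinity) Renyi divergence between density matrices, in bits."""
    # Regularize sigma so its inverse square root is well defined; as eps -> 0
    # the value diverges when supp(rho) is not contained in supp(sigma),
    # which is what makes this loss unbounded.
    w, v = np.linalg.eigh(sigma)
    inv_sqrt = v @ np.diag(1.0 / np.sqrt(np.clip(w, eps, None))) @ v.conj().T
    m = inv_sqrt @ rho @ inv_sqrt
    return float(np.log2(np.linalg.eigvalsh(m).max()))  # largest eigenvalue = operator norm for this PSD matrix

# Example: a slightly mixed single-qubit state versus the maximally mixed state.
rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = 0.5 * np.eye(2)
print(dmax(rho, sigma))  # log2(0.9 / 0.5) ~= 0.85

A generative training loop of the kind described in the talk would drive a divergence of this form between data and model states toward zero; the lecture's own algorithm and gradient formulas are given in the video itself.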

Syllabus

Overview
A new type of learning
Quantizing a feed-forward neural network
Unitary quantum neural network
Two components of training
Barren plateaus in QNNs
The big tradeoff in QML
Circumventing barren plateaus
What does classical ML do?
Extended swap test
Learning thermal states
Generative algorithm for thermal state learning
Gradients for thermal state learning
Shallow algorithm
FT algorithm
Avoiding poor initializations

Taught by

Institute for Pure & Applied Mathematics (IPAM)
