In this video you will learn everything about variational autoencoders. These generative models have been popular for more than a decade, and are still used in many applications.
If you want to dive even deeper into this topic, I would suggest reading the original paper by Kingma and Welling, as well as the overview they wrote later:
Auto-Encoding Variational Bayes arxiv.org/abs/1312.6114
An Introduction to Variational Autoencoders arxiv.org/abs/1906.02691
If you want more accessible resources, these blog posts by Matthew N. Bernstein are excellent for understanding the different parts of the theory behind VAEs:
Variational Autoencoders mbernste.github.io/posts/vae/
Variational Inference mbernste.github.io/posts/variational_inference/
The Evidence Lower Bound mbernste.github.io/posts/elbo/
Chapters:
00:00 Introduction
01:05 Context
06:20 General Principle of VAEs
08:53 Evidence Lower Bound
11:01 The Reparameterization Trick
14:05 Training and Inference
16:28 Limitations
18:40 Bonus: ELBO derivations
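If you want a taste of the reparameterization trick covered at 11:01 before watching, here is a minimal NumPy sketch. The mean and log-variance values are made-up placeholders standing in for an encoder's outputs, not numbers from the video:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder encoder outputs for one input: mean and log-variance of q(z|x)
mu = np.array([0.5, -1.0])
log_var = np.array([0.1, 0.3])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
# All randomness lives in eps, so gradients can flow through mu and log_var.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

print(z.shape)  # (2,)
```

Sampling z directly from N(mu, sigma^2) would block backpropagation; rewriting the sample as a deterministic function of mu, log_var, and an external noise variable is what makes the encoder trainable.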
This video features animations created with Manim, inspired by Grant Sanderson's work at @3blue1brown.
All the code for the animations in this video is available in the following GitHub repository: github.com/ytdeepia/Variational-Autoencoders
If you enjoyed the content, please like, comment, and subscribe to support the channel!
#deeplearning #artificialintelligence #generativeai #machinelearning #manim #education #science