Mathematical Foundation of Deep Learning
Description
Theory of deep neural networks: basics of deep networks; the universal approximation theorem and its proof; approximation errors in deep learning.
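As a small concrete taste of the expressivity results in this unit (not from the course materials; a minimal sketch), note that a one-hidden-layer ReLU network with just two units represents the absolute-value function exactly, since |x| = ReLU(x) + ReLU(-x):

```python
# Tiny illustration of ReLU-network expressivity:
# |x| = relu(x) + relu(-x), i.e. a one-hidden-layer net with two units,
# hidden weights (+1, -1) and output weights (1, 1).

def relu(z):
    return max(z, 0.0)

def tiny_net(x):
    # hidden layer: two units with weights +1 and -1; output sums them
    return relu(x) + relu(-x)

for x in (-2.0, -0.5, 0.0, 1.5):
    print(x, tiny_net(x))  # matches abs(x) exactly
```

The universal approximation theorem generalizes this idea: sums of such simple units can approximate any continuous function on a compact set to arbitrary accuracy.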
Network training and nonconvex optimization: automatic differentiation; gradient descent, stochastic gradient descent, and Adam, with their convergence proofs; constrained optimization; learnable optimization algorithms.
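The basic iteration studied in this unit can be sketched in a few lines (an illustrative toy, not course code; the step size and iteration count are arbitrary choices):

```python
# Minimal sketch of gradient descent, x_{k+1} = x_k - lr * grad(x_k),
# applied to f(x) = (x - 3)^2, whose unique minimizer is x* = 3.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain (full-batch) gradient descent on a scalar objective."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = (x - 3)^2 gives f'(x) = 2 * (x - 3).
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_star)  # converges toward 3
```

Stochastic gradient descent replaces `grad(x)` with an unbiased noisy estimate, and Adam augments the update with running moment estimates; the convergence proofs covered in the course make these modifications precise.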
Deep learning and optimal control: theory of optimal control; the Hamiltonian and the Pontryagin Maximum Principle; deep learning from a dynamical-systems perspective and neural ordinary differential equations.
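The dynamical-systems view of deep networks can be previewed with a short sketch (an assumption-laden toy, not course code): a residual update x_{k+1} = x_k + h·f(x_k) is exactly one forward-Euler step of the ODE dx/dt = f(x), which is the starting point for neural ODEs.

```python
import math

# Forward Euler for dx/dt = f(x): each step is a "residual block"
#   x_{k+1} = x_k + h * f(x_k).
# Test problem: f(x) = -x, whose exact flow is x(t) = x0 * exp(-t).

def euler_flow(f, x0, t_final, n_steps):
    h = t_final / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * f(x)  # one Euler step = one residual layer
    return x

approx = euler_flow(lambda x: -x, x0=1.0, t_final=1.0, n_steps=1000)
exact = math.exp(-1.0)
print(approx, exact)  # Euler approximation vs exact solution
```

In a neural ODE, the hand-written `f` above is replaced by a learned vector field, and gradients with respect to its parameters are obtained via the adjoint (costate) equation from optimal control.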
Stochastic differential equations: Brownian motion; stochastic integrals and the Itô formula; stochastic differential equations and Fokker–Planck equations; stochastic control, Hamilton–Jacobi–Bellman equations, and reinforcement learning; stochastic generative models.
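The standard workhorse for simulating the SDEs in this unit is the Euler–Maruyama scheme; the sketch below (illustrative only, with arbitrary step size, horizon, and path count) applies it to an Ornstein–Uhlenbeck process and checks the Monte Carlo mean against the known exact mean E[X_t] = X_0·e^{-t}:

```python
import math
import random

# Euler–Maruyama for the Ornstein–Uhlenbeck SDE
#   dX_t = -X_t dt + sigma dW_t,  X_0 = 1.

def euler_maruyama(x0, sigma, t_final, n_steps, rng):
    h = t_final / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))  # Brownian increment ~ N(0, h)
        x = x + (-x) * h + sigma * dw
    return x

rng = random.Random(0)  # fixed seed for reproducibility
paths = [euler_maruyama(1.0, 0.5, 1.0, 200, rng) for _ in range(2000)]
mc_mean = sum(paths) / len(paths)
print(mc_mean, math.exp(-1.0))  # Monte Carlo mean vs exact mean exp(-1)
```

Score-based stochastic generative models, covered at the end of the unit, simulate an SDE of this kind in reverse, with the drift supplied by a learned score function.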
Lecture notes and references
Conduct Policy
Any inappropriate conduct may result in your being administratively dropped from the course. See the University's Policy on Disruptive Behavior in the Georgia State University Code of Conduct.