Lecture 1. An Introduction to AMG
Speaker: Ludmil Zikatanov, Penn State University
Algebraic multigrid (AMG) is the name used for a suite of advanced techniques for the solution of linear systems. We will introduce the basic components of AMG, such as smoothing via iterative methods and coarsening using adjacency graphs, and show how combining these tools gives rise to several well-known AMG methods. We will also present some adaptive AMG techniques, which iteratively construct better and better coarse spaces.
Instructor: Prof. Ludmil Zikatanov
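To give a concrete flavor of the smoothing component mentioned in the abstract, below is a minimal sketch of a weighted Jacobi smoother applied to a 1D Poisson matrix. The choice of smoother, the weight omega = 2/3, and the NumPy implementation are illustrative assumptions, not material taken from the course.

```python
import numpy as np

def weighted_jacobi(A, b, x, num_sweeps=3, omega=2.0 / 3.0):
    """Apply a few weighted Jacobi sweeps: x <- x + omega * D^{-1} (b - A x)."""
    D_inv = 1.0 / np.diag(A)
    for _ in range(num_sweeps):
        x = x + omega * D_inv * (b - A @ x)
    return x

# Illustrative example: 1D Poisson matrix (2 on the diagonal, -1 off-diagonal).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = weighted_jacobi(A, b, np.zeros(n))
```

A few such sweeps damp the high-frequency components of the error, which is what makes the subsequent coarse-grid correction in a multigrid cycle effective.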
Lecture 2. An Integrated Introduction to Multigrid and Deep Learning
Speaker: Jinchao Xu, Penn State University
In this short course, an integrated introduction will be given to both multigrid methods and machine learning based on deep neural networks. The presentation will be elementary, assuming little prior knowledge of either subject, yet advanced, as it will quickly reach the core issues in the relevant algorithm/model formulation, mathematical understanding, and practical applications. Practice problems will be given for both theoretical analysis and practical applications, using iFEM (for multigrid) and TensorFlow or PyTorch (for deep learning).
Instructor: Prof. Jinchao Xu
Teaching assistants: Juncai He and Xiaodong Jia (Peking University)
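For the deep learning half of the course, the abstract mentions practice problems in TensorFlow or PyTorch. As a hedged illustration of the kind of model such exercises typically start from, here is a minimal fully connected network in PyTorch; the layer sizes and the MNIST-style input shape are assumptions, not the course's actual assignments.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """A small fully connected network with one hidden layer (illustrative sizes)."""
    def __init__(self, in_dim=784, hidden=128, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
x = torch.randn(32, 784)                # a batch of 32 flattened 28x28 inputs
logits = model(x)                       # shape: (32, 10)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (32,)))
loss.backward()                         # gradients for an SGD-style update
```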
Lecture 3. Generalization, Optimization Dynamics and Robustness of Multilayer Neural Networks
Speaker: Zhanxing Zhu, Peking University
In this short tutorial, I will provide a comprehensive introduction to the theoretical understanding of deep neural networks, particularly their generalization, optimization dynamics, and robustness to adversarial examples. The tutorial requires some basic mathematical background and assumes no prior knowledge of machine learning or deep learning. I will therefore (1) give a general introduction to the preliminaries and basic principles of machine learning and deep learning, then (2) raise the key problems of deep learning (particularly some open problems), and (3) present the current understanding of the generalization, optimization dynamics, and robustness of deep neural networks. I will also propose a research agenda that might be important for future work on deep learning theory. I hope this discussion will inspire the mathematics community to tackle some interesting problems in current artificial intelligence research.
Instructor: Prof. Zhanxing Zhu
Teaching assistants: TBA
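As one concrete instance of the robustness topic listed in the abstract, below is a minimal sketch of the fast gradient sign method (FGSM) for constructing adversarial examples in PyTorch. FGSM is only one possible attack, and the epsilon value is an illustrative assumption; the tutorial may treat robustness with different methods.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step FGSM: move the input in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # epsilon controls the perturbation size (illustrative value).
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```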