Probabilistic Neural Networks and Deep Learning
This course is designed to take students from a basic to an advanced understanding of the concepts and applications of deep learning. Students first learn the probabilistic foundations that underlie machine learning, including probability theory, standard distributions, and their parameters. The discussion then moves to single-layer networks for regression and classification, showing how simple models connect to probability theory and loss functions.
Next, students explore deep neural networks, focusing on the multilayer perceptron (MLP) architecture, non-linear activation functions, and how network depth increases representational capacity. The course also addresses important issues such as the curse of dimensionality, regularization, and decision theory for making optimal predictions. In the final session, students are introduced to representation learning, transfer learning, and the error functions most relevant to modern model development.
Through a combination of mathematical and probabilistic theory and hands-on implementation, this course provides comprehensive skills for understanding, designing, and evaluating artificial neural network architectures. By the end of the course, participants are expected to be able to explain the basic principles of deep learning, implement regression and classification models, and understand the benefits of deep networks for representation learning and knowledge transfer.
- Teacher: Stefanus Benhard, S.Kom., M.Kom.
