Introduction to Deep Learning

In preparation: a course on deep learning fundamentals for measurement data, covering theory, computational experiments, model validation and Python practice.

Status

The course is in preparation. This page currently acts as an information page: it describes the planned scope, prerequisites and working model. Teaching materials, notebooks and assignments will be added later.

About the course

Introduction to Deep Learning will combine deep learning fundamentals with practical analysis of measurement data. The course is intended to move from core concepts and mechanisms of deep models toward independent design, training, validation and interpretation of computational experiments.

The planned format includes 30 lecture hours and 30 computer laboratory hours. The course is designed around a test-and-infer workflow: short theory blocks will be combined with demonstrations, code experiments and analysis of the results.

Audience

The course is intended for third-year students of Applied Computer Science and Measurement Systems (first-cycle studies, winter term). It is elective and taught in Polish.

Prerequisites include:

  • calculus and linear algebra at first-cycle study level
  • basics of probability and statistics
  • basics of measurement data processing
  • basic Python programming
  • ability to analyze results and critically interpret plots and metrics

Learning Goals

The goal of the course is to understand the basic mechanisms of deep learning and translate them into practical work with experimental data.

Main goals:

  • understanding concepts such as gradient descent, backpropagation, cost functions and regularization
  • designing, training and validating ANN, CNN and autoencoder models
  • selecting metrics and assessing model generalization
  • planning computational experiments and documenting tests
  • analyzing hyperparameter sensitivity
  • working with typical experimental data problems: noise, missing data, class imbalance and variability of measurement conditions
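As a taste of the first goal, the basic mechanism of gradient descent fits in a few lines of plain Python. This is only an illustrative sketch of the kind of mechanism the course will cover, not actual course material; the function and step count are arbitrary choices for the example:

```python
# Illustrative sketch: gradient descent on f(w) = (w - 3)^2,
# whose minimum lies at w = 3. The cost function, learning rate
# and iteration count are arbitrary example values.

def grad(w):
    # derivative of (w - 3)^2 with respect to w
    return 2.0 * (w - 3.0)

w = 0.0        # initial parameter value
lr = 0.1       # learning rate
for _ in range(100):
    w -= lr * grad(w)   # step against the gradient

print(round(w, 4))  # converges toward 3.0
```

With this learning rate each step shrinks the distance to the minimum by a factor of 0.8, so after 100 iterations the parameter is numerically indistinguishable from 3.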

Planned Scope

The planned course scope includes:

  • ML and DL concepts and the specificity of measurement data
  • analysis pipeline: data, model, validation, conclusions
  • Gradient Descent
  • ANN fundamentals: forward propagation, loss/cost, backpropagation, learning rate
  • regression and classification
  • model depth and width, and parameter count
  • overfitting and train/validation/test diagnostics
  • L1/L2 regularization, batch and mini-batch training
  • data normalization, batch normalization, activation functions, loss functions and optimizers
  • data augmentation and metric interpretation
  • weight initialization
  • CNN fundamentals: convolution, pooling, feature interpretation and preparation of image-like data
  • transfer learning, fine-tuning, domain shift and validation on data from a different distribution
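To illustrate the ANN fundamentals listed above (forward propagation, loss, backpropagation, learning rate), a single sigmoid neuron can be trained end to end in plain Python. This is a hedged sketch on a toy one-dimensional problem, not a course notebook; the data and hyperparameters are invented for the example:

```python
import math

# Illustrative sketch: one sigmoid neuron trained with stochastic
# gradient descent on a toy separable problem (x > 0.5 -> label 1).
# Data and hyperparameters are arbitrary example values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [(0.0, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (1.0, 1)]
w, b, lr = 0.0, 0.0, 1.0

for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)   # forward propagation
        dz = p - y               # gradient of binary cross-entropy
                                 # w.r.t. the pre-activation
        w -= lr * dz * x         # backpropagation step for the weight
        b -= lr * dz             # ...and for the bias

correct = sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data)
print(correct, "of", len(data), "classified correctly")
```

Because the toy data is linearly separable, the neuron eventually places its decision boundary between 0.4 and 0.6 and classifies every point correctly; the same forward/backward pattern generalizes to the multi-layer networks covered in the course.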

Assessment

Laboratory assessment is planned as continuous evaluation through problem-based tasks implemented in notebooks. The tasks will include hyperparameter selection, optimizer comparisons, regularization analysis and metric evaluation on imbalanced data.
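One example of the imbalanced-data issue mentioned above: plain accuracy can look high while an entire class is missed, which is why per-class metrics matter. A minimal sketch with hypothetical labels (plain Python, not an actual assignment):

```python
# Hypothetical imbalanced set: 95 negatives, 5 positives.
# A degenerate classifier always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Per-class recall exposes the problem.
recall_neg = sum(t == p == 0 for t, p in zip(y_true, y_pred)) / 95
recall_pos = sum(t == p == 1 for t, p in zip(y_true, y_pred)) / 5
balanced_accuracy = (recall_neg + recall_pos) / 2

print(accuracy)           # 0.95 despite detecting no positives
print(balanced_accuracy)  # 0.5, the score of a chance-level classifier
```

Balanced accuracy averages the per-class recalls, so the always-negative classifier scores only 0.5 even though its raw accuracy is 0.95.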

Lecture assessment is planned as a project: an analysis of a selected measurement dataset or a controlled simulation, concluded with a short report. The report should describe the problem, parameters, metrics, experiment workflow and conclusions.

Literature

Planned core sources:

  • Ian Goodfellow, Yoshua Bengio, Aaron Courville, Deep Learning, MIT Press
  • Aurélien Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow
  • PyTorch documentation

Supplementary sources:

  • Christopher M. Bishop, Pattern Recognition and Machine Learning
  • Kevin P. Murphy, Probabilistic Machine Learning
  • documentation and tutorials for libraries used during the course