Week 1: Introduction, the human brain, models of a neuron, neural communication, neural networks as directed graphs, network architectures (feed-forward, feedback, etc.), knowledge representation.
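As a concrete illustration of the neuron model covered this week, a minimal MATLAB sketch (the input, weight, and bias values are invented for the example): the output is an activation applied to the weighted sum of inputs plus a bias.

    % A single neuron: induced local field v = w'x + b, output y = phi(v).
    x = [0.5; -1.2; 0.3];      % example input vector (arbitrary values)
    w = [0.4; 0.1; -0.7];      % synaptic weights
    b = 0.2;                   % bias
    v = w' * x + b;            % induced local field
    y = 1 / (1 + exp(-v));     % logistic (sigmoid) activation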
Week 2: Learning processes, learning tasks, the perceptron, the perceptron convergence theorem, the relationship between the perceptron and Bayes classifiers, the batch perceptron algorithm.
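A minimal MATLAB sketch of the perceptron learning rule on a toy two-class problem (the data, learning rate, and epoch count are arbitrary choices for illustration):

    rng(1);
    N = 100;
    X = [randn(2, N/2) - 1, randn(2, N/2) + 1];   % two Gaussian clusters
    d = [-ones(1, N/2), ones(1, N/2)];            % desired responses in {-1, +1}
    w = zeros(2, 1); b = 0; eta = 0.1;
    for epoch = 1:50
        for n = 1:N                               % online error-correction learning
            if sign(w' * X(:, n) + b) ~= d(n)     % update only on a misclassification
                w = w + eta * d(n) * X(:, n);
                b = b + eta * d(n);
            end
        end
    end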
Week 3: Modeling through regression, linear and logistic regression for multiple classes.
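A minimal MATLAB sketch of logistic (softmax) regression for multiple classes, trained by batch gradient descent (the toy three-class data and step size are invented; assumes MATLAB R2016b+ for implicit expansion):

    rng(2);
    K = 3; N = 150; D = 2;
    X = [randn(D, 50), randn(D, 50) + 3, [randn(1, 50) + 3; randn(1, 50)]];
    labels = [ones(1, 50), 2 * ones(1, 50), 3 * ones(1, 50)];
    T = full(sparse(labels, 1:N, 1, K, N));   % one-hot target matrix
    W = zeros(K, D); b = zeros(K, 1); eta = 0.1;
    for it = 1:500
        A = W * X + b;                        % class scores
        A = A - max(A, [], 1);                % stabilize the softmax
        P = exp(A) ./ sum(exp(A), 1);         % class posterior estimates
        G = P - T;                            % cross-entropy gradient
        W = W - eta * (G * X') / N;
        b = b - eta * mean(G, 2);
    end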
Week 4: Multilayer perceptrons, batch and online learning, derivation of the back-propagation algorithm, the XOR problem, the role of the Hessian in online learning, annealing and optimal control of the learning rate.
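A minimal MATLAB sketch of online back-propagation on the XOR problem (the network size, initialization, learning rate, and epoch count are arbitrary; a net this small can occasionally stall in a poor local minimum):

    X = [0 0 1 1; 0 1 0 1];                 % the four XOR patterns
    d = [0 1 1 0];                          % XOR targets
    rng(3); H = 3;                          % hidden-layer size
    W1 = randn(H, 2); b1 = randn(H, 1);
    W2 = randn(1, H); b2 = randn;
    eta = 0.5;
    sigm = @(v) 1 ./ (1 + exp(-v));
    for epoch = 1:10000
        for n = 1:4                                    % online (pattern-by-pattern) updates
            h = sigm(W1 * X(:, n) + b1);               % forward pass
            y = sigm(W2 * h + b2);
            e = d(n) - y;                              % output error
            delta2 = e * y * (1 - y);                  % local gradient, output layer
            delta1 = (W2' * delta2) .* h .* (1 - h);   % back-propagated local gradients
            W2 = W2 + eta * delta2 * h';      b2 = b2 + eta * delta2;
            W1 = W1 + eta * delta1 * X(:, n)'; b1 = b1 + eta * delta1;
        end
    end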
Week 5: Approximation of functions, cross-validation, network pruning and complexity regularization, convolutional networks, nonlinear filtering.
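A minimal MATLAB sketch of k-fold cross-validation for estimating generalization error (a degree-5 polynomial fit stands in for the model; all sizes and the noise level are arbitrary):

    rng(4);
    N = 100; k = 5;
    x = linspace(-1, 1, N); t = sin(pi * x) + 0.1 * randn(1, N);
    idx = randperm(N);                       % random fold assignment
    mse = zeros(1, k);
    for fold = 1:k
        test  = idx((fold - 1) * N/k + 1 : fold * N/k);
        train = setdiff(idx, test);
        p = polyfit(x(train), t(train), 5);  % fit on the other k-1 folds
        mse(fold) = mean((polyval(p, x(test)) - t(test)).^2);
    end
    cvError = mean(mse);                     % cross-validated error estimate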
Week 6: Cover’s theorem and pattern separability, the interpolation problem, RBF networks, the hybrid learning procedure for RBF networks, kernel regression and its relationship to RBF networks.
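A minimal MATLAB sketch of the hybrid learning procedure for an RBF network: the centers are fixed first (here, picked at random from the data), then the output weights are solved linearly (the width and sizes are arbitrary; assumes R2016b+ implicit expansion):

    rng(5);
    N = 60; x = linspace(-3, 3, N)'; t = sin(x) + 0.1 * randn(N, 1);
    M = 10;
    c = x(randperm(N, M));                        % stage 1: fix the centers
    sigma = 1.0;                                  % common width (arbitrary choice)
    Phi = exp(-(x - c(:)').^2 / (2 * sigma^2));   % N-by-M design matrix
    w = Phi \ t;                                  % stage 2: linear least squares
    yhat = Phi * w;                               % network output on the data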
Week 7: Support vector machines, the optimal hyperplane for linearly separable patterns, the optimal hyperplane for nonseparable patterns, the SVM as a kernel machine, design of SVMs, the XOR problem revisited, robustness considerations for regression.
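A minimal MATLAB sketch of the primal soft-margin SVM trained by stochastic subgradient descent on the hinge loss (a Pegasos-style scheme; the data and regularization constant are invented for illustration):

    rng(6);
    N = 100;
    X = [randn(2, N/2) - 1.5, randn(2, N/2) + 1.5];
    d = [-ones(1, N/2), ones(1, N/2)];
    lambda = 0.01; w = zeros(2, 1); b = 0;
    for it = 1:5000
        n = randi(N);                             % pick a random sample
        eta = 1 / (lambda * it);                  % decaying step size
        if d(n) * (w' * X(:, n) + b) < 1          % margin violation
            w = (1 - eta * lambda) * w + eta * d(n) * X(:, n);
            b = b + eta * d(n);
        else
            w = (1 - eta * lambda) * w;           % shrink the weights only
        end
    end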
Week 8: SVMs continued; the optimal solution of the linear regression problem, the representer theorem and related discussion, introduction to regularization theory.
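A minimal MATLAB sketch of the representer theorem at work: in kernel ridge regression the optimal function is a kernel expansion over the training points, f(x) = sum_i a_i k(x, x_i) (the kernel width and lambda are arbitrary; assumes R2016b+ implicit expansion):

    rng(7);
    N = 50; x = linspace(0, 1, N)'; t = cos(2 * pi * x) + 0.1 * randn(N, 1);
    sigma = 0.1; lambda = 1e-3;
    K = exp(-(x - x').^2 / (2 * sigma^2));    % Gram matrix of a Gaussian kernel
    a = (K + lambda * eye(N)) \ t;            % dual coefficients
    xq = linspace(0, 1, 200)';                % query points
    Kq = exp(-(xq - x').^2 / (2 * sigma^2));
    yq = Kq * a;                              % f(x) = sum_i a_i k(x, x_i)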
Week 9: Hadamard’s condition for well-posedness, Tikhonov regularization, regularization networks, generalized RBF networks, estimation of the regularization parameter, etc.
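A minimal MATLAB sketch of Tikhonov (L2) regularized least squares, with the regularization parameter picked by a simple hold-out search (the dimensions, noise level, and lambda grid are all arbitrary):

    rng(8);
    N = 80; D = 20;
    A = randn(N, D); xTrue = randn(D, 1);
    y = A * xTrue + 0.5 * randn(N, 1);
    tr = 1:60; va = 61:80;                    % train / validation split
    lambdas = logspace(-4, 2, 25);
    err = zeros(size(lambdas));
    for i = 1:numel(lambdas)
        xl = (A(tr,:)' * A(tr,:) + lambdas(i) * eye(D)) \ (A(tr,:)' * y(tr));
        err(i) = mean((A(va,:) * xl - y(va)).^2);   % validation error
    end
    [~, best] = min(err);
    lambdaStar = lambdas(best);               % selected regularization parameter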
Week 10: L1 regularization basics, algorithms, and extensions.
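A minimal MATLAB sketch of one standard L1 algorithm, ISTA (iterative shrinkage-thresholding), for the lasso problem min_x 0.5*||Ax - y||^2 + lambda*||x||_1 (the problem sizes and lambda are invented):

    rng(9);
    N = 50; D = 100;
    A = randn(N, D);
    x0 = zeros(D, 1); x0(randperm(D, 5)) = randn(5, 1);   % sparse ground truth
    y = A * x0 + 0.01 * randn(N, 1);
    lambda = 0.1;
    L = norm(A)^2;                                % Lipschitz constant of the gradient
    soft = @(v, t) sign(v) .* max(abs(v) - t, 0); % soft-thresholding operator
    x = zeros(D, 1);
    for it = 1:500
        g = A' * (A * x - y);                     % gradient of the smooth part
        x = soft(x - g / L, lambda / L);          % gradient step plus shrinkage
    end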
Week 11: Principal component analysis: Hebbian-based PCA, kernel-based PCA, the kernel Hebbian algorithm.
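A minimal MATLAB sketch of Hebbian-based PCA via Oja’s rule, whose weight vector converges (up to sign) to the leading principal component (the covariance matrix, step size, and sample count are arbitrary):

    rng(10);
    N = 2000;
    C = [3 1; 1 1];                      % covariance with a dominant direction
    X = chol(C)' * randn(2, N);          % zero-mean samples with covariance C
    w = randn(2, 1); w = w / norm(w);
    eta = 0.01;
    for n = 1:N
        y = w' * X(:, n);                        % neuron output
        w = w + eta * y * (X(:, n) - y * w);     % Hebbian term with weight decay
    end
    % w now approximates the leading eigenvector of C, up to sign.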
Week 12: Deep multilayer perceptrons, deep autoencoders, and stacked denoising autoencoders.
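A minimal MATLAB sketch of a single-hidden-layer denoising autoencoder: the input is corrupted with masking noise and the network is trained to reconstruct the clean version (the data, sizes, noise level, and learning rate are invented):

    rng(11);
    D = 20; H = 8; N = 500;
    X = rand(D, N);                            % synthetic data in [0, 1]
    W1 = 0.1 * randn(H, D); b1 = zeros(H, 1);  % encoder
    W2 = 0.1 * randn(D, H); b2 = zeros(D, 1);  % decoder
    sigm = @(v) 1 ./ (1 + exp(-v));
    eta = 0.1;
    for it = 1:2000
        n  = randi(N);
        xc = X(:, n) .* (rand(D, 1) > 0.3);    % masking noise corrupts the input
        h  = sigm(W1 * xc + b1);               % encode the corrupted input
        y  = sigm(W2 * h + b2);                % decode
        e  = y - X(:, n);                      % error against the CLEAN input
        d2 = e .* y .* (1 - y);
        d1 = (W2' * d2) .* h .* (1 - h);
        W2 = W2 - eta * d2 * h';  b2 = b2 - eta * d2;
        W1 = W1 - eta * d1 * xc'; b1 = b1 - eta * d1;
    end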
Thanks to support from MathWorks, enrolled students have access to MATLAB for the duration of the course.