

Neural Networks for Signal Processing - I

NPTEL and Indian Institute of Science Bangalore via Swayam

This course may be unavailable.

Overview

This is an introductory graduate-level course in neural networks for signal processing. It is Part I of a planned three-part series on neural networks and learning systems through which the instructor intends to cover neural networks at the graduate level. The course begins by motivating how the human brain inspires the design of artificial neural networks. Neural networks are then viewed as directed graphs with various network topologies, applied to learning tasks driven by optimization techniques. The course covers Rosenblatt's perceptron, regression modeling, the multilayer perceptron (MLP), kernel methods and radial basis functions (RBF), support vector machines (SVM), regularization theory, and principal component analysis (Hebbian and kernel based). Toward the end, topics that build on the MLP, such as convolutional neural networks, are touched upon. The course includes both theoretical assignments and computer-based assignments working with real data.
INTENDED AUDIENCE: Graduate students; senior undergraduates may also participate, as well as engineers and scientists in related industries.
PREREQUISITES: Basic mathematical background in probability, linear algebra, and signals and systems, or equivalent.
INDUSTRY SUPPORT: Industries working on AI and machine learning.

Syllabus

COURSE LAYOUT

Week 1: Introduction, human brain, models of a neuron, neural communication, neural networks as directed graphs, network architectures (feed-forward, feedback, etc.), knowledge representation.
Week 2: Learning processes, learning tasks, perceptron, perceptron convergence theorem, relationship between the perceptron and Bayes classifiers, batch perceptron algorithm (a minimal sketch follows this syllabus).
Week 3: Modeling through regression; linear and logistic regression for multiple classes.
Week 4: Multilayer perceptron, batch and online learning, derivation of the back-propagation algorithm, XOR problem, role of the Hessian in online learning, annealing and optimal control of the learning rate.
Week 5: Approximations of functions, cross-validation, network pruning and complexity regularization, convolutional networks, nonlinear filtering.
Week 6: Cover's theorem and pattern separability, the interpolation problem, RBF networks, hybrid learning procedure for RBF networks, kernel regression and its relationship to RBFs.
Week 7: Support vector machines, optimal hyperplane for linearly separable patterns, optimal hyperplane for nonseparable patterns, SVM as a kernel machine, design of SVMs, XOR problem revisited, robustness considerations for regression.
Week 8: SVMs continued; optimal solution of the linear regression problem, representer theorem and related discussions; introduction to regularization theory.
Week 9: Hadamard's condition for well-posedness, Tikhonov regularization, regularization networks, generalized RBF networks, estimation of the regularization parameter, etc.
Week 10: L1 regularization basics, algorithms, and extensions.
Week 11: Principal component analysis: Hebbian-based PCA, kernel-based PCA, kernel Hebbian algorithm.
Week 12: Deep multilayer perceptrons, deep autoencoders, and stacked denoising autoencoders.
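
As a taste of the Week 2 material, here is a minimal sketch of the batch perceptron algorithm in NumPy. This is not course-provided code: the function name perceptron_train, the {-1, +1} label convention, and the AND-gate example are illustrative assumptions.

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=1.0):
    # Batch perceptron: accumulate corrections over all misclassified
    # samples each epoch. X: (n_samples, n_features); y in {-1, +1}.
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append constant bias input
    w = np.zeros(Xb.shape[1])                  # weights, including bias
    for _ in range(epochs):
        mis = y * (Xb @ w) <= 0                # mask of misclassified samples
        if not mis.any():                      # converged: all margins positive
            break
        w += lr * (y[mis, None] * Xb[mis]).sum(axis=0)
    return w

# Example on a linearly separable task (logical AND):
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., -1., -1., 1.])
w = perceptron_train(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # -> [-1. -1. -1.  1.]
```

The batch variant sums corrections from all misclassified samples before updating the weights; by the perceptron convergence theorem (Week 2), it terminates on linearly separable data such as the AND gate, whereas XOR is not linearly separable, a point Weeks 4 and 7 revisit.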

Taught by

Prof. Shayan Srinivasa Garani
