Instruction

Name | Cr | Method of study | Time | Location | Organiser
Inverse Problems 1: convolution and deconvolution | 5 Cr | Lecture Course | 3.9.2019 - 16.10.2019 | - | -
Inverse Problems 1: convolution and deconvolution | 5 Cr | Lecture Course | 4.9.2018 - 17.10.2018 | - | -

Target group

Optional course.

Master's Programme in Mathematics and Statistics is responsible for the course.

The course belongs to the Mathematics and Applied mathematics module.

The course is available to students from other degree programmes.

Prerequisites

Preliminaries: basic linear algebra and matrix calculus. Familiarity with Matlab programming and the Fast Fourier Transform (FFT) is useful but not mandatory. For example, the course “Applications of matrix computations” is a suitable prerequisite.

Learning outcomes

The goals of the course:

(1) understand convolution as a matrix model,

(2) learn the least-squares solution technique and see that it alone is not enough to solve deconvolution problems,

(3) show how to use SVD to detect ill-posedness in a matrix-based inverse problem,

(4) understand why deconvolution needs special regularised methods,

(5) write robust Matlab programs for signal deconvolution and image deblurring,

(6) learn how to extend the solution methods to large-scale deconvolution problems.

Timing

Recommended time/stage of studies for completion: 1st or 2nd year

Term/teaching period when the course will be offered: varying

Contents

The field of inverse problems is the scientific art of going from effect to cause.

For example, the “cause” can be a clear digital recording of a spoken message. The “effect” is a mumbled, noisy, incomprehensible sound signal coming through a bad communications line. Recovering the clean signal from the noisy measurement is an inverse problem.

Convolution is a linear operation widely used in signal and image processing, where it is often called “filtering.” It is a kind of moving average weighted by a “convolution kernel.” In a discrete setting, convolution acts on one-dimensional data (vectors) or images (matrices). It is a model for imperfect measurements; for example, a poorly focused photograph is a convolved version of a sharp photo, and in spectroscopy, crisp spectral lines are blurred by a device function. Convolution is also very popular in machine learning in the form of convolutional neural networks (CNNs). So-called deep learning is based on CNNs and on fast convolution operations implemented using graphics processing units (GPUs).
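As a minimal illustration of the discrete one-dimensional case (the signal and kernel values below are made up for this sketch, not taken from the course), convolution with a short averaging kernel can be computed with NumPy:

    import numpy as np

    # A short 1-D signal and a 3-point averaging kernel (weights sum to 1).
    x = np.array([0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0])
    k = np.array([0.25, 0.5, 0.25])

    # Each output sample is a moving average of x weighted by the kernel k.
    y = np.convolve(x, k, mode='same')
    print(y)   # the sharp peak at 4.0 is smeared over its neighbours

Replacing the averaging kernel with a two-dimensional one gives the image-blurring case.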

Convolution is a useful tool in pure mathematics as well, especially in harmonic analysis and the study of partial differential equations.

The inverse problem related to convolution is called deconvolution. The observed data is interpreted as a clean signal convolved with a kernel and corrupted by random noise. The goal of deconvolution is to reconstruct the clean signal from the noisy data. This is an ill-posed inverse problem, meaning that the solution is highly sensitive to modelling errors and measurement noise. Robust solution of deconvolution problems is based on regularization.
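A hedged sketch of this measurement model (the signal, kernel and noise level are illustrative assumptions, not course data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Forward model: data = (clean signal) convolved with (kernel) + noise.
    x_clean = np.zeros(64)
    x_clean[25:35] = 1.0                      # unknown clean signal
    kernel = np.array([0.25, 0.5, 0.25])      # known blurring kernel
    noise = 0.01 * rng.standard_normal(64)

    data = np.convolve(x_clean, kernel, mode='same') + noise
    # Deconvolution asks: given only data and kernel, recover x_clean.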

The course starts by introducing convolution for discrete signals and images. Naive deconvolution is attempted computationally using matrix inversion, only to find that the ill-posedness of the inverse problem spells trouble for this simple approach. The source of the difficulties is identified using the singular value decomposition (SVD), which is introduced and discussed in detail during the course.
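The following sketch (with an assumed Gaussian-like blurring matrix and a made-up noise level) illustrates the phenomenon: the singular values of the convolution matrix decay towards zero, so direct inversion amplifies the noise.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    t = np.arange(n)

    # Convolution written as a matrix model: blurred signal = A @ x.
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    x_true = np.zeros(n)
    x_true[20:40] = 1.0
    data = A @ x_true + 0.001 * rng.standard_normal(n)

    # Naive deconvolution by direct inversion of the matrix model.
    x_naive = np.linalg.solve(A, data)

    # The SVD reveals why this fails: tiny singular values mean that
    # dividing by them blows up the noise components.
    U, s, Vt = np.linalg.svd(A)
    print("largest/smallest singular value:", s[0] / s[-1])
    print("error of naive reconstruction:  ", np.linalg.norm(x_naive - x_true))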

The difficulties can be overcome by a technique called “regularization.” Two regularized reconstruction methods are discussed both theoretically and computationally: truncated SVD and Tikhonov regularization.
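As a rough sketch of the two methods (the cutoff and the regularization parameter below are arbitrary assumptions; choosing them properly is part of the course material), continuing the matrix example above:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    t = np.arange(n)

    # Same assumed blurring matrix and noisy data as in the previous sketch.
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)
    x_true = np.zeros(n)
    x_true[20:40] = 1.0
    data = A @ x_true + 0.001 * rng.standard_normal(n)

    U, s, Vt = np.linalg.svd(A)

    # Truncated SVD: keep only components whose singular values exceed a cutoff.
    cutoff = 0.01
    coeffs = U.T @ data
    x_tsvd = Vt.T @ np.where(s > cutoff, coeffs / s, 0.0)

    # Tikhonov regularization: minimize ||A x - data||^2 + alpha ||x||^2,
    # equivalently solve (A^T A + alpha I) x = A^T data.
    alpha = 1e-3
    x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ data)

    print("TSVD error:    ", np.linalg.norm(x_tsvd - x_true))
    print("Tikhonov error:", np.linalg.norm(x_tik - x_true))

Both approaches replace the unstable division by tiny singular values with a controlled approximation.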

Recommended optional studies

Master's studies

Completion methods

The course has lectures, weekly exercises and an exam. It is possible to continue from this course to the Inverse Problems Project Work course (5 cr).