
11.2.2019 at 09:00 - 31.5.2019 at 23:59


Here is the course’s teaching schedule. Check the description for possible other schedules.

Mon 20.5.2019
13:15 - 16:00
Tue 21.5.2019
13:15 - 16:00
Wed 22.5.2019
13:15 - 16:00
Thu 23.5.2019
13:15 - 16:00
Fri 24.5.2019
13:15 - 16:00
Mon 27.5.2019
13:15 - 16:00
Tue 28.5.2019
13:15 - 16:00
Wed 29.5.2019
13:15 - 16:00
Fri 31.5.2019
13:15 - 16:00

Mark Granroth-Wilding
Teaching language: English


Which degree programme is responsible for the course?

Data Science MSc

Which module does the course belong to?


Is the course available to students from other degree programmes?


Prerequisite courses: DATA11002 Introduction to Machine Learning; TKT20005 Models of Computation

The student should have at least a basic familiarity with the following topics before the course starts.

  • Supervised vs unsupervised learning

  • Overfitting and regularization

  • Dimensionality reduction

  • Mathematics of simple probabilistic models and estimation

  • Concepts of classification and regression

  • Formal languages: in particular finite state automata and transducers, and context-free grammars
    (Covered by TKT20005 Models of Computation)
  • Programming:

    • Basic abilities in Python

    • Familiarity with Numpy recommended
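
As a rough indicator of the expected level, here is a minimal sketch (not course material; the function name `accepts` is purely illustrative) of one of the formal-language concepts listed above, a deterministic finite automaton, written in plain Python:

```python
# Illustrative sketch only: a deterministic finite automaton (DFA)
# over the alphabet {a, b} that accepts strings containing an even
# number of 'a's -- the kind of construction covered in TKT20005.
def accepts(s):
    state = 0  # 0 = even number of 'a's seen (accepting), 1 = odd
    for ch in s:
        if ch == "a":
            state = 1 - state   # reading 'a' toggles the parity
        elif ch != "b":
            return False        # reject symbols outside the alphabet
    return state == 0

print(accepts("abba"))  # True: two 'a's
print(accepts("ab"))    # False: one 'a'
```

A student comfortable reading and writing short functions like this should be well prepared for the programming assignments.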

Suggested reading on these topics will be provided before the course, to help students to fill in any gaps in their knowledge or revise the concepts.

Programming assignments will be completed in Python, so at least some previous experience of Python programming is essential.

Experience in linguistics / language processing is not required. However, a basic familiarity with some linguistic concepts will make it easier to follow the course. Links to recommended reading material will be provided before the start of the course.

By the end of the course, the student will:

  • have an understanding of the basic linguistic concepts underlying typical approaches to NLP;
  • be familiar with traditional pipeline approaches to NLP systems;
  • be aware of the main subtasks and typical components in such pipelines;
  • have a good understanding of some commonly used probabilistic and other statistical models and how they are used for practical NLP tasks;
  • know how to tackle some NLP applications by combining existing approaches to their subtasks;
  • understand how recent machine learning methods (such as deep learning) can be applied to linguistic tasks;
  • know how NLP systems and components are typically evaluated and understand good practices in evaluation and data handling;
  • be aware of some key open research questions and unsolved problems in NLP.

Spring term, 2019 only, intensive period

After Introduction to Machine Learning

This course will give an introduction to the field of Natural Language Processing (NLP), covering central concepts, example applications and the application of modern machine learning (ML) techniques to NLP problems. It will go into more detail on some particular applications, showing how they have been tackled, and what component sub-tasks they involve.

NLP is a broad field, including a large number of sub-tasks and applications. We begin with an overview of the field, covering the classic natural language understanding (NLU) pipeline and its components. We then cover the other side of NLP, natural language generation (NLG), including a comparison of classic rule-based systems and recent applications of deep learning and other machine learning techniques.

We will look at some modern statistical approaches to NLU tasks, including how neural networks and deep learning can be applied to linguistic analysis. Then we will consider how the pipeline components can be combined for one important current application: information extraction. Finally, a look at the topics of semantics and pragmatics will highlight some key unsolved problems in the field and show why it remains an active and challenging area for research.
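
To give a flavour of the pipeline idea, here is an illustrative toy sketch (not a course component; the helper names `tokenize`, `normalize` and `tag_entities` are hypothetical), in which each stage's output feeds the next stage:

```python
import re

# Illustrative toy sketch only: a minimal text-processing "pipeline"
# in the spirit of the classic NLU pipeline, where each component
# consumes the output of the previous one.

def tokenize(text):
    """Split text into word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text)

def normalize(tokens):
    """Lowercase tokens for downstream processing."""
    return [t.lower() for t in tokens]

def tag_entities(tokens):
    """Naive heuristic: treat capitalised tokens as entity candidates."""
    return [(t, "ENT" if t[:1].isupper() else "O") for t in tokens]

def pipeline(text):
    tokens = tokenize(text)
    return {
        "tokens": normalize(tokens),
        "entities": tag_entities(tokens),
    }

print(pipeline("Helsinki hosts the NLP course.")["entities"])
```

Real systems replace each of these naive stages with statistical or neural components, but the architectural pattern of chained sub-tasks is the same.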

The course will primarily follow the following two textbooks:

  • Speech and Language Processing. Jurafsky & Martin. 2nd edition, 2009. Pearson Education.
  • Natural Language Processing. Jacob Eisenstein. Draft textbook, 13 Nov 2018. Available on GitHub.

We will also refer to the following textbook, in particular in relation to the practical assignments:

  • Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. Bird, Klein & Loper. 2nd edition. Available online.

Specific references to these textbooks will be provided in lectures.

An additional reading list, including recommended reading prior to the course and suggested reading to refresh background knowledge (see Prerequisites above), will be provided before the course begins.

  • Lectures
  • Practical lab sessions, with teacher/TA support
  • Daily assessed assignments
  • Example code and other materials provided online
  • Submission of code, programme output, solutions and written answers

All links, slides and other materials will be made available online, via the course webpage.

The following components will be assessed:

  • Completion of daily assignments (minimum 80% of days completed)
  • A subset of assignments graded on a 1-5 scale (average of 3 or more)
  • Attendance at lectures (all days, unless an exception is agreed with the lecturer)
  • Participation in discussions (some active participation observed by lecturer/TAs)

Contact teaching only.

This is an intensive course: daily participation over the full two-week period is necessary so as not to miss important content.

Mornings will be filled with lectures and discussion, introducing and exploring the material.

Afternoons will be used for lab sessions, which will include completion of assessed assignments relating to the day's lecture material.

Full participation in both morning and afternoon sessions is expected. Any anticipated exceptions should be discussed with the lecturer before signing up for the course.
