
9.12.2019 at 09:00 - 27.2.2020 at 23:59


Here is the course’s teaching schedule. Check the description for possible other schedules.

Thu 16.1.2020, 09:15 - 11:00
Thu 23.1.2020, 09:15 - 11:00
Thu 30.1.2020, 09:15 - 11:00
Thu 6.2.2020, 09:15 - 11:00
Thu 13.2.2020, 09:15 - 11:00
Thu 20.2.2020, 09:15 - 11:00
Thu 27.2.2020, 09:15 - 11:00

Other teaching

16.01. - 27.02.2020 Thu 09:15 - 11:00
Khalid Alnajjar, Mark Granroth-Wilding, Leo Leppänen, Lidia Pivovarova, Eliel Soisalon-Soininen, Elaine Zosa
Teaching language: English


The Master's Programme in Data Science is responsible for the course.

The course is available to students from other degree programmes.

Prerequisite courses: DATA11002 Introduction to Machine Learning; TKT20005 Models of Computation

The student should have at least a basic familiarity with the following topics before the course starts.

  • Supervised vs unsupervised learning

  • Overfitting and regularization

  • Dimensionality reduction

  • Mathematics of simple probabilistic models and estimation

  • Concepts of classification and regression

  • Formal languages: in particular finite state automata and transducers, and context-free grammars
    (Covered by TKT20005 Models of Computation)
  • Programming:

    • Basic abilities in Python

    • Familiarity with NumPy is recommended

Suggested reading on these topics will be provided before the course, to help students to fill in any gaps in their knowledge or revise the concepts.
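To give a concrete sense of the formal-language prerequisite, the kind of object covered by TKT20005 can be sketched in a few lines of Python. This is an illustrative example only, not course material: a deterministic finite automaton (DFA) over the alphabet {a, b} that accepts exactly the strings containing an even number of 'a's.

```python
# Illustrative sketch only (not course material): a DFA as a transition
# table mapping (state, symbol) pairs to the next state.
TRANSITIONS = {
    ("even", "a"): "odd",
    ("even", "b"): "even",
    ("odd", "a"): "even",
    ("odd", "b"): "odd",
}

def accepts(string, start="even", accepting=frozenset({"even"})):
    """Run the DFA over `string` and accept iff it halts in an accepting state."""
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting
```

Finite-state transducers and context-free grammars generalise this idea; students should be comfortable with such constructions before the course starts.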

Programming assignments will be completed in Python, so at least some previous experience of Python programming is essential.
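As a rough indication of the level of Python assumed, students should be able to read and write short scripts along the following lines without difficulty (an illustrative sketch, not an actual assignment):

```python
from collections import Counter

def word_frequencies(text):
    """Lowercase a text, split it on whitespace and count token frequencies."""
    tokens = text.lower().split()
    return Counter(tokens)

freqs = word_frequencies("the cat sat on the mat")  # freqs["the"] == 2
```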

Experience in linguistics / language processing is not required. However, a basic familiarity with some linguistic concepts will make it easier to follow the course. Links to recommended reading material will be provided before the start of the course.

By the end of the course, the student will:

  • have an understanding of the basic linguistic concepts underlying typical approaches to NLP;
  • be familiar with traditional pipeline approaches to NLP systems;
  • be aware of the main subtasks and typical components in such pipelines;
  • have a good understanding of some commonly used probabilistic and other statistical models and how they are used for practical NLP tasks;
  • know how to tackle some NLP applications by combining existing approaches to their subtasks;
  • understand how recent machine learning methods (such as deep learning) can be applied to linguistic tasks;
  • know how NLP systems and components are typically evaluated and understand good practices in evaluation and data handling;
  • be aware of some key open research questions and unsolved problems in NLP.

Spring term 2020, period 3.

This course will give an introduction to the field of Natural Language Processing (NLP), covering central concepts, example applications and the application of modern machine learning (ML) techniques to NLP problems. It will go into more detail on some particular applications, showing how they have been tackled, and what component sub-tasks they involve.

NLP is a broad field, including a large number of sub-tasks and applications. We begin with an overview of the field, covering the classic natural language understanding (NLU) pipeline and its components. Then we look in more detail at various specific areas, including finite-state methods, syntax and parsing, lexical and compositional semantics, vector-space models and document-level analysis. We will look at some modern statistical methods, including how neural networks and deep learning can be applied to linguistic analysis.
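As a small taste of the vector-space models mentioned above, documents can be represented as word-count vectors and compared by the angle between them. The sketch below uses Python with NumPy; it is illustrative only, and the course materials may use different tooling.

```python
import numpy as np

def bag_of_words(doc, vocab):
    """Represent a document as a vector of word counts over a fixed vocabulary."""
    tokens = doc.lower().split()
    return np.array([tokens.count(word) for word in vocab], dtype=float)

def cosine_similarity(u, v):
    """Cosine of the angle between two count vectors (1.0 = same direction)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vocab = ["the", "cat", "dog", "sat"]
a = bag_of_words("the cat sat", vocab)
b = bag_of_words("the dog sat", vocab)
similarity = cosine_similarity(a, b)  # overlap on "the" and "sat" gives 2/3
```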

We then cover the other side of NLP, natural language generation (NLG), including a comparison of classic rule-based systems and recent applications of deep learning and other machine learning techniques. We will also see how components can be combined for an important current application: information extraction.

Finally, a look at the topics of semantics and pragmatics will highlight some key unsolved problems in the field and show why it remains an active and challenging area for research.

The course will primarily follow two textbooks:

Speech and Language Processing. Jurafsky & Martin. 2nd edition, 2009. Pearson Education
Natural Language Processing. Jacob Eisenstein. Draft textbook, Nov 13 2018. Available on Github.

We will also refer to the following textbook, in particular in relation to the practical assignments:

Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit. Bird, Klein & Loper. 2nd edition. Available online.

Specific references to these textbooks will be provided in lectures.

An additional reading list, including recommended pre-course reading and suggested material for refreshing background knowledge (see Prerequisites above), will be provided before the course begins.

  • Lectures
  • Practical lab sessions, with teacher/TA support
  • Weekly assessed assignments
  • Example code and other materials provided online
  • Submission of code, programme output, solutions and written answers

All links, slides and other materials will be made available online, via the course webpage.

The following components will be assessed:

  • Assignments, graded on a 1-5 scale (average of 3 or more).
  • Final individual project, with a report submitted shortly after the course.
  • Attendance at all lectures (unless an exception has been agreed with the lecturer).
  • Participation in discussions during lectures (some active participation observed by lecturers).

Contact teaching only.

Two lectures per week (see timetable), mandatory.

One lab session per week (optional), to support completion of assessed assignments relating to the lecture material.

Full participation in lectures is expected. Any anticipated exceptions should be discussed with the lecturer before signing up for the course.

Mark Granroth-Wilding