General design principles of algorithms. Examples of central problems and typical solutions. Reductions and analysis methods.


Messages

Veli Mäkinen

Published, 7.11.2017 at 17:23

Course grading is now completed. See the Results section for statistics, model solutions, and grounds for grading. The feedback session is on Monday 13.11 at 14-15 in D234. There you can see your answer sheets and check the points against the grounds for grading.

Exercise points stay valid for the renewal exam.

Veli Mäkinen

Published, 19.10.2017 at 10:48

Check the end of Course syllabus section for exam instructions.

Alexandru Tomescu

Published, 21.9.2017 at 22:15

Model max flow reductions for the two problems seen at Lecture 3 can be found in the Course syllabus section. I also added a link to the running time proof of the Edmonds-Karp algorithm.

Veli Mäkinen

Published, 15.9.2017 at 19:54

Solutions for study group 2 problems can be found in the Course syllabus section.

Veli Mäkinen

Published, 14.9.2017 at 11:35

Exercise 2 published. See Tasks section.

Veli Mäkinen

Published, 14.9.2017 at 11:32

Model solutions for first week study group problems can be found in the Course syllabus section.

Veli Mäkinen

Published, 14.9.2017 at 11:30

Removed the messages here and added the same material to the Course syllabus section. We will be adding plans for the lectures / study groups there, and afterwards detailing what actually took place.

Timetable

Here is the course’s teaching schedule. Check the description for possible other schedules.

Date            Time           Location
Wed 6.9.2017    14:15 - 16:00  CK112
Wed 13.9.2017   14:15 - 16:00  CK112
Wed 20.9.2017   14:15 - 16:00  CK112
Wed 27.9.2017   14:15 - 16:00  CK112
Wed 4.10.2017   14:15 - 16:00  CK112
Wed 11.10.2017  14:15 - 16:00  CK112
Wed 18.10.2017  14:15 - 16:00  CK112

Other teaching

14.09.-19.10.2017 Thu 12.15-14.00, Dmitrii Kosolobov (teaching language: English)
13.09.-18.10.2017 Wed 10.15-12.00, Bella Zhukova (teaching language: English)
12.09.-17.10.2017 Tue 14.15-16.00, Serra Valenzuela (teaching language: English)
08.09.-20.10.2017 Fri 10.15-12.00, Alexandru Tomescu (teaching language: English)
08.09.-20.10.2017 Fri 12.15-14.00, Veli Mäkinen (teaching language: English)
07.09.-19.10.2017 Thu 14.15-16.00, Alexandru Tomescu (teaching language: English)

Material

Lecture material

Video

Tasks

Exercise 1

Exercise 2

Exercise 3

Exercise 4

Exercise 5

Exercise 6

Course syllabus

Here is a recommended study rhythm for the course:
- Browse the weekly slide set (Materials section)
- Wednesday lecture 14-16 CK112
- Study groups (pick one):
-- Thursday 14-16 B222
-- Friday 10-12 B222
-- Friday 12-14 B222
- Check the rest of the weekly content from the video lectures (Videos section)
- Exercises (see Tasks section for the exercise sheet)
-- Tue 14-16 B222
-- Wed 10-12 DK116
-- Thu 12-14 CK111

WEEK 1

Wednesday demonstration lecture (VM): Quicksort best case analysis = Quicksort worst case analysis when the median is used as pivot. From a recursive algorithm to a running time recurrence (an important skill to learn). Solving T(n)=2T(n/2)+n-1 (number of comparisons) with the recursion tree method. Explaining why and when n=2^k, k>0, can be assumed. Checking the solution by the substitution method. The technique of subtracting a lower-order term, with T(n)=4T(n/2)+n <= cn^2 as an example. T(n)=2T(n/2)+n/\log n as a more advanced recursion tree example: the levels sum to \sum_{i=0}^{\log n - 1} n/(\log n - i) = n \sum_{j=1}^{\log n} 1/j = O(n \log\log n), using the TCS cheat sheet formula \sum_{i=1}^{k} 1/i = \Theta(\log k). Master theorem statement, with case 1 demonstrated by T(n)=9T(n/3)+n^{3/2} (ran out of time for the other cases). Defined the Cartesian tree by an example, showing its O(n^2) construction. Mentioned that a linear time construction is explained in the video lecture (among the amortized analysis examples).

Addendum: 1) n=2^k cannot always be assumed, e.g. for T(n)=T(n-1)+T(n/2)+f(n). If not directly stated, you can choose to take floors or ceilings: our recurrences are over the integers. 2) The base case is not always defined. If you know the algorithm behind the running time recurrence, define the base case by what happens with small n (you may slightly modify the algorithm to give a pleasant base case). If just the recurrence is given, cut the recursion at some small n and assume that only the f(n) part matters there.
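
To make the recursion tree result concrete, here is a small Python sketch (my addition, not lecture material) that evaluates T(n)=2T(n/2)+n-1 with base case T(1)=0 and checks it against the closed form n log n - n + 1 for powers of two:

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n: int) -> int:
        # Number of comparisons: T(n) = 2T(n/2) + n - 1, T(1) = 0.
        if n == 1:
            return 0
        return 2 * T(n // 2) + n - 1

    def closed_form(n: int) -> int:
        # Recursion tree / substitution give n*log2(n) - n + 1 for n = 2^k.
        return n * round(math.log2(n)) - n + 1

    for k in range(1, 11):  # n = 2, 4, ..., 1024
        n = 2 ** k
        assert T(n) == closed_form(n)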

Study groups (AT & VM): Solving recurrences: https://www.dropbox.com/s/mz8n7o76cyoqy00/daa17_studygroup1.pdf. These were roughly last year's Exercise 1, questions 2, 3, 4, and 6. The model solutions for those are available at https://www.cs.helsinki.fi/en/node/85346. In addition, in some study groups we tried to apply the Master method to the recurrence T(n)=2T(n/2)+n/\log n, which was already analysed at the lecture to be O(n \log\log n) with the recursion tree method. The Master method does not apply to this recurrence: although n/\log n < n, there is no \epsilon > 0 with n/\log n = O(n^{1-\epsilon}). That is, there are gaps between the cases of the Master theorem, and there are recurrences where it cannot be applied even though they look suitable.

So far we cover amortized analysis techniques only superficially. At this point, understanding the concept and a few selected examples is enough.

On your own: Check at least the introductory slides of the course (Materials section). The video on amortized analysis, and especially the one on the linear time construction of the Cartesian tree (with the associated pdf for the theory), are useful for the last assignment in the exercise sheet.

WEEK 2

Wednesday demonstration lecture (AT): Overview of the general structure of divide and conquer algorithms. Warm-up example: tromino tiling, then a quick recall of Quicksort, Mergesort and binary search and how they fit the divide and conquer scheme. The maximum subarray problem solved with divide and conquer; the recurrence is the same as for best-case Quicksort from week 1, T(n)=2T(n/2)+O(n), giving O(n log n). Multiplication of n-bit integers: an algorithm faster than the quadratic one, namely O(n^{\log 3}) = O(n^{1.58...}), analysed with the Master method. This is a simpler instance than Strassen's algorithm for matrix multiplication (see the slides and videos) of regrouping computation to avoid a recursive step. Linear-time selection, developed in stages: 1) linear-time selection using a black box for median finding, 2) linear-time selection using a black box for approximate median finding, 3) approximate median finding using linear-time selection.
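
A runnable Python sketch (my own; the lecture worked with n-bit integers in the abstract) of the three-multiplications idea behind the O(n^{\log 3}) bound, i.e. Karatsuba's algorithm for non-negative integers. The recurrence is T(n)=3T(n/2)+O(n), which the Master method (case 1) solves to \Theta(n^{\log_2 3}):

    def karatsuba(x: int, y: int) -> int:
        # Multiply non-negative integers with 3 recursive half-size products.
        if x < 16 or y < 16:                      # small base case
            return x * y
        m = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)       # x = xh*2^m + xl
        yh, yl = y >> m, y & ((1 << m) - 1)       # y = yh*2^m + yl
        a = karatsuba(xh, yh)
        b = karatsuba(xl, yl)
        c = karatsuba(xh + xl, yh + yl) - a - b   # = xh*yl + xl*yh
        return (a << (2 * m)) + (c << m) + b

    assert karatsuba(123456789, 987654321) == 123456789 * 987654321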

Study groups (VM): Problem solving using divide and conquer: https://www.dropbox.com/s/1oa7vy6ykzbiy2g/daa17_studygroup2.pdf. Two of the problems we studied (median from two sorted arrays and min-queues through min-stacks) were pretty much last year's Exercise 2, questions 5 and 6; see the model solutions here: https://www-edit.cs.helsinki.fi/en/node/85346. The difference is that we studied min-queues while the model solution considers the somewhat harder case of min-deques. A solution for our third problem of counting inversions can be found here: http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/05DivideAndConqu....

For our fourth problem of k-th quantiles Quicksort, here is a solution sketch. The goal is to partition an array A[1..n] into k almost equal size parts - with lengths x and y=x+1 - such that the first part contains the chunk of smallest elements, the second part the chunk of next smallest elements, and so on. This should be done in O(n log k) time. With x=floor(n/k) and y=ceil(n/k) we can use n-kx parts of length y and k-(n-kx) parts of length x. We can then precompute a table of pivots: P[0]=0; for (i=1;i<=k-(n-kx);i++) P[i]=P[i-1]+x; for (i=k-(n-kx)+1;i<=k;i++) P[i]=P[i-1]+y (this puts the small chunks of length x first, followed by the chunks of length y). Now proceed as in advanced Quicksort, but instead of the median use Select(A,P[ceil(k/2)]) as the first pivot, so that elements of parts 1..ceil(k/2) go left and elements of parts ceil(k/2)+1..k go right. Continuing the recursion like this until all parts have length x or y gives the requested partitioning. E.g. with n=27, k=5: x=5, y=6, P=5,10,15,21,27. The first pivot is the 15-th smallest element of A, sending the elements of the first 3 chunks of size 5 to the left and the elements of the last 2 chunks to the right. Continuing like this partitions A into the 5 smallest elements, followed by the 5 next smallest elements, etc. Notice that inside each chunk the elements are not guaranteed to be sorted with respect to each other. This algorithm is useful for parallel / distributed sorting, as each chunk can then be sorted independently.
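
Here is the pivot table computation from the sketch above as runnable Python (function name mine, just for illustration):

    def pivot_table(n: int, k: int) -> list:
        # P[i] = 1-based end index of part i; small chunks (length x) first.
        x = n // k
        P = [0] * (k + 1)
        for i in range(1, k - (n - k * x) + 1):
            P[i] = P[i - 1] + x
        for i in range(k - (n - k * x) + 1, k + 1):
            P[i] = P[i - 1] + x + 1          # y = x + 1
        return P

    assert pivot_table(27, 5)[1:] == [5, 10, 15, 21, 27]   # the example above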

On your own: Check the Strassen algorithm from the slides / video.

WEEK 3

On your own: Shortest path in a DAG. Bellman-Ford derived as a reduction to shortest path in a layered DAG. Check these from the weekly slides / video. Find out how topological sorting works.
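
As a reference point for the videos, a minimal Python sketch of single-source shortest paths in a DAG: topologically sort with Kahn's algorithm, then relax the out-edges of each vertex in that order. The edge-list representation is my assumption, not from the slides:

    from collections import deque

    def dag_shortest_paths(n, edges, s):
        # edges: list of (u, v, w) triples. Returns dist[] from s, assuming a DAG.
        adj = [[] for _ in range(n)]
        indeg = [0] * n
        for u, v, w in edges:
            adj[u].append((v, w))
            indeg[v] += 1
        order, q = [], deque(v for v in range(n) if indeg[v] == 0)
        while q:                              # Kahn's algorithm: topological sort
            u = q.popleft()
            order.append(u)
            for v, _ in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    q.append(v)
        INF = float("inf")
        dist = [INF] * n
        dist[s] = 0
        for u in order:                       # relax edges in topological order
            if dist[u] < INF:
                for v, w in adj[u]:
                    dist[v] = min(dist[v], dist[u] + w)
        return dist

    # Example: path 0->1->2 of total weight 3 beats the direct edge 0->2 of weight 5.
    assert dag_shortest_paths(3, [(0, 1, 1), (1, 2, 2), (0, 2, 5)], 0) == [0, 1, 3]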

Wednesday demonstration lecture (AT): We start by defining the basic notions: flow, flow value, flow network, the maximum flow problem. We then show two applications: vertex-disjoint source-to-sink paths and maximum cardinality matching. Write-ups of these reductions are available here: https://www.dropbox.com/s/4hektu4kfakdwtu/model%20reductions.pdf?dl=0. We continue with the residual network, the bottleneck of a path in the residual network, and the notion of a flow augmented with a path. We then give the Ford-Fulkerson algorithm and show that (1) f* is a max flow <=> G_{f*} has no augmenting paths (we argue only the => implication; the reverse is more complex and requires the notion of a min cut), (2) each augmenting path can be found in O(E) time, and (3) since we assume integral capacities, there are at most |f*| augmentation steps. We give an example showing that this bad running time can actually occur. We then consider the Edmonds-Karp algorithm, which always chooses a shortest augmenting path (in terms of edges, found by BFS). We gave a sketch of the proof that the Edmonds-Karp algorithm finishes after O(VE) augmentation steps. You can see this proof written down here: http://www.cs.cornell.edu/courses/cs4820/2011sp/handouts/edmondskarp.pdf (the proof is on the second page).
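
For reference, a compact Python sketch of Edmonds-Karp; storing residual capacities in a dense matrix (O(V^2) per BFS) is an assumption of this sketch, not of the lecture:

    from collections import deque

    def edmonds_karp(cap, s, t):
        # cap: n x n residual capacity matrix (modified in place).
        n = len(cap)
        flow = 0
        while True:
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:      # BFS: shortest augmenting path
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:               # no augmenting path: flow is maximum
                return flow
            b, v = float("inf"), t            # bottleneck of the path
            while v != s:
                b = min(b, cap[parent[v]][v])
                v = parent[v]
            v = t
            while v != s:                     # augment along the path
                u = parent[v]
                cap[u][v] -= b
                cap[v][u] += b                # residual (back) edge
                v = u
            flow += b

    # Tiny example: two unit-capacity paths from 0 to 3 give max flow 2.
    cap = [[0, 1, 1, 0], [0, 0, 0, 1], [0, 0, 0, 1], [0, 0, 0, 0]]
    assert edmonds_karp(cap, 0, 3) == 2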

Study groups (AT & VM): Practising reductions: https://www.dropbox.com/s/c4ushyzjh23ad03/daa17_studygroup3.pdf. Solution sketches: 1) The backbone of the DAG is the string itself. Create arc (i,i+LEN) with cost c1(DIST)+c2(LEN) iff S[i..i+LEN-1] occurs at S[i-DIST..i-DIST+LEN-1]. 2) Sort A into A'. Remove duplicates from A' to create an array A''. LCS(A,A'')=LIS(A). Note: this works for LIS meaning the longest *strictly* increasing subsequence. 3) Find an augmenting path and assign to it the minimum flow value (bottleneck) along that path. Subtract this bottleneck weight from all arcs along the path and remove arcs with zero value. Repeat the process. 4) Make a bipartite graph with white squares as (white) nodes on the left and black squares as (black) nodes on the right. Connect each white node to all black nodes corresponding to squares adjacent to its square. Use the reduction to maximum flow seen at the lecture to observe that if the maximum flow value equals the number of white squares, then there is a placement of the tiles induced by the matched edges.
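
A quick Python check of sketch 2 (note that sorted(set(A)) both sorts and removes duplicates), with a standard textbook LCS dynamic program; strictly increasing LIS assumed, as the note says:

    def lcs_len(X, Y):
        # Classic O(|X||Y|) longest common subsequence table.
        m, n = len(X), len(Y)
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                D[i][j] = (D[i - 1][j - 1] + 1 if X[i - 1] == Y[j - 1]
                           else max(D[i - 1][j], D[i][j - 1]))
        return D[m][n]

    def lis_len(A):
        # LIS(A) = LCS(A, A''), where A'' is A sorted with duplicates removed.
        return lcs_len(A, sorted(set(A)))

    assert lis_len([3, 1, 4, 1, 5, 9, 2, 6]) == 4   # e.g. 1, 4, 5, 9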

WEEK 4

Wednesday demonstration lecture (VM): Dynamic programming with Excel. Longest increasing subsequence in O(n^2) time (induction proof, optimal substructure property; see the addendum below). Segmentation of A[1..n] into k pieces in O(n^2 k) time: S[j,k]="cost of segmenting A[1..j] into k pieces" = min_{i < j} S[i,k-1]+c(i+1,j). Subset sum in O(nt) time, where t is the target value: S[i,w]="is there a subset of the i first weights w_1 w_2 ... w_i summing up to w" = S[i-1,w] OR S[i-1,w-w_i]. BFS-style and DFS-style algorithms for topological ordering in linear time (after all, this did not run on extra time, as we had a break...).
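
A minimal Python sketch of the O(nt) subset sum table from the lecture, in the usual one-dimensional rolling-array form (iterating w downwards so each weight is used at most once):

    def subset_sum(weights, t):
        # S[w] after processing i weights: is there a subset of them summing to w?
        S = [False] * (t + 1)
        S[0] = True                            # the empty subset sums to 0
        for wi in weights:
            for w in range(t, wi - 1, -1):     # backwards: each item used once
                S[w] = S[w] or S[w - wi]
        return S[t]

    assert subset_sum([3, 34, 4, 12, 5, 2], 9) is True    # 4 + 5
    assert subset_sum([3, 34, 4, 12, 5, 2], 30) is False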

Addendum: The induction proof did not go so smoothly, so check the correct yet sketchy version from the weekly slides. Here it is with more details. L[j]="length of the LIS of A[1..j] ending with A[j]". L[0]=0, A[0]=-\infty. Claim: L[j] = max_{i: i \in [0..j-1], A[i] < A[j]} L[i] + 1. Proof. Assume by induction that all L[i] are correct for i < j. If L[j] < max_{i: i \in [0..j-1], A[i] < A[j]} L[i] + 1, then there is an increasing subsequence of length at least L[j] ending with some A[i] < A[j], to which A[j] can be appended to form an increasing subsequence of length at least L[j]+1 ending at A[j]. This is a contradiction with the definition of L[j]. So L[j] >= max_{i: i \in [0..j-1], A[i] < A[j]} L[i] + 1. To show that the identity holds, it suffices to show that L[j] > max_{i: i \in [0..j-1], A[i] < A[j]} L[i] + 1 also yields a contradiction. Indeed, remove A[j] from any longest increasing subsequence ending at A[j]. One obtains an increasing subsequence ending at some A[i] < A[j], i < j, of length L[j]-1 (if the subsequence consisted of A[j] alone, the sentinel A[0] plays the role of A[i]). That is, we have the estimate (a) L[i] >= L[j]-1. Since we assumed L[j] > max_{i: i \in [0..j-1], A[i] < A[j]} L[i] + 1, and the maximum is at least L[i]+1 for this particular i, we have the other estimate (b) L[j]-1 > L[i]. Combining (a) and (b) we get the contradiction L[i] >= L[j]-1 > L[i]. So the identity holds, given also that in the base case L[1]=1 is obtained from both the definition and the maximization formula.

On your own: LIS computation reduced to shortest path problem, 1d clustering, and optimal chain of matrix multiplications from weekly slides / video.

Study groups (AT & VM): Problem solving using dynamic programming: https://www.dropbox.com/s/7b5vhceymd7d2br/daa17_studygroup4.pdf.
Solution sketches: 1) L[j,k]="length of the LIS of A[1..j] segmented into k pieces such that the last piece ends at A[j]" = max(if A[j-1] < A[j] then L[j-1,k]+1 else -\infty, max_{i: 0 <= i < j, A[i] < A[j]} L[i,k-1]+1). Define A[0]=-\infty. Initialize L[0,0]=0 and all impossible L[i,k]=-\infty (e.g. k > i). 2) Add L[0]=0, L[m+1]=n+1. Then the recursion C(a,b)=L[b]-L[a]+min_{k: a < k < b} (C(a,k)+C(k,b)), called with a=0, b=m+1, gives the optimum cost. This can be implemented in O(n^3) time using e.g. memoization (that is, storing each C(a,b) into c[a,b] after the first call, and then returning c[a,b] directly on later calls; see the analogous solution for the chain of matrix multiplications in the lecture slides). 3) V[w]="maximum value of a knapsack of weight w"=max_{i: w_i <= w} V[w-w_i]+v_i. Start with V[0]=0. 4) See the model solution for last year's Exercise 4, question 3 at https://www.cs.helsinki.fi/en/node/85346. There you can also find more thorough solutions to some of the problems above.
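
A Python sketch of sketch 3 (items may repeat, matching the recurrence V[w]=max_{w_i <= w} V[w-w_i]+v_i; I also carry V[w-1] over so the capacity need not be filled exactly, which is my small addition):

    def knapsack_max_value(weights, values, W):
        # V[w] = max value with total weight at most w, items reusable.
        V = [0] * (W + 1)
        for w in range(1, W + 1):
            V[w] = V[w - 1]                    # weight need not be hit exactly
            for wi, vi in zip(weights, values):
                if wi <= w:
                    V[w] = max(V[w], V[w - wi] + vi)
        return V[W]

    # items: weight 3 / value 4 and weight 5 / value 7; capacity 11 -> 3+3+5, value 15
    assert knapsack_max_value([3, 5], [4, 7], 11) == 15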

WEEK 5

Wednesday demonstration lecture (VM): Examples of NP-hardness reductions: a few simple ones, one a bit harder, and one where the implication works only in one direction and the reduction fails. An example of an approximation algorithm (metric TSP). We covered SAT <=_p 3-CNF-SAT (idea only), 3-CNF-SAT <=_p CLIQUE [3-CNF-SAT <=_p VERTEX-COVER left as an exercise], and Subset-Sum <=_p Exact-Walk-Length (a wrong way and a correct way). A simple 2-approximation of metric TSP. The main idea of the Christofides 3/2-approximation for the same problem: http://www.cs.cornell.edu/courses/cs681/2007fa/Handouts/christofides.pdf

Addendum (answering a question from the audience). Our proof for 3-CNF-SAT <=_p CLIQUE included a non-constructive component, but was easily transformed into a fully constructive one: given variable assignments evaluating the formula to true you can construct a clique of size k, and vice versa. The reduction L' <=_p L itself needs to be computable in polynomial time, but then it is enough to prove that L'(x')="yes" if and only if L(x)="yes" for all inputs x' of L', where x=f(x') and f() is the polynomial time conversion of x' into an input x of L. This proof may be non-constructive. Yet, if it turns out that P=NP, it would be handy to have a constructive proof. Many times a non-constructive proof can be used as a black box to derive a constructive one: we study one such case in this week's study groups!
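
A small Python sketch of the 3-CNF-SAT <=_p CLIQUE construction from the lecture: one vertex per literal occurrence, and an edge between two occurrences iff they lie in different clauses and are not negations of each other. Encoding literals as signed integers is my choice for the sketch:

    from itertools import combinations

    def sat_to_clique(clauses):
        # clauses: list of tuples of literals, e.g. (1, -2, 3) for x1 v ~x2 v x3.
        # The formula is satisfiable iff the graph has a clique of size k = #clauses.
        vertices = [(c, lit) for c, clause in enumerate(clauses) for lit in clause]
        edges = [(u, v) for u, v in combinations(vertices, 2)
                 if u[0] != v[0] and u[1] != -v[1]]   # different clause, consistent
        return vertices, edges, len(clauses)

    V, E, k = sat_to_clique([(1, 2), (-1, 2), (1, -2)])
    # A satisfying assignment (x1=T, x2=T) picks one literal per clause -> a k-clique.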

On your own: Definitions and basic properties related to NP-hardness, SAT to 3-CNF-SAT reduction, and approximation of vertex cover from weekly slides / video.

Study groups (AT & VM): Practising NP-hardness reductions: https://www.dropbox.com/s/zzp3pz2inxf46ib/daa17_studygroup5.pdf? These were Exercise 5, questions 1-4 from last year. The model solutions for those are here: https://www.cs.helsinki.fi/en/node/85346. For 2(b) many found an alternative solution that picks one vertex at a time (not its whole neighbourhood as in the model solution): after finding the maximum independent set size k by trying all sizes, find a vertex whose removal still makes the black box return "yes". Continue removing vertices like this, one after another, until only k vertices are left.
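
A Python sketch of that alternative 2(b) idea, with oracle(S, k) as the hypothetical decision black box answering whether the subgraph induced on vertex set S has an independent set of size k:

    def extract_independent_set(vertices, oracle):
        # Find the max k with a "yes" answer, then peel off unnecessary vertices.
        n = len(vertices)
        k = max(s for s in range(n + 1) if oracle(set(vertices), s))
        S = set(vertices)
        for v in list(vertices):
            if len(S) > k and oracle(S - {v}, k):
                S.discard(v)       # the black box still says "yes" without v
        return S                   # the k remaining vertices form an independent set

One pass suffices: if more than k vertices remained at the end, some remaining vertex outside a size-k independent set of the final set would have been removable when it was considered.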

WEEK 6

Wednesday demonstration lecture (VM): The most popular vote [https://doodle.com/poll/678e9qduyk3chac3 ] was: "An example of dynamic programming and amortized analysis combined". One vote fewer went to "potential method of amortized analysis", "teaser to string algorithms", and "SAT is NP-complete". These gave a good agenda for the lecture. We studied 1d clustering under the \ell^\infty norm together with capacity constraints and ended up with the recurrence S[j,k]=min_{j-t < i < j} S[i,k-1]+(A[j]-A[i+1])/2 for finding a clustering of a 1d point set A[1] < A[2] < ... < A[n]. After taking A[j]/2 out of the minimization we observed that one can use sliding window minima to get each value computed in amortized constant time (plugging in an exercise solution from the first week). The potential method was introduced, with a dynamic array under insertions as an example of its use (this is also in the first week lecture slides). Suffix sorting was used as a teaser to string algorithms. We went through the main idea of the Ko-Aluru algorithm: divide by small suffixes (i: T[i..n] < T[i+1..n]). The order of the small suffixes is determined by the order of the suffixes of the string of lexicographic ranks of the substrings playing the role of small suffixes, so one obtains a recursive procedure where, before going down in the recursion, one first has to sort a set of strings of overall size n (with n halving each time, if resorting to the mirror case of large suffixes when they are fewer). When coming up, the order of the small suffixes is known, and the order of the large suffixes can be induced from this information in linear time. Finally, with some overtime, we briefly looked at the slide set on proving SAT NP-complete.
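
The sliding window minima trick in runnable Python (a monotone deque; every index is pushed and popped at most once, giving amortized O(1) per step):

    from collections import deque

    def sliding_window_minima(A, t):
        # Minimum of each length-t window A[i-t+1..i].
        q = deque()                    # indices whose A-values increase
        out = []
        for i, a in enumerate(A):
            while q and A[q[-1]] >= a:
                q.pop()                # dominated: never a future minimum
            q.append(i)
            if q[0] <= i - t:
                q.popleft()            # slid out of the window
            if i >= t - 1:
                out.append(A[q[0]])
        return out

    assert sliding_window_minima([4, 2, 5, 1, 3], 3) == [2, 1, 1]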

On your own: Check what is known about the relationship between the RAM model and the Turing machine model; the slide (and video) on the equivalence of the Turing machine model with the RAM model was updated in the weekly slides, reflecting the solution to Exercise 6.2. Encodings and pseudo-polynomiality from the weekly slides / video.

Study groups (AT & VM): Hardness of problems with pseudo-polynomial solutions (subset sum). Practising encodings (subset sum). Fast implementation of an approximation algorithm by dynamic programming. Scheduling and flows. https://www.dropbox.com/s/zzp3pz2inxf46ib/daa17_studygroup5.pdf. For the hardness of subset sum, see Section 34.5.5 of the book. The encoding is from the weekly slide set (restricted to subset sum). The approximation algorithm can be implemented by modifying the shortest path algorithm: let notcovered(v)=1 for all nodes. Computing C(v)=max_{w \in N^-(v)} C(w)+notcovered(v) in topological order, initialized with C(s)=notcovered(s), gives C(t) = the number of vertices covered by the first s-t path. Mark notcovered(v)=0 for each node v on a traceback path (forming the first path in the cover). Repeat until C(t)=0.
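
One round of that greedy step as a Python sketch (the graph is assumed to be given with a topological order, in-neighbour lists, and t reachable from s, which are my simplifications):

    def best_path(order, pred, notcovered, s, t):
        # C[v] = max number of uncovered vertices on an s-v path; back[] for traceback.
        C, back = {s: notcovered[s]}, {s: None}
        for v in order:
            if v == s:
                continue
            cands = [u for u in pred.get(v, []) if u in C]
            if cands:
                u = max(cands, key=lambda u: C[u])
                C[v] = C[u] + notcovered[v]
                back[v] = u
        path, v = [], t
        while v is not None:               # traceback from t
            path.append(v)
            v = back[v]
        return C[t], path[::-1]

    # s=0, t=3, edges 0->1->3 and 0->2->3; the first path covers 3 new vertices.
    order, pred = [0, 1, 2, 3], {1: [0], 2: [0], 3: [1, 2]}
    notcovered = {0: 1, 1: 1, 2: 1, 3: 1}
    assert best_path(order, pred, notcovered, 0, 3)[0] == 3
    # Then set notcovered[v] = 0 along the returned path and repeat until C[t] == 0.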

WEEK 7

Wednesday summary lecture (AT): Revisiting earlier topics in the light of randomized algorithms. We started by defining the worst case, best case, and average case of a deterministic algorithm. We then introduced the notion of the expected running time of a randomized algorithm. The first example was RandomizedSelect (choosing the pivot at random). The intuition why this runs in expected linear time: if we break a bar of length n at a random position, the expected size of the larger part is (3/4)n, i.e., a constant fraction of n. We did the formal analysis as in the slides, using indicator variables. We then analyzed the problem of generating a uniformly random s-t path in a DAG with a unique source s and a unique sink t. The toy problem of generating a random leaf of a tree can be reduced to it. We described the algorithm using #paths(v) = the number of v-t paths, and proved it correct. We finished with the Max3SAT problem: we considered the simple algorithm assigning each variable T/F with equal probability 1/2. We showed that the expected number of clauses it satisfies is (7/8)m (where m is the number of clauses in the input), and thus it is a 7/8-approximation algorithm.
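
A Python sketch of the random s-t path generation via path counting, as in the lecture: compute #paths(v) by dynamic programming over the reverse topological order, then walk from s choosing each out-neighbour w with probability #paths(w)/#paths(v). The graph representation is my assumption:

    import random

    def random_st_path(order, adj, s, t):
        # npaths[v] = number of v-t paths, computed in reverse topological order.
        npaths = {}
        for v in reversed(order):
            npaths[v] = 1 if v == t else sum(npaths[w] for w in adj.get(v, []))
        path, v = [s], s
        while v != t:
            ws = adj[v]
            # Choosing w with probability npaths[w]/npaths[v] makes the
            # resulting s-t path uniform over all s-t paths.
            v = random.choices(ws, weights=[npaths[w] for w in ws])[0]
            path.append(v)
        return path

    # Diamond 0 -> {1, 2} -> 3: each of the two paths has probability 1/2.
    print(random_st_path([0, 1, 2, 3], {0: [1, 2], 1: [3], 2: [3]}, 0, 3))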

Study groups (AT & VM): Studying an article that uses many of the techniques we learnt: https://doi.org/10.1007/10719839_9

On your own: Prepare for the exam: Wed 25.10 at 9.00 in A111. There are 2.5 hours to answer 4 questions (or 3 if you are happy with your exercise points: see the Conduct of the course section for grading). Bring your own pens/pencils/erasers etc. Empty sheets will be provided.
More instructions: https://www.cs.helsinki.fi/en/exams/course-exams Exam date: https://www.cs.helsinki.fi/kokeet/kkokeets17.html
Year 2016 course exam (click grounds for grading): https://www.cs.helsinki.fi/en/node/85676
Year 2015 course exam: https://www.dropbox.com/s/i8velogn8mnfewb/exam.pdf
Year 2015 grounds for grading: https://www.dropbox.com/s/rf0nlcvk35zvu4r/groundforgrading.pdf

Conduct of the course

To pass the course you need to earn more than 30 points. For grade 5 you should earn 50 points. (These are tentative; the limits depend on the difficulty of the exam.) The course exam has 4 questions with point distribution 17+17+17+9. The last question can be (partially) replaced by exercise points (max 9 points): the total is calculated as min(9, points from the last exam question + exercise points) + points from the three other exam questions. To attend the course exam, you should earn at least one point from the exercises; otherwise, you can always attend a separate exam. To get exercise points you need to either (a) show up in an exercise session and describe your solution if asked, or (b) upload your answers as a pdf to Moodle before the first exercise session of the week. The points from the exercises follow a linear scale. We expect to have 6*5=30 assignments; in that case the scale is 4 -> 1 p, 7 -> 2 p, 10 -> 3 p, 13 -> 4 p, 16 -> 5 p, 19 -> 6 p, 22 -> 7 p, 25 -> 8 p, 28 -> 9 p.
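
As a worked example (my numbers): a student with 5 points from the last exam question and 7 exercise points gets min(9, 5+7) = 9 points for that question, to which the points from the three other exam questions are added.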

Feedback

"Course content was great. Although everything is very challenging, but it is expected in this top program."
"...this is so far easily my favourite course at HY."
"The slides are a mess."

Standard questions (comment)
Objectives: 3,6
Material: 3,6
Activities: 3,9
Assessment: 4,1 (this was asked before the exam...)
Laboriousness: 4,7 (amortized analysis and flow algorithms will be dropped next year to make the course more reasonable for 5 cr; flows will still be used as a target of reductions)
Overall: 3,8

Specific questions (comment)
Blackboard: 3,3 (trying to improve next year)
Study groups: 4,3 (this system works nicely, so planning to reward activity next year)
Exercises: 4,3 (we have invested quite a lot of time in the design of useful exercise questions, happy to see it paid off)
Slides: 3,5 (mostly useless for self-study; luckily some parts of the course book are more readable)
Videos: 3,8 (voice over helped a bit, but yes there should be more energy to make these less boring)

Description

Master's Programme in Computer Science is responsible for the course.

Modules the course belongs to: CSM100000 / Core courses package and Discrete Algorithms (elective) CSM22100

The course is available to students from other degree programmes.

Prerequisites: the Data Structures and Algorithms course or equivalent knowledge.

Recommended optional studies: courses in the Discrete Algorithms study module.

Is familiar with the most important principles of algorithm design. Solves simple recurrences. Can derive running time recurrence given a recursive algorithm. Can give examples of advanced recursive algorithms. Describes examples of amortized analysis in deriving worst case complexity of algorithms and amortized complexity of a data structure. Reduces related graph problems to maximum flow and describes the principles of how maximum flows are solved. Applies basic dynamic programming paradigm to shortest paths-alike problems and to variations of other problems used as typical examples during the course. Derives dynamic programming solutions given a recurrence as a hint. Derives the complexity of simple combinatorial algorithms using an appropriate analysis method. Describes some basic theorems related to NP-completeness. Can explain the P=NP problem. Knows many examples of NP-hard problems. Forms simple polynomial reductions between problems. Can give examples of approximation algorithms and randomized algorithms. Can explain the difference between best case, average case, expected case, and worst case complexities.

Recommended time/stage of studies for completion:

First Autumn

Term/teaching period when the course will be offered:

Yearly in Autumn / first period

General design principles of algorithms. Examples of central problems and typical solutions. Design and analysis of recursive algorithms. Amortized analysis. Dynamic programming. Flows. Reductions. NP-completeness. Introduction to other core algorithm design concepts.

Course book: T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein: Introduction to Algorithms, 3rd ed., MIT Press, 2009.

Chapters 1, 2, 3, 22, 23, 24, 25 of the course book are recommended as background material before (during) attending the course.

The following is the current plan for Autumn 2017: teaching methods evolve from year to year.

There will be online lecture material covering the main concepts of each week; this material is assumed to be studied before attending the weekly contact teaching (lecture+study group+exercise).

The weekly contact lecture focuses on examples around the week's topic, and derivations and proofs not suitable for online lectures.

The weekly study groups are mostly online problem solving in small groups; the students teach each other the solutions found.

The weekly exercises work as self-evaluation and test the problem solving and analysis skills acquired. In the first weeks, one can replace problem solving and analysis questions by implementing the studied algorithms; this is aimed at letting students with insufficient background in the required mathematical skills catch up.

The following is the current plan for Autumn 2017: assessment practices evolve from year to year.

The grading scale is 1...5.

The course exam has one question that tests general knowledge on the course topics. Points gathered from the exercises can replace this question.

One should obtain 30 points to pass the course with grade 1. Grade 5 is obtained with 50 points out of the maximum 60. A linear scale is applied for other grades.

Deviations from the scheme are possible depending on the difficulty of the exam.

There will be lectures, online lectures, study groups, and exercises, or some subcombination of these and possibly other forms of teaching.

Some activity during the course (e.g. in the form of gathering enough exercise points) is required to attend the course exam.

A separate exam can be taken with independent study.