Info
UPDATE: We have bigger rooms now! Note the changes below.
Wednesday 16:15 - 18:00, LFW C 1 (previously LFW C 11, right next to our previous room)
Friday 10:15 - 12:00, ML J 34.1 (previously IFW C 31, next to Haldenegg)
You can reach me via email or on Discord: @jonas.too
If you want me to go over certain topics in more detail or would like to give me feedback, you can do so here: Feedback
You can find the weekly quizzes on the course Moodle.
Resources
Course webpage
The Art of Multiprocessor Programming: Great book for the second part of the course. See below for an overview of the chapters.
Exam Collection - A collection of old exams and summaries provided by the VIS
PVW Scripts - scripts from the "Prüfungsvorbereitungsworkshop" (exam preparation workshop), provided by the VIS
Java Language Specification - Chapter 17 “Threads and Locks” of the Java Language Specification.
Excellent Summary of PProg.
Week 1 - 19.02.25 & 21.02.25
Introduction to the course. I gave an overview of parallel programming and why it is needed in our world. Finally, we set up our work environments for the semester.
Slides
Parallel Sum Code
Week 2 - 26.02.25 & 28.02.25
We discussed threads - their properties and how to use them in Java. I also explained the life cycle of a thread. We saw a first example of what a bad interleaving is and how to prevent it (a minimal sketch follows below). I explained why we can usually only expect a sublinear speedup in parallel programs. We finished the session with a quiz.
Slides
Bad Interleavings Code
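To illustrate the lost-update problem from this week, here is a minimal sketch (not the linked exercise code; class and field names are made up): two threads increment a shared counter without synchronization, and because counter++ is a read-modify-write, updates can interleave and get lost.

```java
// Minimal sketch (not the course code): a lost-update race on counter++.
public class LostUpdateDemo {
    static int counter = 0; // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write: not atomic, so interleavings can lose updates
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but the result is usually smaller because increments interleave badly.
        System.out.println("counter = " + counter);
    }
}
```

Making the increment synchronized or switching to an AtomicInteger removes the race, which is the kind of fix discussed in the session.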
Week 3 - 05.03.25 & 07.03.25
We reviewed the previous exercise and went over common mistakes. We explored synchronization in Java, learning how to use locks effectively. We also looked at wait and notify, working through a practical example of the producer-consumer problem to understand why they are needed (a small sketch of the pattern follows below). Additionally, we briefly discussed Exercise 3. To finish, we solved some past exam questions.
Slides
Wait Notify Code Example
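The linked example is the reference; the following is just a minimal sketch of the wait/notify pattern for a bounded buffer (the BoundedBuffer name and structure are my own, not the linked code).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal wait/notify sketch of a bounded buffer (illustrative, not the linked example).
public class BoundedBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait(); // buffer full: release the lock and sleep until a consumer notifies
        }
        queue.add(item);
        notifyAll(); // wake consumers waiting for an element
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // buffer empty: wait for a producer
        }
        T item = queue.remove();
        notifyAll(); // wake producers waiting for free space
        return item;
    }
}
```

Note the while loops around wait(): a woken thread must re-check its condition, since notifyAll() wakes every waiter and the JVM is allowed to wake threads spuriously.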
Week 4 - 12.03.25 & 14.03.25
In this session, we reviewed last week’s exercise and revisited key concepts such as synchronization, atomic classes, and locking (two small counter sketches follow below). Additionally, we discussed pipelining, worked through some exam problems, and wrapped up with a Kahoot quiz. I also provided a brief introduction to the new exercise sheet.
Slides
Exercise 3 Example Solution and Various Counter Implementations
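The linked file contains the actual counter implementations; as a rough reference, two common variants look like this (names are illustrative, not taken from the solution).

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two common counter variants (sketch, not the linked solution):
// one guarded by the intrinsic lock, one using an atomic class.
public class Counters {
    static class SynchronizedCounter {
        private int value = 0;
        public synchronized void increment() { value++; } // mutual exclusion via the object's monitor
        public synchronized int get() { return value; }
    }

    static class AtomicCounter {
        private final AtomicInteger value = new AtomicInteger(0);
        public void increment() { value.incrementAndGet(); } // lock-free, uses CAS under the hood
        public int get() { return value.get(); }
    }
}
```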
Week 5 - 19.03.25 & 21.03.25
In this session, I began with a recap of last week’s pipelining exercise. We then discussed divide-and-conquer approaches to parallel algorithms and explored how to use the Fork/Join framework effectively (a rough sketch of the Fibonacci task follows below). With this knowledge, we are now equipped to solve the first part of the exam. I also provided a brief overview of all the exam tasks and gave a short preview of the new assignment.
Slides
Fibonacci with ExecutorService and ForkJoin Code Examples
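The linked code is the reference for this week; a Fork/Join version of Fibonacci looks roughly like the following sketch (the cutoff value and class name are just illustrative).

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Rough Fork/Join sketch of the Fibonacci example (the linked code is the reference).
public class FibTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 20; // below this, compute sequentially
    private final int n;

    public FibTask(int n) { this.n = n; }

    @Override
    protected Long compute() {
        if (n <= CUTOFF) {
            return fibSequential(n);
        }
        FibTask left = new FibTask(n - 1);
        FibTask right = new FibTask(n - 2);
        left.fork();                        // run the left half asynchronously
        long rightResult = right.compute(); // compute the right half in this thread
        return left.join() + rightResult;   // wait for the forked task and combine
    }

    private static long fibSequential(int n) {
        return n <= 1 ? n : fibSequential(n - 1) + fibSequential(n - 2);
    }

    public static void main(String[] args) {
        long result = ForkJoinPool.commonPool().invoke(new FibTask(35));
        System.out.println("fib(35) = " + result);
    }
}
```

The pattern to remember: fork one half, compute the other half in the current thread, then join, rather than forking both tasks and idling while waiting.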
Week 6 - 26.03.25 & 28.03.25
In this session, we began with a brief recap of Amdahl’s Law, Gustafson’s Law, and task graphs, discussing their implications for parallel performance. I then demonstrated how to solve last week’s exercise using the Fork/Join framework.
Next, we explored ReentrantLock, highlighting common pitfalls such as deadlocks caused by incorrect lock ordering, using the bank account example (a sketch of the lock-ordering fix follows below). We examined best practices for using locks and discussed key properties such as mutual exclusion, liveness, and fairness. We then covered race conditions, distinguishing between high-level and low-level race conditions. Finally, I explained parallel patterns like packs and scans, including how prefix sum computations work.
Slides
Synchronized Banksystem Code Example
Example Solution to Assignment 5 (no master solution!)
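The linked bank system is the reference; the sketch below only illustrates the lock-ordering idea from the deadlock discussion: always acquire the two account locks in a globally consistent order (here by account id), so two opposite transfers cannot wait on each other forever. Class and field names are made up.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of deadlock-free transfers via a global lock order (not the linked bank system).
public class Account {
    private final int id;           // used to impose a global lock order
    private final ReentrantLock lock = new ReentrantLock();
    private long balance;

    public Account(int id, long initialBalance) {
        this.id = id;
        this.balance = initialBalance;
    }

    public static void transfer(Account from, Account to, long amount) {
        // Always lock the account with the smaller id first,
        // so two concurrent opposite transfers cannot deadlock.
        Account first = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
```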
Week 7 - 02.04.25 & 04.04.25
In this session, we reviewed all topics covered in the first part of the lecture in preparation for the second part. I also introduced the volatile keyword, Dekker’s lock, Peterson’s lock, and lock implementations using atomics (a minimal Peterson lock sketch follows below). Additionally, we went through some exam questions and wrapped up the session with a Kahoot quiz.
Slides
Dekker's Lock & Peterson's Lock in Java
Kahoot by @aellison
Example Solution to Assignment 6 (no master solution!)
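The linked file is the reference implementation; the following is only a minimal sketch of Peterson's lock for two threads (the class name and the use of AtomicIntegerArray are my own choices, since volatile does not apply to array elements).

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Minimal Peterson lock sketch for exactly two threads (ids 0 and 1).
// The linked code is the reference; this just illustrates the flag/victim idea.
public class PetersonLock {
    private final AtomicIntegerArray flag = new AtomicIntegerArray(2); // 1 = wants to enter
    private volatile int victim; // the thread that yields if both want to enter

    public void lock(int id) {           // id must be 0 or 1
        int other = 1 - id;
        flag.set(id, 1);                 // announce interest
        victim = id;                     // let the other thread go first
        while (flag.get(other) == 1 && victim == id) {
            // busy-wait: the other thread also wants the lock and we are the victim
        }
    }

    public void unlock(int id) {
        flag.set(id, 0);                 // no longer interested
    }
}
```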
Week 8 - 09.04.25 & 11.04.25
We’ve now kicked off the second part of the course, where things get a bit more in-depth. We’re focusing on how to actually build locks, reason about them in a structured way, and start working with concurrent data structures. In today’s class, we talked about what properties a critical section has, took a closer look at the Java Memory Model, and discussed what the volatile keyword does and why it matters (a small visibility sketch follows below). We also explored some classic lock implementations like Dekker’s and Peterson’s locks to see how these ideas come together in practice.
Slides
Livelock Code, Weird Volatile Code, Locks in Java
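As a small illustration of why volatile matters for visibility, here is a sketch (not the linked "Weird Volatile Code"; names are made up): a worker thread spins on a flag that another thread clears.

```java
// Sketch of the visibility problem volatile solves (illustrative only).
// Without volatile, the reader thread may never observe running = false,
// because the JMM allows it to keep using a stale value.
public class VolatileFlagDemo {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {   // with volatile, this read eventually sees the latest write
                iterations++;
            }
            System.out.println("Stopped after " + iterations + " iterations");
        });
        worker.start();
        Thread.sleep(100);
        running = false;        // volatile write: becomes visible to the worker's next read
        worker.join();
    }
}
```

Without volatile, the JIT may hoist the read of running out of the loop and the worker can spin forever; the volatile write/read pair establishes a happens-before edge that guarantees the worker eventually sees false.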
Week 9 - 16.04.25
We explored the happens-before relation in the Java Memory Model and how it ensures thread safety. We discussed atomic operations like CAS and TAS and how they’re used to build locks (a TAS spinlock sketch follows below), noting that TAS locks suffer from bus contention and are inefficient under load. We also covered concurrency algorithms like Dekker’s Lock (and a tricky interleaving issue), the Filter Lock, and the Bakery Lock, all of which are software approaches to mutual exclusion.
Slides
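A TAS spinlock built from an atomic getAndSet is only a few lines; the sketch below (class name is made up) also notes in the comments why it generates the bus traffic mentioned above.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a test-and-set (TAS) spinlock built on an atomic read-modify-write.
public class TASLock {
    private final AtomicBoolean state = new AtomicBoolean(false);

    public void lock() {
        while (state.getAndSet(true)) {
            // spin: every getAndSet is a write-like operation that invalidates the
            // cache line on other cores, even while the lock is still held
        }
    }

    public void unlock() {
        state.set(false);
    }
}
```

A TATAS lock improves on this by first spinning on a plain read of state and only calling getAndSet once the lock looks free.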
Week 10 - 30.04.25 & 02.05.25
Today we began with a recap of bus contention, discussing why TAS (Test-and-Set) and TATAS (Test-and-Test-and-Set) locks perform poorly under high contention due to excessive memory traffic. We then introduced semaphores and explored how they can be used to implement barriers in Java (a minimal semaphore barrier sketch follows below). Finally, we examined monitors and lock conditions, learning how to use them effectively for the producer-consumer problem.
Slides
Barrier Implementation using Semaphores and using synchronized
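The linked implementation is the reference; below is a minimal one-shot semaphore barrier sketch (class and field names are made up) in the spirit of what we built: a mutex semaphore protects an arrival counter, and a gate semaphore with zero permits holds everyone back until the last thread arrives.

```java
import java.util.concurrent.Semaphore;

// Minimal one-shot barrier for n threads built from semaphores
// (a sketch in the spirit of the linked implementation, not a copy of it).
public class SemaphoreBarrier {
    private final int n;
    private int arrived = 0;
    private final Semaphore mutex = new Semaphore(1); // protects the arrived counter
    private final Semaphore gate = new Semaphore(0);  // closed until everyone has arrived

    public SemaphoreBarrier(int n) { this.n = n; }

    public void await() throws InterruptedException {
        mutex.acquire();
        arrived++;
        if (arrived == n) {
            gate.release(n);   // last thread opens the gate for all n threads
        }
        mutex.release();
        gate.acquire();        // block until the gate has been opened
    }
}
```

Because the gate is never closed again, this version is one-shot; making it reusable (a cyclic barrier) needs an extra phase or a second gate.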
The Art of Multiprocessor Programming Overview
Overall, The Art of Multiprocessor Programming is an excellent resource for this part of the course. It’s well-written and covers almost all topics we discuss in the lectures in great depth. Here’s a guide to the most relevant chapters:
Chapter 2 (p. 21)
Covers the mutual exclusion algorithms from the lecture, including formal proofs.
Chapter 7, Sections 7.1–7.4 (p. 141)
Discusses test-and-set (TAS), test-and-test-and-set (TATAS), and exponential backoff locks.
Chapter 8 (p. 177)
Introduces monitors, semaphores, and the readers-writers lock.
Note: the approach differs slightly from the lecture.
Chapter 9 (p. 195)
Provides the foundation for our discussion on lock granularity.
The lecture closely follows this chapter. It’s also a great reference for algorithm implementations.
Chapter 10, Sections 10.5–10.6 (p. 230)
Covers the implementation of a lock-free unbounded queue and introduces the ABA problem, using the queue as a practical example.
Chapter 3, Sections 3.1–3.6 (p. 45)
Explains sequential consistency and linearizability (also quiescent consistency, which is not exam-relevant).
The proofs from Section 3.6.2 onward were not covered in the lecture and are also not relevant for the exam.
Good to know: the book consistently highlights linearization points in code examples in later chapters.
Chapter 5 (excluding Sections 5.1.1 and 5.5–5.7) (p. 99)
Focuses on consensus. Some proofs, like Theorems 5.2.1 and 5.4.1, are beyond the scope of the exam.
Chapter 18 (p. 417)
Introduces transactional memory. While some implementation details go deeper than the lecture, this chapter provides a helpful overview.
Thanks to @aellison for this summary!