Concurrency: State Models & Design Patterns. Q&A Session Week 13


Exercises 12 Discussion

Exercise 12 - Task 1 Answer the following questions: a) What is a Software Architecture? What is its benefit? A Software Architecture defines a system in terms of computational components and the interactions among those components. It breaks the overall complexity down into several smaller blocks that are easier to handle, which improves modularity and ultimately maintainability. b) What are the potential disadvantages when using layered architectures? Layered architectures increase the development overhead in small projects and restrict the freedom of choice in terms of coding style.

Exercise 12 - Task 1 Answer the following questions: c) Provide an example in which the pattern Specialist Parallelism could be a legitimate architectural choice. Justify your answer! Specialist Parallelism makes a lot of sense when different workloads require different computation strategies, as exemplified by web servers (networking, data analysis and management, storage,...). d) The concepts Result Parallelism, Specialist Parallelism and Agenda Parallelism represent three ways of thinking about a problem. Can you tell what each one focuses on? Provide one sentence for each of them. Result Parallelism focuses on the shape of the finished product, Specialist Parallelism on the makeup of the work crew, and Agenda Parallelism on the list of tasks to be performed.

Exercise 12 - Task 1 Answer the following questions: e) What is a Flow Architecture? What are Blackboard Architectures? In Flow Architectures (e.g. Unix pipes), synchronization is ensured by linear processing: information flows in only one direction, from sources through filters to sinks. Blackboard Architectures (e.g. Producer/Consumer) perform all synchronization in a coordination medium where agents can exchange messages. f) Which blackboard style should be preferred when we have multiple processors? Why? Agenda Parallelism should be selected, as it allows you to instantiate as many threads as there are CPU cores. Its tasks are typically independent and its workers identical.

Exercise 12 - Task 1 Answer the following questions: g) What are Unix pipes and how do you use them? Unix pipes (established through the | symbol) are bounded buffers that connect producer and consumer processes. They can be used in shell commands (e.g. ls -f | wc -l) to connect different input/output streams together and easily perform complex data manipulations.
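The same bounded-buffer idea can be sketched inside a single JVM with Java's piped streams; this small demo (the class name PipeDemo and the three-line payload are illustrative) mirrors a `producer | wc -l` pipeline between two threads:

```java
import java.io.*;

public class PipeDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream source = new PipedOutputStream();
        PipedInputStream sink = new PipedInputStream(source); // bounded buffer connecting the two threads

        Thread producer = new Thread(() -> {
            // Producer side: writes three lines, then closes the stream (EOF for the consumer).
            try (PrintStream out = new PrintStream(source)) {
                for (int i = 1; i <= 3; i++) {
                    out.println("line " + i);
                }
            }
        });
        producer.start();

        // Consumer side: counts the lines it receives, like `wc -l`.
        int count = 0;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(sink))) {
            while (in.readLine() != null) {
                count++;
            }
        }
        producer.join();
        System.out.println(count); // prints 3
    }
}
```

As with a shell pipe, the producer blocks when the buffer is full and the consumer blocks when it is empty, so no explicit synchronization code is needed.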

Exercise 12 - Task 2 Now we change roles: it is your turn to send me questions on topics you are still not familiar with for the next practical session. The best questions will be presented in front of you. I will try my best to answer all the questions you submit. For each question you ask you will receive a point (maximum of 3 points, no bonus this time :).

Exercise 12 - Task 2
7 questions: Safety / Liveness / Fairness
8 questions: FSP
6 questions: Synchronization Techniques
5 questions: Petri Nets
2 questions: Conceptual
6 questions: Language Specifics
3 questions: Concurrency Architectures
1 question: Organizational Affairs
In total: 38 questions in 8 categories

Exercise 12 - Task 2 Safety / Liveness / Fairness

Exercise 12 - Task 2 - Q001 What is the difference between safeness and liveness? Safeness is related to corruption (logical error states / crashes). Liveness is related to progress and deadlocks. Both can relate to each other, as deadlocks can be a result of corruption. Safety can be verified by traversing the state graph. Liveness cannot be verified easily (halting problem).

Exercise 12 - Task 2 - Q002 How would you manually check a safety property? - Is an error state reachable? (- Is the error state logically correct?) A = (a -> x -> ERROR).

Exercise 12 - Task 2 - Q003 What is the difference between tracking state and using state-tracking variables? It's just about activeness / passiveness. Tracking state (passive approach): We get notified when external state changes (somewhere). Tracking state variables (active approach): We observe a (local) state variable; if it changes we notify appropriate components ourselves.

Exercise 12 - Task 2 - Q004 How would you manually detect a waits-for cycle and manually check a progress property? Waits-for cycle: construct the waits-for graph and apply a cycle-detection algorithm to it. Progress property: ping all processes and use response timeouts to determine whether progress happened within a time slot.

Exercise 12 - Task 2 - Q005 What are the dangers in letting the (Java) scheduler choose which writer may enter a critical section? It is biased by the randomness of the system's pseudo-random number generator (PRNG). However, if the implementation is sound, no deadlocks can occur, but starvation could still happen.

Exercise 12 - Task 2 - Q006 What assumptions do nested monitors violate? They especially could violate Liveness assumptions. (deadlock, nested monitor lockout)

Exercise 12 - Task 2 - Q007 #1 Liveness and Guarded Methods, slides 4 and 5: What is the difference between liveness and progress property? Are starvation, dormancy, premature termination and deadlock all livelocks? Progress has stronger constraints than liveness. You can model most liveness properties as progress properties. "Most" means not all: [enter] -> [random] -> [exit] is not possible in progress properties, as you have to define all processes by name. https://stackoverflow.com/questions/3816280/what-is-the-difference-between-a-liveness-and-a-progress-property

Exercise 12 - Task 2 - Q007 #2 Liveness and Guarded Methods, slides 4 and 5: What is the difference between liveness and progress property? Are starvation, dormancy, premature termination and deadlock all livelocks? No, a livelock is a special case of resource starvation: all involved processes constantly change state but achieve no progress. For example, two processes each constantly try to let the other run to achieve some progress. Both end up merely switching the active process without making any progress.

Exercise 12 - Task 2 LTS/FSP

Exercise 12 - Task 2 - Q008 I'm still a little bit confused about the parallel composition operator. How does it work exactly?
Step 1: Find the common actions of the involved processes
Step 2: Start building the graph with the common action(s)
Step 3: Rebuild the before/after action(s) of both processes
Step 4: Repeat step 3 until all potential actions are included
see next slide...

Exercise 12 - Task 2 - Q008

Exercise 12 - Task 2 - Q009 The example below is from the LTSA documentation. [PIC1.png] What I don't understand are the two unlabeled transitions from 0 to 2 and from 2 to 3. Otherwise the example would make perfect sense to me (since we always need an a and a b action before we can execute the x action). see next slide...

Exercise 12 - Task 2 - Q009 Flawed diagram. Cannot be reproduced.

Exercise 12 - Task 2 - Q010 To what extent are we expected to write down FSPs in the exam? Can you provide some more sample questions to practice on? Does the syntax have to be 100% correct in the exam? You should be able to understand and adapt the FSPs seen in the exercises, as some FSPs can easily be adapted to other environments/parameters. You should have a look at such details. The syntax has to be 100% correct to receive all the points (though I may introduce a single "all-inclusive" deduction covering whatever syntactical errors exist).

Exercise 12 - Task 2 - Q011 Can you explain the calculation of traces in FSP again? That highly depends on the FSP. You should try to map it onto a different structure (tree, graph, grid,...) and calculate the possibilities based on reliable algorithms. The example based on Race5K can be easily mapped to a grid. X-axis represents tortoise turns, y-axis hare turns. Then all paths from bottom left to top right have to be found.

Exercise 12 - Task 2 - Q012 How can we model the general Blackboard Architecture case using FSP? This is just a high-level conceptual implementation, with no concurrency support!
WORKER_A = (geta -> thinka -> puta -> WORKER_A).
WORKER_B = (getb -> thinkb -> putb -> WORKER_B).
Composition:
BLACKBOARD = (geta -> puta -> BLACKBOARD | getb -> putb -> BLACKBOARD).

Exercise 12 - Task 2 - Q013 Consider the following FSP, how do you guarantee fairness?
COIN = (toss -> heads -> COIN | toss -> tails -> COIN).
Fairness is a high-level concept that has to be implemented in FSP yourself:
HEADS = (tossh -> heads -> HEADS).
TAILS = (tosst -> tails -> TAILS).
FAIRNESS = (tossh -> heads -> tosst -> tails -> FAIRNESS).
You can verify it with these statements:
progress HEADS = {heads}
progress TAILS = {tails}

Exercise 12 - Task 2 - Q014 Series 04, Ex 3: I do not see why the value 8 (9?) for i is not allowed. LIFT[8] will lead to LIFT[9], and this is an action within the property, so it should be ok... Of course I have checked in LTSA and seen that it is not the case, but I am just confused as to why not. You specify the bounds of i with LIFT[i:0..8], so i cannot take the value 9.

Exercise 12 - Task 2 - Q015 Series 07 (05?), Ex 4: What does the 7 in the last line ( GET_OUT = MAZE(7) ) mean? I have changed the number, but the LTSA output stays the same... When you compose the MAZE, 7 is the start state for the composition.

Exercise 12 - Task 2 Synchronization Techniques

Exercise 12 - Task 2 - Q016 How would we implement fair semaphores? With a FIFO queue into which threads are put according to their order of arrival and taken out the same way? Exactly! That's one valid solution.
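For reference, java.util.concurrent.Semaphore already offers exactly this FIFO behaviour through its fairness constructor flag; a minimal sketch (the class name is illustrative):

```java
import java.util.concurrent.Semaphore;

public class FairSemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        // The second constructor argument enables fairness: blocked threads
        // acquire permits in FIFO order of arrival, which prevents starvation.
        Semaphore fair = new Semaphore(1, true);

        fair.acquire();
        System.out.println(fair.availablePermits()); // prints 0
        fair.release();
        System.out.println(fair.isFair());           // prints true
    }
}
```

The fair mode has some throughput cost, which is why the default (unfair) mode exists at all.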

Exercise 12 - Task 2 - Q017 Besides its disadvantages (it causes starvation), is there any advantage of busy-waiting (other than being easy to implement)? No. However, in some programming languages you may have no other option.

Exercise 12 - Task 2 - Q018 Is there any case in which using notifyAll() should be avoided? Not in general. Performance could be a reason for avoiding notifyAll(), but that is only important in REALLY large-scale systems.

Exercise 12 - Task 2 - Q019 Could you explain the "implement the monitor using semaphores" (and vice versa) example again? I think a lot of students (me included) had some difficulties when we first discussed the solution, and I'm still not 100% sure I'd do it correctly. It'd be nice to look back at it now that we have some more skills :) see next slide

Exercise 12 - Task 2 - Q019 Semaphore with Monitor

public class Semaphore {
    private int s = 1;

    // P(s): delay until s > 0
    public synchronized void enter() {
        while (s <= 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        s = s - 1;
    }

    // V(s): executes s := s + 1
    public synchronized void leave() {
        s = s + 1;
        notifyAll();
    }
}

Exercise 12 - Task 2 - Q019 Monitor with Semaphores

public class Monitor {
    Lock lock = new BinarySemaphore(0);   // can also be an ordinary semaphore
    Lock lock2 = new BinarySemaphore(1);  // can also be an ordinary semaphore

    public void wait() {
        lock.acquire(1);
    }

    public void notify() {
        lock2.acquire(1);                 // MUTEX: only one lock has to be
        if (lock.hasQueuedThreads()) {    // released per condition evaluation
            lock.release(1);
        }
        lock2.release(1);
    }
}

Exercise 12 - Task 2 - Q020 Are binary semaphores as good as counting semaphores? Binary semaphores are less flexible than counting semaphores. Binary semaphores are like on/off switches whereas counting semaphores can maintain various states (e.g. serve up to 10 clients,...).
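To illustrate the "serve up to 10 clients" case, a counting semaphore can bound how many threads are inside a section at once (the class and method names here are illustrative):

```java
import java.util.concurrent.Semaphore;

public class ConnectionLimiter {
    // Counting semaphore: at most 10 clients may hold a permit at once.
    private static final Semaphore permits = new Semaphore(10);

    static void handleClient(int id) throws InterruptedException {
        permits.acquire();
        try {
            // ... serve client `id` ...
        } finally {
            permits.release(); // always return the permit, even on failure
        }
    }

    public static void main(String[] args) throws InterruptedException {
        handleClient(1);
        System.out.println(permits.availablePermits()); // prints 10 (permit returned)
    }
}
```

A binary semaphore is simply the special case with a single permit.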

Exercise 12 - Task 2 - Q021 Is it better when message passing is synchronous or asynchronous? (I think it depends on the circumstance where we use message passing, since both synchronous and asynchronous have their individual pros and cons, but not sure with my answer). It depends on the use case, but since it often includes another transfer medium (network,...) it is in many cases mandatory to be asynchronous.

Exercise 12 - Task 2 Petri Nets

Exercise 12 - Task 2 - Q022 Does "all transitions live" mean that all transitions are live at once, or that each transition can be live at some point in time? Definition of liveness: a transition is live if it can never deadlock. So transitions are either live or not; this does not depend on particular firing events. It is given by the Petri net and its initial marking.

Exercise 12 - Task 2 - Q023 Should we be able to write the regular expression of a defined Petri net? Is this important? Or is it enough for us to know how to define a Petri net? No, the ability to derive regular expressions from a Petri net is not necessary. It is important to know how to define a Petri net, though.

Exercise 12 - Task 2 - Q024 Petri Nets, slide 5: For me the marking is rather a set than a function. Why is it defined as a function from places to Nat? 1) It is not a set because it can contain the same place multiple times (it is a multiset). 2) It could alternatively be represented as a list of places with repetitions, but that is equivalent to a function that maps each place to its token count.

Exercise 12 - Task 2 - Q025 How can a (bad) implementation of a Petri net deadlock even though there are enabled transitions? Definition deadlock: A transition is deadlocked if it can never fire. So a Petri net cannot deadlock as long as there are any enabled transitions, but some individual transitions possibly can.

Exercise 12 - Task 2 - Q026 What constraints could you put on a Petri net to make it fair? There are no easy constraints. You need to model this high-level concept on your own. Similar to the FSP example.

Exercise 12 - Task 2 Conceptual

Exercise 12 - Task 2 - Q027 With the buzz of asynchronous programming, is it still a good idea to assign one thread per client (for example in the context of an HTTP server)? Why? It makes sense for smaller workloads. If the workloads increase, it may make sense to split the work caused by each client across multiple threads to handle it most efficiently. Production-grade web servers exist that use either approach.

Exercise 12 - Task 2 - Q028 For what tasks is using multithreading justified? What number of threads should we choose for a particular task? It is justified for all tasks for which the expected speed gain is worth it. The number of threads really depends on the task and the system. Rule of thumb: to achieve maximum efficiency, establish a 1:1 relationship between threads and physical CPU cores.
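The 1:1 rule of thumb can be sketched with a fixed-size thread pool sized to the available cores (the class name and the toy task are illustrative; note that availableProcessors() reports logical, not physical, cores):

```java
import java.util.concurrent.*;

public class PoolSizing {
    public static void main(String[] args) throws Exception {
        // Size the pool to the number of cores the JVM sees.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Submit a CPU-bound toy task and wait for its result.
        Future<Integer> result = pool.submit(() -> 6 * 7);
        System.out.println(result.get()); // prints 42

        pool.shutdown();
    }
}
```

For I/O-bound workloads, more threads than cores can pay off, since threads spend much of their time blocked.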

Exercise 12 - Task 2 Language Specifics

Exercise 12 - Task 2 - Q029 In Java, are there primitive data types that are inherently thread-safe, for example the int type? If yes, why are there concurrent counterparts like the AtomicInteger data type designed explicitly for use in concurrent applications? This originates from the CPU registers: old CPUs could only write one 32-bit value at a time. Consequently, single reads and writes of all <= 32-bit primitive values are atomic (int, byte, boolean,...), while the larger > 32-bit ones are not (long, double). AtomicInteger exists because compound operations such as an increment (read-modify-write) are not atomic even on an int.
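The difference can be demonstrated: single writes to an int are atomic, but the compound operation counter++ is not, which is exactly what AtomicInteger repairs (the class name and iteration count are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    static int plainCounter = 0; // ++ on this is a non-atomic read-modify-write
    static AtomicInteger atomicCounter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCounter++;                  // may lose updates under contention
                atomicCounter.incrementAndGet(); // atomic, never loses an update
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // plainCounter may end up below 200000; the atomic one never does.
        System.out.println(atomicCounter.get()); // prints 200000
    }
}
```

Run it a few times: the plain counter frequently loses increments, while the atomic counter is always exact.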

Exercise 12 - Task 2 - Q030 In the exercises we were expected to answer some Java-specific questions. I had to google most of the stuff since I don't have much experience in concurrent Java programming. To what extent should we prepare Java concurrency nitty-gritties for the exam? We don't ask any Java nitty-gritties in the exam, but of course you should know the concepts and their application, benefits,...

Exercise 12 - Task 2 - Q031 Why is the Java Slot class so much more complex than the FSP Slot specification? You just see the benefit of highly specialized languages. They can achieve very compact code, but this code is usually hard to interpret. Another important factor is the feature availability. LTS does not have a fully-fledged type (int, String,...) or visibility (public, private,...) system, but Java does.

Exercise 12 - Task 2 - Q032 What is the use of class invariants and how/why are they typically used in Java? They are used as reference holders for immutable object references or immutable values (configuration parameters).

Exercise 12 - Task 2 - Q033 In the slides CP-03-Safety on page 10 (15?): What does adding VarAlpha to the Inc process actually do, and especially how? It is an alphabet extension. It is sometimes useful to extend the alphabet of a process with actions that it does not engage in, and which are consequently not used in its definition. This may be done to prevent another process from executing the action. (The slide compared two diagrams: with extension / without extension.)

Exercise 12 - Task 2 - Q034 When should you use synchronized(this) rather than synchronized(someObject)? You should always use synchronized(someObject). When you use synchronized(this), other code could steal your monitor through accidental synchronized usages of your object.
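The usual way to follow this advice is a private lock object that no outside code can synchronize on (the class name Counter is illustrative):

```java
public class Counter {
    // Private lock: callers cannot synchronize on it, so no external code
    // can accidentally hold our monitor and block our critical sections.
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) { // instead of synchronized (this)
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.get()); // prints 2
    }
}
```

With synchronized (this), any caller could write synchronized (counterInstance) { ... } and stall every method of the class; the private lock makes that impossible.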

Exercise 12 - Task 2 Concurrency Architectures

Exercise 12 - Task 2 - Q035 In which case we should prefer Flow Architecture over Blackboard Architecture and vice versa? Blackboard introduces more overhead, but is more flexible. In the end it's a tradeoff between modularity, scalability and speed. I would prefer the flow architecture for more constrained local problems (Unix pipe like) and the blackboard architecture for big data preparation and processing.

Exercise 12 - Task 2 - Q036 Can Futures be applied in Agenda or Specialist Parallelism to speed up? Technically yes, but does that make any sense? As long as the result is not yet calculated, the blackboard does not need to be aware of it, because the worker itself will publish the result as soon as it is ready. Even worse, when the worker dies, the ticket would be lost as well, so a Future would not improve anything either. It would just add an additional layer of unnecessary complexity.

Exercise 12 - Task 2 - Q037 In which situations does Specialist Parallelism make sense? In all situations where we can achieve a high benefit from highly specialized workers. Imagine hardware-accelerated GPUs that support only matrix calculations: a specific type of worker would then be responsible only for these calculations, while all other tasks would be processed by other workers that do not rely on GPUs.

Exercise 12 - Task 2 Organizational Affairs

Exercise 12 - Task 2 - Q038 Which materials are important for the exam? Lecture slides, exercises, Git code samples, Petri net examples, and these slides. Exercises about the slides of chapter 13 (actors) or the lab won't appear in the exam.

... still more questions?

Next Time: Exam 60 minutes / CLOSED BOOK 20-Dec-2017, 10:15am until 11:15am
1. Arrive on time
2. Don't forget your student ID
3. Don't forget a blue or black ballpoint pen
4. Not allowed: pencils, internet, notebook, books, any printouts, additional blank pages, pocket calculators, mobiles, smart anything,...

Good Luck!