Parallel processing

Highlights - Making threads - Waiting for threads

Terminology
- CPU = the area of the computer that does the thinking
- Core = processor = one thinking unit (a CPU contains multiple cores)
- Program = code = instructions on what to do
- Thread = parallel process = an independent part of the program/code (program = string, thread = one strand of that string)

Review: CPUs In the 2000s, computing took a major turn: multi-core processors (CPUs)

Review: CPUs The major reason for the shift to multi-core is heat/energy density

Review: CPUs This trend will almost surely not reverse. There will be major new advances in computing eventually (quantum computing?), but cloud computing, in which programs run across multiple computers, is not going anywhere anytime soon

Parallel: how So far our computer programs have run through code one line at a time. To get multiple parts running at the same time, you must create a new thread and give it a function to start running. You need #include <thread>; constructing a std::thread with a function starts another thread at foo (a minimal sketch is below)
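
The slide's code is not reproduced in this transcription; the following is a minimal sketch of what it likely showed (the function name foo is from the slide, everything else is assumed):

    #include <thread>
    #include <iostream>

    // foo is the function the new thread starts running
    void foo() {
        std::cout << "hello from another thread\n";
    }

    int main() {
        std::thread t(foo);  // starts another thread at foo
        t.join();            // wait for it to finish (join is covered later)
        return 0;
    }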

Parallel: how If the function wants arguments, just add them after the function in the thread constructor. This will start the function say with "hello" as its first input (see: createthreads.cpp and the sketch below)
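
createthreads.cpp is not included in the transcription; here is a minimal sketch matching the slide's description (the function name say comes from the slide, its signature is an assumption):

    #include <thread>
    #include <iostream>
    #include <string>

    // say's exact parameters are assumed; the slide only shows it getting "hello"
    void say(std::string word) {
        std::cout << word << "\n";
    }

    int main() {
        std::thread t(say, "hello");  // extra constructor arguments are passed to say
        t.join();
        return 0;
    }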

Parallel: basics The major drawback of distributed computing (within a single computer or between computers) is resource synchronization (i.e. sharing information). This causes two large classes of problems: 1. Conflicts when multiple threads want to use the same resource 2. Logic errors due to parts of the program having different information

1. Resource conflict Siblings anyone?

1. Resource conflict Public bathroom? All your programs so far have had one restroom and one line; some parts of your program could be sped up by making two lines (as long as that causes no issues)

1. Resource conflict We will actually learn how to cause minor resource conflicts on purpose to prevent logic errors. This is similar to the cost of calling your forgetful relative to remind them of something: it only needs to be done for the important matters that involve both of you (e.g. when the family get-together is happening)

2. Different information If you and another person try to do something together without coordinating... disaster

2. Different information Each part of the computer has its own local set of information, much like separate people. Suppose we handed out tally counters and told two people to count the number of people in the room; splitting that work is hard without coordination

2. Different information However, two people could easily tally the number of people entering this room: simply stand one by each door and add their counts. Our goal is to design programs with separate parts like these that can run simultaneously (and that avoid sharing data as much as possible)

Parallel: how However, main() will keep moving on without any regard to what these threads are doing. If you want to synchronize with them at some later point, you can call the join() function. This tells the code to wait here until the thread is done (i.e. has returned from its function)

Parallel: how Consider this: start.join() stops main until the peek() function returns (see: waitforthreads.cpp and the sketch below)
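
waitforthreads.cpp is also not reproduced here; a minimal sketch consistent with the slide (the names start and peek are from the slide, the body of peek is an assumption):

    #include <thread>
    #include <iostream>
    #include <chrono>

    // stand-in body: pretend peek takes a while
    void peek() {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "done peeking\n";
    }

    int main() {
        std::thread start(peek);               // peek() runs in parallel with main
        std::cout << "main keeps going...\n";  // main does not wait yet
        start.join();                          // main stops here until peek() returns
        return 0;
    }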

Parallel: advanced None of these fix our counting issue (this is, in fact, not something we want to parallelize). I only have 4 cores in my computer, so if I have more than 3 extra threads (my normal program counts as one), they fight over thinking time. Each thread speeds along, and my operating system decides which thread gets a turn and when (semi-randomly)

Parallel: advanced We can force threads not to fall all over each other by using a mutex (short for mutual exclusion). Mutexes have two functions: 1. lock 2. unlock. After one thread locks the mutex, no other thread can get past its own lock() call until the mutex is unlocked (a minimal sketch is below)
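
A minimal sketch of the lock/unlock pattern (not taken from the slides; the names m and shout are made up):

    #include <thread>
    #include <mutex>
    #include <iostream>

    std::mutex m;  // shared by all threads

    void shout(int id) {
        m.lock();    // only one thread at a time can be past this line
        std::cout << "thread " << id << " has the mutex\n";
        m.unlock();  // now another thread's lock() can succeed
    }

    int main() {
        std::thread a(shout, 1), b(shout, 2);
        a.join();
        b.join();
        return 0;
    }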

Parallel: advanced (diagram: threads blocking at Lock and resuming after Unlock)

Parallel: advanced These mutex locks are needed if we are trying to share memory between threads. Without them, there can be miscommunication about the values of the data if one thread is trying to change a value while another is reading it. A very simple example of this is having multiple threads run x++ on the same x (see: sharingbetweenthreads.cpp and the sketch below)
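
sharingbetweenthreads.cpp is not included either; this is a minimal sketch of the x++ problem and the mutex fix (the function name addALot and the loop count are made up):

    #include <thread>
    #include <mutex>
    #include <iostream>

    int x = 0;       // shared between threads
    std::mutex m;

    void addALot() {
        for (int i = 0; i < 100000; ++i) {
            m.lock();
            x++;         // without the lock, two threads can read the same old x
            m.unlock();  //   and one of the increments is lost
        }
    }

    int main() {
        std::thread a(addALot), b(addALot);
        a.join();
        b.join();
        std::cout << x << "\n";  // reliably 200000 with the mutex, often less without
        return 0;
    }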

Parallel: advanced You have to be careful when locking a mutex: if that thread crashes or you forget to unlock, every other thread waiting on the mutex is stuck forever. There are ways around this: - Timed locks - Atomic operations instead of a mutex. The important part is deciding what parts can be parallelized and writing code to achieve this (a sketch of both alternatives is below)
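
A minimal sketch of the two alternatives named on the slide, a timed lock and an atomic counter (the names tm, counter, and careful are made up):

    #include <thread>
    #include <mutex>
    #include <atomic>
    #include <chrono>

    std::timed_mutex tm;
    std::atomic<int> counter{0};

    void careful() {
        // Timed lock: give up instead of waiting forever if the mutex never unlocks
        if (tm.try_lock_for(std::chrono::milliseconds(100))) {
            // ... touch shared data ...
            tm.unlock();
        }
        // Atomic operation: the increment itself is safe, no mutex needed
        counter++;
    }

    int main() {
        std::thread a(careful), b(careful);
        a.join();
        b.join();
        return 0;
    }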