Concurrency: what, why, how
Slide 1 of 35
Concurrency: what, why, how
Oleg Batrashev, February 10, 2014
Slide 2: What is this course about?
- additional experience with concurrency
- try out some language concepts and techniques: java.util.concurrent.Future, agents, software transactional memory (STM)
Grading:
- ~5 lab assignments: 20% (1 Java + 4 Clojure)
- 2 home assignments: 15% (2 Clojure): declarative concurrency/agents, STM/distributed programming
- exam: 50% (5-10 questions/exercises, 1-2 hours)
Languages: Clojure, Oz, Haskell, Erlang, Scala, Go
Slide 3: Lecture about everything and nothing in particular
- explain the basic idea: (pseudo)parallelism vs. concurrency
- list reasons for using concurrency
- present briefly different classifications: approaches, models, and languages
Slide 4: Outline
- Basic idea
- Dependency
- Some terms (1)
- Some terms (2)
- Some terms (3)
Slide 5: Basic idea
Concurrency is, intuitively, the simultaneous execution of:
- instructions (with CPU pipelines)
- actions (functions within a program)
- programs (a distributed application)
But what does "simultaneous" mean?
- physically at the same time?
- nearly at the same time? two threads on a single single-core CPU?
Slide 6: Dependency
If two actions
- do not need each other's results (they are independent), and
- do not interfere otherwise, e.g. do not write to the same variable/file,
then the order of their execution does not matter:
  a = c+d
  b = c+e
may be run in parallel (concurrently). Excessive optimism about action dependency/interference leads to trouble: are a and e aliased? Language semantics do matter!
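As a concrete sketch of the slide's point, the two independent actions a = c+d and b = c+e can be submitted to separate threads. This Java example (our own, using java.util.concurrent) produces the same result no matter which action happens to run first:

```java
// Sketch of the slide's example: a = c+d and b = c+e are independent
// (both only read c, d, e and each writes its own result), so they
// may run in either order or in parallel with the same outcome.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IndependentActions {
    public static void main(String[] args) throws Exception {
        final int c = 1, d = 2, e = 3;
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each action is a separate task; neither needs the other's result.
        Future<Integer> a = pool.submit(() -> c + d);
        Future<Integer> b = pool.submit(() -> c + e);
        System.out.println("a = " + a.get() + ", b = " + b.get());
        pool.shutdown();
    }
}
```

Note that this only works because a and e are distinct variables; if they were aliases of the same location, the two actions would interfere.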
Slide 7: Some terms (1)
Not a rule, just to extend the understanding of the terms.
- Parallel: execute simultaneously
- Concurrent: the order of execution does not matter
From Wikipedia:
- "Parallel computing is a form of computation in which many calculations are carried out simultaneously."
- "Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel."
Concurrency is sometimes referred to as pseudoparallelism.
Slide 8: Some terms (2)
The Haskell community's variants:
- Parallel: deterministic data crunching; simultaneous execution of tasks of the same type
- Concurrent: non-deterministic execution of unrelated communicating processes
From Chapter 24, "Concurrent and multicore programming" (Real World Haskell): "A concurrent program needs to perform several possibly unrelated tasks at the same time. In contrast, a parallel program solves a single problem."
Slide 9: Some terms (3)
From Programming Clojure, 5.1 "State, Concurrency, Parallelism, and Locking": "A concurrent program models more than one thing happening simultaneously. A parallel program takes an operation that could be sequential and chooses to break it into separate pieces that can execute concurrently to speed overall execution."
Slide 10: Outline
- List of reasons
- Some real-life analogies
- Faster programs
- Hiding latency
- Better structure
Slide 11: List of reasons
- Faster programs: running on several cores/CPUs/computers
- More responsive programs: GUI interfaces, hiding disk/network latency
- Programs with natural concurrency: distributed programs (client-server, etc.)
- Fault-tolerant programs: using redundancy
- Better-structured programs
Slide 12: Some real-life analogies
- Speed of a process: with 1 axe, one friend can chop wood and the other collect it (a conveyor); with 2 axes, both friends can chop wood in parallel
- Hiding latency: when we turn on a kettle we do not wait until it boils; e.g. we go and take the cups out of the cupboard, then return to the kettle
- Better structure: doing the ironing and the cooking concurrently is messy; assign them to different people
Slide 13: Faster programs
- calculate the elements of an array in parallel
- perform calculations on several processors/nodes
- serve YouTube videos from multiple servers
The end of Moore's law:
- "The number of transistors that can be placed inexpensively on an integrated circuit has increased exponentially, doubling approximately every two years"
- every new laptop comes with (at least) dual-core technology; a sequential program is usually stuck at 50% CPU usage
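The first bullet can be sketched in Java (the course's lab language). This is our own example, not from the slides: parallelSetAll may split the independent per-element computations across the available cores; the array size and formula are arbitrary choices.

```java
// Data-parallel array computation: each element a[i] = i*i is
// independent of the others, so the runtime is free to compute
// different index ranges on different cores.
import java.util.Arrays;

public class ParallelArray {
    public static void main(String[] args) {
        int[] a = new int[1_000_000];
        Arrays.parallelSetAll(a, i -> i * i);
        System.out.println(a[1000]); // 1000 * 1000
    }
}
```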
Slide 14: Hiding latency
Disk/network operations take time. If a thread blocks while waiting for an HTTP response, no other action can happen: no other network request, not even UI events. Either:
- work asynchronously: leads to sliced programs and "pyramids of doom"
- use a dedicated thread: asks for synchronization trouble, leads to race conditions
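A minimal Java sketch of the dedicated-thread approach (our own example; the "network call" is simulated with a sleep): the slow operation runs on a worker thread, so the caller stays free to do other work and blocks only when it actually needs the result.

```java
// Hiding latency: run a slow "fetch" asynchronously so the main
// thread is not blocked while the operation is in flight.
import java.util.concurrent.CompletableFuture;

public class HideLatency {
    static String slowFetch() {
        try { Thread.sleep(200); } catch (InterruptedException e) { }
        return "response";
    }

    public static void main(String[] args) {
        CompletableFuture<String> reply =
            CompletableFuture.supplyAsync(HideLatency::slowFetch);
        // Not blocked: the fetch runs on a pool thread in the background.
        System.out.println("doing other work");
        // Block only at the point where the result is really needed.
        System.out.println("got " + reply.join());
    }
}
```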
Slide 15: Better structure
Assign different threads to unrelated tasks (if reasonable). A data-sharing server may be split:
- vertically: one thread per request; typical web processing (excl. database access)
- horizontally (conveyor): dedicated thread(s) for reading requests, dedicated thread(s) for searching data, a new thread for sending data
Mixing the tasks of all threads into one thread (asynchronous behavior) is a structural nightmare.
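The horizontal (conveyor) structure can be sketched with two Java threads connected by a blocking queue; this is our own illustration, and the request strings and stage division are invented.

```java
// Conveyor structure: a "reader" stage hands requests to a "worker"
// stage through a queue, so each thread has one clearly defined job.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Conveyor {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        Thread reader = new Thread(() -> {
            for (String req : new String[]{"req1", "req2"}) {
                try { queue.put(req); } catch (InterruptedException e) { return; }
            }
        });
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 2; i++) {
                try { System.out.println("handled " + queue.take()); }
                catch (InterruptedException e) { return; }
            }
        });
        reader.start(); worker.start();
        reader.join(); worker.join();
    }
}
```

With one worker and a FIFO queue the requests are handled in arrival order; adding more worker threads would trade that ordering for throughput.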
Slide 16: Outline
- Task and data parallelism
- Coarse- and fine-grained
- High and low level
- Explicitness (1), (2)
- Formalizations
- By application areas
- By computation model
Slide 17: Task and data parallelism
- Task parallelism: different operations concurrently
  - calculate g and h in f(g(x), h(y)) concurrently
  - threads in the same program
  - several programs running on the same computer
- Data parallelism: the same operation on different data (SIMD)
  - loop operations: forall i=1..n do a[i]=a[i]+1
  - vectorised operations: MMX, SSE, etc.
A program may benefit from both!
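The f(g(x), h(y)) example above can be sketched with Java futures; the bodies of f, g, and h here are stand-ins of our own invention, since only the structure matters.

```java
// Task parallelism: g(x) and h(y) are independent tasks, so they may
// overlap in time; f only runs once both results are available.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskParallel {
    static int g(int x) { return x + 1; }
    static int h(int y) { return y * 2; }
    static int f(int a, int b) { return a + b; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> gx = pool.submit(() -> g(10));
        Future<Integer> hy = pool.submit(() -> h(10));
        // get() waits for each subtask, then f combines the results.
        System.out.println(f(gx.get(), hy.get()));
        pool.shutdown();
    }
}
```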
Slide 18: Coarse- and fine-grained
The ratio of computation to communication:
- coarse-grained parallel programs compute most of the time; e.g. distribute data, calculate, collect the results (Google MapReduce)
- fine-grained parallel programs communicate frequently; lots of dependencies between distributed data
- medium-grained, e.g. DOUG: lots of computation interleaved with lots of communication
Slide 19: High and low level
Different granularity (unit of parallelism):
- instruction level: conveyors and pipelines in the CPU; MMX
- expression level: run an expression in a separate thread
- function level
- process level
Source of confusion: this is sometimes referred to as fine/coarse-grained. A question of terminology? However, it is possible to have two processes that communicate very frequently.
Slide 20: Explicitness (1)
From "Models and Languages for Parallel Computation", David B. Skillicorn and Domenico Talia, 1998. What a model may make explicit:
- parallelism explicit (hints for possible parallelism): loops: forall i in 1..N do a[i]=i; Fortran 90 matrix sum: C=A+B
- decomposition explicit (specify the parallel pieces)
- mapping explicit (map pieces to processors)
- communication explicit (specify sends/recvs)
- synchronization explicit (handle the details of message-passing)
Slide 21: Explicitness (2)
Possibilities:
1. nothing explicit (OBJ, P3L)
2. parallelism explicit, decomposition implicit: loops - Fortran variants, Id, APL, NESL
3. decomposition explicit, mapping implicit (BSP, LogP)
4. mapping explicit, communication implicit (Linda)
5. communication explicit, synchronization implicit: Actors, Smalltalk
6. everything explicit: PVM, MPI, fork
Slide 22: Formalizations
How to describe (concurrent) computations?
- operational semantics: describe operations in a Virtual Machine (VM); the Oz way; reasoning for a programmer
- denotational semantics: describe algebraic rules; concurrent lambda calculus, pi-calculus, CSP, Petri nets, DDA (Data Dependency Algebra); reasoning for a mathematician
- axiomatic semantics: describe logical rules; TLA (Temporal Logic of Actions); reasoning for a machine (a prover)
Slide 23: By application areas
- Scientific computing: High-Performance Computing (HPC), High-Throughput Computing (HTC)
- Distributed applications: clients, servers, P2P, telephone stations (the Erlang programming language)
- Desktop applications: responsive user interfaces, utilizing multiple cores
Slide 24: By computation model
What style of concurrency is supported?
- Declarative concurrent model: (pure) functional, logical
- Message-passing model: synchronous, asynchronous, RPC; active objects, passive objects
- Shared-state (shared memory) model: locks, transactions
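A small Java sketch of the shared-state model with locks (our own example): two threads increment a shared counter, and the lock around the read-modify-write makes the final count deterministic, where an unsynchronized counter could lose updates.

```java
// Shared-state concurrency with a lock: synchronized makes the
// increment (read, add, write) atomic, so no updates are lost.
public class LockedCounter {
    private int count = 0;
    public synchronized void inc() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws Exception {
        LockedCounter c = new LockedCounter();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.inc(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 200000 with the lock
    }
}
```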
Slide 25: Outline
- Why language?
- Oz
- Erlang
- Scala
- Clojure
- High-Performance Fortran
- NESL
- Concurrent and Parallel Haskell
- Intel TBB
- ArBB
Slide 26: Why language?
Why not just a library?
- cleaner syntax
- safer semantics
- forces usage patterns
- control over the compilation process
In the 1980s there were hundreds of programming languages for concurrent programming; now there are thousands. The following slides describe some of these languages.
Slide 27: Oz
- roots in logic programming; dataflow variables (logical variables with suspension)
- multiparadigm (advertises different styles of programming): functional, object-oriented, constraint (logic)
- explicit task parallelism (the thread statement)
- explicit synchronization and communication (through dataflow variables)
- for distributed and desktop applications
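Oz's dataflow variables suspend any reader until the variable is bound. A rough Java analogy (our own sketch; it mimics only the suspend-until-bound behavior, not Oz semantics) uses a CompletableFuture as the unbound variable:

```java
// Dataflow-variable sketch: the consumer blocks on join() until a
// producer "binds" the variable by completing the future.
import java.util.concurrent.CompletableFuture;

public class DataflowSketch {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> x = new CompletableFuture<>();
        Thread consumer = new Thread(() ->
            // Suspends here until x is bound, then computes with it.
            System.out.println(x.join() + 1));
        consumer.start();
        Thread.sleep(100); // the consumer is now blocked waiting on x
        x.complete(41);    // binding the variable wakes the consumer
        consumer.join();
    }
}
```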
Slide 28: Erlang
- an Ericsson project from ~1990 for telecom applications: handle thousands of phone calls; robustness, distribution
Concurrency:
- processes with message-passing (actors)
- focus on fault tolerance
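Erlang processes share nothing and communicate through mailboxes. A crude Java analogy (our own sketch, not Erlang's actual semantics): a thread whose only shared object is a blocking queue acting as its mailbox.

```java
// Actor-style sketch: sends are asynchronous puts into the mailbox;
// the process loops on take(), like an Erlang receive loop.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MailboxSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        Thread process = new Thread(() -> {
            try {
                String msg;
                // Block until a message arrives; "stop" ends the process.
                while (!(msg = mailbox.take()).equals("stop"))
                    System.out.println("received " + msg);
            } catch (InterruptedException e) { }
        });
        process.start();
        mailbox.put("hello"); // asynchronous send
        mailbox.put("stop");
        process.join();
    }
}
```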
Slide 29: Scala
- a hot topic in 2008
- interoperable with Java (runs on the JVM); syntax similar to Java
- a mix of object-oriented and functional; static typing, type inference, type parameters, ...
Concurrency:
- in general quite sophisticated task parallelism
- processes with message-passing (actors)
- not the main focus, thus not mature; the Akka library for actors
Slide 30: Clojure
- a hot topic in 2008
- targets the Java Virtual Machine
- Lisp syntax; functional, macros
Concurrency:
- 4 reference models (atoms, vars, refs, agents)
- task parallelism
- reactive agent system
- software transactional memory
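Clojure's atoms update a value by applying a pure function with an atomic compare-and-swap, retrying on contention. A Java analogy (our own sketch) of (swap! counter inc) uses AtomicInteger's updateAndGet, which is lock-free, unlike a synchronized counter:

```java
// Atom-style sketch: updateAndGet applies the function atomically via
// compare-and-swap, retrying if another thread changed the value first.
import java.util.concurrent.atomic.AtomicInteger;

public class AtomSketch {
    public static void main(String[] args) throws Exception {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++)
                counter.updateAndGet(n -> n + 1); // like (swap! counter inc)
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // no updates lost
    }
}
```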
Slide 31: High-Performance Fortran
- since 1993, an extension of Fortran 90
- Concurrency: data parallelism
- a lot of extensions, but no success
Slide 32: NESL
- since 1995; available only on rare platforms
- a way to handle nested data parallelism: e.g. sparse matrix storage, parallelism in the quicksort algorithm
- Concurrency: nested data parallelism
Slide 33: Concurrent and Parallel Haskell
- Parallel Haskell with par and pseq: deterministic speculative execution
- Concurrent Haskell with forkIO: locks, monitors, etc.; synchronization variables (MVars); STM (software transactional memory) with atomically; and more: mHaskell
- Data Parallel Haskell with parallel arrays: NDP (nested data parallelism)
Slide 34: Intel TBB
- Intel Threading Building Blocks, a recent C++ library
- Concurrency: task parallelism
Slide 35: ArBB
- Intel Array Building Blocks; compiler in beta version
- Concurrency: immutable data (declarative model); (nested) data parallelism
CS 242 Fundamentals Reading: See last slide Syntax and Semantics of Programs Syntax The symbols used to write a program Semantics The actions that occur when a program is executed Programming language
More informationAdministrivia. Minute Essay From 4/11
Administrivia All homeworks graded. If you missed one, I m willing to accept it for partial credit (provided of course that you haven t looked at a sample solution!) through next Wednesday. I will grade
More informationThe Problem with Threads
The Problem with Threads Author Edward A Lee Presented by - Varun Notibala Dept of Computer & Information Sciences University of Delaware Threads Thread : single sequential flow of control Model for concurrent
More informationCS 475: Parallel Programming Introduction
CS 475: Parallel Programming Introduction Wim Bohm, Sanjay Rajopadhye Colorado State University Fall 2014 Course Organization n Let s make a tour of the course website. n Main pages Home, front page. Syllabus.
More informationTensorFlow: A System for Learning-Scale Machine Learning. Google Brain
TensorFlow: A System for Learning-Scale Machine Learning Google Brain The Problem Machine learning is everywhere This is in large part due to: 1. Invention of more sophisticated machine learning models
More informationConcurrency: Past and Present
Concurrency: Past and Present Implications for Java Developers Brian Goetz Senior Staff Engineer, Sun Microsystems brian.goetz@sun.com About the speaker Professional software developer for 20 years > Sr.
More informationCS4961 Parallel Programming. Lecture 4: Data and Task Parallelism 9/3/09. Administrative. Mary Hall September 3, Going over Homework 1
CS4961 Parallel Programming Lecture 4: Data and Task Parallelism Administrative Homework 2 posted, due September 10 before class - Use the handin program on the CADE machines - Use the following command:
More informationMoore s Law. Computer architect goal Software developer assumption
Moore s Law The number of transistors that can be placed inexpensively on an integrated circuit will double approximately every 18 months. Self-fulfilling prophecy Computer architect goal Software developer
More informationCS671 Parallel Programming in the Many-Core Era
CS671 Parallel Programming in the Many-Core Era Lecture 1: Introduction Zheng Zhang Rutgers University CS671 Course Information Instructor information: instructor: zheng zhang website: www.cs.rutgers.edu/~zz124/
More informationStreamBox: Modern Stream Processing on a Multicore Machine
StreamBox: Modern Stream Processing on a Multicore Machine Hongyu Miao and Heejin Park, Purdue ECE; Myeongjae Jeon and Gennady Pekhimenko, Microsoft Research; Kathryn S. McKinley, Google; Felix Xiaozhu
More informationHigh Performance Computing. University questions with solution
High Performance Computing University questions with solution Q1) Explain the basic working principle of VLIW processor. (6 marks) The following points are basic working principle of VLIW processor. The
More informationParallel and High Performance Computing CSE 745
Parallel and High Performance Computing CSE 745 1 Outline Introduction to HPC computing Overview Parallel Computer Memory Architectures Parallel Programming Models Designing Parallel Programs Parallel
More informationParallelism and runtimes
Parallelism and runtimes Advanced Course on Compilers Spring 2015 (III-V): Lecture 7 Vesa Hirvisalo ESG/CSE/Aalto Today Parallel platforms Concurrency Consistency Examples of parallelism Regularity of
More informationWelcome to. Instructor Marc Pomplun CS 470/670. Introduction to Artificial Intelligence 1/26/2016. Spring Selectivity in Complex Scenes
Welcome to CS 470/670 Introduction to Artificial Intelligence Office: Lab: Instructor Marc Pomplun S-3-171 S-3-135 Office Hours: Tuesdays 4:00pm 5:30pm Thursdays 7:00pm 8:30pm Spring 2016 Instructor: Marc
More informationInformal Semantics of Data. semantic specification names (identifiers) attributes binding declarations scope rules visibility
Informal Semantics of Data semantic specification names (identifiers) attributes binding declarations scope rules visibility 1 Ways to Specify Semantics Standards Documents (Language Definition) Language
More informationComputer Science Curricula 2013
Computer Science Curricula 2013 Curriculum Guidelines for Undergraduate Degree Programs in Computer Science December 20, 2013 The Joint Task Force on Computing Curricula Association for Computing Machinery
More informationDesigning for Scalability. Patrick Linskey EJB Team Lead BEA Systems
Designing for Scalability Patrick Linskey EJB Team Lead BEA Systems plinskey@bea.com 1 Patrick Linskey EJB Team Lead at BEA OpenJPA Committer JPA 1, 2 EG Member 2 Agenda Define and discuss scalability
More informationMessage Passing. Frédéric Haziza Summer Department of Computer Systems Uppsala University
Message Passing Frédéric Haziza Department of Computer Systems Uppsala University Summer 2009 MultiProcessor world - Taxonomy SIMD MIMD Message Passing Shared Memory Fine-grained Coarse-grained
More informationAbstraction: Distributed Ledger
Bitcoin 2 Abstraction: Distributed Ledger 3 Implementation: Blockchain this happened this happened this happen hashes & signatures hashes & signatures hashes signatu 4 Implementation: Blockchain this happened
More informationReminder from last time
Concurrent systems Lecture 5: Concurrency without shared data, composite operations and transactions, and serialisability DrRobert N. M. Watson 1 Reminder from last time Liveness properties Deadlock (requirements;
More informationIBM Power Multithreaded Parallelism: Languages and Compilers. Fall Nirav Dave
6.827 Multithreaded Parallelism: Languages and Compilers Fall 2006 Lecturer: TA: Assistant: Arvind Nirav Dave Sally Lee L01-1 IBM Power 5 130nm SOI CMOS with Cu 389mm 2 2GHz 276 million transistors Dual
More informationRevisiting the Past 25 Years: Lessons for the Future. Guri Sohi University of Wisconsin-Madison
Revisiting the Past 25 Years: Lessons for the Future Guri Sohi University of Wisconsin-Madison Outline VLIW OOO Superscalar Enhancing Superscalar And the future 2 Beyond pipelining to ILP Late 1980s to
More informationShared-Memory Programming Models
Shared-Memory Programming Models Parallel Programming Concepts Winter Term 2013 / 2014 Dr. Peter Tröger, M.Sc. Frank Feinbube Cilk C language combined with several new keywords Different approach to OpenMP
More informationDeterministic Concurrency
Candidacy Exam p. 1/35 Deterministic Concurrency Candidacy Exam Nalini Vasudevan Columbia University Motivation Candidacy Exam p. 2/35 Candidacy Exam p. 3/35 Why Parallelism? Past Vs. Future Power wall:
More informationShared state model. April 3, / 29
Shared state April 3, 2012 1 / 29 the s s limitations of explicit state: cells equivalence of the two s programming in limiting interleavings locks, monitors, transactions comparing the 3 s 2 / 29 Message
More informationOverview of the Course
Overview of the Course Critical Facts Welcome to CISC 471 / 672 Compiler Construction Topics in the design of programming language translators, including parsing, semantic analysis, error recovery, code
More informationLecture 9: MIMD Architectures
Lecture 9: MIMD Architectures Introduction and classification Symmetric multiprocessors NUMA architecture Clusters Zebo Peng, IDA, LiTH 1 Introduction A set of general purpose processors is connected together.
More information