Shared memory parallel computing. Intel Threading Building Blocks


1 Shared memory parallel computing: Intel Threading Building Blocks

2 Introduction & history
Threading Building Blocks (TBB) is a cross-platform C++ template library for task-based shared memory parallel programming.
- v1.0 was created in 2006, one year after the Intel Pentium D
- v4.4 is currently the most recent version

3 Introduction & history
Although Intel proprietary, it's worth looking at because:
- it has an open-source licensed version
- it's supported, maintained & documented on a wide range of platforms and operating systems (Windows, *NIX, Android)
- it uses some interesting & important concepts (ideas from Cilk, task-based parallelism, a work-stealing scheduler, ...)
- it actually works pretty well

4 Introduction & history

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

using namespace std;
using namespace tbb;

/**
 * Functor that represents the task being parallelized.
 * Must be copy-constructible, and operator() must not modify
 * the functor, since parallel_for may copy it.
 */
struct Func {
    void operator()(const blocked_range<size_t>& range) const {
        cout << "Hello world!" << endl;
    }
};

int main(int argc, char* argv[]) {
    // Define the task
    Func f;
    // Define the range to work on: the half-open interval [0, 10), i.e. values 0,...,9
    blocked_range<size_t> rng(0, 10);
    // Run the task in parallel; there is an implicit barrier, so parallel_for
    // only returns once all iterations have finished
    parallel_for(rng, f);
    return 0;
}

Note: size_t is used to represent the maximum size of any object (including arrays) in the particular implementation; it is the return type of the sizeof operator.

5 Introduction & history

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    // Define the range to work on
    blocked_range<size_t> rng(0, 10);
    // Run the task in parallel, using a lambda instead of a functor
    parallel_for(rng, [](const blocked_range<size_t>& range){
        cout << "Hello world!" << endl;
    });
    return 0;
}

6 Upcoming topics
From less technical to more technical:
- Common parallel algorithms in TBB
- Synchronisation
- Concurrent containers
- Work-stealing task scheduler
- Memory allocation issues
- Practical remarks

7 Common parallel algorithms: parallel_for

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

using namespace std;
using namespace tbb;

class Work {
public:
    Work(double* data) : m_data(data) {}
    void operator()(const blocked_range<size_t>& r) const {
        for (size_t idx = r.begin(); idx != r.end(); ++idx) {
            // Do some work on data[idx]
            m_data[idx] = 2 * m_data[idx];
        }
    }
private:
    double* m_data;
};

int main(int argc, char* argv[]) {
    // Define the data set to work on
    const int n = 1000;
    double* data = new double[n];
    const int grain_size = 10;
    blocked_range<size_t> data_rng(0, n, grain_size);
    // Do work in parallel
    parallel_for(data_rng, Work(data));
    // Clean up
    delete[] data;
    return 0;
}

To implement the concept of a parallel_for body, the class Work must implement:
    Work::Work(const Work&);
    Work::~Work();
    void Work::operator()(Range& r) const;

8 Common parallel algorithms: parallel_for
(Same code as on the previous slide.)
There's also blocked_range2d, blocked_range3d, or your own range concept.

9 Common parallel algorithms: parallel_reduce (imperative)

#include <iostream>
#include <cstdlib>
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>

using namespace std;
using namespace tbb;

struct Sum {
    Sum(double* data) : m_data(data), m_sum(0.0) {}             // Constructor
    Sum(Sum& that, split) : m_data(that.m_data), m_sum(0.0) {}  // Split constructor
    // Local part of the reduction (NOTE: might be called more than once!)
    void operator()(const blocked_range<size_t>& r) {
        for (size_t idx = r.begin(); idx != r.end(); ++idx) {
            m_sum += m_data[idx];
        }
    }
    void join(const Sum& that) { m_sum += that.m_sum; }  // Join the work of that into this
    double* m_data;  // Pointer to the array to reduce
    double m_sum;    // Local sum
};

int main(int argc, char* argv[]) {
    if (argc != 2) {
        cout << argv[0] << " n" << endl;
        return 1;
    }
    // Define the data set to work on
    const int n = atoi(argv[1]);
    double* data = new double[n];
    for (int idx = 0; idx < n; ++idx) {
        data[idx] = 1 + idx;
    }

    // Run the reduction task in parallel
    Sum sum(data);
    parallel_reduce(blocked_range<size_t>(0, n), sum);

    cout << "sum_{i=1}^{" << n << "}(i) = " << sum.m_sum << endl;

    // Clean up
    delete[] data;
    return 0;
}

To implement the concept of a parallel_reduce body, the class Sum must implement:
    Sum::Sum(Sum& that, split);
    Sum::~Sum();
    void Sum::operator()(Range& r);
    void Sum::join(Sum& that);
split is a dummy argument defined in the library to distinguish the split constructor from a regular copy constructor.

10 Common parallel algorithms: parallel_reduce (functional)
See the website for an example!
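
Roughly, the functional (lambda-based) form of parallel_reduce looks like the sketch below. This is an illustrative example only, not the one from the course website; it recomputes the same sum as the imperative version above.

#include <iostream>
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>

int main() {
    const int n = 1000;
    double* data = new double[n];
    for (int idx = 0; idx < n; ++idx) data[idx] = 1 + idx;

    // Functional form: pass an identity value, a lambda that folds a sub-range
    // into a partial result, and a lambda that joins two partial results.
    double sum = tbb::parallel_reduce(
        tbb::blocked_range<size_t>(0, n),
        0.0,
        [&](const tbb::blocked_range<size_t>& r, double local) -> double {
            for (size_t idx = r.begin(); idx != r.end(); ++idx)
                local += data[idx];
            return local;
        },
        [](double a, double b) { return a + b; });

    std::cout << "sum = " << sum << std::endl;
    delete[] data;
    return 0;
}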

11 Common parallel algorithms: pipeline
A pipeline represents pipelined application of a series of filters to a stream of items.

#include <iostream>
#include <algorithm>
#include <tbb/pipeline.h>
#include <tbb/concurrent_queue.h>

tbb::concurrent_queue<std::string*> bag;
tbb::concurrent_queue<std::string*> shelf;

struct ApplyButter : public tbb::filter {
    ApplyButter() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* sandwich) {
        static_cast<std::string*>(sandwich)->append(" with butter");
        return sandwich;
    }
};

struct AddCheese : public tbb::filter {
    AddCheese() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* sandwich) {
        static_cast<std::string*>(sandwich)->append(" and cheese");
        return sandwich;
    }
};

struct TakeSandwichFromBag : public tbb::filter {
    TakeSandwichFromBag() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* dummy) {
        std::string* sandwich;
        if (bag.try_pop(sandwich)) {
            return sandwich;
        } else {
            return NULL;
        }
    }
};

struct PutSandwichOnShelf : public tbb::filter {
    PutSandwichOnShelf() : tbb::filter(tbb::filter::parallel) {}
    void* operator()(void* sandwich) {
        shelf.push(static_cast<std::string*>(sandwich));
        return NULL;
    }
};

int main(int argc, char* argv[]) {
    for (int idx = 0; idx < 10; ++idx)
        bag.push(new std::string("sandwich"));

    TakeSandwichFromBag tsfb;
    ApplyButter ab;
    AddCheese ac;
    PutSandwichOnShelf psos;

    tbb::pipeline sandwich_machine;
    sandwich_machine.add_filter(tsfb);
    sandwich_machine.add_filter(ab);
    sandwich_machine.add_filter(ac);
    sandwich_machine.add_filter(psos);
    sandwich_machine.run(4);
    sandwich_machine.clear();

    return 0;
}

12 Common parallel algorithms
parallel_do: for parallel iteration over iteration spaces without an a priori known size. Does NOT scale well, but might give a speedup in some cases.
parallel_scan, parallel_sort: guess what? (See the sketch below.)
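
As an illustration (a sketch, not taken from the slides), parallel_sort can be used as a drop-in parallel replacement for std::sort:

#include <cstdlib>
#include <vector>
#include <tbb/parallel_sort.h>

int main() {
    // Fill a vector with pseudo-random values and sort it in parallel.
    std::vector<double> v(1000000);
    for (size_t i = 0; i < v.size(); ++i)
        v[i] = std::rand() / (double)RAND_MAX;
    tbb::parallel_sort(v.begin(), v.end());
    return 0;
}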

13 Synchronisation: mutex

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/mutex.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    mutex m;

    parallel_for(blocked_range<size_t>(0, 10), [&](blocked_range<size_t>& r){
        m.lock();
        cout << "Doing some critical work on [" << r.begin() << ", " << r.end() << "]" << endl;
        m.unlock();
    });

    return 0;
}

14 Synchronisation: mutex::scoped_lock

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/mutex.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    mutex m;

    parallel_for(blocked_range<size_t>(0, 10), [&](blocked_range<size_t>& r){
        // Named object: a temporary would release the lock immediately!
        mutex::scoped_lock my_lock(m);
        cout << "Doing some critical work on [" << r.begin() << ", " << r.end() << "]" << endl;
    });

    return 0;
}

15 Synchronisation: spin_mutex

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/spin_mutex.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    spin_mutex m;

    parallel_for(blocked_range<size_t>(0, 10), [&](blocked_range<size_t>& r){
        spin_mutex::scoped_lock my_lock(m);
        cout << "Doing some critical work on [" << r.begin() << ", " << r.end() << "]" << endl;
    });

    return 0;
}

16 Synchronisation: null_mutex

#include <iostream>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/null_mutex.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    null_mutex m;

    parallel_for(blocked_range<size_t>(0, 10), [&](blocked_range<size_t>& r){
        null_mutex::scoped_lock my_lock(m);
        cout << "Doing some critical work on [" << r.begin() << ", " << r.end() << "]" << endl;
    });

    return 0;
}

Example output: since null_mutex does no locking at all, the output of the different threads is interleaved character by character, e.g. "DDDoDooioiininngngg g s ssosoomommemee ...".

17 Synchronisation: atomic<T>

#include <cstdlib>
#include <iostream>
#include <tbb/atomic.h>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

using namespace std;
using namespace tbb;

int main(int argc, char* argv[]) {
    if (argc != 2) {
        cout << argv[0] << " n" << endl;
        return 1;
    }
    int n = atoi(argv[1]);

    atomic<int> sum_atomic;
    sum_atomic = 0;
    int sum = 0;

    parallel_for(blocked_range<size_t>(0, n), [&](blocked_range<size_t>& r){
        sum_atomic += 1;
    });

    parallel_for(blocked_range<size_t>(0, n), [&](blocked_range<size_t>& r){
        sum += 1;
    });

    cout << "sum = " << sum << endl;
    cout << "sum_atomic = " << sum_atomic << endl;

    return 0;
}

Example result:
$ ./ex_atomic 100
sum = 95
sum_atomic = 100

18 Concurrent containers
Typical STL containers are NOT thread-safe! (i.e. watch out for simultaneous access to a shared container from different threads)
TBB has a few that ARE thread-safe; consider using them! (They are probably much more efficient than mutexing your own containers. See the sketch after this list.)
- concurrent_hash_map
- concurrent_unordered_map
- concurrent_unordered_set
- concurrent_queue
- concurrent_bounded_queue
- concurrent_priority_queue
- concurrent_vector
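
For example, a concurrent_vector can be grown safely from several threads at once. This is a small sketch, not taken from the slides:

#include <iostream>
#include <tbb/concurrent_vector.h>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

int main() {
    tbb::concurrent_vector<size_t> results;

    // push_back may be called from several threads at once without extra
    // locking; only the order of the elements is unspecified.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, 1000),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t idx = r.begin(); idx != r.end(); ++idx)
                results.push_back(idx * idx);
        });

    std::cout << "collected " << results.size() << " results" << std::endl;
    return 0;
}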

19 Work-stealing task scheduler
The algorithms / concepts above provide a high-level logical view of parallelism.
Low-level task creation is also possible.

20 Work-stealing task scheduler

#include <unistd.h>   // for sleep()
#include <ctime>
#include <cstdlib>
#include <iostream>
#include <tbb/task.h>

struct TaskA : public tbb::task {
    tbb::task* execute() {
        std::cout << "  Doing the laundry" << std::endl;
        sleep(2);
        std::cout << "  Laundry is done" << std::endl;
        return 0;
    }
};

struct TaskB : public tbb::task {
    tbb::task* execute() {
        std::cout << "  Doing the dishes" << std::endl;
        sleep(1);
        std::cout << "  Dishes are clean" << std::endl;
        return 0;
    }
};

struct TaskRoot : public tbb::task {
    tbb::task* execute() {
        std::cout << "Doing household tasks" << std::endl;
        TaskA* t_a = new(tbb::task::allocate_child()) TaskA;
        TaskB* t_b = new(tbb::task::allocate_child()) TaskB;
        set_ref_count(3);  // Number of children we spawn (including the wait)
        spawn(*t_a);
        spawn(*t_b);
        wait_for_all();
        std::cout << "All tasks finished; let's go to the pub" << std::endl;
        return 0;
    }
};

int main(int argc, char* argv[]) {
    TaskRoot* t_r = new(tbb::task::allocate_root()) TaskRoot;
    tbb::task::spawn_root_and_wait(*t_r);
    return 0;
}

Example output:
Doing household tasks
  Doing the dishes
  Doing the laundry
  Dishes are clean
  Laundry is done
All tasks finished; let's go to the pub

21 Work-stealing task scheduler
Much easier to work with!

#include <unistd.h>   // for sleep()
#include <ctime>
#include <cstdlib>
#include <iostream>
#include <tbb/task.h>
#include <tbb/task_group.h>

void householdtasks() {
    std::cout << "Doing household tasks" << std::endl;

    // Create a group of child tasks and spawn them
    tbb::task_group g;
    g.run([](){
        std::cout << "  Doing the laundry" << std::endl;
        sleep(2);
        std::cout << "  Laundry is done" << std::endl;
    });
    g.run([](){
        std::cout << "  Doing the dishes" << std::endl;
        sleep(1);
        std::cout << "  Dishes are clean" << std::endl;
    });
    // Wait for the children to finish
    g.wait();

    std::cout << "All tasks finished; let's go to the pub" << std::endl;
}

int main(int argc, char* argv[]) {
    // Spawn the root task and wait
    tbb::task_group g;
    g.run_and_wait(&householdtasks);

    return 0;
}

22 Work-stealing task scheduler
A tree of parent / child tasks is created recursively.
The scheduler tries to evaluate this tree with a balance of depth-first and breadth-first evaluation.
Each thread has a double-ended queue of tasks:
- ... which is used as a stack: new tasks spawned by a thread are pushed onto its stack; when the thread is ready for a task, it pops one from its own stack (hot cache!).
- ... which is used as a queue: when the local stack is empty, the thread steals the oldest task from another thread.
A small recursive example that generates such a task tree follows below.
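
For instance, a recursive computation naturally unfolds into such a task tree. This is a sketch using task_group, not an example from the slides:

#include <iostream>
#include <tbb/task_group.h>

// Each call spawns one child task and continues with the other half itself,
// so the computation unfolds into exactly the kind of task tree described above.
long fib(long n) {
    if (n < 2) return n;
    long x = 0, y = 0;
    tbb::task_group g;
    g.run([&]{ x = fib(n - 1); });  // pushed onto this thread's deque; may be stolen
    y = fib(n - 2);                 // keep working depth-first on this thread
    g.wait();                       // join: wait for the spawned child to finish
    return x + y;
}

int main() {
    std::cout << "fib(30) = " << fib(30) << std::endl;
    return 0;
}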

23 Work-stealing task scheduler
The task scheduler's fundamental strategy is "breadth-first theft and depth-first work".
The breadth-first theft rule raises parallelism sufficiently to keep threads busy.
The depth-first work rule keeps each thread operating efficiently once it has sufficient work to do (hot cache...).

24 Work-stealing task scheduler
This is a pretty scalable strategy:
- There is no global dispatch, i.e. no global task list that would require locking.
- Idle threads steal work from busy threads, i.e. dynamic load balancing.
All high-level algorithms use this scheduler.
However, there are cases where it makes more sense to keep work a little bit more local to the thread... Affinity!

25 Work-stealing task scheduler
A partitioner decides how the iteration range of a parallel_for/reduce is split up over the threads (auto_partitioner is the default). A sketch using an affinity_partitioner follows below.
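
As an illustration (a sketch, not from the slides), an affinity_partitioner can be reused across repeated loops so that iterations tend to return to the threads whose caches already hold their data:

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/partitioner.h>

int main() {
    const size_t n = 1000000;
    double* data = new double[n]();

    // The partitioner object must outlive the loops so it can remember which
    // thread worked on which chunk of the range.
    tbb::affinity_partitioner ap;

    for (int sweep = 0; sweep < 10; ++sweep) {
        tbb::parallel_for(tbb::blocked_range<size_t>(0, n),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t idx = r.begin(); idx != r.end(); ++idx)
                    data[idx] += 1.0;
            },
            ap);  // replay roughly the same chunk-to-thread mapping each sweep
    }

    delete[] data;
    return 0;
}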

26 Common parallel algorithms 2: tbb::flow::graph
- Fully introduced in TBB 4
- Build dataflow algorithms that can be parallelized automatically
- Well-suited for streaming data apps
- Connect nodes with edges
See the website for an example!
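
A minimal flow graph might look like the sketch below; this is an illustrative example only, not the one from the course website:

#include <iostream>
#include <tbb/flow_graph.h>

int main() {
    tbb::flow::graph g;

    // A node that squares its input...
    tbb::flow::function_node<int, int> square(g, tbb::flow::unlimited,
        [](int x) { return x * x; });

    // ...feeding a serial node that prints the result.
    tbb::flow::function_node<int, int> print(g, 1, [](int x) {
        std::cout << "got " << x << std::endl;
        return x;
    });

    // Connect the nodes with an edge and push some messages through.
    tbb::flow::make_edge(square, print);
    for (int i = 0; i < 10; ++i)
        square.try_put(i);

    // Wait until all messages have been processed.
    g.wait_for_all();
    return 0;
}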

27 Common parallel algorithms 2: tbb::flow::graph (figure only)

28 Memory allocation issues
Some issues with regular memory allocation:
- Scalability: many threads / cores can compete for memory allocation at the same time.
- False sharing: cores working on neighbouring data can invalidate each other's caches.
- Memory distance: non-NUMA-aware allocation results in non-optimal placement of data vs. code.

29 Memory allocation issues: scalable_allocator<T>

#include <tbb/scalable_allocator.h>
...
// Allocator object with its own memory pool
tbb::scalable_allocator<double> alloc;
...
// Allocate memory for n doubles
double* data = alloc.allocate(n);
...
// Free memory
alloc.deallocate(data, n);

This template can improve the performance of programs that rapidly allocate and free memory.

30 Memory allocation issues: cache_aligned_allocator<T>

#include <tbb/cache_aligned_allocator.h>
...
// Allocator object with its own memory pool
tbb::cache_aligned_allocator<double> alloc;
...
// Allocate memory for n doubles
double* data = alloc.allocate(n);
...
// Free memory
alloc.deallocate(data, n);

Two objects allocated by cache_aligned_allocator are guaranteed to not have false sharing. In other cases, the use of (thread-)local variables can help.
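
Both allocators also satisfy the standard allocator requirements, so they can be plugged directly into STL containers instead of calling allocate()/deallocate() by hand. A small sketch, not from the slides:

#include <vector>
#include <tbb/scalable_allocator.h>
#include <tbb/cache_aligned_allocator.h>

int main() {
    // Standard containers parameterized with the TBB allocators.
    std::vector<double, tbb::scalable_allocator<double> > a(1000, 0.0);
    std::vector<double, tbb::cache_aligned_allocator<double> > b(1000, 0.0);
    return (a.size() == b.size()) ? 0 : 1;
}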

31 Memory allocation issues: NUMA
As of now, TBB does not offer NUMA-aware memory allocation.
Affinity scheduling might help in some cases, though!
You can also try to play around with the Linux tool numactl and the libnuma library.

32 Practical remarks
Timing:

#include <tbb/tick_count.h>
...
tick_count t_start = tick_count::now();
// Go to restroom...
tick_count t_stop = tick_count::now();
double time_spent_in_restroom = (t_stop - t_start).seconds();

Debugging TBB programs:
- Define -DTBB_USE_DEBUG=1
- Link against the debug versions of the library: libtbb_debug.so & libtbbmalloc_debug.so

33 Practical remarks
Number of threads:

#include <tbb/task_scheduler_init.h>
...
// Force task scheduler initialization with n threads
tbb::task_scheduler_init t_sched(n);
...
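
As a small self-contained variation (a sketch, not from the slides), the thread count could be taken from the command line, falling back to the library default otherwise:

#include <cstdlib>
#include <iostream>
#include <tbb/task_scheduler_init.h>

int main(int argc, char* argv[]) {
    // Take the number of threads from the command line, or fall back to the
    // default (roughly one worker per hardware thread).
    int n = (argc > 1) ? std::atoi(argv[1])
                       : tbb::task_scheduler_init::default_num_threads();
    tbb::task_scheduler_init t_sched(n);
    std::cout << "Running with " << n << " threads" << std::endl;
    return 0;
}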

34 README!
- "Tutorial": a pretty extensive document of 90 pages; an easy read but VERY useful!!!
- "Reference": the real API manual.
- "The Foundations for Scalable Multi-Core Software in Intel TBB"
- Collection of short articles with use-cases for parallel programming challenges; not specifically for TBB.
