Why Structured Parallel Programming Matters. Murray Cole
Why Structured Parallel Programming Matters
Murray Cole
Institute for Computing Systems Architecture, School of Informatics, University of Edinburgh
September 3rd 2004
Edinburgh: Scotland's sunshine capital!
What is Unstructured Parallel Programming?

Simple parallel programming frameworks (POSIX threads, core MPI) are universal. They can be used to describe arbitrarily complex and dynamically determined interactions between activities. Programming is by careful selection and combination of operations drawn from a small, simple set.
It is difficult for programmers, examining such a program statically, to understand the overall pattern involved (if one exists), and to revise it radically. It is very difficult for implementation mechanisms (working statically and/or dynamically) to attempt optimisations which work beyond single instances of the primitives.
Patterns in Parallel Computing

Many (most?) parallel applications don't actually involve arbitrary, dynamic interaction patterns. Sometimes the pattern is entirely pre-determined.
A Pipeline

(animated figure, repeated over several slides, showing items flowing through the successive stages of a pipeline)
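The animation above depicts the pipeline pattern. As a minimal sequential sketch (the stage functions here are invented for illustration), a pipeline is just the composition of stage functions over a stream of items; a real skeleton implementation would place the stages on different processors, so that stage i works on item k while stage i+1 works on item k-1.

```c
#include <stddef.h>

/* Sketch of the pipeline pattern: each stage is a function applied to
   every item of a stream. Stages and item type are invented examples. */
typedef int (*stage_fn)(int);

static int stage_double(int x) { return 2 * x; }   /* hypothetical stage */
static int stage_inc(int x)    { return x + 1; }   /* hypothetical stage */

void pipeline(stage_fn stages[], size_t nstages,
              const int *in, int *out, size_t nitems) {
    for (size_t k = 0; k < nitems; k++) {          /* the stream of items  */
        int v = in[k];
        for (size_t s = 0; s < nstages; s++)       /* pass through stages  */
            v = stages[s](v);
        out[k] = v;
    }
}
```

The point of the structured version is that the stage list makes the whole interaction pattern explicit, rather than leaving it implicit in scattered send/receive calls.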
Patterns in Parallel Computing

Many (most?) parallel applications don't actually involve arbitrary, dynamic interaction patterns. Sometimes the pattern is entirely pre-determined. Sometimes non-determinism is constrained within a wider pattern.
A Task Farm

(animated figure, repeated over several slides, showing a farmer handing tasks to worker processes and collecting results as each worker becomes free)
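The farm's constrained non-determinism is in the order tasks complete, not in the overall pattern. A deterministic sketch of its scheduling logic (the cost model and function name are invented for illustration): tasks of uneven cost are handed out dynamically, each going to whichever worker becomes free first, rather than being pre-assigned.

```c
#include <stddef.h>

/* Simulates self-scheduling in a task farm: each task goes to the
   earliest-free worker. Returns the makespan (finish time of the
   last worker). busy_until[] must hold nworkers entries. */
int farm_makespan(const int *cost, size_t ntasks,
                  int *busy_until, size_t nworkers) {
    for (size_t w = 0; w < nworkers; w++) busy_until[w] = 0;
    for (size_t t = 0; t < ntasks; t++) {
        size_t w = 0;                             /* find earliest-free worker */
        for (size_t i = 1; i < nworkers; i++)
            if (busy_until[i] < busy_until[w]) w = i;
        busy_until[w] += cost[t];                 /* farmer hands it the task  */
    }
    int makespan = 0;
    for (size_t w = 0; w < nworkers; w++)
        if (busy_until[w] > makespan) makespan = busy_until[w];
    return makespan;
}
```

With task costs {5,1,1,1,1,1} on two workers, dynamic assignment finishes at time 5, where a static round-robin split would finish at time 7; this load balancing is exactly what the skeleton implementation can provide because it knows the pattern.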
Patterns in Parallel Computing

Many (most?) parallel applications don't actually involve arbitrary, dynamic interaction patterns. Sometimes the pattern is entirely pre-determined. Sometimes non-determinism is constrained within a wider pattern. The use of an unstructured parallel programming mechanism prevents the programmer from expressing information about the pattern: it remains implicit in the collected uses of the simple primitives.
What is Structured Parallel Programming?

The structured approach to parallelism proposes that commonly used patterns of computation and interaction should be abstracted as parameterisable library functions, control constructs or similar, so that application programmers can explicitly declare that the application follows one or more such patterns.

Keywords: skeleton, template, pattern, archetype, higher-order function.

This matters because it gives a tractable handle on the issues which make correct, efficient parallel programming hard.
Why Parallel Programming is Hard

Devising correct, efficient sequential algorithms is hard already; introducing efficient parallelism adds an extra conceptual dimension.
We must maintain high efficiency in practice (not just big O).
Expressing and optimising interaction is confusing.
Does Structured Parallel Programming Help?

Devising correct, efficient sequential algorithms is hard already; introducing efficient parallelism adds an extra conceptual dimension.
No. Skeletons help us express algorithms, but we still have to devise them.

We must maintain high efficiency in practice (not just big O).
Yes. The skeleton implementation knows the interaction pattern in advance, and exploits this knowledge at a low level.

Expressing and optimising interaction is confusing.
Yes. The pattern abstracted by the skeleton hides all the interaction.
How Performance Optimisations Work

Well known, tried and trusted performance optimisations typically exploit information about the spatial and temporal context in which individual operations are performed.
Cache Optimisations

for (i=0; i<n; i++)
  for (j=0; j<n; j++)
    for (k=0; k<n; k++)
      c[i][j] += a[i][k]*b[k][j];
Cache Optimisations (blocked version, block size B)

for (jj=0; jj<n; jj=jj+B)
  for (kk=0; kk<n; kk=kk+B)
    for (i=0; i<n; i++)
      for (j=jj; j<jj+B; j++) {
        pa = &a[i][kk]; pb = &b[kk][j];
        temp = (*pa++)*(*pb);
        for (k=kk+1; k<kk+B; k++) {
          pb = pb + n;
          temp += (*pa++)*(*pb);
        }
        c[i][j] += temp;
      }
Branch Prediction

while (a[i] < b[a[i]]) {
  c[j+i] += b[j];
  if (c[k] < 1) { k++; b[j] = 0; }
  i++;
}

Performance is improved by knowing whether branches are usually taken.
How Performance Optimisations Work

Well known, tried and trusted performance optimisations typically exploit information about the spatial and temporal context in which individual operations are performed. The use of structured parallel programming techniques provides information about future interactions, sometimes for the entire execution, in context. Structured parallelism lets us tell the system what will happen next.
Parallel Performance Optimisations

Consider a simple iterated all-pairs structure. Careful shared memory scheduling is essential.
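As a minimal sketch of what "iterated all-pairs" means (the interaction function here is an invented stand-in for a real force or similarity computation): on each iteration every unordered pair of elements interacts, with symmetric contributions, which is exactly the access pattern a skeleton could schedule carefully across shared-memory threads.

```c
/* Iterated all-pairs sketch (n-body style). Each iteration visits every
   unordered pair (i, j) once and accumulates antisymmetric contributions,
   then applies them. The interaction (a plain difference, scaled by 0.1)
   is invented for illustration. */
void all_pairs(double *x, double *acc, int n, int iters) {
    for (int it = 0; it < iters; it++) {
        for (int i = 0; i < n; i++) acc[i] = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {   /* each pair exactly once */
                double d = x[j] - x[i];
                acc[i] += d;                    /* pull i towards j       */
                acc[j] -= d;                    /* and j towards i        */
            }
        for (int i = 0; i < n; i++) x[i] += 0.1 * acc[i];
    }
}
```

The scheduling difficulty is visible even in the sketch: the work per row i shrinks as i grows, so a naive block split of the outer loop over threads is badly load-imbalanced; a skeleton that knows the all-pairs structure can apply a cyclic or pairwise-round-robin schedule instead.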
Parallel Performance Optimisations

In a simple pipeline, agglomeration of messages can be crucial.
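A rough cost model shows why agglomeration matters (the parameter values below are invented for illustration): sending n items between two stages in batches of k costs roughly ceil(n/k) messages, each paying a fixed latency L plus a per-item cost t, so larger batches amortise L at the price of extra pipeline fill delay (not modelled here).

```c
/* Sketch of a message agglomeration cost model for one pipeline link:
   n items sent in batches of k, each message costing L (latency) plus
   k*t (per-item transfer). */
double transfer_cost(int n, int k, double L, double t) {
    int messages = (n + k - 1) / k;       /* ceil(n / k) messages */
    return messages * (L + k * t);
}
```

For example, with L = 50 and t = 1 (arbitrary units), sending 1000 items one at a time costs 51000, while batches of 100 cost 1500; only an implementation that knows the pipeline pattern can safely choose k without programmer intervention.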
Parallel Performance Optimisations

It will be difficult (often impossible?) to recognise that what we have is an all-pairs or pipeline computation from the equivalent unstructured source. Structured parallelism matters because it allows us to give the system this information (and, as a side-effect, simplifies our programming task).
Higher Level Performance Optimisation

Performance programming is also about choosing the right algorithm, and being sure that it is correct. Designing algorithms with structured concepts in mind allows us to benefit from high-level algorithm restructuring techniques. The coarse-grain, collective nature of skeletons allows substantial transformations to be made in a small number of steps (in contrast to the same effect achieved with the corresponding unstructured primitives).
(a sequence of figure slides stepping through such a transformation; no text was captured in the transcription)
Structured parallelism matters because it allows us to manipulate parallel algorithms at a coarse structural level.
Who cares?

Skeletal programming remains a fringe activity.
A Pragmatic Manifesto: Minimise Conceptual Disruption

There are many ways of presenting these ideas. Parallel programmers are happy with C/Fortran and MPI. MPI's collectives are simple skeletons, so build on this.
A Pragmatic Manifesto: Integrate Ad-Hoc Parallelism

Sometimes parallelism seems inherently unstructured. Allow contained integration within a structured container. Don't overconstrain.
A Pragmatic Manifesto: Accommodate Diversity

Well-known concepts are quite slippery.
Pipeline stage as function, or stage as process?
Pipeline stage one-for-one, or arbitrary?
Implicit or explicit farmer?
Don't overconstrain.
A Vanilla Pipeline: Image Processing

(figure: a pipeline of stages Edges, Faces, Objects, Scene, with data size decreasing and semantic content increasing along the pipeline)
An Exotic Pipeline: Gaussian Elimination

(figure slides showing the matrix with its rows distributed across pipeline stages; the graphics were garbled in transcription)
Elimination Phase

for (each row in sequence) {
  normalise this pivot row;
  broadcast result to subsequent rows;
  for (each subsequent row in parallel) {
    eliminate one column using broadcast row;
  }
}
(figure slides animating the elimination phase: select the Pivot Row, Normalise it, Broadcast it, Eliminate in the subsequent rows, then repeat on the reduced submatrix; the graphics were garbled in transcription)
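The elimination pseudocode can be sketched sequentially as follows (a minimal illustration assuming nonzero pivots and no pivoting; in the parallel version the pivot row would be broadcast and the inner elimination loop distributed across processes):

```c
#define N 3   /* example size for illustration */

/* Elimination phase on an augmented matrix [A | b]: for each pivot row,
   normalise it, then (where the parallel version broadcasts it) use it
   to eliminate one column in every subsequent row. Assumes nonzero
   pivots; no partial pivoting. */
void eliminate(double a[N][N + 1]) {
    for (int p = 0; p < N; p++) {
        double pivot = a[p][p];
        for (int j = p; j <= N; j++)          /* normalise the pivot row */
            a[p][j] /= pivot;
        for (int i = p + 1; i < N; i++) {     /* "in parallel" in the talk */
            double f = a[i][p];
            for (int j = p; j <= N; j++)      /* eliminate one column      */
                a[i][j] -= f * a[p][j];
        }
    }
}
```

The result is an upper triangular system with a unit diagonal, ready for the back-substitution phase discussed below.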
Pipelined Version

The textbook improvement interleaves the broadcast and elimination phases. Processors that have participated in the broadcast begin elimination immediately (before the broadcast completes elsewhere), so the iterations become pipelined. Each iteration would be slower independently, but pipelining across iterations produces an overall gain.
(figure slides animating the pipelined version: Normalise, then Send the pivot row down the chain, each processor Eliminating as soon as the row arrives; the graphics were garbled in transcription)
Observations

Stages have internal state. There are no external buffers of input or output (both are in the stage state). The sequence of interactions is state dependent (activity is different the final time).
More Pipelining

A further observation is that the back-substitution phase can be pipelined too, but in the other direction.

(figure slides animating the backward pipeline; the graphics were garbled in transcription)
Algorithm Summary

Scatter_data();                                    // standard MPI
Pipeline (top-to-bottom, elimination, ...);        // skeleton call
Pipeline (bottom-to-top, back_substitution, ...);  // skeleton call
Gather_results();                                  // standard MPI
eskel
The edinburgh Skeleton library.
An experimental attempt to address these issues.
An extension of MPI's collective operation suite.
MPI Key Concepts
A process (not processor) based model.
Processes are identified by ranks within groups (known as "communicators").
The default communicator (all processes) is MPI_COMM_WORLD.
Every communication specifies its communicator.
This allows the programmer to reflect logical groupings in an algorithm and to insulate communications within these from outside interference.
MPI Communicators
[diagram: sub-communicators (C1, C2, C3), each with its own ranks, carved out of MPI_COMM_WORLD]
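Sub-communicators like those in the diagram are typically created with MPI_Comm_split(comm, color, key, &newcomm). Its regrouping rule can be sketched in plain C, with no MPI runtime (`split_rank` is an illustrative helper, not an MPI function):

```c
#include <assert.h>

/* The regrouping rule behind MPI_Comm_split, sketched without MPI:
   processes sharing a color form one new communicator, and within it
   they are ranked by key, ties broken by old rank. split_rank returns
   process me's rank inside its new group. */
static int split_rank(int nprocs, const int color[], const int key[], int me) {
    int r = 0;
    for (int p = 0; p < nprocs; p++) {
        if (p == me || color[p] != color[me])
            continue;                  /* different group: ignore      */
        if (key[p] < key[me] || (key[p] == key[me] && p < me))
            r++;                       /* p precedes me in the new group */
    }
    return r;
}
```

This is the insulation mechanism at work: a collective issued on the new communicator involves only the processes of one color, so it cannot interfere with communications in the other groups.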
MPI: Collective Operations

int MPI_Reduce(void *sndbuf, void *rcvbuf, int count, MPI_Datatype dt, MPI_Op op, int root, MPI_Comm comm)

sndbuf: input data buffer (each process contributes)
rcvbuf: output buffer (note restriction on type)
op: operation to be used
root, comm: group and special roles within it (in this case, root receives the result)
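Semantically, MPI_Reduce with op = MPI_SUM leaves the root holding the elementwise sum of every process's contribution. A plain-C sketch of that semantics (the helper `reduce_sum` is illustrative, not MPI API):

```c
#include <assert.h>

/* What MPI_Reduce computes when op is MPI_SUM: the root's receive
   buffer ends up holding the elementwise sum of every process's send
   buffer. contrib[p] stands for process p's sndbuf of length count. */
static void reduce_sum(int nprocs, int count,
                       double *contrib[], double rcv[]) {
    for (int j = 0; j < count; j++) {
        rcv[j] = 0.0;
        for (int p = 0; p < nprocs; p++)
            rcv[j] += contrib[p][j];   /* combine contributions */
    }
}
```

The "restriction on type" noted above is that each predefined operation (MPI_SUM, MPI_MAX, ...) is only valid for certain datatypes; this sketch fixes doubles with summation for concreteness.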
eskel
MPI collective operations like MPI_Reduce are already simple skeletons.
We design eskel from this basis to provide minimal conceptual disruption (principle 1) and ad-hoc parallelism (principle 2).
eskel Skeletons
The current draft of eskel defines five skeletons: Pipeline, Farm, Deal, HaloSwap, Butterfly.
Deal
Similar to a farm, but distributes tasks in cyclic order to workers (no farmer).
Useful nested in pipelines, to internally replicate a stage.
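Deal's cyclic rule is simple enough to state in a line of C. A sketch, assuming tasks are numbered from zero (`deal_owner` is an illustrative helper, not eskel API):

```c
#include <assert.h>

/* Deal's distribution rule: with no farmer process, task t simply
   goes to worker t % nworkers, in round-robin order. owner[t]
   records the worker that receives task t. */
static void deal_owner(int ntasks, int nworkers, int owner[]) {
    for (int t = 0; t < ntasks; t++)
        owner[t] = t % nworkers;
}
```

Because the mapping is fixed in advance, no coordinating farmer is needed; the cost is that load balance relies on tasks being of roughly uniform size.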
HaloSwap
Representative of iterative relaxation algorithms.
Loop over local update and check for termination.
Interactions have two components (one from each neighbour).
Optional wraparound.
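The two-component interaction can be simulated sequentially in plain C for a 1D decomposition: each "process" owns a block of cells plus two halo cells, and one swap refreshes the halos from both neighbours (`halo_swap` and the array layout are illustrative, not eskel API):

```c
#include <assert.h>

#define NP 4   /* number of "processes" in this sketch  */
#define NL 3   /* interior cells owned by each process  */

/* HaloSwap's interaction, simulated sequentially: each process keeps
   NL interior cells in loc[p][1..NL] plus two halo cells, loc[p][0]
   and loc[p][NL+1]. One swap copies each neighbour's boundary cell
   into the matching halo; the two components of the interaction are
   the copy from the left neighbour and the copy from the right.
   wrap selects the optional wraparound. */
static void halo_swap(double loc[NP][NL + 2], int wrap) {
    for (int p = 0; p < NP; p++) {
        int left  = wrap ? (p + NP - 1) % NP : p - 1;
        int right = wrap ? (p + 1) % NP      : p + 1;
        if (left >= 0)  loc[p][0]      = loc[left][NL];  /* from the left  */
        if (right < NP) loc[p][NL + 1] = loc[right][1];  /* from the right */
    }
}
```

A relaxation step then reads only local and halo cells, which is exactly what lets the skeleton overlap the two neighbour communications with the local update loop.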
Butterfly
Captures a class of divide-and-conquer algorithms (those based on traversing hypercube dimensions).
A sequence of activities, in groups of different sizes.
Constrained to work with 2^d processes.
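The hypercube traversal underlying Butterfly can be simulated in plain C: in step k, each rank exchanges with the partner obtained by flipping bit k of its rank, so the groups double in size at each step. A sketch for summation (`butterfly_sum` is an illustrative helper, not eskel API):

```c
#include <assert.h>

#define D 3          /* hypercube dimensions         */
#define P (1 << D)   /* so exactly 2^d = 8 processes */

/* Butterfly pattern, simulated sequentially: in step k every rank r
   combines its value with that of partner r ^ (1 << k). After the d
   steps, every rank holds the reduction over all P contributions
   (the pattern behind allreduce-style divide-and-conquer). */
static void butterfly_sum(double val[P]) {
    double next[P];
    for (int k = 0; k < D; k++) {
        for (int r = 0; r < P; r++)
            next[r] = val[r] + val[r ^ (1 << k)];  /* combine with partner */
        for (int r = 0; r < P; r++)
            val[r] = next[r];                      /* step completes        */
    }
}
```

The 2^d constraint is visible in the code: flipping bit k only pairs ranks correctly when every bit pattern below 2^d names a live process.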
The Gory Details
Function prototype for the pipeline skeleton:

void Pipeline (int ns, Amode_t amode[], eskel_molecule_t *(*stages[])(eskel_molecule_t *), int col, Dmode_t dmode, spread_t spr[], MPI_Datatype ty[], void *in, int inlen, int inmul, void *out, int outlen, int outmul, int outbuffsz, MPI_Comm comm)

Why do we need fifteen parameters? Because of the MPI basis and for flexibility.
The parameters specify, in turn: the pipeline inputs, the pipeline output buffer, the pipeline stage activities, and the stage interfaces and modes.
Summary
Parallel programming is important, but hard.
Structured parallel programming can help by allowing the programmer to express meta-knowledge about interaction structure.
This information allows the implementation to make macro optimisations, and supports coarse grain algorithm development methodologies.
To enter the mainstream we have to be pragmatic.
Future Work
These concepts and arguments are generic, and may be applied wherever parallelism appears:
Mainstream parallel computing
Grid computing?
ASIC design?
FPGA programming?
Future Work
I would like someone to give me a large amount of money and substantial resources in order to be able to pursue this programme swiftly and comprehensively.
No reasonable offer refused.
Thank you
More informationHigh-Performance Computing: MPI (ctd)
High-Performance Computing: MPI (ctd) Adrian F. Clark: alien@essex.ac.uk 2015 16 Adrian F. Clark: alien@essex.ac.uk High-Performance Computing: MPI (ctd) 2015 16 1 / 22 A reminder Last time, we started
More informationResearch Faculty Summit Systems Fueling future disruptions
Research Faculty Summit 2018 Systems Fueling future disruptions Wolong: A Back-end Optimizer for Deep Learning Computation Jilong Xue Researcher, Microsoft Research Asia System Challenge in Deep Learning
More informationCEU BRAND GUIDELINES
CEU BRAND GUIDELINES THE CEU SEAL THE CEU LOGO The CEU Seal shall be used for highly formal documents and records, which require authentication. (transcript of records, diploma, certificates) The CEU Logo
More informationCOMP Logic for Computer Scientists. Lecture 23
COMP 1002 Logic for Computer cientists Lecture 23 B 5 2 J Admin stuff Assignment 3 extension Because of the power outage, assignment 3 now due on Tuesday, March 14 (also 7pm) Assignment 4 to be posted
More informationMacro O Compensate a single cartridge ActiveEdge tool
Macro O8504 - Compensate a single cartridge ActiveEdge tool Compensates an ActiveEdge tool with one AE cartridge by a specific micron amount on diameter. The unique Tool ID and compensation value are encoded
More informationStacks & Queues. Kuan-Yu Chen ( 陳冠宇 ) TR-212, NTUST
Stacks & Queues Kuan-Yu Chen ( 陳冠宇 ) 2018/10/01 @ TR-212, NTUST Review Stack Stack Permutation Expression Infix Prefix Postfix 2 Stacks. A stack is an ordered list in which insertions and deletions are
More informationAdvanced Caching Techniques
Advanced Caching Approaches to improving memory system performance eliminate memory operations decrease the number of misses decrease the miss penalty decrease the cache/memory access times hide memory
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu Can we identify node groups? (communities, modules, clusters) 2/13/2014 Jure Leskovec, Stanford C246: Mining
More information"Charting the Course... Constructing CA-OPS/MVS Applications Course Summary
Course Summary Description This course is designed for the attendee who understands REXX and is ready to take the next step toward developing CA-OPS/MVS applications. The course will show you how to construct,
More informationConnection Guide (RS-232C)
Machine Automation Controller NJ-series General-purpose Seriarl Connection Guide (RS-232C) OMRON Corporation G9SP Safety Controller P545-E1-01 About Intellectual Property Rights and Trademarks Microsoft
More informationPolynomial Functions I
Name Student ID Number Group Name Group Members Polnomial Functions I 1. Sketch mm() =, nn() = 3, ss() =, and tt() = 5 on the set of aes below. Label each function on the graph. 15 5 3 1 1 3 5 15 Defn:
More informationJoint Structured/Unstructured Parallelism Exploitation in muskel
Joint Structured/Unstructured Parallelism Exploitation in muskel M. Danelutto 1,4 and P. Dazzi 2,3,4 1 Dept. Computer Science, University of Pisa, Italy 2 ISTI/CNR, Pisa, Italy 3 IMT Institute for Advanced
More informationComputer Science Technical Report. High Performance Unified Parallel C (UPC) Collectives For Linux/Myrinet Platforms
Computer Science Technical Report High Performance Unified Parallel C (UPC) Collectives For Linux/Myrinet Platforms Alok Mishra and Steven Seidel Michigan Technological University Computer Science Technical
More informationConcept of Curve Fitting Difference with Interpolation
Curve Fitting Content Concept of Curve Fitting Difference with Interpolation Estimation of Linear Parameters by Least Squares Curve Fitting by Polynomial Least Squares Estimation of Non-linear Parameters
More informationSyntax Analysis Top Down Parsing
Syntax Analysis Top Down Parsing CMPSC 470 Lecture 05 Topics: Overview Recursive-descent parser First and Follow A. Overview Top-down parsing constructs parse tree for input string from root and creating
More informationSection 1: Introduction to Geometry Points, Lines, and Planes
Section 1: Introduction to Geometry Points, Lines, and Planes Topic 1: Basics of Geometry - Part 1... 3 Topic 2: Basics of Geometry Part 2... 5 Topic 3: Midpoint and Distance in the Coordinate Plane Part
More informationSection 6: Triangles Part 1
Section 6: Triangles Part 1 Topic 1: Introduction to Triangles Part 1... 125 Topic 2: Introduction to Triangles Part 2... 127 Topic 3: rea and Perimeter in the Coordinate Plane Part 1... 130 Topic 4: rea
More informationComputer Architecture Spring 2016
Computer Architecture Spring 2016 Lecture 08: Caches III Shuai Wang Department of Computer Science and Technology Nanjing University Improve Cache Performance Average memory access time (AMAT): AMAT =
More informationHow to use this catalog
Font Catalog Copyright 2005 Vision Engraving Systems. All rights reserved. This publication is protected by copyright, and all rights are reserved. No part of this manual may be reproduced or transmitted
More informationCS6716 Pattern Recognition
CS6716 Pattern Recognition Prototype Methods Aaron Bobick School of Interactive Computing Administrivia Problem 2b was extended to March 25. Done? PS3 will be out this real soon (tonight) due April 10.
More informationMEMORY HIERARCHY DESIGN. B649 Parallel Architectures and Programming
MEMORY HIERARCHY DESIGN B649 Parallel Architectures and Programming Basic Optimizations Average memory access time = Hit time + Miss rate Miss penalty Larger block size to reduce miss rate Larger caches
More informationCISC 360. Cache Memories Nov 25, 2008
CISC 36 Topics Cache Memories Nov 25, 28 Generic cache memory organization Direct mapped caches Set associative caches Impact of caches on performance Cache Memories Cache memories are small, fast SRAM-based
More informationCache Memories October 8, 2007
15-213 Topics Cache Memories October 8, 27 Generic cache memory organization Direct mapped caches Set associative caches Impact of caches on performance The memory mountain class12.ppt Cache Memories Cache
More informationNumerical Algorithms
Chapter 10 Slide 464 Numerical Algorithms Slide 465 Numerical Algorithms In textbook do: Matrix multiplication Solving a system of linear equations Slide 466 Matrices A Review An n m matrix Column a 0,0
More informationwrite-through v. write-back write-through v. write-back write-through v. write-back option 1: write-through write 10 to 0xABCD CPU RAM Cache ABCD: FF
write-through v. write-back option 1: write-through 1 write 10 to 0xABCD CPU Cache ABCD: FF RAM 11CD: 42 ABCD: FF 1 2 write-through v. write-back option 1: write-through write-through v. write-back option
More informationIMPERIAL VALLEY COLLEGE CURRICULUM AND INSTRUCTION COMMITTEE MEETING ADOPTED MINUTES REGULAR MEETING THURSDAY, JUNE 4, :00 P.M.
IMPERIAL VALLEY COLLEGE CURRICULUM AND INSTRUCTION COMMITTEE MEETING ADOPTED MINUTES REGULAR MEETING THURSDAY, JUNE 4, 2009 3:00 P.M. BOARD ROOM Present: Kathy Berry Suzanne Gretz Carol Lee Taylor Ruhl
More informationBRAND GUIDELINES JANUARY 2017
BRAND GUIDELINES JANUARY 2017 GETTING AROUND Page 03 05 06 07 08 09 10 12 14 15 Section 01 - Our Logo 02 - Logo Don ts 03 - Our Colors 04 - Our Typeface 06 - Our Art Style 06 - Pictures 07 - Call to Action
More informationLecture 3.3 Robust estimation with RANSAC. Thomas Opsahl
Lecture 3.3 Robust estimation with RANSAC Thomas Opsahl Motivation If two perspective cameras captures an image of a planar scene, their images are related by a homography HH 2 Motivation If two perspective
More informationEvaluating the Performance of Skeleton-Based High Level Parallel Programs
Evaluating the Performance of Skeleton-Based High Level Parallel Programs Anne Benoit, Murray Cole, Stephen Gilmore, and Jane Hillston School of Informatics, The University of Edinburgh, James Clerk Maxwell
More informationOptimising with the IBM compilers
Optimising with the IBM Overview Introduction Optimisation techniques compiler flags compiler hints code modifications Optimisation topics locals and globals conditionals data types CSE divides and square
More informationIn context with optimizing Fortran 90 code it would be very helpful to have a selection of
1 ISO/IEC JTC1/SC22/WG5 N1186 03 June 1996 High Performance Computing with Fortran 90 Qualiers and Attributes In context with optimizing Fortran 90 code it would be very helpful to have a selection of
More informationPerformance Issues in Parallelization. Saman Amarasinghe Fall 2010
Performance Issues in Parallelization Saman Amarasinghe Fall 2010 Today s Lecture Performance Issues of Parallelism Cilk provides a robust environment for parallelization It hides many issues and tries
More informationIntroduction. Stream processor: high computation to bandwidth ratio To make legacy hardware more like stream processor: We study the bandwidth problem
Introduction Stream processor: high computation to bandwidth ratio To make legacy hardware more like stream processor: Increase computation power Make the best use of available bandwidth We study the bandwidth
More information