Aggressive Scheduling for Numerical Programs

Richard Veras, Flavio Cruz, Berkin Akin

May 2, 2012

Abstract

Certain classes of algorithms remain hard for modern compilers to optimize because of differences between computer architectures. Numerical algorithms, for example, perform better when they take advantage of features such as software pipelining and loop unrolling. Expert programmers are usually able to extract the last drop of performance from the hardware, but compilers cannot perform the same feat without sacrificing generality. The SPIRAL system narrows this gap by generating machine-specific code from high-level algorithmic descriptions. We extended SPIRAL with software pipelining for numerical algorithms. Our results show that by unrolling and applying software pipelining at the C code level, we can improve performance by up to 30% over unrolling alone.

1 Introduction

Modern compilers are able to transform relatively slow code into fast code by performing several optimizations. The majority of these optimizations are geared towards general-purpose code, which is typically full of memory operations and branches. This makes sense, since the code that most programmers write falls into this category. However, there are certain classes of algorithms where this approach falls short, and numerical code is a good example. Expert programmers can take a very naïve implementation and fine-tune it to exploit the micro-architectural features of the target machine. Such features include software pipelining [8] (which improves out-of-order execution by interleaving loop iterations), unrolling (guided by the sizes of the processor's caches), and prefetching. Compilers are unable to perform such optimizations, partly because of the generality they must preserve and partly because most compiler optimizations are local and cannot take a complete numerical algorithm into account. The problem is further emphasized by the fact that expert programmers test and tune many permutations of their code and its instruction schedule, a task compilers usually cannot perform because they would have to guess the structure of the input data and then generate and run small instances of the problem to find an optimal solution.
Automatic Program Generation Systems like SPIRAL [7] produce a system-dependent, highly optimized implementation of an operation from a very high-level mathematical description. This is achieved through three mechanisms: a language that captures the algorithmic details of an operation, a powerful rewriting mechanism that uses that knowledge to map algorithms to special features of the hardware (vector instructions and parallelism), and a backend code generator that produces a tuned implementation in a high-level language (usually C). The latter component makes SPIRAL portable by letting modern compilers handle the translation from C to assembly. However, using C as the target language comes at a cost: there is no control over register allocation or instruction scheduling. Scheduling in particular plays a very important role in the performance of numerical code; even on an out-of-order processor, long dependency chains can drastically degrade performance. The goal of our project is therefore to incorporate scheduling, namely software pipelining, into SPIRAL so that we can produce highly tuned schedules for target architectures. This way, we can still use the optimizations available in modern compilers while also performing the aggressive scheduling optimizations that only experts would otherwise be able to apply. In this project we explore the problem of scheduling code at a high level while generating code whose performance is comparable to an expert's.

1.1 Our Approach

To approach this problem, we use SPIRAL's intermediate code (called ICode) directly to represent several numerical algorithms; any optimization performed at this level can benefit the lower-level code. We apply several optimizations to the ICode representation of our algorithms. For loop unrolling and loop tiling (or loop blocking), for example, we expose parameters that let us set how much unrolling is done in the inner loop and the size of the tile. This gives us great freedom to select the parameters that make the final code most performant. In addition, we apply software pipelining to the original code: we create three stages, namely the prologue, the steady state, and the epilogue, which set up, run, and finish the pipelined computation, respectively. After these transformations, we generate C code from ICode. The final code carries all the optimizations performed at the ICode level, and we have developed a technique that forces the target C compiler to obey the schedule we set up during the software pipelining transformation; C compilers otherwise tend to discard such a schedule and impose their own.

Our approach gives us two advantages. First, we are able to generate fully optimized, software-pipelined code from a high-level description. Second, we are still able to use the convenient features provided by the C compiler (especially portability) while tuning the scheduling parameters.
1.2 Related Work

Much research has gone into automatically generating the best implementation of a routine for a given architecture by exposing tunable parameters. These parameters correspond to standard techniques such as loop transformations, loop tiling, and loop unrolling, and a search procedure looks for the combination of optimizations that yields the most performant implementation. A very well-known system is ATLAS [9] (Automatically Tuned Linear Algebra Software). It automatically generates high-performance implementations of the BLAS [5] (Basic Linear Algebra Subprograms) using a global search over the optimization parameters: many programs are generated with different parameter combinations and then run on the actual hardware to determine the values that give the best performance. ATLAS can also fall back on hand-coded BLAS routines when the best generated solution performs worse than the hand-coded version. FFTW3 [3] is similar to ATLAS, but targets the implementation of the Discrete Fourier Transform. FFTW3 uses a planner that receives the shape of the input data (array sizes, memory layouts, etc.) and returns a plan that solves any problem conforming to that shape; the planner measures the execution time of different plans and selects the best one. FFTW3's results show that the best plan performs very close to what an expert would produce.

While both ATLAS and FFTW3 rely on global search, Yotov et al. [10] showed that similar results can be obtained with traditional analytical models. They modified the search engine of ATLAS to use a model-based parameter estimator that receives hardware information (L1 cache size, number of registers, etc.) and outputs the single implementation of the target algorithm best suited to those parameters. The goal was to show that traditional compilers should be able to match ATLAS without executing multiple versions of the same algorithm, and the results confirm that model-driven optimization achieves very similar performance without an extensive and time-consuming empirical search.

Software pipelining was invented by Patel and Davidson [6] for scheduling hardware pipelines, and Rau and Glaeser [8] later implemented it in a compiler. In her seminal paper, Lam [4] showed that software pipelining can be implemented without hardware support, using only instruction-level parallelism. This is accomplished with modulo variable expansion, unrolling the loop body a small number of times and renaming the variables that are live across iterations (modulo renaming). Lam also noted that inter-iteration dependences in a loop can create cycles in the dependence graph and therefore constrain the scheduling of all other instructions, and she presented techniques to deal with this problem.
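To make modulo variable expansion concrete, the following minimal C sketch (our own illustration for this report, not code from Lam's paper or from SPIRAL) expands an accumulator into two copies across a loop unrolled by a factor of two, so that the two chains of additions are independent and can be scheduled freely:

/* Reduction with a single accumulator: each add depends on the previous one. */
double sum_naive(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Modulo variable expansion of the accumulator across a 2x-unrolled loop
 * (assumes n is even): the two chains of additions are independent. */
double sum_expanded(const double *x, int n)
{
    double s0 = 0.0, s1 = 0.0;
    for (int i = 0; i < n; i += 2) {
        s0 += x[i];
        s1 += x[i + 1];
    }
    return s0 + s1;
}

Note that the expanded version reassociates floating-point additions, so its result can differ slightly from the sequential sum; a conforming C compiler will not apply this transformation on its own.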

1.3 Our Contributions

The contributions of this paper are the following:

Software pipelining. We developed a transform that takes a numerical algorithm and adds software pipelining in three stages. This optimization can be combined with other optimizations such as loop unrolling and loop tiling. Moreover, we developed a technique that prevents C compilers from imposing their own schedule on the code while still letting them apply all of their other optimizations.

Evaluation. We evaluated our generated programs over several tunable parameters for software pipelining, loop unrolling, and loop tiling. The results show substantial speedups over code that only uses loop unrolling.

2 Details Regarding Design

SPIRAL is an Automatic Program Generation System (APGS) that exploits the mathematical structure of algorithms to optimize and fine-tune them. Most compilers do a poor job on mathematical/numerical algorithms because they do not take advantage of the underlying computer architecture. SPIRAL is a feedback-driven compiler: it explores the capabilities of the target machine and tries to find the best implementation, comparable to one hand-written by an expert programmer. SPIRAL is thus an ideal tool for exploiting software pipelining in numerical algorithms.

SPIRAL programs declare high-level mathematical operations which, when compiled, are transformed into ICode, the intermediate representation SPIRAL uses before emitting C code. ICode represents the abstract syntax tree (AST) of the parsed program, and it is also possible to write programs directly in ICode. An example of ICode is presented in Fig. 1: it declares a function that receives two vectors (x and y) and swaps their values, and uses unrolling (the second loop construct) to improve performance. The program has three parameters: datatype, which selects the element type of the vectors; loop_bound, which selects the size of both vectors; and unroll_bound, which selects the unrolling factor. We can ultimately compile the example in Fig. 1 to C code; compiling the call swap(TInt, 20, 2) yields the C code in the listing at the end of Section 2.1.

2.1 Software Pipelining

Software pipelining is a coding technique that optimizes loops by overlapping instructions from different loop iterations in order to exploit instruction-level parallelism. Because of this overlap, software pipelining can be viewed as a form of out-of-order execution; unlike hardware pipelining, however, the order of execution is determined not by the processor but by the compiler or the programmer.
swap := (datatype, loop_bound, unroll_bound) -> let(
    x        := var("x",    TPtr(datatype)),
    y        := var("y",    TPtr(datatype)),
    it       := var("i",    TInt),
    it2      := var("ii",   TInt),
    actual_i := var("act",  TArray(TInt, loop_bound)),
    temp     := var("temp", TArray(datatype, loop_bound)),
    program(chain(
        func(TVoid, "swap", [x, y],
            decl(Concatenation([it],
                     getvararraynames(temp, unroll_bound, datatype),
                     getvararraynames(actual_i, unroll_bound, TInt)),
                chain(
                    loop(it, loop_bound / unroll_bound,
                        FlattenCode(
                            loop(it2, unroll_bound,
                                chain(
                                    assign(nthvar(actual_i, it2), add(it2, mul(it, unroll_bound))),
                                    assign(nthvar(temp, it2), nth(x, nthvar(actual_i, it2))),
                                    assign(nth(x, nthvar(actual_i, it2)), nth(y, nthvar(actual_i, it2))),
                                    assign(nth(y, nthvar(actual_i, it2)), nthvar(temp, it2))
                                )
                            ).unroll()
                        )
                    )
                )
            )
        )
    ))
);

Figure 1: Swap Example

With software pipelining, we can reduce pipeline stalls in the hardware, since the pipeline can be fed with new instructions from different loop iterations. This greatly increases instruction-level parallelism because a new loop iteration can start before the previous ones have completed. For the technique to be effective, the loop is transformed into three sections:

Prologue: Several iterations are started and fed into the pipeline. This phase ends when the last instruction of the first iteration is about to execute.

Steady State: This phase starts where the prologue ends. Most of the execution time is spent here, and the phase finishes when the first instruction of the last iteration is executed.

Epilogue: The remaining instructions of the final iterations are executed and the pipeline drains.

The need to duplicate the loop code into three phases is a well-known disadvantage of software pipelining, because it increases the program's code size. Since software pipelining is usually combined with loop unrolling, the code grows even more. While this can be an issue on processors with little or no instruction cache, in practice it is not a problem on most modern systems.

The implementation of software pipelining in SPIRAL was straightforward: it involves three indexing functions, one each for the prologue, steady state, and epilogue, which map the instructions of a fully unrolled loop to their correct positions in an unrolled and software-pipelined loop.
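To illustrate the shape of the transformed code, here is a minimal hand-written C sketch (a simplified, hypothetical example, not actual SPIRAL output) that pipelines a single-statement loop by one iteration: the load for iteration i+1 is issued in the same steady-state body as the computation and store for iteration i.

void scale(double *y, const double *x, double c, int n)
{
    if (n <= 0)
        return;

    double cur = x[0];                  /* prologue: begin iteration 0 */
    for (int i = 0; i < n - 1; i++) {   /* steady state */
        double next = x[i + 1];         /* load belonging to iteration i + 1 */
        y[i] = cur * c;                 /* compute and store for iteration i */
        cur = next;
    }
    y[n - 1] = cur * c;                 /* epilogue: finish the last iteration */
}

SPIRAL produces the same prologue / steady state / epilogue structure mechanically, via the indexing functions described above, from the fully unrolled loop body.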

The C code generated for the call swap(TInt, 20, 2) is:

void swap(int *x, int *y) {
    int act_0, act_1, i, temp_0, temp_1;
    for (i = 0; i <= 9; i++) {
        act_0 = (i * 2);
        temp_0 = x[act_0];
        x[act_0] = y[act_0];
        y[act_0] = temp_0;
        act_1 = (1 + (i * 2));
        temp_1 = x[act_1];
        x[act_1] = y[act_1];
        y[act_1] = temp_1;
    }
}

Additionally, by targeting unrolled code we were able to bypass the need for register allocation inside SPIRAL. By ensuring that the number of instructions in the pipelined loop is less than the number of registers on the system, the compiler was able to allocate registers with little to no spilling.

2.2 Enforcing a Schedule

Most modern compilers do not preserve the instruction ordering imposed at the input-language level; instead they rely on general-purpose methods such as list scheduling to order instructions. In the common case this frees the developer from determining an efficient schedule, but in our case it is a roadblock. While the GCC [1] compiler provides flags for disabling instruction scheduling, using them sacrifices overall performance, since we only want to control the schedule of the loop code. Moreover, our plan involves using the Intel compiler [2] as well, so we need a solution for enforcing a schedule that works at the C code level with any compiler.

To enforce a schedule at the C code level we must prevent the compiler from imposing its own. A compiler must preserve the semantics of the input code, so we exploit this fact: we place a label before each instruction and, for each label, a goto nested in a switch-case block. The value driving the switch statement is a global variable set so that the code always jumps to the first instruction. Initially we simply embedded each instruction in a switch-case block, but both the GCC and Intel compilers would create multiple code paths and schedule each path individually. Fig. 2 shows an example of our method in practice. For our purposes this technique proved effective for enforcing a schedule, although in some cases the GCC compiler would add superfluous instructions; in the future, a more general approach such as a compiler directive would be advantageous.

int state = 0;

void test(double *Y, double *X) {
    switch (state) {
        case 1:  { goto label1;  }
        case 2:  { goto label2;  }
        /* .... snip .... */
        case 26: { goto label26; }
        case 27: { goto label27; }
    }

label1:  a91  = *(X + 1);
label2:  a95  = *(X);
    /* .... snip .... */
label26: a111 = (a94);
label27: a112 = *(Y + 3);
}

Figure 2: A template for enforcing an instruction ordering in C.

2.3 Putting it all Together

With the ability to apply software pipelining to any loop in SPIRAL and a mechanism to enforce the resulting schedule, we can tailor the schedule of any operation written in ICode to a given system. Currently, the optimizations we use to tailor software pipelining include loop tiling (strip-mining), reordering, and unrolling of the outer loops before we perform the actual scheduling. These few optimizations provide a great degree of freedom in tuning an implementation to a system.

However, our current system is not without limitations. Because we are attempting to schedule at a high level, our ICode implementation must map closely to the underlying Instruction Set Architecture (ISA). Furthermore, our method restricts the quality of the optimizations that the compiler can provide, so the ICode itself must be aggressively optimized. Fortunately, SPIRAL was designed with the necessary facilities in place for this level of optimization.
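For reference, the strip-mining step mentioned above can be pictured with the following minimal C sketch (our own illustration; the function, array names, and tile size T are placeholders, not SPIRAL parameters). The original loop is split into an outer loop over tiles and an inner loop over one tile; the inner loop then has a fixed trip count and becomes the natural candidate for full unrolling and software pipelining.

/* Strip-mining (loop tiling) sketch; T is a hypothetical tile size. */
#define T 8

/* Original form: for (i = 0; i < n; i++) y[i] = x[i] * x[i]; */
void square_all(double *y, const double *x, int n)   /* assumes n % T == 0 */
{
    for (int it = 0; it < n; it += T) {        /* outer loop over tiles          */
        for (int i = it; i < it + T; i++) {    /* inner loop over one tile: its  */
            y[i] = x[i] * x[i];                /* trip count is fixed at T, so   */
        }                                      /* it can be fully unrolled and   */
    }                                          /* then software pipelined        */
}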

3 Experimental Setup

The goal of our experiments is to demonstrate the effectiveness of software pipelining at the C code level for automatically generated numerical operations. Instruction scheduling is one of the few optimizations that expert programmers still hold over SPIRAL, so showing that software pipelining combined with loop unrolling outperforms unrolling alone helps bridge the gap between experts and APGS like SPIRAL.

For our experiments we benchmarked multiple automatically generated variants of matrix-vector multiplication (GEMV) on several systems with different microarchitectures. The variants included a baseline in which no loops were unrolled, and two unrolled variants in which the innermost loop was fully unrolled and the other dimension was unrolled by a factor of either 1 or 4. Additionally, there were two variants that were software-pipelined versions of the unrolled code; the two different unrollings resulted in completely different schedules in the software-pipelined code. It is worth noting that we specifically chose input sizes that fit entirely in the L1 cache, so that performance would mostly be determined by instruction stalls. Lastly, we used 4-way 32-bit floating-point vector instructions.

The systems we tested on were selected for their varied scheduling facilities, in particular their instruction window size where applicable. We chose three out-of-order architectures (Core i7 960, Core2 Extreme X9650, Xeon E5320) and a single in-order system (Intel Atom).

4 Experimental Evaluation

As anticipated, software pipelining provided a substantial performance gain over unrolling alone, and on the most modern system (Core i7, Fig. 3) it yielded an impressive 1.8x speedup over unrolling alone. The other two out-of-order systems (Fig. 4 and Fig. 5) saw similar gains. We attribute the improvement over simple unrolling to the fact that software pipelining breaks dependency chains and supplies the processor's ready-instruction window with a variety of independent instructions to choose from, reducing the risk of a dependency stall.

Interestingly, software pipelining did not provide as significant a gain on the Intel Atom (Fig. 6) as it did on the out-of-order systems. This could be because we were using the processor in 32-bit mode and were thus limited to half the number of registers available in 64-bit mode. Furthermore, our current software pipelining implementation does not support long delays, which might have been necessary on this system.

Figure 3: Relative speedup of GEMV on the Intel Core i7 960.

Figure 4: Relative speedup of GEMV on the Intel Core2 Extreme X9650.

Figure 5: Relative speedup of GEMV on the Intel Xeon E5320.

Figure 6: Relative speedup of GEMV on the Intel Atom.

5 Surprises and Lessons Learned

Initially, we were going to use a simulator to model processors and determine how to map scheduling and unrolling parameters to the hardware model. Unfortunately, we wildly underestimated the difficulty of simply installing the simulator.

6 Conclusion

One of the main limitations of APGS relative to expert programmers is that some optimizations cannot be applied effectively without sacrificing the level of abstraction such systems provide. Chief among these is instruction scheduling, which for computationally dense operations leaves an APGS at a clear disadvantage to an expert. In this paper we demonstrated that instructions can be scheduled effectively within an APGS while maintaining a high level of abstraction. We achieved this goal in two steps: first, we provided a mechanism for enforcing a schedule at the C code level; second, we implemented a software pipelining transformation in SPIRAL. Through these two steps we demonstrated significant performance gains for an automatically generated and tuned computationally dense operation on a variety of systems. This enhancement to SPIRAL is general and can easily be extended to a large class of operations.

6.1 Future Work

First and foremost, we wish to test a larger variety of automatically generated operations; in particular, operations such as matrix-matrix multiplication stand to gain significantly from our system. Additionally, for the sake of in-order systems such as the Atom, or systems with lengthy instruction latencies, we plan to implement a mechanism for delaying instructions by arbitrary amounts. Lastly, we wish to incorporate list scheduling in addition to software pipelining; we suspect that the added granularity will open up further opportunities for performance gains.

7 Post-mortem and Final Notes

We propose distributing the credit as 33% to Richard, 33% to Flávio, and 33% to Berkin. Richard extended SPIRAL and devised the C scheduling mechanism, Flávio implemented the benchmarks, and Berkin set up the experiments.

References

[1] GCC webpage. http://gcc.gnu.org.

[2] Intel compiler webpage. http://software.intel.com/en-us/articles/intel-compilers/.

[3] Matteo Frigo and Steven G. Johnson. The design and implementation of FFTW3. Proceedings of the IEEE, 93(2):216-231, 2005.

[4] M. Lam. Software pipelining: An effective scheduling technique for VLIW machines. In Proceedings of the ACM SIGPLAN 1988 Conference on Programming Language Design and Implementation (PLDI '88), pages 318-328, New York, NY, USA, 1988. ACM.

[5] C. L. Lawson, R. J. Hanson, D. R. Kincaid, and F. T. Krogh. Basic Linear Algebra Subprograms for Fortran usage. ACM Trans. Math. Softw., 5(3):308-323, September 1979.

[6] Janak H. Patel and Edward S. Davidson. Improving the throughput of a pipeline by insertion of delays. In Proceedings of the 3rd Annual Symposium on Computer Architecture (ISCA '76), pages 159-164, New York, NY, USA, 1976. ACM.

[7] Markus Püschel, José M. F. Moura, Jeremy Johnson, David Padua, Manuela Veloso, Bryan Singer, Jianxin Xiong, Franz Franchetti, Aca Gacic, Yevgen Voronenko, Kang Chen, Robert W. Johnson, and Nicholas Rizzolo. SPIRAL: Code generation for DSP transforms. Proceedings of the IEEE, special issue on Program Generation, Optimization, and Adaptation, 93(2):232-275, 2005.

[8] B. R. Rau and C. D. Glaeser. Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing. In Proceedings of the 14th Annual Workshop on Microprogramming (MICRO 14), pages 183-198, Piscataway, NJ, USA, 1981. IEEE Press.

[9] Clint Whaley, Antoine Petitet, and Jack J. Dongarra. Automated empirical optimization of software and the ATLAS project. Parallel Computing, 27(1-2):3-35, 2001.

[10] Kamen Yotov, Xiaoming Li, Gang Ren, Maria Garzaran, David Padua, Keshav Pingali, and Paul Stodghill. Is search really necessary to generate high-performance BLAS? Proceedings of the IEEE, 93(2), 2005.