Performance Tuning on the Blackfin Processor


Outline
- Introduction
- Building a Framework
- Memory Considerations
- Benchmarks
- Managing Shared Resources
- Interrupt Management
- An Example
- Summary

Introduction
The first level of optimization is provided by the compiler. The remaining portion comes from techniques performed at the system level:
- Memory management
- DMA management
- Interrupt management
The purpose of this presentation is to help you understand how the system-level aspects of your application can be managed to tune performance.

One Challenge
How do we get from here (unlimited memory, unlimited processing, file-based, powered from a cord) to here (limited memory, finite processing, stream-based, powered from a battery)? All formats require management of multiple memory spaces.

Format    Columns  Rows  Total Pixels  Memory Required for Direct Implementation
SQCIF     128      96    12288         L1, L2, off-chip L2
QCIF      176      144   25344         L1, L2, off-chip L2
CIF       352      288   101376        L2, off-chip L2
VGA       640      480   307200       off-chip L2
D-1 NTSC  720      486   349920        off-chip L2
4CIF      704      576   405504       off-chip L2
D-1 PAL   720      576   414720        off-chip L2
16CIF     1408     1152  1622016       off-chip L2
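The "Total Pixels" column above is simply columns x rows; a quick sketch that reproduces it, plus per-frame byte counts assuming 2 bytes/pixel (e.g., YCbCr 4:2:2 video), which shows why even the small formats quickly outgrow tens-of-KB L1 memory:

```c
/* Pixels per frame, as in the table's "Total Pixels" column. */
long total_pixels(int columns, int rows)
{
    return (long)columns * rows;
}

/* Bytes per frame assuming 2 bytes/pixel (YCbCr 4:2:2); this is an
   illustrative assumption, not a figure from the slide. */
long frame_bytes(int columns, int rows)
{
    return 2L * columns * rows;
}
```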

The Path to Optimization
Processor utilization improves step by step as you move from compiler optimization (let the compiler optimize) through system optimization (exploit architecture features, efficient memory layout, streamline data flow) to assembly optimization (use specialized instructions).

The World Leader in High Performance Signal Processing Solutions Creating a Framework

The Importance of Creating a Framework
Framework definition: the software infrastructure that moves code and data within an embedded system. Building this early in your development can pay big dividends. There are three categories of frameworks that we see consistently from Blackfin developers.

Common Frameworks
- Processing on the fly: safety-critical applications (e.g., an automotive lane-departure warning system); applications without external memory
- Programming ease overrides performance: for programmers who need to meet a deadline
- Performance supersedes programming ease: for algorithms that push the limits of the processor

Processing on the Fly
These systems can't afford to wait until large video buffers fill in external memory. Instead, data is brought into on-chip memory immediately and processed line by line. This approach can lead to quick results in decision-based systems. The processor core can directly access lines of video in on-chip memory; software must ensure the active video frame buffer is not overwritten until processing on the current frame is complete.

Example of a Processing-on-the-Fly Framework
Collision avoidance/warning: video (720 x 480 pixels, 33 msec frame rate, 63 µsec line rate) comes in via DMA to L1 memory, where the core has single-cycle access. Data is processed one line at a time instead of waiting for an entire frame to be collected.
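The 33 msec frame rate and 63 µsec line rate in the figure follow from NTSC timing; a sketch, assuming the nominal 29.97 frames/s and 525 total lines per frame:

```c
/* NTSC timing: ~29.97 frames/s, 525 total lines per frame. */
double frame_period_ms(void)
{
    return 1000.0 / 29.97;          /* ~33.4 ms per frame */
}

double line_period_us(void)
{
    return 1e6 / (29.97 * 525.0);   /* ~63.6 us per line  */
}
```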

Programming Ease Rules
Strives for the simplest programming model at the expense of some performance. The focus is on time-to-market; optimization can always be revisited later, which provides a nice path for upgrades. Easier to develop for both novices and experts.

Example of a Programming-Ease-Rules Framework
Video comes in via DMA to external memory; sub-images are moved by DMA into L2 memory, where the two cores split the data processing; processed data is moved back out by DMA. This programming model best matches time-to-market focused designs.

Performance Rules
Bandwidth-efficient: strives for the best performance, even if the programming model is more complex. Might include targeted assembly routines, and every aspect of data flow is carefully planned. Allows use of less expensive processors, because the device is right-sized for the application. Caveats: might not leave enough room for extra features or upgrades, and is harder to reuse.

Example of a Performance-Rules Framework
Video comes in via DMA, one line at a time, into L2 memory; macroblocks move by DMA between L2 and external memory; each core works on compressed data out of its own L1 memory with single-cycle access. This provides the most efficient use of the external memory bus.

Common Attributes of Each Model
Instruction and data management is best done up front in the project. A small amount of planning can save headaches later on.

Features that can be used to improve performance

Now that we have covered the background, we will review the concepts that help you tune performance:
- Using cache and DMA
- Managing memory
- Managing DMA channels
- Managing interrupts

Cache vs. DMA

Cache

Configuring Internal Instruction Memory as Cache
Example: L1 instruction memory configured as a 4-way set-associative cache. Instructions are brought from external memory into internal memory, where they can execute in a single cycle. Instruction cache provides several key benefits:
- Usually provides the highest-bandwidth path into the core
- For linear code, the next instructions are already on the way into the core after each cache miss
- The most recently used instructions are the least likely to be replaced
- Critical items can be locked in cache, by line

Configuring Internal Data Memory as Cache
Example: data brought in from a peripheral by DMA, with external memory cached. Data cache also provides a way to increase performance:
- Usually provides the highest-bandwidth path into the core
- For linear data, the next data elements are already on the way into the core after each cache miss
- The write-through option keeps source memory up to date: data is written to source memory every time it is modified
- The write-back option can further improve performance: data is only written to source memory when it is replaced in the cache
- Volatile buffers must be managed to ensure coherency between DMA and cache

Write-Back vs. Write-Through
Write-back is usually 10-15% more efficient, but it is algorithm dependent. Write-through is better when coherency between more than one resource is required. Make sure you try both options while all of your peripherals are running.
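The efficiency difference is easy to see in a toy model: with write-through, every store generates an external-memory write; with write-back, dirty data goes out only when the line is evicted or flushed. A minimal sketch (single cache line, counting only external writes; this is an illustration, not a model of the actual Blackfin cache):

```c
/* Toy model: N stores all hitting one cached line. */
typedef enum { WRITE_THROUGH, WRITE_BACK } policy_t;

long external_writes(policy_t p, long stores_to_line)
{
    if (p == WRITE_THROUGH)
        return stores_to_line;            /* memory updated on every store */
    return stores_to_line > 0 ? 1 : 0;    /* one write-back on eviction    */
}
```

The real-world gap is far smaller than this toy suggests (hence the 10-15% figure), because buffers span many lines and not every store re-hits a dirty line.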

DMA

Why Use a DMA Controller?
The DMA controller runs independently of the core: the core should only have to set up the DMA and respond to interrupts, leaving core processor cycles available for processing data. DMA also allows creative data movement and filtering, saving potential core passes to re-arrange the data. The core builds a descriptor list; the DMA controller moves data between the peripheral and buffers in internal memory, raising an interrupt when a buffer is ready.
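The descriptor list can be modeled as a linked list that the controller walks on its own. A hypothetical sketch of that model (the struct fields here are illustrative, not the actual Blackfin descriptor layout):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical DMA descriptor: where to copy from/to, how much,
   and a link to the next descriptor (NULL terminates the chain). */
struct dma_desc {
    const void      *src;
    void            *dst;
    size_t           nbytes;
    struct dma_desc *next;
};

/* Software stand-in for the controller walking the chain: the core
   only builds the list; it never touches the data itself. */
int dma_run_chain(const struct dma_desc *d)
{
    int transfers = 0;
    for (; d != NULL; d = d->next) {
        memcpy(d->dst, d->src, d->nbytes);
        ++transfers;
    }
    return transfers;
}
```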

Using 2D DMA to efficiently move image data

DMA and Data Cache Coherence
Coherency between the data cache and a buffer filled via DMA must be maintained: the cache must be invalidated to ensure stale data is not used when processing the most recent buffer. Invalidate by buffer address or by direct cache-line access, depending on the size of the buffer. Interrupts can indicate when it is safe to invalidate the buffer for the next processing interval: on each interrupt, invalidate the cache lines associated with the newly filled buffer, then process it. This often provides a simpler programming model (with less of a performance increase) than a pure DMA model.
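The "buffer address vs. actual cache line" choice comes down to how many lines you would have to walk. A sketch, assuming Blackfin's 32-byte cache line (when the count approaches the whole cache, e.g. 16 KB / 32 B = 512 lines, invalidating wholesale becomes cheaper than walking line by line):

```c
enum { CACHE_LINE_BYTES = 32 };  /* Blackfin L1 cache line size */

/* Number of cache lines that cover a DMA buffer of buf_bytes bytes. */
unsigned long lines_to_invalidate(unsigned long buf_bytes)
{
    return (buf_bytes + CACHE_LINE_BYTES - 1) / CACHE_LINE_BYTES;
}
```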

Instruction Partitioning (programming effort increases as you move down the list)
1. Does the code fit into internal memory? If yes, map it into internal memory; the desired performance is achieved.
2. If not, map the code to external memory and turn instruction cache on. Is the desired performance achieved?
3. If not, lock cache lines containing critical code, or place critical code in L1 SRAM. Is the desired performance achieved?
4. If not, use an overlay mechanism.

Data Partitioning (programming effort increases as you move down the list)
1. Is the data volatile or static? Static data can be mapped to cacheable memory locations.
2. Will the buffers fit into internal memory? If yes, map them there; single-cycle access is achieved.
3. If not, map the buffers to external memory. If DMA is not part of the programming model, turn data cache on; the desired performance is achieved.
4. If DMA is used together with cache, invalidate before each read: use the invalidate instruction if the buffer is no larger than the cache, or direct cache-line access if it is larger.

A Combination of Cache and DMA Usually Provides the Best Performance
Instruction cache (holding functions such as Func_A through Func_F) and data cache are filled at high bandwidth from off-chip memory (greater capacity but larger latency), while data SRAM buffers, tables, and the stack are serviced by high-speed DMA from high-speed peripherals. On-chip memory has smaller capacity but lower latency. Once cache is enabled and the DMA controller is configured, the programmer can focus on core algorithm development.

Memory Considerations

Blackfin Memory Architecture: The Basics
- L1 instruction and data memory: runs at the 600 MHz core clock, single-cycle access, tens of KB, configurable as cache or SRAM
- Unified on-chip L2 memory: 300 MHz, several cycles to access, hundreds of KB
- Unified external (off-chip) memory: <133 MHz, several system cycles to access, hundreds of MB
DMA moves data among all three levels.

Instruction and Data Fetches
In a single core clock cycle, the processor can perform one 64-bit instruction fetch and either two 32-bit data fetches, or one 32-bit data fetch and one 32-bit data store. The DMA controller can also run in parallel with the core without any cycle stealing.

Partitioning Data Across Internal Memory Sub-Banks
Multiple accesses can be made to different internal sub-banks in the same cycle. Un-optimized layout: core fetches and DMA land in the same sub-banks and conflict. Optimized layout: buffers and coefficients are placed so core fetches and DMA target different sub-banks, and core and DMA operate in harmony. Take advantage of the sub-bank architecture!

L1 Instruction Memory: 16 KB Configurable Bank
- Four 4 KB single-ported sub-banks; as SRAM, this allows simultaneous core and DMA accesses to different sub-banks
- As a 16 KB cache: 4-way set-associative with arbitrary locking of ways and lines; least-recently-used replacement keeps frequently executed code in cache
- Fill paths: cache-line fill over the EAB, and instruction DMA over the DCB

L1 Data Memory: 16 KB Configurable Bank
- Four 4 KB sub-banks. The block is effectively multi-ported when accesses target different sub-banks, or one odd and one even 32-bit word (address bit 2 differs) within the same sub-bank.
- When used as SRAM: allows simultaneous dual-DAG and DMA access
- When used as cache: each bank is 2-way set-associative; simultaneous dual-DAG access is allowed, but there is no DMA access
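The arbitration rule above can be captured as a small predicate; a sketch, assuming 4 KB sub-banks (so bits 13:12 of the offset within a 16 KB bank select the sub-bank, and bit 2 selects the odd/even 32-bit word):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sub-bank index for an offset within a 16 KB L1 data bank,
   assuming 4 KB sub-banks. */
static unsigned subbank(uint32_t addr)
{
    return (addr >> 12) & 0x3;
}

/* Two simultaneous accesses conflict (are serialized) only if they
   hit the same sub-bank AND the same odd/even word (bit 2 equal). */
bool accesses_conflict(uint32_t a, uint32_t b)
{
    if (subbank(a) != subbank(b))
        return false;                /* different sub-banks: parallel */
    return ((a ^ b) & 0x4) == 0;     /* same bit 2: conflict          */
}
```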

Partitioning Code and Data Across External Memory Banks
Row activation within an SDRAM consumes multiple system clock cycles, but multiple rows can be active at the same time -- one row in each of the four 16 MB banks internal to the SDRAM. Un-optimized layout: code and video frames share banks, and row-activation cycles are paid on almost every access. Optimized layout: code, the video frames, and the reference frame sit in separate banks, so activation cycles are spread across hundreds of accesses. Take advantage of all four open rows in external memory to save activation cycles!
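With four contiguous 16 MB banks, which internal bank an offset lands in is a simple address decode. A simplified sketch (real SDRAM bank-address mapping depends on the controller's row/column configuration, so treat the decode below as illustrative):

```c
#include <stdint.h>

/* Simplified decode: four contiguous 16 MB (2^24-byte) banks.
   Placing code, each video frame, and the reference frame in
   different banks keeps four rows open simultaneously. */
unsigned sdram_bank(uint32_t byte_offset)
{
    return (byte_offset >> 24) & 0x3;
}
```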

Benchmarks

Important Benchmarks
Core accesses to SDRAM take longer than accesses made by the DMA controller. For example, Blackfin processors with a 16-bit external bus behave as follows:
- 16-bit core reads take 8 system clock (SCLK) cycles
- 32-bit core reads take 9 SCLK cycles
- 16-bit DMA reads take ~1 SCLK cycle
- 16-bit DMA writes take ~1 SCLK cycle
Bottom line: data is most efficiently moved with the DMA controllers!
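Those cycle counts translate directly into sustained throughput. A sketch, assuming a 133 MHz SCLK for concreteness (the actual SCLK is application dependent): a 16-bit core read at 8 SCLKs sustains ~33 MB/s, while 16-bit DMA at ~1 SCLK sustains ~266 MB/s.

```c
/* MB/s for accesses of access_bytes bytes taking sclks SCLK
   cycles each, at the given SCLK frequency. */
double throughput_mb_s(double sclk_hz, int access_bytes, int sclks)
{
    return sclk_hz / sclks * access_bytes / 1e6;
}
```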

Managing Shared Resources

External Memory Interface: ADSP-BF561
The core has priority to the external bus unless a DMA is urgent (urgent DMA implies data will be lost if the access isn't granted). You can program this so that DMA has a higher priority than the core. Core A is higher priority than Core B, and the priority is programmable.

Priority of Core Accesses and the DMA Controller at the External Bus
A bit within the EBIU_AMGCTL register can be used to change the priority between core accesses and the DMA controller.

What Is an Urgent Condition?
- Transmit: the DMA FIFO is empty while the peripheral is transmitting (the peripheral is about to run out of data)
- Receive: the DMA FIFO is full while the peripheral has a sample to send (the sample will be lost)

Bus Arbitration Guideline Summary: Who Wins?
External memory (in descending priority):
1. Locked core accesses (TESTSET instruction)
2. Urgent DMA
3. Cache-line fill
4. Core accesses
5. DMA accesses
All DMAs can be set to urgent, which elevates the priority of all DMAs above core accesses.
L1 memory (in descending priority):
1. DMA accesses
2. Core accesses

Tuning Performance

Managing External Memory
Transfers in the same direction are more efficient than intermixed accesses in different directions. Group transfers in the same direction to reduce the number of bus turn-arounds.

Improving Performance with DMA Traffic Control
DMA traffic control improves system performance when multiple DMAs are ongoing (a typical system). Multiple DMA accesses can be grouped in the same direction -- for example, into SDRAM or out of SDRAM -- which makes more efficient use of SDRAM.

DMA Traffic Control
The DEB_Traffic_Period field has the biggest impact on performance. The correct value is application dependent, but if 3 or fewer DMA channels are active at any one time, a larger value (15) of DEB_Traffic_Period is usually better; when more than 3 channels are active, a mid-range value (4 to 7) is usually better.
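That guidance can be captured as a tiny helper; the numbers below are the slide's rule of thumb, not a hardware requirement, and the mid-range pick of 6 is an arbitrary choice from the suggested 4-7 window:

```c
/* Rule of thumb from above: few active channels -> long traffic
   period (group more same-direction transfers); many channels ->
   mid-range period so no channel is starved. */
int suggested_deb_traffic_period(int active_dma_channels)
{
    return (active_dma_channels <= 3) ? 15 : 6;
}
```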

Priority of DMAs

Priority of DMA Channels
Each DMA controller has multiple channels. If more than one DMA channel tries to access the controller, the highest-priority channel wins; channel priorities are programmable. When more than one DMA controller is present, the priority of arbitration between them is also programmable. The DMA queue manager provided with System Services should be used.

Interrupt Processing

Common Mistakes with Interrupts
Spending too much time in an interrupt service routine prevents other critical code from executing. From an architecture standpoint, interrupts are disabled once an interrupt is serviced; higher-priority interrupts are re-enabled once the return address (the RETI register) is saved to the stack. It is important to understand your application's real-time budget: How long is the processor spending in each ISR? Are interrupts nested?

Programmer Options
Program the most important interrupts as the highest priority, and use nesting to ensure higher-priority interrupts are not locked out by lower-priority events. Use the callback manager provided with System Services: interrupts are serviced quickly, and higher-priority interrupts are not locked out.

An Example

Video Decoder Budget
What's the cycle budget for real-time decoding?
(cycles/pel) x (pixels/frame) x (frames/sec) < core frequency, so cycles/pel < core frequency / (pixels/frame x frames/sec). Leave at least 10% of the MIPS for other things (audio, system, transport layers, ...). For D-1 video (720 x 480 at 30 frames/sec, ~10M pels/sec), the video decoder budget is only ~50 cycles/pel.
What's the bandwidth budget?
[encoded bitstream in via DMA] + [reference frame in via DMA] + [reconstructed MDMA out] + [ITU-R 656 out] + [PPI output] < system bus throughput. The video decoder budget is ~130 MB/s.
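Plugging in the D-1 numbers, assuming the 600 MHz core clock from the memory-architecture slide and 10% headroom, gives the ~50 cycles/pel figure:

```c
/* cycles/pel budget = usable core cycles / pel rate */
double cycle_budget_per_pel(double core_hz, double headroom_frac,
                            long pels_per_frame, double frames_per_sec)
{
    double pel_rate = pels_per_frame * frames_per_sec; /* ~10.4 Mpel/s */
    return core_hz * (1.0 - headroom_frac) / pel_rate;
}
```

For 720 x 480 at 30 frames/sec this evaluates to roughly 52 cycles/pel, consistent with the slide's ~50.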

Video Decoder Data Flow
The input bitstream lands in a packet buffer in external memory, alongside the reference frames and display frames. Decode, interpolation, motion compensation, signal processing, and format conversion work out of internal L1 memory.

Data Placement
Large buffers go in SDRAM:
- Packet buffer (~3 MB)
- 4 reference frames (each 720 x 480 x 1.5 bytes)
- 8 display frames (each 1716 x 525 bytes)
Small buffers go in L1 memory:
- VLD lookup tables
- Inverse quantization, DCT, zig-zag, etc.
- Temporary result buffers
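The parenthesized sizes check out, which is why these buffers must live in SDRAM; a sketch, assuming 4:2:0 frames (1.5 bytes/pixel) and, for the display frames, a full ITU-R 656 line of 1716 bytes including blanking:

```c
/* Reference frame: 4:2:0, i.e. 1.5 bytes/pixel. */
long ref_frame_bytes(int w, int h)
{
    return (long)(w * h * 3) / 2;
}

/* Display frame: 1716 bytes/line x 525 lines (ITU-R 656, NTSC). */
long display_frame_bytes(void)
{
    return 1716L * 525;
}
```

Four reference frames plus eight display frames come to roughly 9.3 MB, so with the ~3 MB packet buffer the SDRAM footprint is on the order of 12 MB.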

Data Movement (SDRAM to L1)
Reference frames (Y, U, V) are stored two-dimensionally in SDRAM, while reference windows are linear in L1. A 2D-to-1D DMA is used to bring in reference windows for interpolation.
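Functionally, a 2D-to-1D transfer gathers a rectangular window out of a pitched frame into a linear buffer. A software model of the access pattern the DMA controller performs in hardware (illustrative only; a real transfer is described by descriptor stride fields, not a loop):

```c
#include <stddef.h>
#include <string.h>

/* Copy a w x h window starting at (x, y) from a frame with the given
   line pitch (bytes per row) into a linear destination buffer -- the
   same access pattern a 2D-to-1D DMA descriptor would describe. */
void gather_window(unsigned char *dst, const unsigned char *frame,
                   size_t pitch, size_t x, size_t y,
                   size_t w, size_t h)
{
    for (size_t row = 0; row < h; ++row)
        memcpy(dst + row * w, frame + (y + row) * pitch + x, w);
}
```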

Data Movement (L1 to SDRAM)
All temporary results in L1 buffers are stored linearly, and all data brought in from SDRAM is used linearly. A 1D-to-2D DMA transfers each decoded macroblock back to SDRAM to build the reference frame.

Bandwidth Used by DMA
- Input data stream: 1 MB/s
- Reference data in: 30 MB/s
- In-loop filter data in: 15 MB/s
- Reference data out: 15 MB/s
- In-loop filter data out: 15 MB/s
- Video data out: 27 MB/s
Calculations are based on 30 frames per second. Be careful not to simply add up the bandwidths: shared resources, bus turnaround, and concurrent activity all need to be considered.
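Summing the figures gives a first-order load number -- with the slide's caveat that a straight sum understates the real cost once turnarounds and contention are included:

```c
/* First-order DMA load: straight sum of the per-stream rates (MB/s).
   Real bus utilization is higher once bus turnaround and
   shared-resource contention are factored in. */
int total_dma_mb_s(void)
{
    const int streams[] = { 1, 30, 15, 15, 15, 27 };
    int sum = 0;
    for (unsigned i = 0; i < sizeof streams / sizeof streams[0]; ++i)
        sum += streams[i];
    return sum;
}
```

The straight sum, 103 MB/s, fits within the ~130 MB/s budget quoted earlier, but the margin is what absorbs the turnaround and contention overhead.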

Summary
The compiler provides the first level of optimization. There are some straightforward steps you can take up front in your development that will save you time, and Blackfin processors have many features that help you achieve your desired performance level. Thank you.