SKA PHASE 1 CORRELATOR AND TIED ARRAY BEAMFORMER


Document number: WP TD 002
Revision: 1
Author: J. D. Bunton
Additional author: G. A. Hampson
Status: Approved for release
Submitted by: J. D. Bunton, CSIRO
Approved by: W. Turner, Signal Processing Domain Specialist, SPDO

DOCUMENT HISTORY

Revision A: First draft release for internal review.

DOCUMENT SOFTWARE

Word processor: MS Word. Filename: wp td ASKAP_SKA1_concept description 2003. Block diagrams: other.

ORGANISATION DETAILS

Name: SKA Program Development Office
Physical/Postal Address: Jodrell Bank Centre for Astrophysics, Alan Turing Building, The University of Manchester, Oxford Road, Manchester, UK, M13 9PL
Fax: +44 (0)
Website:

TABLE OF CONTENTS

1 INTRODUCTION
  1.1 Purpose of the document
2 REFERENCES
3 INTRODUCTION
  3.1 Scope
4 SKA SPECIFICATION
5 TECHNICAL ASSUMPTIONS
6 SPARSE APERTURE ARRAY CORRELATOR
  6.1 Architecture Overview
  6.2 Correlation Processing
  6.3 Correlator Data Reordering
  6.4 Data transport: SKA Phase 1 & 2 options
7 SPF DISH CORRELATOR
  7.1 Architecture Overview
  7.2 Extension to PAF correlator
8 TIED ARRAY BEAMFORMING IN THE CORRELATOR
  8.1 Coherent Tied Array Beamforming
  8.2 Incoherent Beamforming
  8.3 Incoherent Beams from Coherent Station beams
9 POWER DISSIPATION
10 PATH TO THE PHASE 1 CORRELATOR
11 COST
  11.1 Cost reduction
  11.2 Early Construction
12 CONCLUSION
13 ACKNOWLEDGMENTS

LIST OF FIGURES

Figure 1 "Pizza" box implementation for 1.5 AA beams. Dual FPGA implementation shown; each FPGA has 25 bi-directional 10 Gb/s links and there are 25 bi-directional 10 Gb/s links between the FPGAs.
Figure 2 Arrangement of data to the 55 correlation cells. Each group comprises data for 10 signals for a single frequency channel.
Figure 3 ASKAP's Redback2 processing board, with 4 processing FPGAs each with attached DRAM and 8 SFP+ modules.
Figure 4 Single Pixel Feed correlator board. Inputs are 10 Gb/s, inter-FPGA links dual 5 Gb/s and backplane links 2.5 Gb/s.
Figure 5 DragonFly2 digitiser and filterbank board.

LIST OF TABLES

Table 1 Summary of SKA Phase 1 correlator specifications
Table 2 ATCA class correlator board designs at CSIRO
Table 3 Estimated correlator cost

LIST OF ABBREVIATIONS

AA... Aperture Array
APERTIF... APERture Tile In Focus
ASKAP... Australian SKA Pathfinder
CMAC... Complex Multiply Accumulate
EVLA... Expanded Very Large Array
FPGA... Field Programmable Gate Array
FTE... Full Time Equivalent
GS... GigaSamples
MWA... Murchison Widefield Array
SKA... Square Kilometre Array
SKAMP... SKA Molonglo Prototype
SFP+... Small Form-factor Pluggable (10 Gb/s optical module)
SPF... Single Pixel Feed
Shelf... Also card cage, chassis, or crate of boards
ATCA... AdvancedTCA, an industry standard shelf
FX... A correlator architecture where the transform to the frequency domain (F) precedes the cross multiplies (X)

Copyright and Disclaimer

© 2010 CSIRO. To the extent permitted by law, all rights are reserved and no part of this publication covered by copyright may be reproduced or copied in any form or by any means except with the written permission of CSIRO.

Important Disclaimer

CSIRO advises that the information contained in this publication comprises general statements based on scientific research. The reader is advised and needs to be aware that such information may be incomplete or unable to be used in any specific situation. No reliance or actions must therefore be made on that information without seeking prior expert professional, scientific and technical advice. To the extent permitted by law, CSIRO (including its employees and consultants) excludes all liability to any person for any consequences, including but not limited to all losses, damages, costs, expenses and any other compensation, arising directly or indirectly from using this publication (in part or in whole) and any information or material contained in it.

1 Introduction

This memo describes a potential correlator design, based on FPGAs, that would meet the requirements of the two separate correlator systems needed for SKA Phase 1. As this document describes a design to be built some time in the future, a number of assumptions are made as to the progress of technology and its limitations. With these assumptions, viable designs based on conservative 10 Gb/s optical links are described. For the sparse aperture array a correlator system consisting of up to 320 pizza boxes is proposed. For the dish array, two standard ATCA shelves would suffice. In this memo we also describe how a tied array beamforming function could be incorporated within the correlator.

This document is part of a series generated in support of the Signal Processing CoDR, which includes the following:

Signal Processing High Level Description [11]
Technology Roadmap [12]
Design Concept Descriptions [13] through [23]
Signal Processing Requirements [24]
Signal Processing Costs [25]
Signal Processing Risk Register [26]
Signal Processing Strategy to Proceed to the Next Phase [27]
Signal Processing CoDR Review Plan [29]
Software & Firmware Strategy [30]

1.1 Purpose of the document

The purpose of this document is to provide a concept description as part of a larger document set in support of the SKA Signal Processing CoDR. It provides a bottom up perspective of correlation for the different receptor types proposed for the SKA. This document has been produced in accordance with the Systems Engineering Management Plan and the Signal Processing PrepSKA Work Breakdown document and includes:

First draft block diagrams of the relevant subsystem
First draft estimates of cost
First draft estimates of power

At present, details on reliability have not been included. SKA Memo 125 and the DRM have been used as the baseline for best information on system parameters while the Systems Requirement Specification (SRS) is being created.

2 References

[1] Dewdney et al., SKA Phase 1: Preliminary System Description, SKA Memo 130, Nov 2010
[2] Bunton, J.D., SKA Strawman Correlator, SKA Memo 126, Aug 2010
[3] Iguchi, S., Okumura, S.K., Okiura, M., Momose, M., Chikada, Y., 4 Gsps 2 bit FX Correlator with point FFT, URSI General Assembly 2002, Maastricht, August 2002, paper 970. Available at:
[4] DeBoer, D.R., et al., Australian SKA Pathfinder: A High-Dynamic Range Wide-Field of View Survey Telescope, Proceedings of the IEEE, Sept 2009
[5] Kooistra, E., RadioNet FP7: UniBoard, CASPER Workshop 2009, Cape Town, Sept 29, 2009
[6] ITRS, International Technology Roadmap For Semiconductors, 2010 Update, Overview. Available at:
[7] De Souza, L., Bunton, J.D., Campbell Wilson, D., Cappallo, R., Kincaid, B., A Radioastronomy Correlator Optimised for the Virtex 4 SX FPGA, IEEE 17th International Conference on Field Programmable Logic and Applications, Amsterdam, Netherlands, Aug 27-29, 2007
[8] Hussein, J., Klein, M. and Hart, M., Lowering Power at 28nm with Xilinx 7 Series FPGAs, Xilinx White Paper 389, Feb 2011. Available at: 28nm.pdf
[9] Stratix V Device Handbook, Altera. Available at: v/stratix5_handbook.pdf
[10] Xilinx 7 Series Product Brief. Available at: Series Product Brief.pdf
[11] Signal Processing High Level Description, WP TD 001 Rev D
[12] Signal Processing Technology Roadmap, WP TD 001 Rev D
[13] Software Correlator Concept Description, WP td 001 Rev A
[14] GSA Correlator Concept Description, WP td 001 Rev A
[15] ASKAP Correlator Concept Description, WP td 001 Rev A
[16] UNIBOARD Concept Description, WP td 001 Rev B
[17] CASPER Correlator Concept Description, WP td 001 Rev A
[18] ASIC Based Correlator for Minimum Power Consumption Concept Description, WP td 001 Rev B
[19] SKADS Processing, WP td 001 Rev A
[20] Central Beamformer Concept Description, WP td 001 Rev A
[21] Station Beamformer Concept, WP td 001 Rev A
[22] SKA Non Imaging Processing Concept Description: GPU Processing for Real Time Isolated Radio Pulse Detection, WP td 001 Rev A
[23] A Scalable Computer Architecture For On Line Pulsar Search on the SKA, WP td 002 Rev A
[24] Signal Processing Requirement Specification, WP SRS 001 Rev B
[25] SKA Signal Processing Costs, WP TD 001 Rev C
[26] Signal Processing Risk Register, WP RE 001 Rev A
[27] Signal Processing Strategy to Proceed to the Next Phase, WP PLA 001 Rev A

[28] Signal Processing CoDR
[29] CoDR Review Plan, WP PLA 001 Rev B
[30] Software and Firmware Strategy, WP PLA 001 Rev A

3 Introduction

3.1 Scope

The correlator design considers a 4+4 bit correlator, and we include some discussion of the filterbanks needed to implement the FX (frequency based cross multiplier) correlator. However, the filterbank is not included in the final costings presented here, as it is considered to be part of the beamformer system in the case of the aperture array, or is located at the dish antenna for the dish array.

4 SKA SPECIFICATION

The specifications for SKA Phase 1 are set out in SKA Memo 130 [1] and comprise two antenna arrays: a sparse aperture array and a dish array. The sparse aperture array covers the frequency range from 0.07 to 0.45 GHz; each of the 50 aperture array stations generates 480 full bandwidth beams. The dish array comprises 250 dishes, each equipped with two octave band single pixel feeds (SPF). One SPF covers 0.45-1 GHz and the other 1-2 GHz. These specifications are given in Table 1 together with the compute rate of the corresponding correlator. The compute rate is given in terms of complex multiply accumulate operations per second (CMAC/s) as the input to the correlator is complex data.

                                           Number of antennas   Processed bandwidth (GHz)   Beams/antenna   Correlation (TeraCMAC/s)
Sparse Aperture Array                      ...                  ...                         ...             ...
Dish with SPF, 1-2 GHz frequency coverage  ...                  ...                         ...             ...
Dish with SPF, 0.45-1 GHz frequency coverage  ...               ...                         ...             ...

Table 1 Summary of SKA Phase 1 correlator specifications

It can be seen that the major compute load is from the sparse aperture array, with the high frequency SPF system being only about one seventh of the compute load.

5 TECHNICAL ASSUMPTIONS

To generate a base design we define some guiding assumptions. The following are largely taken from SKA Memo 126 [2] but adapted to suit SKA Phase 1.

Assumption 1: Data Precision. The ADC resolution for the SPF systems is ~8 bits. Correlator data precision is 4 bits. While the lower bandwidth systems on PAFs or on an AA may have more ADC resolution, it is assumed that beamforming will occur at the antenna and that subsequent data transport to the correlator will be at 4 bit precision.

Assumption 2: FFT Length. The maximum length of an FFT or polyphase filterbank is ~1000 frequency bins within an FPGA or ASIC. As the size of an FFT increases, the computational load increases as log(N) but the memory is proportional to N. Eventually the memory dominates. When this occurs it is better to use external memory and process long FFTs in two stages. An example is the ~256 thousand point FFT

implemented for the ALMA compact array correlator [3]. In that design an ASIC implements a 512 point FFT; two such ASICs with a corner turner memory in between are used to implement the long FFT. Synthesis on FPGAs shows that a ~2000 point polyphase filterbank uses similar percentages of the block RAM and multiplier resources. Beyond this the design is dominated by memory, leading to under-utilisation of multiplier and logic resources.

Assumption 3: Inputs per Shelf. A single shelf (crate, chassis or card cage) is limited to ~256 optical connections (16 boards with 16 inputs each). Both the ASKAP beamformer [4] and a full UniBoard system [5] are designed close to this limit. Digital optical modules are usually sold with both a transmitter and a receiver, so this gives ~256 optical inputs and 256 outputs. It is assumed that the input and output of an optical module can be used for independent data paths, for example a correlator input and a tied array beam output. This can halve the number of expensive optical modules. It will be seen that data transport is a major correlator cost, so this optimisation is vital. However, use of optical systems with independent inputs and outputs may preclude the use of commercial switches.

Assumption 4: Optical Fibre Transmitters. Data transport to the correlator will be on fibres, and for SKA Phase 1 it is assumed that this data is transported at 10 Gb/s per fibre. 100GE transport may be an option, but it is as yet uncertain whether this technology will have reached a sufficient level of maturity and affordability for SKA Phase 1. For long haul links, individual single mode fibre transmitters are used. For short haul links up to 100 m, multimode fibre can be used. This allows the use of optical transmitters that illuminate 12 fibre ribbon cables, giving a data rate of up to 120 Gb/s per transmitter.

Assumption 5: FPGA Capabilities. We choose to build the correlator out of midsized FPGAs. The largest FPGAs come at a premium in terms of cost per unit of processing. Furthermore, the largest FPGAs are always the last to be released, which further reduces their usefulness. Next generation midsized FPGAs are due for release in 2011, with production quantities on the market about a year later. The midsized FPGAs in this generation have ~2000 multipliers that clock at close to 400 MHz. The ITRS roadmap [6] shows another generation 2 years later, after which it is expected to become a 3 year cycle. Hence, in 2017 it is expected that midsized production FPGAs will have ~8000 multipliers. These FPGAs should clock at 450 MHz in correlator applications. This gives the FPGA a processing capacity of 3.6T 18-bit multiplies per second. Each multiplier can be used as a complex 4+4 bit multiplier [7], so this translates to 3.6 TCMAC/s per FPGA, which equates to 29T arithmetic operations per second.

Assumption 6: FPGA Power Dissipation. Power dissipation of current generation FPGAs is ~10 W when the device utilisation is at the level expected for the ASKAP beamformer and correlator. Xilinx claim that next generation FPGAs will consume half the power for the same logic functionality [8]; thus a 2000 multiplier next generation FPGA is expected to dissipate ~10 W. Xilinx also achieved a halving of power dissipation between Virtex 5 and Virtex 6. If this trend continued, a 2017 generation FPGA with 8000 multipliers would also dissipate ~10 W. Instead, it is assumed that the power per unit of logic will decrease by a factor of 0.7 for each subsequent generation of FPGA. This would see the midsized 2017 FPGA dissipating ~20 W.

A summary of the assumptions is:
1. Input to the correlator has 4 bit precision.
2. Maximum FFT length implemented internally to an FPGA is ~2000 points.
3. A single hardware shelf supports ~256 optical transceivers.

4. Data transmission at 10 Gb/s per fibre: individual transmitters for long haul, fibre ribbon transmitters for short haul.
5. A midsized production FPGA has ~8000 18-bit multipliers in 2017.
6. Power dissipation of a midsized FPGA in 2017 is ~20 W.

6 SPARSE APERTURE ARRAY CORRELATOR

6.1 Architecture Overview

For the sparse aperture arrays, beamforming occurs at each station. The preferred beamforming technique first decimates the bandwidth from the antenna into a number of frequency channels and then beamforms within each frequency channel. We propose that the data is decimated to its final frequency resolution of 1 kHz at this location and then quantised to 4 bit resolution before being transported to the correlator. The data rate for a single polarisation beam from a station is ~380 MSample/s x 8 bits/sample, or 3.04 Gb/s; each sample is a 4+4 bit complex number. Thus each 10 Gb/s link can transport the data for three single polarisation beams. The station has 480 dual polarisation beams, each with a bandwidth of 380 MHz. To transport this data 320 single mode fibres are required. This is a large capacity, but is considered reasonable when compared to ASKAP's [4] data transport capacity from each of its antennas on 192 fibres.

There are 50 sparse aperture array stations. Assumption 3 puts a limit of 256 on the number of fibres into a single correlator shelf. Thus, each shelf can take as input 5 fibres from each of the 50 stations. The resulting correlator would have 64 shelves, and each shelf has a processing load of 14 TCMAC/s. Using Assumption 5, this computing load can be handled by just 4 midsized FPGAs. A standard AdvancedTCA shelf, or similar, is not a good fit to these requirements. It is better to consider the processing load imposed by a single fibre from each of the 50 stations. Each fibre is configured to carry data for 1.5 dual polarisation beams: the full bandwidth for one beam and half the bandwidth for another. The compute load is then 1/320 of the total, or 2.8 TCMAC/s, which can be processed by a single FPGA. FPGAs with sufficient I/O bandwidth have already been announced [9][10].
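To make the link and shelf counts above easy to check, the arithmetic is reproduced in the short Python sketch below. It is illustrative only; every constant in it is one of the assumptions stated in the text.

    # Fibre and shelf count for the sparse AA correlator (Section 6.1).
    # Illustrative only; all inputs are the assumptions stated in the text.
    import math

    n_stations      = 50
    beams_per_stn   = 480                  # dual polarisation beams per station
    beam_bw_hz      = 380e6
    bits_per_sample = 8                    # 4+4 bit complex samples

    # Data rate of one single-polarisation beam and beams per 10 Gb/s link.
    beam_rate_bps  = beam_bw_hz * bits_per_sample                    # 3.04 Gb/s
    beams_per_link = int(10e9 // beam_rate_bps)                      # 3 single-pol beams
    fibres_per_stn = math.ceil(2 * beams_per_stn / beams_per_link)   # 320 fibres

    # Assumption 3: at most ~256 fibres per shelf -> 5 fibres per station per shelf.
    fibres_per_shelf_per_stn = 256 // n_stations                      # 5
    n_shelves = math.ceil(fibres_per_stn / fibres_per_shelf_per_stn)  # 64 shelves

    # One fibre from every station carries 1.5 dual-pol beams, i.e. 1/320 of the
    # total load, or ~2.8 TCMAC/s (as stated in the text) -- within one midsized FPGA.
    per_fibre_tcmacs = 2.8
    print(f"fibres per station: {fibres_per_stn}, shelves: {n_shelves}")
    print(f"per-fibre load: {per_fibre_tcmacs} TCMAC/s "
          f"({per_fibre_tcmacs / 3.6:.0%} of one 3.6 TCMAC/s FPGA)")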

Figure 1 "Pizza" box implementation for 1.5 AA beams. Dual FPGA implementation shown; each FPGA has 25 bi-directional 10 Gb/s links and there are 25 bi-directional 10 Gb/s links between the FPGAs.

The physical implementation of the correlator could be in pizza box sized modules, Figure 1, with 25 dual height SFP+ modules to accept the beam data. The outputs of the SFP+ modules are routed to the processing FPGAs within the box. A two FPGA implementation is shown in Figure 1, which includes a separate control unit, either FPGA or DSP, that communicates via gigabit Ethernet. With the two FPGA solution, half the input bandwidth to one FPGA must be transported to the other; this is implemented on the 25 10 Gb/s links between the FPGAs. Download of correlations from the pizza box is assumed to be implemented via the optical modules. If the correlator implements a 1.2 second correlation on 1 kHz resolution data then ~1/4 of the transmitters are needed to download the correlation data.

6.2 Correlation Processing

Using the correlation cell proposed in [7], a single multiplier can correlate one group of 16 antenna signals against another group of 16. The cell processes 256 correlations at any one time and on average processes a bandwidth equal to the clock rate divided by 256. For the aperture array correlator, groups of 10 are a better choice: exactly 10 groups of 10 are needed to cover the 100 inputs from the antennas (50 stations, dual polarisation). Each correlation cell can be considered as forming full Stokes parameters for 25 baselines. Some cells form correlations between identical sets of inputs. It is these cells that calculate the autocorrelations, but there is some inefficiency because cross correlations are duplicated. Because of these inefficiencies and the added autocorrelations, 55 correlation cells are needed to form all correlations simultaneously for a single frequency channel. The arrangement of inputs to the 55 cells is shown in Figure 2. The cells with two sets of identical inputs are located along the diagonal.

Figure 2 Arrangement of data to the 55 correlation cells. Each group comprises data for 10 signals for a single frequency channel.

The FPGA for SKA Phase 1 has ~8000 multiplier/correlation cells, allowing the FPGA to process all correlations for 145 individual frequency channels at one time. The 1.5 beams processed by the FPGA have 570,000 1 kHz frequency channels. These 570,000 channels are divided into 3,931 sets of 145 channels. A single time sample for a 1 kHz channel must be processed each 1 ms, implying a processing time of 0.254 μs for a single time sample in one of the 3,931 sets. The correlation cell takes 100 clock cycles to process all data for a single time sample, so the FPGA must clock at 393 MHz to process the data. The class of FPGAs discussed in Assumption 5 is capable of comfortably achieving this clock rate. This implies a single FPGA per pizza box is sufficient to implement the correlator. In a later section, the added capability of two FPGAs allows coherent tied array beamforming as well.

6.3 Correlator Data Reordering

The correlation cell stores only one set of correlations at any one time. Ideally, the cell should process all time samples needed to calculate the correlation for a single frequency channel. These correlations are then dumped to the imaging system while the correlation cell processes data for the next frequency channel. The minimum dump time of the correlator is 1.2 s [1]. In a 1.2 second integration at 1 kHz resolution, the correlation cell processes 1,200 time samples. To achieve this order of processing it is necessary to store 1,200 time samples for 380,000 frequency channels of the 480 dual polarisation beams per antenna station. Each time sample is a single byte, giving a total data set size of 440 GB, or 880 GB with double buffering. In 2017, 32 GB DRAM modules should be commodity items and the 880 GB can be stored in 28 of these.

The I/O bandwidth required is 0.38 GHz x 1 byte/Hz x 480 beams x two polarisations, which gives ~365 GByte/s for read and the same for write, a total of ~730 GByte/s. In 2017 the clock rate for commodity DRAM modules is expected to be ~5 GHz with 8 bytes transferred per transaction, giving a possible I/O bandwidth of 40 GByte/s per module. Thus, the 28 DRAMs have sufficient I/O bandwidth. The 28 DRAMs per antenna station can be located at the antenna station or at the correlator. The total for all 50 stations is 1,400 DRAMs. At the correlator, this can be implemented by adding three DRAMs to each of the two FPGAs in a correlator pizza box.
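The channel scheduling and reordering buffer figures above follow from a few lines of arithmetic, reproduced in the sketch below. Again this is purely illustrative; the constants are the assumptions and design choices stated in Sections 6.2 and 6.3.

    # Per-FPGA scheduling and reordering buffer for the AA correlator
    # (Sections 6.2 and 6.3). Illustrative arithmetic only.

    groups         = 10                              # 100 inputs in groups of 10
    cells_per_chan = groups * (groups + 1) // 2      # 55 correlation cells per channel
    multipliers    = 8000                            # midsized 2017 FPGA (Assumption 5)
    chans_at_once  = multipliers // cells_per_chan   # 145 channels processed concurrently

    beam_bw_hz = 1.5 * 380e6                         # 1.5 dual-pol beams per fibre/FPGA
    chan_bw_hz = 1e3                                 # 1 kHz final resolution
    n_channels = beam_bw_hz / chan_bw_hz             # 570,000 channels
    n_sets     = n_channels / chans_at_once          # ~3,931 sets of 145 channels

    sample_period_s = 1.0 / chan_bw_hz               # one sample per channel per 1 ms
    time_per_set_s  = sample_period_s / n_sets       # ~0.25 us per set
    clock_hz        = 100 / time_per_set_s           # 100 clocks per cell per time sample

    # Reordering store: 1,200 samples x 380,000 channels x 480 dual-pol beams x 1 byte.
    buffer_bytes = 1200 * 380_000 * 480 * 2

    print(f"cells per channel: {cells_per_chan}, channels at once: {chans_at_once}")
    print(f"required clock: {clock_hz / 1e6:.0f} MHz")                            # ~393 MHz
    print(f"reorder buffer (double buffered): {2 * buffer_bytes / 1e9:.0f} GB")   # ~880 GB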

6.4 Data transport: SKA Phase 1 & 2 options

The design given above assumed 10 Gb/s data links, but 28 Gb/s SERDES transceivers are already promised on next generation FPGAs. If matching optical transceivers are made available in the same form factor as SFP+ modules, then the number of optical fibres from a sparse AA station could be reduced to ~120. Each pizza box unit would then house five midsized FPGAs with interconnection between the FPGAs. This would be a simple extension of the design shown in Figure 1. It is more likely that 40 Gb/s and eventually 100 Gb/s links will become available. At these data rates a multi-board correlator design may be needed for the SKA Phase 1 aperture array correlator.

7 SPF DISH CORRELATOR

7.1 Architecture Overview

Assuming that the filterbank operations occur at the dish, as implemented within ASKAP, the data from the dish at 1 GHz bandwidth is 16 Gb/s for two polarisations at 4+4 bit complex data resolution. To transport this data, two 10 Gb/s links are needed, each carrying data for 500 MHz for the two polarisations. There are 250 antennas in the array, so by Assumption 3 a single shelf of boards can take a single fibre from each antenna. The processing requirement for the 500 MHz of bandwidth on the fibres into a single shelf is 63 TCMAC/s. A single FPGA can implement ~8000 correlation cells at 0.45 GHz, or 3.6 TCMAC/s, so the correlation requirement for the shelf can be implemented in 18 FPGAs. To accept the 256 optical inputs the shelf would consist of 16 boards, each housing two FPGAs, which may have fewer than 8000 multipliers.

ASKAP is currently using the Redback2 processing board shown in Figure 3, which comprises four processing FPGAs. Four of the next generation FPGAs could provide a board with the required compute capacity for the SPF SKA Phase 1 correlator. Current generation FPGAs have limited high speed I/O; however, with next generation FPGAs a direct implementation of the current design would look like Figure 4. The current board has 8 SFP+ modules on the front panel. The extra two large chips on the board are a control FPGA and a cross point switch. The cross point switch would not be required in future generations of this board as the input data is switched by the FPGAs.

Figure 3 ASKAP's Redback2 processing board, with 4 processing FPGAs each with attached DRAM and 8 SFP+ modules.

Using the ATCA standard, each board would have 16 single height SFP+ modules on the front. These would be used for data input and also for correlator output. The 16 inputs per board give the system the ability to handle 256 inputs. This could be 250 antennas and 6 dual polarisation RFI signals. The use to which the RFI signals can be put is not considered here.
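The shelf sizing above is a one-line calculation, reproduced below for convenience. It is a sketch under the stated assumptions, not a design tool.

    # Shelf sizing for the SPF dish correlator (Section 7.1).
    # Illustrative only; inputs are the assumptions stated in the text.
    import math

    n_ant      = 250                  # dishes, one fibre per antenna into the shelf
    n_sig      = 2 * n_ant            # dual polarisation signals
    bw_hz      = 500e6                # bandwidth carried by each fibre
    fpga_cmacs = 3.6e12               # ~8000 cells at 0.45 GHz (Assumption 5)

    # Full correlation including autocorrelations: N(N+1)/2 complex products
    # per sample, with one complex sample per Hz of bandwidth per second.
    cmacs_per_s = bw_hz * n_sig * (n_sig + 1) / 2
    n_fpgas     = math.ceil(cmacs_per_s / fpga_cmacs)

    print(f"correlation load per shelf: {cmacs_per_s / 1e12:.0f} TCMAC/s")   # ~63
    print(f"processing FPGAs required:  {n_fpgas}")                          # 18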

Figure 4 Single Pixel Feed correlator board. Inputs are 16 x 10 Gb/s optical modules, inter-FPGA links are dual 5 Gb/s, each of the four FPGAs has 15 x 2.5 Gb/s backplane links, and gigabit Ethernet provides monitor and control.

The 16 inputs route to the four FPGAs, which then route the data to the backplane. After the backplane routing, each FPGA has 31.25 MHz of data from 64 antennas. For simplicity, the input bandwidth is taken as 512 MHz, so each FPGA has data for 32 MHz. A second stage of data redistribution is now needed between the four FPGAs; this is done on the dual 5 Gb/s links between FPGAs. After this data redistribution, each FPGA has 8 MHz of data for all 256 inputs. In an implementation with one processing FPGA there would still be 16 10 Gb/s inputs and 60 connections to the backplane, but no inter-FPGA links, giving 76 high speed connections to the FPGA. It is interesting to note that the next generation FPGAs [9][10] approach this I/O capacity, but unfortunately they lack the required computing capacity.

As with the AA correlator, the correlator board will process a limited number of fine frequency channels at a time. In this case the inputs could be broken up into 32 groups of 16, which requires 528 correlation cells to process a single frequency channel at one time. If the cell clocks at 400 MHz and takes 256 (16 x 16) clock cycles to process a single data set, then each group of 528 multipliers can process fine frequency channels for ~1.56 MHz of bandwidth. To process the 500 MHz bandwidth that comes into the 16 processing boards in each shelf requires 320 groups of cells. Across the 64 FPGAs in the shelf this translates into 2,640 multipliers per FPGA for a 4 FPGA board, or 5,280 multipliers per FPGA if there are just two processing FPGAs per board. As the frequency resolution could be as fine as ~1 kHz, each group of 528 correlation cells could process 1,563 fine frequency channels; however, there is insufficient memory in the FPGAs to store these correlations. One solution is to add external memory and accumulate the data to memory. At

a 400 MHz clock rate the 2,640 correlation cells generate 2,640 x 256 correlations, or 67,580 correlations, every complete integration cycle. If the integration is over 512 time samples this occurs every 328 μs. If the complex correlation data is accumulated to dual 32 bit words then a 64 bit word must be read and written every 4.8 ns. Four times this data rate is achievable with current commodity DDR3 DRAMs, so a single DRAM DIMM is needed with every FPGA in either the four or two FPGA design.

As with the sparse AA correlator, it is assumed that the filterbanks are physically located at the antenna. A board with this level of capability is the ASKAP DragonFly2 board illustrated in Figure 5. An essential part of this board is the FPGA, which provides the interface between the ADCs on the right and the SFP+ 10 Gb/s optical modules on the left. In ASKAP it was reasoned that, as the FPGA was needed for the interface, it could be utilised to do some intermediate processing: within ASKAP this FPGA implements the coarse filterbank. Surprisingly, the RFI generated by this board was so low that it was difficult to measure in the laboratory. For SKA Phase 1 it would be desirable for the FPGA to also implement the fine filterbank. The advantage of this approach is that a standard 8 bit ADC can be used at the antenna and truncation to 4+4 bits for the correlator occurs at only one place: after the fine filterbank. The implementation of the fine filterbank would require the addition of RAM to the board to store the coarse filterbank data. This may raise the RFI levels, which might require a dual board design with the RAM isolated from the ADC. In either case, the system is quite compact. Note that implementing the filterbank at the antenna does not significantly impact the reliability of the dish hardware as a system, as the digitising is done at the antenna in any case.

Figure 5 DragonFly2 digitiser and filterbank board.

7.2 Extension to PAF correlator

Assuming that beamforming for a PAF occurs at the antenna, then to the correlator it appears that the antenna is simply generating more 500 MHz bandwidth signals. Each one of these will need a correlator shelf. Consider, for example, a PAF beamformer that generates 36 dual polarisation beams, each with a bandwidth of 500 MHz. This has an order of magnitude higher survey speed than the 1-2 GHz SPF. The correlator hardware required for this comprises 36 correlator shelves. It is possible to fit three shelves in an equipment cabinet, so the full system has 12 cabinets.
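The two-stage channelisation mentioned above (a coarse filterbank at the antenna followed by a fine filterbank on buffered coarse-channel data, as in Assumption 2 and the ALMA ACA example [3]) can be sketched in a few lines. The code below is a bare FFT illustration with arbitrary sizes and no polyphase windowing or oversampling, so it shows only the structure, not the filterbank an implementation would actually use.

    # Two-stage channelisation: coarse filterbank, corner turn, fine filterbank.
    # Bare FFTs with arbitrary sizes; illustrative structure only.
    import numpy as np

    n_coarse  = 512        # coarse channels (cf. the 512 point ASIC FFT in [3])
    n_fine    = 512        # fine channels per coarse channel
    n_spectra = 4          # number of fine spectra to form (arbitrary)

    rng = np.random.default_rng(1)
    n_samples = n_coarse * n_fine * n_spectra
    x = rng.standard_normal(2 * n_samples).view(np.complex128)   # complex time series

    # Stage 1: block the time series and FFT each block -> one sample per
    # coarse channel per block.
    coarse = np.fft.fft(x.reshape(-1, n_coarse), axis=1)         # (blocks, coarse chan)

    # Corner turn: gather the time series of each coarse channel contiguously
    # (this is the external "corner turner" memory between the two stages).
    per_channel = coarse.T                                       # (coarse chan, blocks)

    # Stage 2: fine filterbank within each coarse channel.
    fine = np.fft.fft(per_channel.reshape(n_coarse, n_spectra, n_fine), axis=2)
    print(fine.shape)      # (coarse channels, fine spectra, fine channels)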

8 TIED ARRAY BEAMFORMING IN THE CORRELATOR

8.1 Coherent Tied Array Beamforming

The inputs to the correlator are likely to be commodity optical modules such as SFP+, which are designed as bidirectional units. Assuming 10^5 frequency channels in the SPF correlator and 125,000 correlations per channel, the correlator output data rate with 1 second correlations is 600 Gb/s. This uses 60 of the ~500 optical outputs, leaving 440 free for other purposes, such as outputting the data from tied array beamforming implemented in the correlator. If coherent tied array beam data is also 4+4 bit complex data then the 440 outputs are sufficient for 220 dual polarisation beams. To a first approximation, there are as many tied array beams as antennas.

For the AA correlator, one quarter of the optical outputs are used for transporting correlation data, leaving 38 per pizza box for tied array data. Each output can transport data for 1.5 beams, so each pizza box can output 57 beams. Over the 320 pizza boxes this gives ~18,000 tied array beams. In both cases the number of tied array beams (per antenna beam) is approximately equal to the number of antennas.

To form a coherent tied array beam output, a single complex time sample from each antenna is multiplied by a complex weight (usually corresponding to a given phase) and all the data summed. The number of complex multiply accumulate operations (CMACs) is 2 x (number of tied array dual polarisation beams) x (number of inputs summed per beam). Both these factors are approximately equal to the number of antennas N_ant, assuming the two polarisations are summed separately. Hence, the number of CMAC operations is 2N_ant^2. For the correlator, the same number of inputs requires 2N_ant(N_ant + 1) CMACs if autocorrelations are also calculated. The two compute requirements are approximately equal and require the same number of multipliers, assuming 4 bit complex beam weights. (4 bit weights introduce up to 3.5° of phase error and 6% amplitude error; analysis is needed to determine whether this is sufficiently accurate.)

If the beamforming is done in the correlator then the added cost of generating N_ant tied array beams doubles the compute load, requiring a doubling of FPGA resources. This doubling of FPGA resources has already been factored into the designs for both the AA and SPF correlators. If more than 220 SPF tied array beams or 18,000 AA tied array beams are needed, the signals from the antennas must be duplicated, possibly in an optical splitter, and the correlator/tied array beamformer systems duplicated.

After beamforming, the data is at the frequency resolution of the correlator. As the channelised data is produced by analysis polyphase filterbanks, the beam data can be brought back to the original sample rate (frequency resolution) by a matching synthesis polyphase filterbank. An oversampling analysis polyphase filterbank is needed if the errors in the resampled data are to be kept low.

8.2 Incoherent Beamforming

Full incoherent beamforming generates one beam per antenna beam and the data is generated as power averaged over small time intervals, usually 1 ms or less. The total data rate is much lower than that for coherent beams. The internal processing within the correlator is similar to that for a single coherent beam: instead of multiplying the input data by a 4+4 bit weight, it is multiplied by the conjugate of itself.
This adds very little to the compute load compared to that of the correlator.
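The coherent and incoherent operations described above amount to a weighted sum and a self-power sum respectively. The numpy sketch below illustrates both for a single frequency channel and polarisation; it uses arbitrary sizes and full-precision arithmetic, ignoring the 4+4 bit quantisation of data and weights discussed in the text.

    # Coherent tied-array beam versus incoherent beam (Sections 8.1 and 8.2).
    # Illustrative only: arbitrary sizes, no 4-bit quantisation.
    import numpy as np

    rng    = np.random.default_rng(2)
    n_ant  = 250                      # antennas (single polarisation shown)
    n_samp = 1000                     # time samples of one fine frequency channel

    # Complex voltage samples: one row per antenna.
    x = rng.standard_normal((n_ant, n_samp)) + 1j * rng.standard_normal((n_ant, n_samp))

    # Coherent tied-array beam: multiply each antenna by a complex weight (here a
    # pure phase steering the beam) and sum over antennas -- one CMAC per antenna
    # per sample per beam, hence ~2*N_ant**2 CMACs for N_ant dual-pol beams.
    w = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_ant))
    coherent_beam = (w[:, None] * x).sum(axis=0)                 # shape (n_samp,)

    # Incoherent beam: multiply each sample by its own conjugate (detect power)
    # and sum over antennas, then average over a short interval (~1 ms).
    incoherent_power = (x * x.conj()).real.sum(axis=0).mean()

    print(coherent_beam.shape, float(incoherent_power))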

8.3 Incoherent Beams from Coherent Station beams

Coherent beams have the highest sensitivity and incoherent beams the largest field of view. Intermediate between these two are beams formed by incoherent summation of station beams. This is a strong possibility for the dish array and is what happens in practice for incoherent beams generated from AA station beams. The dish array is broken up into a number of stations, or sub-arrays, all of a similar physical size. Within each station coherent beamforming is performed; this is followed by incoherent summation of the corresponding station beams. The compute cost of forming a station beam, for all stations, is equal to that of forming a coherent tied array beam, as it is still necessary to multiply each input sample by a complex weight. However, each summation is now only over the individual station. The station beams are then incoherently summed.

In the section on coherent beamforming, the number of beams generated was limited to a number equal to the number of antennas; this approximately doubled the FPGA resources in the correlator. It is assumed the same limitation applies here. The field of view of a station beam is approximately the field of view of the dish, FoV_dish, multiplied by the filling factor F_fill and divided by the number of antennas in the station, N_station. Thus the fraction of FoV_dish that can be covered by the N_ant beams is F_fill x N_ant / N_station. A high station filling factor would be 0.1. Hence the full FoV_dish could be covered if N_station = 25 for SKA Phase 1. This all assumes that all stations have a similar antenna layout. However, some antenna configurations have stations with antennas arranged in lines at various angles. This will greatly reduce the common field of view and increase the number of beams needed to cover the field of view of the dish.

The data output requirements for incoherent data are much lower than those for coherent beams. Typically the data might be 1 ms power integrations over 1 MHz bands. This reduces the data rate by a factor of 1000. Even allowing for an increase in precision from 4 to 32 bits, the bit rate is still 1/125 of that for coherent beams. This can be transmitted on two 10 Gb/s links from each of the correlator shelves in the SPF dish correlator.

9 POWER DISSIPATION

The inclusion of beamforming in the same subsystem as the correlator doubles the number of FPGAs and hence the power requirements. The following assumes the correlator also generates tied array beams.

For the AA correlator, the inclusion of beamforming for 50 beams results in a system that has 640 FPGAs, assuming 2017 class FPGAs are used. From Assumption 6, each FPGA would dissipate ~20 W. If the power supplies for the FPGAs are 80% efficient then the dissipation attributable to the FPGAs is 16 kW. The power dissipation of the AA beamformer and fine filterbank is out of scope for this document. The optical transceivers are currently ~1 W per link. By 2017 the expectation is that the power per link will more than halve, resulting in an optical module dissipation of ~8 kW. This brings the total power to 24 kW.

For the SPF dish correlator, the FPGA power dissipation can be calculated by noting that the compute requirements are about one seventh of those for the AA correlator. This gives a power dissipation of just over 2.28 kW for the shelves and 250 W for the optical modules, giving a correlator dissipation of ~2.5 kW. For the SPF dish correlator the ADC and filterbank systems are also needed.
The main components are a high speed dual channel ADC, an FPGA and DRAM. With today's technology this is ~10 W, and by 2017 it is expected to be less than ~3 W. This adds ~750 W (250 antennas x ~3 W) to the power dissipation. Optical modules add another ~250 W of dissipation, taking the total power dissipation to ~3.5 kW for the entire system.
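The power figures above are reproduced by the short calculation below; it is illustrative only, and every input is one of the assumptions already stated (20 W per 2017 FPGA, 80% supply efficiency, ~0.5 W per optical link in 2017, ~3 W per antenna ADC/filterbank unit).

    # Power budget for the 2017-generation correlators (Section 9).
    # Illustrative only; all inputs are assumptions stated in the text.

    fpga_w         = 20.0       # midsized 2017 FPGA (Assumption 6)
    psu_efficiency = 0.80
    link_w_2017    = 0.5        # ~1 W per optical link today, halving by 2017

    # Sparse AA correlator: 320 pizza boxes x 2 FPGAs, 16,000 10 Gb/s links.
    aa_fpga_kw   = 320 * 2 * fpga_w / psu_efficiency / 1e3       # 16 kW
    aa_optics_kw = 16_000 * link_w_2017 / 1e3                    # 8 kW
    print(f"AA correlator: {aa_fpga_kw:.0f} + {aa_optics_kw:.0f} "
          f"= {aa_fpga_kw + aa_optics_kw:.0f} kW")

    # SPF dish correlator: ~1/7 of the AA compute, 500 optical modules, plus
    # 250 antenna-based ADC/filterbank units at ~3 W each and their optics.
    dish_fpga_kw   = aa_fpga_kw / 7                              # ~2.3 kW
    dish_optics_kw = 500 * link_w_2017 / 1e3                     # 0.25 kW
    dish_adc_kw    = 250 * 3.0 / 1e3                             # 0.75 kW
    total_kw = dish_fpga_kw + 2 * dish_optics_kw + dish_adc_kw
    print(f"Dish correlator plus ADC/filterbanks: ~{total_kw:.1f} kW")   # ~3.5 kW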

If 2014 class FPGAs are adopted, then we estimate that the power dissipation would increase by 40%, i.e. the AA correlator would consume ~34 kW and the dish correlator ~4.5 kW.

10 PATH TO THE PHASE 1 CORRELATOR

Experience at CSIRO has shown there is considerable benefit in a program of continuous hardware development. This not only keeps together a cohesive team who maintain the knowledge from earlier system implementations; the lessons learned with each hardware implementation are also used to improve the next generation of hardware. Individuals within the team will likely change over time, but as long as there is continuity in the core skill areas the team's expertise remains high.

For ATCA class hardware, CSIRO has already implemented many designs, most of which are listed in Table 2. This productivity was achieved by having two teams operating concurrently for all of the earlier boards.

Board (used for)    FPGA technology
...                 Virtex II Pro
SKAMP               Virtex 4
CABB                Virtex 4 and Virtex II Pro
Redback1            Virtex 5
Redback2            Virtex 6

Table 2 ATCA class correlator board designs at CSIRO. The table also lists, for each board, the number of processing FPGAs, the number of 64-bit DRAM interfaces, the number of 18-bit multipliers, the processing rate (MOPS) and the input data rate (Gb/s).

One obvious trend in Table 2 is the decreasing number of FPGAs on the boards at each technology step. The lesson here was that more complex boards take longer to design, and it was better to implement simpler boards. Another trend is the improving ratio of DRAMs to FPGAs, even though the ratio of DRAM to multipliers is roughly constant: a major limitation of FPGAs is their limited internal memory, and this is compensated for by the addition of external memory. The actual processing capability of the boards has not increased greatly, but the board development time (schematic and layout) is now down to 3 months (for Redback1; Redback2 was the same basic design with an FPGA upgrade and took a couple of weeks to lay out). What has reduced is the cost of producing each board. This cost has steadily declined, so there has been a significant reduction in the cost per unit of processing capability with time.

CSIRO plans to continue developing boards in future generations of FPGAs, and the lessons learnt regarding the limitations of previous boards will shape their design. The current Redback2 board uses Virtex 6 and the limitation of the design is the I/O capability of the FPGAs. Next generation FPGAs, from both Xilinx and Altera, have increased both the speed and the number of SERDES inputs, so this limitation will not occur in next generation boards. A simple upgrade of the Redback2 FPGAs to next generation FPGAs produces a system that is close to that needed for the dish with SPF correlator, Figure 4. This design is discussed in section 7.

To implement the SKA correlator and other digital systems such as beamformers, it is suggested that at least one team be assembled to develop designs. The team is to develop a design for all digital systems (beamformers, filterbanks and correlators) with each generation of FPGA, so that the final system has the refinements derived from correcting the limitations of earlier generations of hardware. This approach also means that a new design will eventuate very rapidly when a new generation

of FPGAs is announced. The ASKAP experience was that it took considerable time to develop a concept starting from scratch; however, the second generation Redback system was rapidly developed and it is expected that the same should be true of third and later generations of hardware. The team that delivered two generations of hardware and half the firmware for ASKAP comprised an average of 6 people over a year and a half. The pace of development for SKA Phase 1 will likely be less concentrated, and it should also be able to build on the foundation of firmware developed for projects such as APERTIF, MWA, EVLA and SKAMP as well as ASKAP.

For the hardware, assuming 2-3 years between generations, about 1 FTE is needed, spread over a number of different skill sets. The engineers need to be skilled in schematic design and board layout, working with at least 2 concept developers. In addition, the functions of management and prototype production must be covered. A number of different people are needed to cover these functions, and it is suggested that they be embedded in a radioastronomy institute that would use their skills for other functions when not involved in SKA hardware development. It is suggested that at least two concept developers be used, as a single developer can get trapped by adherence to a particular philosophy or approach, and at least one other person is needed to see faults or alternative approaches. Here a healthy rivalry can be very productive. The input from these people is quite sporadic, however. In addition to hardware development, 2-3 FPGA programmers are needed to develop new firmware and adapt existing firmware to new hardware. This is an area where more personnel can greatly speed development. In addition, for SKA Phase 1 a number of production personnel are needed in the years that the full system is deployed. This gives a team size of at least six during production and installation. In the years leading up to this, a team of a minimum of 3 FTEs is needed, spread over 5-6 people.

The above presupposes a design that is deliberately chosen so that it is easily implementable. Designs of higher complexity could be chosen, but these would require more input into the hardware design. This would require more effort but could lead to a design with fewer boards, at a higher risk of late delivery. With the simple design approach the requirement could be as low as 3 FTEs for 3 years plus 6 FTEs for 2 years, or a total input to the design and production of ~20 FTEs. In addition, ~$1M is needed to allow prototyping of full capability subsystems.

11 COST

The cost of midsized FPGAs in the quantities needed for the SKA, assuming the same FPGA is used throughout all beamformers and correlators, is expected to be ~$1000 each. This is less than the current list price of the equivalent FPGA but more than competitive high volume pricing. The highest performance FPGAs are necessarily of a similar size to high end CPUs, hence with similar margins they should have a similar cost base; these CPUs cost ~$1000. In this memo, devices half the size of the biggest FPGAs are used ("midsized"). Considering the size reduction and improved yield, they are potentially about a third of the cost of the highest performance devices, with a cost much less than $1000. Thus an estimated volume price of $1000 can be considered conservative.

In 2017 the AA correlator comprises 320 midsized FPGAs, excluding tied array beamforming.
If the FPGAs cost ~$1000 each including DRAMs, and the boards, assembly, pizza boxes and power supplies cost another ~$2000 per FPGA, then the cost of the processing in the correlator is $960,000. To this we need to add the optical links: there are 16,000 10 Gb/s links, and if these are $100 each then they become the dominant cost at $1,600,000. Hence, the estimated cost is ~$2.6M using 10 Gb/s links.

The SPF dish correlator has two ATCA shelves at ~$10k each and a 48 V power supply costing ~$5k. Each shelf has 16 processing boards, for a total of 32 processing boards, each with two FPGAs. Assuming $2000 for the two FPGAs on each board and the same cost for the board

gives a total cost of $4,000 per board. The cost for all 32 boards is $128,000. The input uses 500 SFP+ modules, estimated to cost $50,000, giving a total cost of about $180,000. The 500 MHz PAF correlator is 18 times this value, or $3.2M. Adding coherent beamforming capability adds a single FPGA per board for both designs, remembering that only part of that FPGA's capability is used in the dish correlator design. This adds $320,000 to the AA correlator and $64,000 to the dish correlator.

11.1 Cost reduction

As discussed in the section on tied array beamforming, all the optical outputs in the correlator can be used if coherent beamforming is implemented. This represents a massive amount of data to process, and it is probably impractical to implement coherent transient searches on this data. It is more likely that incoherent data will be used for searching, with a limited number of coherent tied array beams used for tasks such as pulsar timing, SETI searches and targeted observations of transient sources. Assuming the total number of coherent tied array beams is of order tens for dishes and a hundred for aperture arrays, the majority of the optical transmitters are unused. After accounting for correlation data, coherent beams and incoherent beams, ~200 outputs are unused in the dish correlator shelf and 37 per AA correlator pizza box. The major cost of any optical transceiver is the optical transmitter. In ASKAP, input to the correlator has been implemented with receive-only ROSA modules. The cost saving in using these for the majority of the correlator inputs is ~$35,000 for the dish correlator and ~$1M for the aperture array correlator. With limited coherent beamforming the dish correlator boards need only two FPGAs, and the dish correlator cost is reduced to $165,000 and the AA correlator to $1.6M.

11.2 EARLY CONSTRUCTION

The designs given above are for systems assumed to be constructed in 2017. If an earlier generation of FPGAs is used then roughly twice as many are needed, but initial construction could be advanced to as early as 2014. For both systems, the actual boards have sufficient area to accept the extra FPGAs. In the case of the sparse aperture array correlator with 10 Gb/s links, each pizza box then holds two FPGAs and the added cost is ~$320,000; this increases to ~$640,000 for fully implemented coherent beamforming. For the dish correlator the number of FPGAs increases from 2 to 3, and to 4 with beamforming, and the added cost is $32-64k. In both cases the added cost is comparatively small, as the major cost is in the signal transport and the infrastructure to support the FPGAs. For either generation of FPGA there is little difference: the correlator would be prototyped in the early generation FPGA and the decision to move to the next generation FPGA can be left until late in the construction phase.
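The headline figures gathered in Table 3 below follow from straightforward unit-cost arithmetic. The sketch below reproduces the 2017 baseline estimates; it is illustrative only, and all unit prices are the assumptions stated in this section.

    # Cost arithmetic for the 2017 baseline designs (Section 11).
    # Illustrative only; all unit prices are the text's assumptions.

    fpga_cost  = 1_000    # midsized FPGA including DRAMs ($)
    board_cost = 2_000    # board, assembly, box/slot and power supplies, per FPGA ($)
    link_cost  = 100      # 10 Gb/s optical link ($)

    # Sparse AA correlator: 320 pizza boxes (one FPGA each before beamforming)
    # and 16,000 10 Gb/s links.
    aa_total = 320 * (fpga_cost + board_cost) + 16_000 * link_cost
    print(f"AA correlator: ${aa_total:,}")                              # ~$2.56M

    # SPF dish correlator: 32 boards with two FPGAs each (~$4k per board) and
    # 500 SFP+ modules (~$50k); the two ATCA shelves and 48 V supply add ~$25k.
    dish_boards = 32 * (2 * fpga_cost + 2_000)                          # $128k
    dish_optics = 500 * 100                                             # $50k
    print(f"Dish correlator: ~${(dish_boards + dish_optics) // 1000}k")  # ~$180k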

Correlator Type              2017 Cost ($k)   2017 with ROSAs ($k)   2014 Cost ($k)
AA                           ...              ...                    ...
AA + coherent BF             2900             NA                     3500
SPF Dish                     ...              ...                    ...
SPF Dish + coherent BF       210              NA                     274
PAF Dish                     ...              ...                    ...
PAF Dish + coherent BF       3780             NA                     ...

Table 3 Estimated correlator cost

12 CONCLUSION

We have outlined potential designs for the SKA Phase 1 AA and dish correlators. They are comparatively simple, consisting of 1-4 FPGAs per board and a number of 10G optical transceivers. The designs assume technological advances in FPGA compute power over the coming years and are incremental changes to existing boards used in radioastronomy. The firmware will also be incremental on that used currently.

As well as implementing the correlator, the same designs can implement tied array beamforming. Using bi-directional optical modules, a board can output almost as many beams as there are antennas and station beams. This is ~220 coherent tied array beams for the dish correlator/beamformer and ~19,000 for the full aperture array correlator/beamformer. The computing requires a doubling of the FPGA resources on each processing board. Implementing full incoherent beamforming has a much lower cost, as only one beam is generated. Coherent station beamforming followed by incoherent beamforming is also possible.

The single pixel feed dish correlator fits into a single cabinet and has two shelves of processing boards. Each processing board has 2 FPGAs as well as an industry standard backplane interconnect between the 16 boards in a shelf. For a 500 MHz PAF correlator the system is 18 times larger. The sparse aperture array correlator could use a single board pizza box design. The limiting factor in the size of this correlator is the number of optical receivers: with 10 Gb/s technology there are 16,000 fibre links. Each pizza box takes a single fibre from each sparse aperture array station, and ~320 pizza box modules are required to take in the data. With higher speed optical connections the number of pizza box modules can be reduced. The estimated costs of these correlators are given in Table 3.

13 ACKNOWLEDGMENTS

The expertise in the construction of high performance processing boards at CSIRO would not be possible without a great team of people. The author would like to thank Andrew Brown, Evan Davis, Dick Ferris and Joseph Pathikulangara for their contributions to the design and layout of the boards mentioned in Table 2. Behind this group is a large dedicated team who have procured the parts and ensured the boards were assembled and debugged.


More information

Technical Article MS-2442

Technical Article MS-2442 Technical Article MS-2442. JESD204B vs. Serial LVDS Interface Considerations for Wideband Data Converter Applications by George Diniz, Product Line Manager, Analog Devices, Inc. Some key end-system applications

More information

OSKAR: Simulating data from the SKA

OSKAR: Simulating data from the SKA OSKAR: Simulating data from the SKA Oxford e-research Centre, 4 June 2014 Fred Dulwich, Ben Mort, Stef Salvini 1 Overview Simulating interferometer data for SKA: Radio interferometry basics. Measurement

More information

Network Design Considerations for Grid Computing

Network Design Considerations for Grid Computing Network Design Considerations for Grid Computing Engineering Systems How Bandwidth, Latency, and Packet Size Impact Grid Job Performance by Erik Burrows, Engineering Systems Analyst, Principal, Broadcom

More information

System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER

System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER Francois Kapp, Jason Manley SKA SA - MeerKAT francois.kapp@ska.ac.za, tel: 021-506 7300 Abstract: The Digital

More information

Parallel FIR Filters. Chapter 5

Parallel FIR Filters. Chapter 5 Chapter 5 Parallel FIR Filters This chapter describes the implementation of high-performance, parallel, full-precision FIR filters using the DSP48 slice in a Virtex-4 device. ecause the Virtex-4 architecture

More information

Computational issues for HI

Computational issues for HI Computational issues for HI Tim Cornwell, Square Kilometre Array How SKA processes data Science Data Processing system is part of the telescope Only one system per telescope Data flow so large that dedicated

More information

Stratix II vs. Virtex-4 Performance Comparison

Stratix II vs. Virtex-4 Performance Comparison White Paper Stratix II vs. Virtex-4 Performance Comparison Altera Stratix II devices use a new and innovative logic structure called the adaptive logic module () to make Stratix II devices the industry

More information

GPUS FOR NGVLA. M Clark, April 2015

GPUS FOR NGVLA. M Clark, April 2015 S FOR NGVLA M Clark, April 2015 GAMING DESIGN ENTERPRISE VIRTUALIZATION HPC & CLOUD SERVICE PROVIDERS AUTONOMOUS MACHINES PC DATA CENTER MOBILE The World Leader in Visual Computing 2 What is a? Tesla K40

More information

Wholesale Optical Product handbook. March 2018 Version 7

Wholesale Optical Product handbook. March 2018 Version 7 Wholesale Optical Product handbook March 2018 Version 7 Contents Page 1 Wholesale Optical overview 4 2 Wholesale Optical description 5 2.1 introduction 5 2.2 introduction 7 2.3 Customer access and interface

More information

Data Processing for the Square Kilometre Array Telescope

Data Processing for the Square Kilometre Array Telescope Data Processing for the Square Kilometre Array Telescope Streaming Workshop Indianapolis th October Bojan Nikolic Astrophysics Group, Cavendish Lab University of Cambridge Email: b.nikolic@mrao.cam.ac.uk

More information

Performance Analysis of Line Echo Cancellation Implementation Using TMS320C6201

Performance Analysis of Line Echo Cancellation Implementation Using TMS320C6201 Performance Analysis of Line Echo Cancellation Implementation Using TMS320C6201 Application Report: SPRA421 Zhaohong Zhang and Gunter Schmer Digital Signal Processing Solutions March 1998 IMPORTANT NOTICE

More information

SKA Computing and Software

SKA Computing and Software SKA Computing and Software Nick Rees 18 May 2016 Summary Introduc)on System overview Compu)ng Elements of the SKA Telescope Manager Low Frequency Aperture Array Central Signal Processor Science Data Processor

More information

Field Programmable Gate Array (FPGA) Devices

Field Programmable Gate Array (FPGA) Devices Field Programmable Gate Array (FPGA) Devices 1 Contents Altera FPGAs and CPLDs CPLDs FPGAs with embedded processors ACEX FPGAs Cyclone I,II FPGAs APEX FPGAs Stratix FPGAs Stratix II,III FPGAs Xilinx FPGAs

More information

ANTC205. Introduction

ANTC205. Introduction ANTC205 Introduction The JitterBlocker takes very noisy and jittery clocks and cleans out all the deterministic and excessive jitter. It can handle thousands of picoseconds of period jitter at its input

More information

Emergence of Segment-Specific DDRn Memory Controller and PHY IP Solution. By Eric Esteve (PhD) Analyst. July IPnest.

Emergence of Segment-Specific DDRn Memory Controller and PHY IP Solution. By Eric Esteve (PhD) Analyst. July IPnest. Emergence of Segment-Specific DDRn Memory Controller and PHY IP Solution By Eric Esteve (PhD) Analyst July 2016 IPnest www.ip-nest.com Emergence of Segment-Specific DDRn Memory Controller IP Solution By

More information

Data Acquisition in Particle Physics Experiments. Ing. Giuseppe De Robertis INFN Sez. Di Bari

Data Acquisition in Particle Physics Experiments. Ing. Giuseppe De Robertis INFN Sez. Di Bari Data Acquisition in Particle Physics Experiments Ing. Giuseppe De Robertis INFN Sez. Di Bari Outline DAQ systems Theory of operation Case of a large experiment (CMS) Example of readout GEM detectors for

More information

ALMA Correlator Enhancement

ALMA Correlator Enhancement ALMA Correlator Enhancement Technical Perspective Rodrigo Amestica, Ray Escoffier, Joe Greenberg, Rich Lacasse, J Perez, Alejandro Saez Atacama Large Millimeter/submillimeter Array Karl G. Jansky Very

More information

Power Consumption in 65 nm FPGAs

Power Consumption in 65 nm FPGAs White Paper: Virtex-5 FPGAs R WP246 (v1.2) February 1, 2007 Power Consumption in 65 nm FPGAs By: Derek Curd With the introduction of the Virtex -5 family, Xilinx is once again leading the charge to deliver

More information

Current Developments in the NRC Correlator Program

Current Developments in the NRC Correlator Program Current Developments in the NRC Correlator Program Lewis B.G. Knee (on behalf of the NRC Herzberg Correlator Group) ALMA Developers Workshop Chalmers University of Technology, Gothenburg May 26, 2016 Main

More information

SKA Central Signal Processor Local Monitor and Control

SKA Central Signal Processor Local Monitor and Control SKA Central Signal Processor Local Monitor and Control Sonja Vrcic, NRC-Herzberg, Canada SKA LMC Standardization Workshop Trieste, Italy, 25-27 March 2015 Outline 1. CSP design and architecture. 2. Monitor

More information

Square Kilometre Array

Square Kilometre Array Square Kilometre Array C4SKA 16 February 2018 David Luchetti Australian SKA Project Director Presentation 1. Video of the Murchison Radioastronomy Observatory 2. Is SKA a Collaborative project? 3. Australia/New

More information

Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices

Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices Implementing Bus LVDS Interface in Cyclone III, Stratix III, and Stratix IV Devices November 2008, ver. 1.1 Introduction LVDS is becoming the most popular differential I/O standard for high-speed transmission

More information

Five Ways to Build Flexibility into Industrial Applications with FPGAs

Five Ways to Build Flexibility into Industrial Applications with FPGAs GM/M/A\ANNETTE\2015\06\wp-01154- flexible-industrial.docx Five Ways to Build Flexibility into Industrial Applications with FPGAs by Jason Chiang and Stefano Zammattio, Altera Corporation WP-01154-2.0 White

More information

Sky Core Network Positioning for Massive Video Demand UKNOF - 17 Jan Tim Rossiter Core Network Architect, Sky Network Services

Sky Core Network Positioning for Massive Video Demand UKNOF - 17 Jan Tim Rossiter Core Network Architect, Sky Network Services Sky Network Positioning for Massive Video Demand UKNOF - 17 Jan 2018 Tim Rossiter Network Architect, Sky Network Services Positioning Sky Network for Massive Video Demand UK Broadband Usage and Pricing

More information

All hands meeting MFAA/Receiver Analogue-to-Digital Conversion. Stéphane Gauffre from Université de Bordeaux

All hands meeting MFAA/Receiver Analogue-to-Digital Conversion. Stéphane Gauffre from Université de Bordeaux All hands meeting MFAA/Receiver Analogue-to-Digital Conversion Stéphane Gauffre from Université de Bordeaux Outline 1.Digitisation concepts for MFAA 2.Digital platform 3.Commercially available ADCs 4.Full

More information

Addendum to Efficiently Enabling Conventional Block Sizes for Very Large Die-stacked DRAM Caches

Addendum to Efficiently Enabling Conventional Block Sizes for Very Large Die-stacked DRAM Caches Addendum to Efficiently Enabling Conventional Block Sizes for Very Large Die-stacked DRAM Caches Gabriel H. Loh Mark D. Hill AMD Research Department of Computer Sciences Advanced Micro Devices, Inc. gabe.loh@amd.com

More information

Understanding Peak Floating-Point Performance Claims

Understanding Peak Floating-Point Performance Claims white paper FPGA Understanding Peak ing-point Performance Claims Learn how to calculate and compare the peak floating-point capabilities of digital signal processors (DSPs), graphics processing units (GPUs),

More information

DINI Group. FPGA-based Cluster computing with Spartan-6. Mike Dini Sept 2010

DINI Group. FPGA-based Cluster computing with Spartan-6. Mike Dini  Sept 2010 DINI Group FPGA-based Cluster computing with Spartan-6 Mike Dini mdini@dinigroup.com www.dinigroup.com Sept 2010 1 The DINI Group We make big FPGA boards Xilinx, Altera 2 The DINI Group 15 employees in

More information

Design of a Processor to Support the Teaching of Computer Systems

Design of a Processor to Support the Teaching of Computer Systems Design of a Processor to Support the Teaching of Computer Systems Murray Pearson, Dean Armstrong and Tony McGregor Department of Computer Science University of Waikato Hamilton New Zealand fmpearson,daa1,tonymg@cs.waikato.nz

More information

The S6000 Family of Processors

The S6000 Family of Processors The S6000 Family of Processors Today s Design Challenges The advent of software configurable processors In recent years, the widespread adoption of digital technologies has revolutionized the way in which

More information

EPICS in the Australian SKA Pathfinder

EPICS in the Australian SKA Pathfinder EPICS in the Australian SKA Pathfinder Craig Haskins Software Engineer 25 April 2012 ASTRONOMY AND SPACE SCIENCE ASKAP Site Murchison Radio Observatory (MRO): Australia s SKA Candidate site Traditional

More information

TetraNode Scalability and Performance. White paper

TetraNode Scalability and Performance. White paper White paper Issue 1.0, May 2017 Introduction Rohill solutions are known for performance, flexibility, scalability, security and affordability. Also, the strong TetraNode system architecture, open standards-based

More information

Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station

Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station Jet Propulsion Laboratory California Institute of Technology Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station Robert Navarro 1, Elliott Sigman 1, Melissa Soriano 1,

More information

1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM

1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM 1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM 1.1 Introduction Given that digital logic and memory devices are based on two electrical states (on and off), it is natural to use a number

More information

BlueGene/L. Computer Science, University of Warwick. Source: IBM

BlueGene/L. Computer Science, University of Warwick. Source: IBM BlueGene/L Source: IBM 1 BlueGene/L networking BlueGene system employs various network types. Central is the torus interconnection network: 3D torus with wrap-around. Each node connects to six neighbours

More information

OSKAR-2: Simulating data from the SKA

OSKAR-2: Simulating data from the SKA OSKAR-2: Simulating data from the SKA AACal 2012, Amsterdam, 13 th July 2012 Fred Dulwich, Ben Mort, Stef Salvini 1 Overview OSKAR-2: Interferometer and beamforming simulator package. Intended for simulations

More information

Using a Scalable Parallel 2D FFT for Image Enhancement

Using a Scalable Parallel 2D FFT for Image Enhancement Introduction Using a Scalable Parallel 2D FFT for Image Enhancement Yaniv Sapir Adapteva, Inc. Email: yaniv@adapteva.com Frequency domain operations on spatial or time data are often used as a means for

More information

L1 and Subsequent Triggers

L1 and Subsequent Triggers April 8, 2003 L1 and Subsequent Triggers Abstract During the last year the scope of the L1 trigger has changed rather drastically compared to the TP. This note aims at summarising the changes, both in

More information

Versatile Link and GBT Chipset Production

Versatile Link and GBT Chipset Production and GBT Chipset Production Status, Issues Encountered, and Lessons Learned Lauri Olanterä on behalf of the and GBT projects Radiation Hard Optical Link Architecture: VL and GBT GBT GBT Timing & Trigger

More information

BushLAN Distributed Wireless:

BushLAN Distributed Wireless: Australian National University BushLAN Distributed Wireless: Spectrum efficient long range wireless extension of broadband infrastructure to remote areas July 24, 2014 1 1 Abstract BushLAN is a distributed

More information

New System Solutions for Laser Printer Applications by Oreste Emanuele Zagano STMicroelectronics

New System Solutions for Laser Printer Applications by Oreste Emanuele Zagano STMicroelectronics New System Solutions for Laser Printer Applications by Oreste Emanuele Zagano STMicroelectronics Introduction Recently, the laser printer market has started to move away from custom OEM-designed 1 formatter

More information

Why Vyatta is Better than Cisco

Why Vyatta is Better than Cisco VYATTA, INC. White Paper Why Vyatta is Better than Cisco How standard hardware, evolving deployment models and simplified application integration make Vyatta a better choice for next generation networking

More information

FPGA Technology and Industry Experience

FPGA Technology and Industry Experience FPGA Technology and Industry Experience Guest Lecture at HSLU, Horw (Lucerne) May 24 2012 Oliver Brndler, FPGA Design Center, Enclustra GmbH Silvio Ziegler, FPGA Design Center, Enclustra GmbH Content Enclustra

More information

Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility

Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility White Paper Cisco 4000 Series Integrated Services Routers: Architecture for Branch-Office Agility The Cisco 4000 Series Integrated Services Routers (ISRs) are designed for distributed organizations with

More information

QLogic TrueScale InfiniBand and Teraflop Simulations

QLogic TrueScale InfiniBand and Teraflop Simulations WHITE Paper QLogic TrueScale InfiniBand and Teraflop Simulations For ANSYS Mechanical v12 High Performance Interconnect for ANSYS Computer Aided Engineering Solutions Executive Summary Today s challenging

More information

IMPROVES. Initial Investment is Low Compared to SoC Performance and Cost Benefits

IMPROVES. Initial Investment is Low Compared to SoC Performance and Cost Benefits NOC INTERCONNECT IMPROVES SOC ECONO CONOMICS Initial Investment is Low Compared to SoC Performance and Cost Benefits A s systems on chip (SoCs) have interconnect, along with its configuration, verification,

More information

FIBER OPTIC NETWORK TECHNOLOGY FOR DISTRIBUTED LONG BASELINE RADIO TELESCOPES

FIBER OPTIC NETWORK TECHNOLOGY FOR DISTRIBUTED LONG BASELINE RADIO TELESCOPES Experimental Astronomy (2004) 17: 213 220 C Springer 2005 FIBER OPTIC NETWORK TECHNOLOGY FOR DISTRIBUTED LONG BASELINE RADIO TELESCOPES D.H.P. MAAT and G.W. KANT ASTRON, P.O. Box 2, 7990 AA Dwingeloo,

More information

High Performance Embedded Applications. Raja Pillai Applications Engineering Specialist

High Performance Embedded Applications. Raja Pillai Applications Engineering Specialist High Performance Embedded Applications Raja Pillai Applications Engineering Specialist Agenda What is High Performance Embedded? NI s History in HPE FlexRIO Overview System architecture Adapter modules

More information

Low Power Design Techniques

Low Power Design Techniques Low Power Design Techniques August 2005, ver 1.0 Application Note 401 Introduction This application note provides low-power logic design techniques for Stratix II and Cyclone II devices. These devices

More information

AMC GSPS 8-bit ADC, 2 or 4 channel with XCVU190 UltraScale

AMC GSPS 8-bit ADC, 2 or 4 channel with XCVU190 UltraScale KEY FEATURES 56 GSPS, 8-bit ADC, UltraScale 8-bit ADC at up to dual 56 GSPS 2 x 56 or 4 x 28 GSPS channels Xilinx UltraScale XCVU190 FPGA 16 GB of DDR-4 Memory (2 banks of 64-bit) ADC is 65 nm CMOS process

More information

AT40K FPGA IP Core AT40K-FFT. Features. Description

AT40K FPGA IP Core AT40K-FFT. Features. Description Features Decimation in frequency radix-2 FFT algorithm. 256-point transform. -bit fixed point arithmetic. Fixed scaling to avoid numeric overflow. Requires no external memory, i.e. uses on chip RAM and

More information

Revision: August 30, Overview

Revision: August 30, Overview Module 5: Introduction to VHDL Revision: August 30, 2007 Overview Since the first widespread use of CAD tools in the early 1970 s, circuit designers have used both picture-based schematic tools and text-based

More information

Stratix vs. Virtex-II Pro FPGA Performance Analysis

Stratix vs. Virtex-II Pro FPGA Performance Analysis White Paper Stratix vs. Virtex-II Pro FPGA Performance Analysis The Stratix TM and Stratix II architecture provides outstanding performance for the high performance design segment, providing clear performance

More information

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING

TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING TECHNICAL OVERVIEW ACCELERATED COMPUTING AND THE DEMOCRATIZATION OF SUPERCOMPUTING Table of Contents: The Accelerated Data Center Optimizing Data Center Productivity Same Throughput with Fewer Server Nodes

More information

FPGA APPLICATIONS FOR SINGLE DISH ACTIVITY AT MEDICINA RADIOTELESCOPES

FPGA APPLICATIONS FOR SINGLE DISH ACTIVITY AT MEDICINA RADIOTELESCOPES MARCO BARTOLINI - BARTOLINI@IRA.INAF.IT TORINO 18 MAY 2016 WORKSHOP: FPGA APPLICATION IN ASTROPHYSICS FPGA APPLICATIONS FOR SINGLE DISH ACTIVITY AT MEDICINA RADIOTELESCOPES TORINO, 18 MAY 2016, INAF FPGA

More information

12/04/ Dell Inc. All Rights Reserved. 1

12/04/ Dell Inc. All Rights Reserved. 1 Dell Solution for JD Edwards EnterpriseOne with Windows and Oracle 10g RAC for 200 Users Utilizing Dell PowerEdge Servers Dell EMC Storage Solutions And Dell Services Dell/EMC storage solutions combine

More information

How Architecture Design Can Lower Hyperconverged Infrastructure (HCI) Total Cost of Ownership (TCO)

How Architecture Design Can Lower Hyperconverged Infrastructure (HCI) Total Cost of Ownership (TCO) Economic Insight Paper How Architecture Design Can Lower Hyperconverged Infrastructure (HCI) Total Cost of Ownership (TCO) By Eric Slack, Sr. Analyst December 2017 Enabling you to make the best technology

More information

SKA Monitoring & Control Realisation Technologies Hardware aspects. R.Balasubramaniam GMRT

SKA Monitoring & Control Realisation Technologies Hardware aspects. R.Balasubramaniam GMRT SKA Monitoring & Control Realisation Technologies Hardware aspects R.Balasubramaniam GMRT Basic entities of M&C Sensors and Actuators Data acquisition boards Field buses Local M&C nodes Regional M&C nodes

More information

HES-7 ASIC Prototyping

HES-7 ASIC Prototyping Rev. 1.9 September 14, 2012 Co-authored by: Slawek Grabowski and Zibi Zalewski, Aldec, Inc. Kirk Saban, Xilinx, Inc. Abstract This paper highlights possibilities of ASIC verification using FPGA-based prototyping,

More information

Intel Xeon Scalable Family Balanced Memory Configurations

Intel Xeon Scalable Family Balanced Memory Configurations Front cover Intel Xeon Scalable Family Balanced Memory Configurations Last Update: 20 November 2017 Demonstrates three balanced memory guidelines for Intel Xeon Scalable processors Compares the performance

More information

Dell Solution for JD Edwards EnterpriseOne with Windows and SQL 2000 for 50 Users Utilizing Dell PowerEdge Servers And Dell Services

Dell Solution for JD Edwards EnterpriseOne with Windows and SQL 2000 for 50 Users Utilizing Dell PowerEdge Servers And Dell Services Dell Solution for JD Edwards EnterpriseOne with Windows and SQL 2000 for 50 Users Utilizing Dell PowerEdge Servers And Dell Services Dell server solutions combine Dell s direct customer relationship with

More information

LHC Detector Upgrades

LHC Detector Upgrades Su Dong SLAC Summer Institute Aug/2/2012 1 LHC is exceeding expectations in many ways Design lumi 1x10 34 Design pileup ~24 Rapid increase in luminosity Even more dramatic pileup challenge Z->µµ event

More information

SOFTWARE DEFINED STORAGE VS. TRADITIONAL SAN AND NAS

SOFTWARE DEFINED STORAGE VS. TRADITIONAL SAN AND NAS WHITE PAPER SOFTWARE DEFINED STORAGE VS. TRADITIONAL SAN AND NAS This white paper describes, from a storage vendor perspective, the major differences between Software Defined Storage and traditional SAN

More information

RAFT Tuner Design for Mobile Phones

RAFT Tuner Design for Mobile Phones RAFT Tuner Design for Mobile Phones Paratek Microwave Inc March 2009 1 RAFT General Description...3 1.1 RAFT Theory of Operation...3 1.2 Hardware Interface...5 1.3 Software Requirements...5 2 RAFT Design

More information

Technical Backgrounder: The Optical Data Interface Standard April 28, 2018

Technical Backgrounder: The Optical Data Interface Standard April 28, 2018 ! AdvancedTCA Extensions for Instrumentation and Test PO Box 1016 Niwot, CO 80544-1016 (303) 652-1311 FAX (303) 652-1444 Technical Backgrounder: The Optical Data Interface Standard April 28, 2018 AXIe

More information

COMPARING COST MODELS - DETAILS

COMPARING COST MODELS - DETAILS COMPARING COST MODELS - DETAILS SOFTLAYER TOTAL COST OF OWNERSHIP (TCO) CALCULATOR APPROACH The Detailed comparison tab in the TCO Calculator provides a tool with which to do a cost comparison between

More information

Development of Optical Wiring Technology for Optical Interconnects

Development of Optical Wiring Technology for Optical Interconnects Development of Optical Wiring Technology for Optical Interconnects Mitsuhiro Iwaya*, Katsuki Suematsu*, Harumi Inaba*, Ryuichi Sugizaki*, Kazuyuki Fuse*, Takuya Nishimoto* 2, Kenji Kamoto* 3 We had developed

More information

In-chip and Inter-chip Interconnections and data transportations for Future MPAR Digital Receiving System

In-chip and Inter-chip Interconnections and data transportations for Future MPAR Digital Receiving System In-chip and Inter-chip Interconnections and data transportations for Future MPAR Digital Receiving System A presentation for LMCO-MPAR project 2007 briefing Dr. Yan Zhang School of Electrical and Computer

More information