Memo 126. Strawman SKA Correlator. J.D. Bunton (CSIRO), August 2010


Enquiries should be addressed to:

Document history

- April 2010, John Bunton: Initial draft
- June 2010, John Bunton: Simplified design for WBSPF/PAF correlator
- August 2010, John Bunton: Revision after review by SPDO

Copyright and Disclaimer

2010 CSIRO. To the extent permitted by law, all rights are reserved and no part of this publication covered by copyright may be reproduced or copied in any form or by any means except with the written permission of CSIRO.

Important Disclaimer

CSIRO advises that the information contained in this publication comprises general statements based on scientific research. The reader is advised and needs to be aware that such information may be incomplete or unable to be used in any specific situation. No reliance or actions must therefore be made on that information without seeking prior expert professional, scientific and technical advice. To the extent permitted by law, CSIRO (including its employees and consultants) excludes all liability to any person for any consequences, including but not limited to all losses, damages, costs, expenses and any other compensation, arising directly or indirectly from using this publication (in part or in whole) and any information or material contained in it.

Contents

1. Introduction
   1.1 Summary
   1.2 Scope
   1.3 Glossary
2. SKA Specification
3. Technical Assumptions
4. WBSPF Correlator
5. Dish with PAF
   5.1 Simultaneous WBSPF and PAF operation
   5.2 Combined PAF and WBSPF correlator
6. Non-Imaging Processing
   6.1 Tied Array Beams
   6.2 Transient and Pulsar processing
   6.3 Incoherent beam forming
   6.4 Transient buffer and transient trigger
7. Aperture Array Correlators
Conclusion
Acknowledgement
References
Appendix
   A.1 Multiplications per Input Sample for a Polyphase Filter Bank
   A.2 Correlator compute load (Aperture Arrays, Phased Array Feeds, WBSPFs)

List of Figures

Figure 1 Proposed filterbank operation for SKA. After data reordering the fine filterbank processes a single coarse channel at a time
Figure 2 WBSPF correlator
Figure 3 WBSPF and PAF correlator
Figure 4 Possible configuration of AA correlators
Figure 5 AA correlator with 150MHz combined AA lo and AA hi correlator, or separated AA lo and AA hi correlators. When the combined correlator is operating the AA hi bandwidth is reduced to 300MHz

List of Tables

Table 1 Summary of correlator requirements
Table 2 Correlator compute load for Dishes

1. INTRODUCTION

1.1 Summary

The routing of data places a major constraint on the building of correlators. This memo considers a possible implementation for routing the data for a SKA consisting of 250 AA stations and 3600 WBSPF dish antennas, 2000 of which have a PAF that can be switched in. It also looks at some of the non-imaging processing that could occur in the correlator system. Assumptions are made regarding the data transport technology and the number of inputs possible in a single shelf of processing boards. In addition, some rather arbitrary assumptions are made regarding the bandwidth. The designs proposed are indicative only, but provide a starting point for the SKA. In all cases, FPGAs are used to determine the viability of the design, but in the future it might be ASICs, GPUs or CPUs that are the actual processing units.

1.2 Scope

This design only covers processing that might occur at the site of the correlator, and then only at the very highest level. It does not include any processing that occurs at the antenna or antenna stations, such as beamforming. Nor does it cover any processing that occurs after the correlator, such as imaging and calibration, or any details of non-imaging processing.

1.3 Glossary

AA: Aperture Array
CMAC: Complex Multiply-Accumulate
FPGA: Field-Programmable Gate Array
GPU: Graphics Processing Unit
GS: GigaSamples
MWA: Murchison Widefield Array
PAF: Phased Array Feed
MAC: Multiply-Accumulate
SKA: Square Kilometre Array
SKAMP: SKA Molonglo Prototype
WBSPF: Wide Band Single Pixel Feed
Shelf: Also card cage, chassis, or crate of boards

Strawman SKA Correlator 2010/08/16, Version 1.0

2. SKA SPECIFICATION

SKA specifications are still being refined. The ultimate specification will be a balance between science requirements, capital cost and operating cost. Here the specifications are allowed to diverge from various currently proposed specifications in [1] and also from specifications put forward by various subsystem proponents. For wide band single pixel feeds (WBSPFs) on parabolic dish reflectors the bandwidth is chosen to be 9GHz, as this fits into three 100Gb/s links per antenna. To make comparisons easier, the bandwidths of phased array feeds (PAFs) on parabolic dishes and of dense aperture arrays are both set to 0.6GHz. There are 250 dense aperture array stations (AA hi). Each station operates in the frequency range 0.3 to 1.4GHz. At the same location are sparse aperture arrays (AA lo) that operate in the frequency range 0.05 to 0.45GHz. The AA lo stations process the full bandwidth of 0.4GHz. Each aperture array generates 1200 simultaneous beams. In the frequency range 0.3 to 0.45GHz the two types of aperture array operate simultaneously. Total sensitivity in this frequency band is doubled if correlations between the two different arrays are formed. The compute requirements for these correlators are calculated in Appendix [A.2] and are shown in Table 1.

Table 1 Summary of SKA modes and correlator requirements

Mode | Number of antennas | Bandwidth | Beams/antenna | Correlation CMAC/s
Dense Aperture Array | 250 | 0.6GHz | 1200 | x10^16
Sparse Aperture Array | 250 | 0.4GHz | 1200 | x10^16
Dense and Sparse Array with overlap (extra) | 250 | 0.15GHz | 1200 | x10^16
Dish to 20km with PAF | 2000 | 0.6GHz | 36 | 1.73x10^17
Dish with WBSPF, to 180km and 300 beams from VLBI stations | 3000 | 9GHz | 1 | 1.63x10^17
Dish with WBSPF beyond 180km with station beamforming | 133 stations | 9GHz | 4 beams/station | 1.3x10^15
Dish with WBSPF, 20 to 180km | 700 | 9GHz | 1 | 8.8x10^15

There are 3600 dishes, 900 of which are beyond 180km from the core. These dishes are formed into stations and only station beams are transported to the correlator.
The actual details of the beamforming do not greatly impact the correlator. Here it is assumed that the number of beams from these dishes is at most 300. Thus the total number of inputs into the correlator is 3000 dual polarisation signals for dishes with WBSPFs. Of the remaining dishes, the 2000 within 20km of the core can carry phased array feeds (PAFs). Each PAF generates 36 beams with a bandwidth of 0.6GHz each. When the core antennas are operating with PAFs, the 700 antennas in the range 20km to 180km are separately correlated. In addition, all WBSPF antennas beyond 20km are in stations. These stations are beamformed and the corresponding beams correlated. A possible configuration is 133 stations with 4 beams per station. The compute loads for these correlators are also given in Table 1.
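The per-mode loads quoted in Table 1 can be reproduced with a simple counting argument. The sketch below assumes the Appendix [A.2] load is 4 polarisation products x ~N^2/2 baseline pairs x beams x bandwidth; since the appendix is not reproduced here, treat this formula as a reconstruction from the quoted values rather than the memo's own derivation.

```python
# Reconstructed compute-load formula (assumption: 4 polarisation
# products x ~N^2/2 baselines x beams x bandwidth; a sketch, not
# the memo's Appendix A.2 verbatim).

def correlator_load(n_inputs, beams, bandwidth_hz):
    """Approximate correlation compute load in CMAC/s."""
    return 4 * (n_inputs ** 2 / 2) * beams * bandwidth_hz

# Dish with WBSPF: 2700 antennas + 300 VLBI station beams, 9 GHz
wbspf = correlator_load(3000, 1, 9e9)    # ~1.6e17 CMAC/s
# Dish with PAF: 2000 antennas, 36 beams of 0.6 GHz
paf = correlator_load(2000, 36, 0.6e9)   # ~1.7e17 CMAC/s
# WBSPF 20-180 km mode: 700 antennas, 9 GHz
mid = correlator_load(700, 1, 9e9)       # ~8.8e15 CMAC/s
# Stations beyond 180 km: 133 stations, 4 beams per station
vlbi = correlator_load(133, 4, 9e9)      # ~1.3e15 CMAC/s
```

These four results match the 1.63x10^17, 1.73x10^17, 8.8x10^15 and 1.3x10^15 figures quoted in the table to rounding, which is what makes the reconstruction plausible.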

3. TECHNICAL ASSUMPTIONS

To generate a design some constraints are needed. These take the form of the following assumptions. Some of the assumptions are based on the author's experience with the ASKAP and SKAMP/MWA correlators, and some are based on extrapolation to 2020. It is not known if the assumptions will be valid in 2020; however, they form a starting point for developing a strawman design.

Assumption 1: ADC resolution for the WBSPF systems is ~8bits. Correlator data precision is 4bits. The lower bandwidth systems on PAFs or on an AA may have more bits; however, as it is assumed that beamforming will occur at the antenna, this does not impact the correlator or signal transport to it.

Assumption 2: The maximum length of an FFT or polyphase filterbank is ~1000 within an FPGA or ASIC. As the size of an FFT increases, the computation increases as log(N) but the memory is proportional to N. Eventually the memory dominates. When this starts to occur it is better to use memory external to the processing FPGA/CPU/GPU and process long FFTs in two stages. An example of this is the ~256 thousand point FFT built for the ALMA compact array correlator [2]. In it an ASIC implements a 512 point FFT; two such ASICs with a corner turner memory in between implement the long FFT. Simulations on FPGAs show that a ~2000 point polyphase filter bank uses similar percentages of the block RAM and multiplier resources. Beyond this the design is memory limited, leading to underutilisation of multiplier and logic resources.

Assumption 3: A single shelf (crate, chassis or card cage) is limited to ~256 optical connections (16 boards with 16 inputs each). Both the ASKAP beamformer [3] and a full UNIboard system [4] are close to this limit. Digital optical modules are usually sold with both a transmitter and receiver, so this gives ~256 optical inputs and 256 outputs.
It is assumed that the input and output of an optical module can be used for independent data paths, for example, an input from the antenna and an output to the correlator. This can halve the number of expensive optical modules. It will be seen that data transport is a major correlator cost, so this optimisation is needed to reduce costs. However, use of optical systems in this way may preclude the use of commercial switches.

Assumption 4: Data transport to the correlator will be on fibres. Standards already exist for 10x10G links to transport 100GE (CAUI, IEEE 802.3ba) and existing FPGAs can implement up to 6 such interfaces. Hence, three current generation FPGAs are sufficient to interface to the 16 optical inputs proposed in assumption 3. In the future even more advanced interfaces, possibly quad 25Gb/s systems, will be in place. By the time of SKA the cost of fibre transceivers will be lower and the sweet spot for fibre will be ~100Gb/s. Use of 100Gb/s fibre reduces the physical data routing problem by a factor of 10 compared to 10Gb/s. It also reduces the total number of equipment racks (see assumption 3: 10 times the fibres means the minimum number of shelves is up to 10 times higher). Within the correlator, cabling lengths are likely to exceed 10m. Even now fibre is competitive at these lengths. It is expected that all data routing between equipment cabinets will be on fibre.

4. WBSPF CORRELATOR

At a quantisation resolution of 8bits, a 100Gb/s fibre can transport 6GHz of bandwidth for a single polarisation beam (8bits by 12GS/s). Three fibres are needed to transport all data from a single antenna (2x9GHz). If there is no filtering at the antenna, the samples for a single

polarisation are split across two fibres and must be recombined at the filterbank. The data rate from the antenna is 36GS/s. This must first be processed by a filterbank at a cost of ~20 real MACs per sample [A.1]. The total filterbank compute load is ~0.7TMAC/s per antenna. This can be achieved in a single present-day FPGA or GPU. The SKA specification is for 10^5 frequency channels; by assumption 2, this is achieved as a cascade of two filterbanks [5] with external memory to store the intermediate results. It is assumed DRAM is used as the external memory. The possible filterbank system is shown in Figure 1. With it, frequency resolutions from 1MHz to 4.5kHz are possible in a 9GHz band. Internal to the processing device, a single 9MHz stream is processed at any one time by the fine filterbank. For higher frequency resolutions, a third filterbank can be cascaded on some of the fine filterbank channels.

[Figure: coarse filterbank (~1000 channels) -> DRAM for data reordering -> fine filterbank (8 to 2000 channels), with cross connections carrying data from other antennas to the correlator shelves]

Figure 1 Proposed filterbank operation for SKA. After data reordering a fine filterbank processes a single coarse channel at a time. Also shown is the cross connection needed to aggregate data before transport to the correlator.

The data rate out of the coarse filterbanks for both polarisations is approximately the same as that from the antenna: 9GHz x 2pol x 8+8bits/sample = 288Gb/s. FPGAs with DRAM data rates of ~2Gb/s per pin [7] have already been announced. Future devices will have higher data rates, so at most 150 pins are needed to write data to DRAM and a further 150 pins are needed to read the data. Five 72-pin DRAM modules are sufficient for this. The DRAMs provide a 4 second transient buffer assuming 32Gbyte DRAMs are used.
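The link-rate, filterbank-load and buffer figures above can be checked directly. A minimal sketch, assuming real sampling at 2 samples per Hz of bandwidth and the five-module DRAM configuration described:

```python
# Sketch of the Section 4 link and filterbank arithmetic
# (assumes real sampling at 2 samples/Hz of bandwidth, 8-bit samples).

bw_per_fibre_hz = 100e9 / (2 * 8)    # 100 Gb/s link -> at most 6.25 GHz
sample_rate = 2 * 2 * 9e9            # 2 pol x 2 samples/Hz x 9 GHz = 36 GS/s
filterbank_macs = 20 * sample_rate   # ~20 real MACs/sample -> ~0.7 TMAC/s

# Coarse filterbank output, both polarisations, 8+8-bit complex samples:
out_rate = 9e9 * 2 * 16              # 288 Gb/s

# Transient buffer: five 32-GByte DRAM modules fed at 288 Gb/s = 36 GB/s
buffer_seconds = (5 * 32) / 36       # ~4.4 s, the "4 second" buffer
```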
By the time of implementation, 32Gbyte DRAMs are expected to be standard, and higher capacity DRAMs should be available, allowing transient buffering of over 10 seconds. A possible design for the filterbank board has 4 FPGAs and 20 DRAM modules. Each FPGA processes dual polarisation data from a single antenna. The FPGA has 30 10G links to support the I/O (using the same bidirectional port for input from the antenna and output to the correlator). This requires 30 10G SERDES per FPGA; Altera [6] already have devices with 66 10G SERDES. The board processes the data from 4 antennas (12 fibres). A single 16-board shelf can process data for 64 antennas, so 47 shelves are needed for a full SKA. The 47 shelves contain 752 boards, so by assumption 3 it is not possible to transport the data from individual filterbank boards to a single correlator shelf. With 3000 dual polarisation signals and at most 256 fibres into a correlator shelf, each fibre must transport data for at least 12 antennas. Thus a cross connect is needed between filterbank boards. There are 12 fibres, carrying 8-bit data, coming into a filterbank board. The output is 4-bit data, for which 6 fibres are sufficient. For a shelf with 16 filterbank boards there are at least 96 output fibres per shelf. The cross connect between the boards allows each output fibre to carry part of the data for all filterbanks. For example, if there are 96,000 frequency channels and 96 fibres, then after the cross connect (Figure 1) each fibre carries data for 1000 frequency channels. At least one fibre is needed to connect a filterbank shelf to a correlator shelf, and each correlator shelf processes part of the total bandwidth. Assumption 3 limits the possible outputs from a filterbank shelf to fewer than ~256, so there are at most 256 correlator shelves. The minimum correlator shelf count occurs if there are multiple fibres from each filterbank shelf to a correlator shelf.
For 47 filterbank shelves there can be up to 5 fibre connections to each correlator shelf, giving 235 inputs per correlator shelf. In this case, there are 20 correlator shelves.
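The shelf and fibre counting above follows directly from the stated board and shelf capacities; a sketch of the arithmetic (the routing itself is illustrative):

```python
# Shelf and fibre counting for the WBSPF filterbank system,
# using the capacities stated in the text.
import math

antennas = 3000                   # dual polarisation signals into the correlator
antennas_per_shelf = 16 * 4       # 16 boards per shelf, 4 antennas per board
fb_shelves = math.ceil(antennas / antennas_per_shelf)    # 47 shelves

channels = 96000
output_fibres_per_shelf = 96      # 6 output fibres x 16 boards
ch_per_fibre = channels // output_fibres_per_shelf       # 1000 channels/fibre

# With 5 fibres from each filterbank shelf into one correlator shelf,
# the correlator shelf input count stays under the ~256 limit:
inputs_per_corr_shelf = fb_shelves * 5                   # 235
```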

The choice of the number of shelves at this time is arbitrary, but to make comparisons easier the compute capacity of a shelf is chosen to match the requirements for an aperture array correlator shelf, C_S = 7.5x10^14 CMAC/s [A.2]. For WBSPFs the compute load C_WBSPF is 1.63x10^17 CMAC/s [A.2]. Hence, 216 correlator shelves are needed, as shown in Figure 2. In this design the data rate on each fibre to the correlator is less than 50Gb/s. A system with fewer correlator shelves would make better use of the filterbank-correlator 100Gb/s links.

[Figure: WBSPF fibres, 6GHz of BW each -> 47 filterbank shelves -> one fibre from each filterbank shelf to each correlator shelf -> 216 correlator shelves -> correlations]

Figure 2 WBSPF correlator

Each fibre to the correlator carries data for 464 (16x29) frequency channels for all 64 antennas processed by a filterbank shelf, assuming ~10^5 frequency channels. To aggregate the data, a cross connect is required within the filterbank shelf. Another cross connect is needed in the correlator shelf to distribute the 464 frequency channels across the 16 correlator boards. Each correlator board processes data for 29 frequency channels. The SKAMP/MWA 4-bit correlation cell [8] implements a 4-bit CMAC with a single 18-bit multiplier and associated memory and logic. Currently proposed FPGAs [7] have ~4000 18-bit multipliers that can implement up to 4000 CMACs at 0.4GHz = 1.6x10^12 CMAC/s. Assuming 8 FPGAs per board and 16 boards per shelf, the shelf has 128 FPGAs and can implement 2.048x10^14 CMAC/s. This is a factor of 4 less than the requirement. A SKA correlator based on FPGAs should be possible within two more generations of FPGA. If an ASIC is designed then a correlator is possible with current 40nm technology, and each ASIC would dissipate about the same as a high-end FPGA: ~50W. There are 216 shelves with 128 ASICs each, giving a total of ~28,000 ASICs. These would dissipate ~1.4MW.
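The shelf-capacity comparison above can be sketched as follows; note the memo's 216-shelf figure corresponds to a WBSPF load of ~1.62x10^17 CMAC/s, which the text rounds to 1.63x10^17:

```python
# Correlator shelf capacity vs the C_S target (figures from the text).
import math

cmac_per_fpga = 4000 * 0.4e9            # 4000 CMACs at 0.4 GHz = 1.6e12 CMAC/s
fpga_shelf = 8 * 16 * cmac_per_fpga     # 128 FPGAs -> 2.048e14 CMAC/s
c_s = 7.5e14                            # target shelf capacity [A.2]
shortfall = c_s / fpga_shelf            # ~3.7, the "factor of 4"

wbspf_load = 4 * (3000 ** 2 / 2) * 9e9  # 1.62e17 CMAC/s
shelves = math.ceil(wbspf_load / c_s)   # 216 correlator shelves
```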
Data transport and distribution, and the filterbanks, would take the total dissipation to close to 2MW. This shows the feasibility of an SKA correlator using 100G fibre links and today's processing technology. With future lower-power technology the correlator power dissipation will be much less than 1MW. As a rule of thumb the cost of a fully populated shelf is $100-200k, so the cost of the 263 correlator and filterbank shelves is estimated to be $26 to $52 million. The cost excludes any NRE costs for an ASIC or system development costs. Added to this are the 9000 input 100G fibre links and the 47x216 = 10,152 short haul filterbank-to-correlator links. These are possibly $200 to $1000 per link pair at the time of the SKA, which adds $4 to $20M to the cost. Moving the filterbanks to the antenna reduces the long haul data fibre costs, as the correlator data has fewer bits per sample. However, the cross connect operation implemented in what were the filterbank shelves is still needed, and there is now an extra filterbank system to be built at the antennas. An investigation is needed to determine which approach is cheaper.

5. DISH WITH PAF

For dishes with PAFs a beamformer is needed. Here it is assumed that the beamformer is at the antenna, as this reduces the data rate from the antenna. In the beamformer the data is first decomposed into coarse frequency channels of ~1MHz. This data is then beamformed using a weighted add of the data in the ~1MHz channels. Up to 36 beams are formed with a bandwidth of 600MHz. The beamformer will also allow a tradeoff to be made between beams and

bandwidth, for example 25 beams at a bandwidth of 864MHz. But this does not affect the correlator requirements. To minimise data transport from the antennas, the beam data should be decimated to its final frequency resolution. This allows the number of bits to be reduced to the 4+4-bit data resolution of the correlator. Each fibre carries dual polarisation fine filterbank data for part of the bandwidth and some of the beams. This could be 9 dual polarisation beams at 600MHz per fibre, but the actual split is unimportant. There are 4 fibres per antenna to carry the data for 36 beams. As the WBSPF and PAF are not used simultaneously, the 3 WBSPF fibres can be reused to transport PAF data. An extra fibre per antenna is needed to carry the rest of the data. The compute load for the PAF correlations is 1.73x10^17 CMAC/s [A.2], which is 6% more than that for the WBSPF correlator.

5.1 Simultaneous WBSPF and PAF operation

With the PAF operating there is still WBSPF data coming from the 1600 antennas beyond 20km. This provides a permanent VLBI mode. The total point source sensitivity for this VLBI mode is 4/9 of that for the full 3600 antennas. The observing speed is the square of this. This suggests that for high resolution astronomy the outer antennas operate at 0.2 of the observing speed of the full 3600 antennas. However, for high resolution astronomy the baselines less than 20km add little to the sensitivity. In effect, they provide a single almost noise free datum. It is the correlations between the core and the outer antennas which provide the vast majority of the data. In terms of correlation, there are 1600^2/2 correlations between outer antennas and 1600x2000 correlations between core and outer antennas. Observing speed is proportional to the total number of correlations, so the reduction in observing speed is (1600^2/2)/(1600^2/2 + 1600x2000) ≈ 0.29. Note, station beamforming does not change the sensitivity calculation, only the field of view.
It is expected that all the outer antenna stations will be beamformed in this mode of operation. The larger the number of antennas beamformed, the lower the compute load. For an upper limit, consider stations with 12 antennas, with each station generating 4 beams. The compute load is 1.3x10^15 CMAC/s [A.2]. Another possible mode (180km mode) is correlation of the WBSPF data from the 700 antennas in the range 20 to 180km from the core. This has a sensitivity that is ~25% of the full SKA for the same resolution, but it still provides a useful adjunct while the PAFs are being used. The compute load for this mode is 8.8x10^15 CMAC/s [A.2].

Table 2 Correlator compute load for Dishes

Mode | PAF (CMAC/s) | WBSPF (CMAC/s) | Total (CMAC/s)
3600 WBSPF | - | 1.63x10^17 | 1.63x10^17
PAF + VLBI | 1.73x10^17 | 1.3x10^15 | 1.74x10^17
PAF + VLBI + 180km | 1.73x10^17 | 1.0x10^16 | 1.83x10^17

A summary of the compute load for the correlator processing dish data is shown in Table 2. The added correlator load for always operating the dishes beyond 20km is small. Adding 12% to the capacity of the WBSPF correlator is sufficient for all the proposed dish PAF and WBSPF modes. This added capacity increases the number of correlator shelves to 242.
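The Table 2 totals, the 12% added capacity and the observing-speed fraction quoted in Section 5.1 follow from simple sums of the stated loads:

```python
# Table 2 totals and added-capacity estimate (loads as quoted in the text).
paf   = 1.73e17        # PAF correlations
vlbi  = 1.3e15         # beamformed stations beyond 180 km
km180 = 8.8e15         # 700 antennas, 20-180 km

total = paf + vlbi + km180            # ~1.83e17 CMAC/s
overhead = total / 1.63e17            # ~1.12, the "adding 12%"
shelves = int(216 * overhead)         # 242 correlator shelves

# Observing-speed fraction when only outer-antenna correlations are kept:
outer = 1600 ** 2 / 2
fraction = outer / (outer + 1600 * 2000)   # 2/7, ~0.29
```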

5.2 Combined PAF and WBSPF correlator

The existing WBSPF fibres now carry either WBSPF data or 3/4 of the data for the PAF. An extra 2000 fibres are needed for the rest of the PAF data. These fibres could carry data for 6 of the 36 beams at the final frequency and quantisation resolution needed by the correlator. Eleven filterbank shelves are needed to cross connect the data from the 2000 fibres, even though the compute capacity of these shelves is not needed. However, for these 11 shelves the data on each output fibre is close to 100Gb/s, as there is no data reduction in the filterbank shelves. There are now 59 filterbank shelves and each shelf routes a single 100G fibre to each of the 242 correlator shelves. If this number of shelves is excessive then higher capacity correlator shelves are required. The number of possible modes complicates the operation of the correlator, but FPGA-, GPU- or CPU-based correlators are easy to reconfigure. Alternatively, a combination of these for control and data routing, with ASICs to form the correlations, is a possibility. The layout of the resulting correlator is shown in Figure 3. Also shown are the data paths and some of the processing for non-imaging processing. These are discussed in the next section.

[Figure: PAF fibres (9 beams, 0.6GHz) -> 11 filterbank shelves, and 9000 WBSPF fibres (6GHz of BW each, or 3000 WBSPF fibres and 6000 PAF fibres) -> 47 filterbank shelves; one fibre from each filterbank shelf to each of the 242 correlator shelves -> correlations. Side paths: transient buffer and beam power spectra from the filterbanks; incoherent beam processing with transient buffer dump and processing; tied array beam data to dish tied array beam post processing (dedispersion, pulsar and transient processing); other trigger sources feeding the PAF transient buffer trigger]

Figure 3 WBSPF and PAF correlator

6. NON-IMAGING PROCESSING

Non-imaging processing involves all processing that generates astronomy data without an image being formed. It includes:

- the generation of tied array beams,
- searching for pulsars and other transient events in tied array beams,
- incoherent summing of beam data as a simple way to detect strong transients in the field of view of an antenna beam,
- storage of beam data in a transient buffer and the triggering of that buffer, and
- retrieval and processing of the data in the transient buffer after a trigger.

6.1 Tied Array Beams

The data from all stations is first aggregated in the correlator. This makes the correlator a location where tied array beams can be implemented. In earlier parts of the system only part of

the antenna data is present, so only partial tied array beams can be generated. These partial beams would then need to be summed separately. For example, in the WBSPF filterbanks data for up to 96 antennas is available, and beam data from the 47 filterbank shelves needs to be aggregated and summed before a complete tied array beam is available. This increases the tied array data transported by a factor of 47 compared to generating the beams in the correlator. The possible number of beams is also limited: there are fewer filterbank shelves than correlator shelves, and most of the output data capacity is used to connect to the correlator. The compute cost of a tied array beam is one complex multiply per input sample per beam. In FPGAs this could possibly be a 4+4 bit complex multiply using a single 18-bit multiplier [8]. This gives station phasing with a maximum error of 4 degrees. For the correlator, the compute cost per input sample is 2000 complex multiplies for PAFs and 3000 for WBSPFs (the number of correlations divided by the number of electrical inputs). If the tied array beamforming compute load is limited to a maximum of 10% of the correlation compute load, then 200 tied array beams can be generated per PAF beam. This gives up to 36x200 = 7200 tied array beams for the dish with PAF system, a total of ~4,000GHz of dual polarisation tied array beams. A 100Gb/s fibre can transport 3GHz of dual polarisation data, so ~1300 fibres are needed. This corresponds to 6 fibres for each of the 242 PAF correlator shelves. This number of output optical connections is within the capabilities of the proposed design, with the limit reached only at a considerably larger number of tied array beams. For the WBSPF there are ~3000 complex multiplies per input sample. With a 10% added load there are 300 dual polarisation beams. This requires 900 fibres for signal transport; four fibres per correlator shelf are needed.
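The PAF tied-array budget above can be sketched as follows; the memo rounds the results down to ~4,000GHz and ~1300 fibres:

```python
# Tied array beam budget for the PAF system: beamforming limited to
# 10% of the correlation compute load (figures from the text).
mults_per_input_sample = 2000          # correlations / electrical inputs
beams_per_paf_beam = int(0.10 * mults_per_input_sample)   # 200
tied_beams = 36 * beams_per_paf_beam                      # 7200 beams

total_bw_ghz = tied_beams * 0.6        # 4320 GHz dual polarisation
fibres = total_bw_ghz / 3              # 3 GHz dual pol per 100G fibre -> 1440
per_shelf = fibres / 242               # ~6 fibres per correlator shelf
```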
Each fibre carries data for part of the full bandwidth of the correlator shelf for a subset of the tied array beams. In the case of the WBSPF the bandwidth on each fibre is 42MHz. To generate a single tied array beam this data is aggregated. There are separate shelves to aggregate the data. Each of these shelves takes as input a single fibre from each correlator shelf. This requires six shelves with 242 fibres into each shelf. Inside these shelves the data is distributed across the processing boards so that each board receives data for the full bandwidth of the beams it is processing.

6.2 Transient and Pulsar processing

With ~130Tb/s of tied array beam data it will be necessary to process the data in real time. The first operation that may be needed is a resampling of the data, as the correlator normally operates at a frequency resolution needed for imaging. In the case of the PAF the frequency resolution may be ~20kHz. For the detection of transients it may be necessary to bring the data back to the original sampling rate. A major use of these beams is the detection of transients. For short duration transients, dispersion smears the transient. It is proposed that the data is first de-dispersed over a fairly coarse range of DMs (dispersion measures) before being sent to transient detection. A simpler task is pulsar timing, where de-dispersion to a single DM is needed. For pulsar searching, very high time resolution Fourier transforms are needed. It is proposed that this operation occur in a separate shelf to the one that aggregates and resamples the data.

6.3 Incoherent beam forming

Summing of beam power gives a field of view equal to that of an antenna beam, but at a lower sensitivity compared to a tied array beam. It provides yet another way to detect transients. The computing cost for this processing is quite low. Assuming a 1ms dump of 1MHz bandwidth data, the beam power data is ~1000 times less than the voltage data.
A single 100Gb/s fibre can transport this data for each filterbank shelf and still have room to increase the dump rate or frequency resolution. This data is collected in a single shelf and summed. For dishes with PAFs, strong transients can be detected over ~30 square degrees.

6.4 Transient buffer and transient trigger

The two-stage filterbank method described here requires DRAM to store data between the coarse and fine filterbanks. At the time of the SKA the cheapest DRAM module may hold 32Gbytes with a data rate of 16Gbytes/s per DRAM. Half of this is used for input, so the DRAMs hold ~4 seconds of data. Higher capacity DRAMs would increase this to tens of seconds. An alternative is to use less data precision and a separate transient buffer write to increase the buffer storage time by a factor of 2 to 8. The data paths for transient buffer data are shown in Figure 3. The transient buffer is written continuously. Systems such as the tied array beams, incoherent beam processing, or external systems such as X-ray satellites, provide a trigger to freeze most of the buffer. Part of the buffer is still needed for the operation of the filterbanks. In a multibeam system, this freeze would be on one or a limited number of beams. This buffer is useful for transients with time variation faster than a second. A couple of seconds of data covering the time of the transient is sufficient for most applications. For very high dispersion events a longer buffer is needed (science input is needed to determine appropriate buffer sizes). However, the trigger needs to be applied before the data is lost from the buffer. For the local detection of transients a 4 second buffer is sufficient, but for transients detected by other means a longer buffer is needed. The possible paths by which the buffer might be triggered need further exploration by the SKA scientists. After a trigger the buffer, or appropriate parts of it, is transferred to a CPU cluster and processed in non-real time. This processing could, for example, be the imaging of the data at high time resolution.

7. APERTURE ARRAY CORRELATORS

This description closely follows the design already proposed for the AA [9] but uses 100G fibre links.
It also addresses the proposal for correlating the dense and sparse aperture arrays in the frequency range where they overlap. Dense and sparse aperture arrays have the same number of stations, so the correlators for all of them can be built using a common correlator shelf. Beamforming takes place at the station, so there is a choice between sending coarse filterbank data at ~8-bit resolution or fine filterbank data, ready for the correlator, at 4-bit resolution. With 4-bit correlator data the 100Gb/s fibres from the station can carry 12GHz of data. Assuming 1200 dual polarisation beams per station, each fibre could carry 5MHz of data for all beams. For AA hi the bandwidth is 600MHz, requiring 120 fibres from each station. For AA lo the bandwidth is 400MHz and 80 fibres are needed. The large number of optical connections indicates that it will be better to transport 4-bit correlator data. If the arrays are operated separately, a single fibre from each station, carrying 5MHz of bandwidth, could be the input to a correlator shelf. The compute load C_S for these correlator shelves is 7.5x10^14 CMAC/s [A.2.1]. The aperture arrays overlap in the frequency range 300 to 450MHz. Forming the correlations between the two arrays in this frequency band improves sensitivity. In the frequency range 300 to 450MHz each correlator shelf would need 500 inputs to directly receive data from all aperture arrays. This is not possible by assumption 3. Thus a cross connect is needed at the correlator for data in the overlap region. If wavelength division multiplexing is used on the 100G fibres then a passive optical cross connect is possible. In the cross connect, the data in each 5MHz band is divided into 1.25MHz bands, with each output fibre carrying data for all 500 inputs. The data in each 1.25MHz band is transported on 125 fibres. The correlation load for 500 inputs in a 1.25MHz band is identical to that for 250 inputs in a 5MHz band.
Hence, the same correlator shelf can be used to provide the required correlation capacity. A possible configuration for this correlator is shown in Figure 4.

Strawman SKA Correlator 2010/08/16, Version 1.0

Figure 4 Possible configuration of AA correlators. (The figure shows AA hi fibres, 250 x 90, 1200 beams, 450MHz, into a 90-shelf correlator module; AA lo & hi fibres, 500 x 30, 1200 beams, 150MHz, via an optical cross connect into a 120-shelf correlator module; AA lo fibres, 250 x 50, 1200 beams, 250MHz, into a 50-shelf correlator module; plus incoherent beam processing, transient dump and processing with triggers to the transient buffers at the stations, and tied array beam post-processing with dedispersion, pulsar and transient processing.)

In the non-overlap region, each correlator shelf receives a single fibre from each station. There is a one-to-one correspondence between fibres from an antenna station and a correlator shelf. Hence there is one shelf for each 5MHz of bandwidth: 90 shelves for the 450MHz of bandwidth from AA hi and 50 shelves for the 250MHz of bandwidth from AA lo. In the 150MHz overlap region, the cross connect redistributes the data into 120 groups of 125 fibres. Each group carries 1.25MHz of data for all aperture arrays and connects to a single shelf. For transient processing, the transient buffer data and incoherent beam data come from the stations.

There are 260 correlator shelves, compared to 242 shelves in the dish WBSPF/PAF correlator (Figure 3). The cost of the correlator is ~$26-$52M for the correlator shelves. The 50,000 100G fibre links from the antenna arrays add $10-$50M. This does not include the cost of the optical cross connect, the filterbanks or the NRE for an ASIC. Within the antenna station beamformers there will also be the equivalent of the dish filterbank input fibres. These will probably carry data at a resolution of 8+8 bits complex and will require the equivalent of 100,000 100G fibres, ~$20-$100M. A significant fraction of the correlator compute capacity is devoted to the overlap correlation.
However, AA hi will not be producing data in the overlap region all of the time, in which case the overlap correlator is only 50% utilised. To fully utilise the correlator capacity, the AA hi bandwidth is reduced to 300MHz when the overlap is in operation. When there is no overlap processing between AA hi and AA lo, the overlap correlator shelves are split: 30 shelves process AA lo data and 90 shelves AA hi data. Further modes are possible, with a different balance between the AA hi total bandwidth and the combined bandwidth. The total number of correlator shelves is reduced to 200. This configuration is shown below. However, the cross connect is more complex and it may not be possible to implement it optically. This may add 120 shelves of cross-connection or switching hardware.
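The shelf totals for the two configurations follow directly from the per-shelf bandwidths quoted in the text; a minimal check:

```python
# Shelf counts for the two AA correlator configurations, using the memo's
# figures: 5 MHz per shelf for a 250-input (single-array) shelf, and
# 1.25 MHz per shelf for a 500-input (combined-overlap) shelf.

# Dedicated overlap correlator (Figure 4): 90 AA hi + 120 overlap + 50 AA lo
fig4 = 450 // 5 + int(150 / 1.25) + 250 // 5
print(fig4)                      # 260 shelves

# Dual-mode correlator (Figure 5): AA hi reduced to 300 MHz when combined,
# so only 150 MHz of dedicated AA hi shelves are needed alongside the
# 120 mixed shelves and 50 AA lo shelves.
fig5 = 150 // 5 + int(150 / 1.25) + 250 // 5
print(fig5)                      # 200 shelves
```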

Figure 5 AA correlator with a 150MHz combined AA lo and AA hi correlator, or separated AA lo and AA hi correlators. When the combined correlator is operating, the AA hi bandwidth is reduced to 300MHz. (The figure shows AA hi fibres, 250 x 30, 1200 beams, 150MHz, into a 30-shelf AA hi correlator module; AA lo fibres, 250 x 30, 150MHz, and AA hi fibres, 250 x 90, 450MHz, via a cross connect into a 120-shelf mixed correlator module; AA lo fibres, 250 x 50, 1200 beams, 250MHz, into a 50-shelf AA lo correlator module; plus the incoherent beam, transient and tied array beam processing of Figure 4.)

8. CONCLUSION

The correlation requirements for a WBSPF and a PAF are similar. The PAF has more total bandwidth, but there are more WBSPF antennas. Assuming the two are not operating simultaneously, the correlator can be shared. When the ~2000 antennas with PAFs are in operation, the longer-baseline antennas can be operated independently at little extra cost in the correlator. The observing speed for VLBI is ~0.28 of the full SKA in this case.

For aperture arrays it is possible to connect the stations directly to a correlator shelf, but this is expected to fully utilise the input capacity of a correlator shelf. In the overlap region between AA lo and AA hi, data from both can be correlated. Science is maximised if there is a dedicated correlator for this overlap region. However, it is not fully utilised when AA hi operates above 450MHz. Better correlator resource utilisation is achieved if a dual-mode correlator is used, but the AA hi bandwidth must be reduced to 300MHz when AA hi and AA lo are combined. The design also addresses non-imaging data processing and demonstrates how transient buffers, tied array beams and incoherent processing can be implemented.
The designs given here are for a fairly arbitrary specification, but all of the designs can be easily scaled. The design scales linearly with beams and bandwidth. Changing the number of antennas or AA stations would require a more detailed analysis.

9. ACKNOWLEDGEMENT

The author would like to thank Neale Morison, Tim Bateman and Wallace Turner for their corrections. The author also thanks David Hawkins for his suggestions, checking the memo and, in particular, for uncovering a number of hidden assumptions that needed clarifying.

REFERENCES

[1] Schilizzi, R.T. et al., "Preliminary Specifications for the Square Kilometre Array"
[2] Iguchi, S., Okumura, S.K., Okiura, M., Momose, M., Chikada, Y., "4-Gsps 2-bit FX Correlator with point FFT", URSI General Assembly 2002, Maastricht, August 2002, paper 970
[3] DeBoer, D.R., et al., "Australian SKA Pathfinder: A High-Dynamic Range Wide-Field of View Survey Telescope Array", IEEE Proceedings, Sept 2009
[4] Kooistra, E., "RadioNet FP7: UniBoard", CASPER Workshop 2009, Cape Town, Sept 29, 2009
[5] Bunton, J.D., "Multi-resolution FX Correlator", ALMA Memo 447, Feb 2003
[6] Stratix V Device Handbook, Altera
[7] Xilinx 7 Series Product Brief, Xilinx
[8] De Souza, L., Bunton, J.D., Campbell-Wilson, D., Cappallo, R., Kincaid, B., "A Radioastronomy Correlator Optimised for the Virtex-4 SX FPGA", IEEE 17th International Conference on Field Programmable Logic and Applications, Amsterdam, Netherlands, Aug 27-29, 2007
[9] Faulkner, A. et al., "The Aperture Arrays for the SKA: the SKADS White Paper", SKA Memo 122, April 2010

APPENDIX

A.1 Multiplication per Input Sample for a Polyphase Filter Bank

A polyphase filterbank consists of short FIR filters at the input to an FFT. The short FIR filters are of length ~10, and the cost of the FFT is ~3(log4(L) - 1) real multiplications per input sample, where L is the length of the FFT. By using the real and imaginary inputs for different signals, the cost of the FFT is reduced to ~1.5(log4(L) - 1). For each input sample the cost in multiplications for the filterbanks, C_fb, is

C_fb = k[10 + 1.5(log4(L) - 1)]S ~ 20S for a 1000-channel filterbank

where k is the oversampling ratio for the filterbank, ~1.2. There are also ~2 additions per multiplication in the FFT and one per multiply in the FIR filter. For simplicity the number of multiply-accumulates (MACs) is made equal to the number of additions, as it is the multiplies that are normally the limiting resource in ASICs and FPGAs.
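Evaluating the filterbank cost formula above confirms the ~20 multiplications per input sample; L = 1024 is used here as a stand-in for the "~1000 channel" case:

```python
import math

def fb_mults_per_sample(L, n_fir=10, k=1.2):
    """Multiplications per input sample for an oversampled polyphase filterbank.

    n_fir is the FIR prototype filter length (~10 in the memo) and k the
    oversampling ratio (~1.2). The FFT term assumes real/imaginary inputs
    carry two independent signals, halving the usual ~3(log4(L)-1) cost.
    """
    fft_cost = 1.5 * (math.log(L, 4) - 1)
    return k * (n_fir + fft_cost)

print(fb_mults_per_sample(1024))   # 19.2, i.e. ~20 as quoted in the memo
```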

APPENDIX

A.2 Correlator compute load

1. Aperture Array

For dense aperture arrays there are 1200 dual-polarisation beams with a bandwidth of 0.6 GHz. For the 250 stations proposed, the total correlator compute load C_DAA is:

C_DAA = 250^2/2 baselines x 1200 beams x 0.6x10^9 Hz x 4 Stokes = 9x10^16 CMAC/s.

Each 100G fibre from the antenna station can carry 6GHz of bandwidth of dual-polarisation beam data to the correlator, which is assumed to process 4-bit complex data (6GHz x 2 pol x (4+4) bits/Hz = 96Gb/s). The compute load C_S for a correlator shelf with a single fibre from each station is:

C_S = 250^2/2 baselines x 6x10^9 Hz x 4 Stokes = 7.5x10^14 CMAC/s.

The sparse aperture arrays process 0.4GHz of bandwidth and the correlator compute load C_SAA is:

C_SAA = 250^2/2 baselines x 1200 beams x 0.4x10^9 Hz x 4 Stokes = 6x10^16 CMAC/s.

To include the correlations in the 150MHz overlap region, there are 250 dense array stations to be correlated with 250 sparse array stations. This adds a correlator compute load C_O of:

C_O = 250x250 baselines x 1200 beams x 0.15x10^9 Hz x 4 Stokes = 4.5x10^16 CMAC/s.

The total correlator compute load is the sum of that for the dense AA, the sparse AA and the overlap: C_DAA + C_SAA + C_O = 1.95x10^17 CMAC/s.

2. Phased Array Feeds

For dishes with phased array feeds there are 36 dual-polarisation beams with a bandwidth of 0.6 GHz. For the 2000 antennas proposed, the total correlator compute load C_PAF is:

C_PAF = 2000^2/2 baselines x 36 beams x 0.6x10^9 Hz x 4 Stokes = 1.73x10^17 CMAC/s.

3. WBSPFs

For dishes with WBSPFs there are 2700 antennas within 180 km of the core, and for the antennas beyond this there are a total of 300 beams. The number of baselines is approximately that for 3000 separate antennas. With a bandwidth of 9 GHz the correlator compute load C_WBSPF is:

C_WBSPF = 3000^2/2 baselines x 9x10^9 Hz x 4 Stokes = 1.62x10^17 CMAC/s.
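The loads above are products of baselines, beams, bandwidth and Stokes parameters; a sketch that recomputes them from the memo's figures:

```python
def cmacs(n_inputs, beams, bw_hz, stokes=4):
    """Correlator compute load: baselines x beams x bandwidth x Stokes."""
    baselines = n_inputs**2 / 2
    return baselines * beams * bw_hz * stokes

C_DAA = cmacs(250, 1200, 0.6e9)          # dense AA:  9.0e16
C_S   = cmacs(250, 1, 6e9)               # one shelf: 7.5e14
C_SAA = cmacs(250, 1200, 0.4e9)          # sparse AA: 6.0e16
C_O = 250 * 250 * 1200 * 0.15e9 * 4      # overlap (all cross-baselines): 4.5e16
C_PAF   = cmacs(2000, 36, 0.6e9)         # PAF dishes: ~1.73e17
C_WBSPF = cmacs(3000, 1, 9e9)            # WBSPF dishes: 1.62e17

print(C_DAA + C_SAA + C_O)               # total AA load: 1.95e17 CMAC/s
```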
When the inner 2000 antennas are being operated with PAFs, it is expected that all outer antenna stations will be beamformed and correlated separately. The larger the number of antennas beamformed, the lower the compute load. For an upper limit, consider stations of 12 antennas, with each station generating 4 beams. There are 133 stations, and the total compute load C_VLBI for the long-baseline WBSPFs is:

C_VLBI = 133^2/2 baselines x 4 beams x 9x10^9 Hz x 4 Stokes = 1.3x10^15 CMAC/s.

Another possible mode (the 180km mode) is correlation of the WBSPFs on the 700 antennas in the range 20 to 180km from the core. This has a sensitivity that is ~25% of the full SKA for the same resolution, but it still provides a useful adjunct while the PAFs are being used. The correlation load C_180 for this mode is:

C_180 = 700^2/2 baselines x 9x10^9 Hz x 4 Stokes = 8.8x10^15 CMAC/s.
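The two reduced WBSPF modes follow the same pattern; a quick check of the stated values:

```python
# Long-baseline (VLBI) mode: 133 stations of 12 beamformed antennas,
# 4 beams per station, 9 GHz bandwidth, 4 Stokes.
C_VLBI = (133**2 / 2) * 4 * 9e9 * 4
print(f"{C_VLBI:.2g}")   # ~1.3e+15 CMAC/s

# 180km mode: 700 individual antennas between 20 and 180 km from the core.
C_180 = (700**2 / 2) * 9e9 * 4
print(f"{C_180:.2g}")    # ~8.8e+15 CMAC/s
```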


More information

QuiXilica V5 Architecture

QuiXilica V5 Architecture QuiXilica V5 Architecture: The High Performance Sensor I/O Processing Solution for the Latest Generation and Beyond Andrew Reddig President, CTO TEK Microsystems, Inc. Military sensor data processing applications

More information

Network Design Considerations for Grid Computing

Network Design Considerations for Grid Computing Network Design Considerations for Grid Computing Engineering Systems How Bandwidth, Latency, and Packet Size Impact Grid Job Performance by Erik Burrows, Engineering Systems Analyst, Principal, Broadcom

More information

Current and Projected Digital Complexity of DMT VDSL

Current and Projected Digital Complexity of DMT VDSL June 1, 1999 1 Standards Project: T1E1.4:99-268 VDSL Title: Current and Projected Digital Complexity of DMT VDSL Source: Texas Instruments Author: C. S. Modlin J. S. Chow Texas Instruments 2043 Samaritan

More information

SPEAD Recommended Practice

SPEAD Recommended Practice SPEAD Recommended Practice Document number: Revision: Classification: C Open Source, GPL Author: J. Manley, M. Welz, A. Parsons, S. Ratcliffe, R. van Rooyen Date: Document History Revision Date of Issue

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction In a packet-switched network, packets are buffered when they cannot be processed or transmitted at the rate they arrive. There are three main reasons that a router, with generic

More information

"On the Capability and Achievable Performance of FPGAs for HPC Applications"

On the Capability and Achievable Performance of FPGAs for HPC Applications "On the Capability and Achievable Performance of FPGAs for HPC Applications" Wim Vanderbauwhede School of Computing Science, University of Glasgow, UK Or in other words "How Fast Can Those FPGA Thingies

More information

Memory Systems IRAM. Principle of IRAM

Memory Systems IRAM. Principle of IRAM Memory Systems 165 other devices of the module will be in the Standby state (which is the primary state of all RDRAM devices) or another state with low-power consumption. The RDRAM devices provide several

More information

LWA DRX C Language Simulation Report - v0.1

LWA DRX C Language Simulation Report - v0.1 LWA DRX C Language Simulation Report - v0.1 Johnathan York (ARL:UT) February 1, 2008 1 Introduction This document contains a short report on the Long Wavelength Array (LWA) Digital Receiver (DRX) C simulator

More information

CASA. Algorithms R&D. S. Bhatnagar. NRAO, Socorro

CASA. Algorithms R&D. S. Bhatnagar. NRAO, Socorro Algorithms R&D S. Bhatnagar NRAO, Socorro Outline Broad areas of work 1. Processing for wide-field wide-band imaging Full-beam, Mosaic, wide-band, full-polarization Wide-band continuum and spectral-line

More information

EVLA Correlator P. Dewdney Dominion Radio Astrophysical Observatory Herzberg Institute of Astrophysics

EVLA Correlator P. Dewdney Dominion Radio Astrophysical Observatory Herzberg Institute of Astrophysics EVLA Correlator Dominion Radio Astrophysical Observatory Herzberg Institute of Astrophysics National Research Council Canada National Research Council Canada Conseil national de recherches Canada Outline

More information

Low Power Design Techniques

Low Power Design Techniques Low Power Design Techniques August 2005, ver 1.0 Application Note 401 Introduction This application note provides low-power logic design techniques for Stratix II and Cyclone II devices. These devices

More information

Modular Passive CWDM Multiplexers (Cassettes)

Modular Passive CWDM Multiplexers (Cassettes) Modular Passive CWDM Multiplexers (Cassettes) Overview Getting the most out of your network is critical in today s capital-constrained environment. Deploying dedicated fiber runs to every location is both

More information

High-bandwidth CX4 optical connector

High-bandwidth CX4 optical connector High-bandwidth CX4 optical connector Dubravko I. Babić, Avner Badihi, Sylvie Rockman XLoom Communications, 11 Derech Hashalom, Tel-Aviv, Israel 67892 Abstract We report on the development of a 20-GBaud

More information

Energy-Efficient Data Transfers in Radio Astronomy with Software UDP RDMA Third Workshop on Innovating the Network for Data-Intensive Science, INDIS16

Energy-Efficient Data Transfers in Radio Astronomy with Software UDP RDMA Third Workshop on Innovating the Network for Data-Intensive Science, INDIS16 Energy-Efficient Data Transfers in Radio Astronomy with Software UDP RDMA Third Workshop on Innovating the Network for Data-Intensive Science, INDIS16 Przemek Lenkiewicz, Researcher@IBM Netherlands Bernard

More information

ASKAP Central Processor: Design and Implementa8on

ASKAP Central Processor: Design and Implementa8on ASKAP Central Processor: Design and Implementa8on Calibra8on and Imaging Workshop 2014 Ben Humphreys ASKAP So(ware and Compu3ng Project Engineer 3rd - 7th March 2014 ASTRONOMY AND SPACE SCIENCE Australian

More information

Multi-Channel Neural Spike Detection and Alignment on GiDEL PROCStar IV 530 FPGA Platform

Multi-Channel Neural Spike Detection and Alignment on GiDEL PROCStar IV 530 FPGA Platform UNIVERSITY OF CALIFORNIA, LOS ANGELES Multi-Channel Neural Spike Detection and Alignment on GiDEL PROCStar IV 530 FPGA Platform Aria Sarraf (SID: 604362886) 12/8/2014 Abstract In this report I present

More information

All hands meeting MFAA/Receiver Analogue-to-Digital Conversion. Stéphane Gauffre from Université de Bordeaux

All hands meeting MFAA/Receiver Analogue-to-Digital Conversion. Stéphane Gauffre from Université de Bordeaux All hands meeting MFAA/Receiver Analogue-to-Digital Conversion Stéphane Gauffre from Université de Bordeaux Outline 1.Digitisation concepts for MFAA 2.Digital platform 3.Commercially available ADCs 4.Full

More information

Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station

Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station Jet Propulsion Laboratory California Institute of Technology Implementation of a Digital Processing Subsystem for a Long Wavelength Array Station Robert Navarro 1, Elliott Sigman 1, Melissa Soriano 1,

More information

8. Migrating Stratix II Device Resources to HardCopy II Devices

8. Migrating Stratix II Device Resources to HardCopy II Devices 8. Migrating Stratix II Device Resources to HardCopy II Devices H51024-1.3 Introduction Altera HardCopy II devices and Stratix II devices are both manufactured on a 1.2-V, 90-nm process technology and

More information

Project Overview and Status

Project Overview and Status Project Overview and Status EVLA Advisory Committee Meeting, March 19-20, 2009 Mark McKinnon EVLA Project Manager Outline Project Goals Organization Staffing Progress since last meeting Budget Contingency

More information

Performance of relational database management

Performance of relational database management Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate

More information

Radio astronomy data reduction at the Institute of Radio Astronomy

Radio astronomy data reduction at the Institute of Radio Astronomy Mem. S.A.It. Suppl. Vol. 13, 79 c SAIt 2009 Memorie della Supplementi Radio astronomy data reduction at the Institute of Radio Astronomy J.S. Morgan INAF Istituto di Radioastronomia, Via P. Gobetti, 101

More information

SCHEDULE AND TIMELINE

SCHEDULE AND TIMELINE MMA Project Book, Chapter 19 SCHEDULE AND TIMELINE Richard Simon Last Changed 1999-Apr-21 Revision History: 11 November 1998: Complete update from baseline WBS plan. Links to internal NRAO web pages with

More information

System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER

System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER System Engineering, Moore s Law and Collaboration: a Perspective from SKA South Africa and CASPER Francois Kapp, Jason Manley SKA SA - MeerKAT francois.kapp@ska.ac.za, tel: 021-506 7300 Abstract: The Digital

More information

Chapter 5: ASICs Vs. PLDs

Chapter 5: ASICs Vs. PLDs Chapter 5: ASICs Vs. PLDs 5.1 Introduction A general definition of the term Application Specific Integrated Circuit (ASIC) is virtually every type of chip that is designed to perform a dedicated task.

More information

Mark 6 Next-Generation VLBI Data System

Mark 6 Next-Generation VLBI Data System Mark 6 Next-Generation VLBI Data System, IVS 2012 General Meeting Proceedings, p.86 90 http://ivscc.gsfc.nasa.gov/publications/gm2012/whitney.pdf Mark 6 Next-Generation VLBI Data System Alan Whitney, David

More information

Data Handling and Transfer for German LOFAR Stations

Data Handling and Transfer for German LOFAR Stations Data Handling and Transfer for German LOFAR Stations anderson@mpifr-bonn.mpg.de On behalf of LOFAR and GLOW (slides stolen from many other LOFAR presentations by other people) 1/24 Some Initial Comments

More information

Zynq-7000 All Programmable SoC Product Overview

Zynq-7000 All Programmable SoC Product Overview Zynq-7000 All Programmable SoC Product Overview The SW, HW and IO Programmable Platform August 2012 Copyright 2012 2009 Xilinx Introducing the Zynq -7000 All Programmable SoC Breakthrough Processing Platform

More information

Session S0069: GPU Computing Advances in 3D Electromagnetic Simulation

Session S0069: GPU Computing Advances in 3D Electromagnetic Simulation Session S0069: GPU Computing Advances in 3D Electromagnetic Simulation Andreas Buhr, Alexander Langwost, Fabrizio Zanella CST (Computer Simulation Technology) Abstract Computer Simulation Technology (CST)

More information

BlueGene/L. Computer Science, University of Warwick. Source: IBM

BlueGene/L. Computer Science, University of Warwick. Source: IBM BlueGene/L Source: IBM 1 BlueGene/L networking BlueGene system employs various network types. Central is the torus interconnection network: 3D torus with wrap-around. Each node connects to six neighbours

More information

White Paper The Need for a High-Bandwidth Memory Architecture in Programmable Logic Devices

White Paper The Need for a High-Bandwidth Memory Architecture in Programmable Logic Devices Introduction White Paper The Need for a High-Bandwidth Memory Architecture in Programmable Logic Devices One of the challenges faced by engineers designing communications equipment is that memory devices

More information

VLBI progress Down-under. Tasso Tzioumis Australia Telescope National Facility (ATNF) 25 September 2008

VLBI progress Down-under. Tasso Tzioumis Australia Telescope National Facility (ATNF) 25 September 2008 VLBI progress Down-under Tasso Tzioumis Australia Telescope National Facility (ATNF) 25 September 2008 Outline Down-under == Southern hemisphere VLBI in Australia (LBA) Progress in the last few years Disks

More information