Intel Stratix 10 Transceiver Usage


Updated for Intel Quartus Prime Design Suite: 17.1

Contents

1 Transceiver Layout ... 3
  1.1 L-Tile and H-Tile Overview ... 4
    1.1.1 PLLs ... 4
    1.1.2 Transmitter Clock Network ... 8
    1.1.3 GXT Clock Network ... 12
    1.1.4 Calibration ... 15
2 Tile Architecture Constraints ... 16
  2.1 Transceiver Channel Placement ... 16
    2.1.1 Possible Combinations of GX and GXT Channels ... 16
    2.1.2 GX Channels ... 20
    2.1.3 GXT Channels ... 22
    2.1.4 Reference Clock Guidelines for L-Tile and H-Tile ... 28
    2.1.5 PLL Placement ... 29
  2.2 Unsupported Dynamic Reconfiguration Features ... 34
  2.3 Intel Stratix 10 L-Tile Transceiver to H-Tile Transceiver Migration ... 34
  2.4 Thermal Guidelines ... 35
3 PCIe Guidelines ... 36
  3.1 PCIe Hard IP ... 36
    3.1.1 Channel Placement for PCIe Hard IP ... 36
    3.1.2 PLL Placement for PCIe Hard IP ... 37
  3.2 PHY Interface for PCI Express (PIPE) ... 40
    3.2.1 Channel Placement for PIPE ... 40
    3.2.2 PLL Placement for PIPE ... 40
4 Document Revision History ... 41

1 Transceiver Layout

Note: This application note currently covers Intel Stratix 10 L-Tile ES1, L-Tile, and H-Tile (L-Tile/H-Tile) information. E-Tile information will be available in a future document.

Intel Stratix 10 devices support a transceiver tile architecture. A tile consists of 24 transceiver channels and associated phase-locked loops (PLLs), reference clock buffers, and Hard IPs. There are currently four different types of transceiver tiles:

- L-Tile ES1
- L-Tile
- H-Tile
- E-Tile

The range of capabilities in each tile type offers a customized solution suited to the various transceiver applications. The next section describes the L-Tile and H-Tile in greater detail.

An Intel Stratix 10 device contains one or more tiles on the left and right sides of the device. The types of tiles do not have to be homogeneous. Refer to the table "Transceiver Tile Variants Comparison of Transceiver Capabilities" in the Overview chapter of the Intel Stratix 10 L- and H-Tile Transceiver PHY User Guide for additional information.

Figure 1. Transceiver Tile Layout Example
An Intel Stratix 10 TX device with two different types of tiles on the left side of the device: an E-Tile (24 channels, 4 x 100GE, 100G Ethernet Hard IP) located above an H-Tile (four 6-channel transceiver banks, PCIe Gen3 Hard IP). Each tile contains its own transceiver PLLs and reference clock network and connects to the package substrate through an EMIB.

Related Links
Overview

1.1 L-Tile and H-Tile Overview

The Intel Stratix 10 L-Tile/H-Tile transceivers contain 24 full-duplex channels, grouped into four transceiver banks. Each tile is divided into banks of six channels each:

1 tile = 4 banks * 6 channels = 24 transceiver channels

Each bank contains two triplets of three channels each:

1 tile = 4 banks * 2 triplets * 3 channels = 24 transceiver channels

In an L-Tile, up to 8 transceiver channels can be configured as GXT channels, reaching data rates up to 26.6 Gbps. Similarly, in an H-Tile, up to 16 channels can be configured as GXT channels, reaching data rates up to 28.3 Gbps.

1.1.1 PLLs

Each Intel Stratix 10 L-Tile/H-Tile transceiver bank includes the following TX phase-locked loops (PLLs):

- Two Advanced Transmit (ATX) PLLs
- Two fractional PLLs (fPLLs)
- Two clock multiplier unit (CMU) PLLs (located in channel 1 and channel 4 of each bank)

Table 1. Transmitter PLLs in Stratix 10 L-Tile/H-Tile Devices

PLL Type | Characteristics
ATX PLL | Best jitter performance. LC tank based voltage-controlled oscillator (VCO). Supports fractional synthesis mode (in cascade mode only). Used for both bonded and non-bonded channel configurations.
fPLL | Ring oscillator based VCO. Supports fractional synthesis mode. Used for both bonded and non-bonded channel configurations.
CMU PLL or channel PLL (1) | Ring oscillator based VCO. Used as an additional clock source for non-bonded applications.

(1) The CMU PLL or channel PLL of channel 1 and channel 4 can be used as a transmitter PLL or as a clock data recovery (CDR) block. The channel PLL of all other channels (0, 2, 3, and 5) can only be used as a CDR.

The total number of TX PLLs per tile is:

- Eight ATX PLLs (2 ATX PLLs per bank * 4 banks per tile)
- Eight fPLLs (2 fPLLs per bank * 4 banks per tile)
- Eight CMU PLLs (2 CMU PLLs per bank * 4 banks per tile)
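The bank and tile arithmetic above can be summarized in a short sketch. This is a minimal illustrative Python model; the constant and function names are ours, not from any Intel tool.

    BANKS_PER_TILE = 4
    CHANNELS_PER_BANK = 6

    def tile_resources():
        """Per-tile resource counts for an L-Tile/H-Tile."""
        return {
            "channels": BANKS_PER_TILE * CHANNELS_PER_BANK,  # 24 channels
            "atx_plls": 2 * BANKS_PER_TILE,                  # 8 ATX PLLs
            "fplls": 2 * BANKS_PER_TILE,                     # 8 fPLLs
            "cmu_plls": 2 * BANKS_PER_TILE,                  # 8 CMU PLLs (ch1/ch4 of each bank)
        }

    print(tile_resources())
    # {'channels': 24, 'atx_plls': 8, 'fplls': 8, 'cmu_plls': 8}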

Figure 2. Stratix 10 PLLs and Clock Networks in Two Banks of Intel Stratix 10 L-Tile/H-Tile
The ATX PLL, fPLL, and CMU PLLs can drive the x1 clock network to support non-bonded transceivers. The ATX PLL and fPLL can drive the x6 clock network to support bonded transceivers within the bank. The x6 clock network can drive the x24 clock network in adjacent banks, allowing ATX PLLs and fPLLs to support up to 24 bonded transceiver channels. The x1, x6, and x24 clock networks are described in the Transmitter Clock Network section.
(The figure shows two transceiver banks, each with channels CH0-CH5, the local CDR or CDR/CMU PLL of each channel, the ATX PLLs and fPLLs of each bank, and the x1, x6, and x24 clock lines.)

Note: For further details, refer to the "PLLs and Clock Networks" chapter in the Intel Stratix 10 L- and H-Tile Transceiver PHY User Guide.

1.1.1.1 ATX PLL

The ATX PLLs can be used for bonded and non-bonded applications. The ATX PLLs can access the x1, x6, and x24 clock lines. There are spacing rules between two ATX PLLs running at the same VCO frequency. You can find the VCO frequency in your ATX PLL IP Platform Designer parameters. For more details, refer to Transmitter Clock Network and ATX PLL Spacing Requirements.

Figure 3. ATX PLL Block Diagram
(The figure shows the reference clock multiplexer selecting among the dedicated reference clock pin, the reference clock network, the receiver input pin, and the PLL cascade output; the N counter driving the PFD with refclk and fbclk; the charge pump and loop filter; the VCO; the L and M counters in the feedback path; the lock detector driving pll_locked; and the delta-sigma modulator.)
Note: 1. The delta-sigma modulator is engaged only when the ATX PLL is used in fractional mode.

Related Links
Transmitter Clock Network on page 8
PLL and Clock Networks

1.1.1.2 fPLL

The fPLLs can be used for bonded and non-bonded applications. The fPLLs can access the x1, x6, and x24 clock lines. There are no spacing rules between fPLLs, regardless of their VCO frequencies.

Figure 4. fPLL Block Diagram
(The figure shows the reference clock multiplexer selecting among the dedicated reference clock pin, the reference clock network, the receiver input pin, the PLL cascade output, and the core clock network; the N counter, PFD, charge pump and loop filter, and VCO; the L, M, and C counters; the lock detector driving pll_locked; the delta-sigma modulator; and the dividers feeding the transmitter clock network, the EMIB clock divider, and the cascade network.)

Related Links
Transmitter Clock Network on page 8

1.1.1.3 CMU PLL

CMU PLLs can only be used for non-bonded applications and can only access the x1 clock lines.

When using a CMU PLL in channel 1 or channel 4 of a bank, that channel is no longer available to receive data, but the channel can still be used to transmit data.

Figure 5. CMU PLL Block Diagram
(The figure shows the reference clock multiplexer selecting between the reference clock network and the receiver input pin; the N counter driving the PFD with refclk and fbclk; the lock-to-reference controller with user control (LTR/LTD); the charge pump and loop filter; the VCO; the L and M counters; and the lock detector providing the lock-to-reference PLL lock status output.)

1.1.2 Transmitter Clock Network

The transmitter clock network routes the clock from the transmitter PLL to one or more transmitter channels. It provides two types of clocks to the transmitter channel:

- High-speed serial clock: high-speed clock for the serializer
- Low-speed parallel clock: low-speed clock for the serializer and the PCS

In a bonded channel configuration, both the serial clock and the parallel clock are routed from the transmitter PLL to the transmitter channels. In a non-bonded channel configuration, only the serial clock is routed to the transmitter channels, while the parallel clock is generated locally within each channel. To support various bonded and non-bonded clocking configurations, three types of transmitter clock network lines are available:

- x1 clock lines: span a single bank within a tile and are used for non-bonded channel clocking only
- x6 clock lines: span a single bank within a tile and are used for bonded channel clocking
- x24 clock lines: span all banks within a tile and are used for both PMA bonded and PMA-PCS bonded transceiver channels

All clock lines are contained within a single tile and cannot span multiple tiles.
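These rules reduce to a simple selection: non-bonded channels use x1 lines, bonded channels within a bank use x6 lines, and bonded channels spanning banks use x24 lines. The following illustrative Python sketch (names are ours) encodes that choice.

    def clock_line_type(bonded, spans_multiple_banks):
        """Pick the transmitter clock line type per the rules above."""
        if not bonded:
            return "x1"   # non-bonded channel clocking only
        return "x24" if spans_multiple_banks else "x6"

    print(clock_line_type(bonded=False, spans_multiple_banks=False))  # x1
    print(clock_line_type(bonded=True, spans_multiple_banks=True))    # x24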

Figure 6. x1 Clock Lines
(The figure shows one transceiver bank, channels Ch 0-Ch 5, with the bank's fPLLs, ATX PLLs, and the CMU or CDR PLLs of channels 1 and 4 driving the x1 network.)

Figure 7. x6 Clock Lines
(The figure shows one transceiver bank, channels Ch 0-Ch 5, with the x6 Top and x6 Bottom lines spanning the bank.)

Figure 8. x24 Clock Lines
(The figure shows all four banks of a tile, each with its x6 Top and x6 Bottom lines, connected by the x24 Up and x24 Down lines.)

There are two x24 lines available per tile:

- x24 Up: routes clocks to transceiver banks located above the current bank
- x24 Down: routes clocks to transceiver banks located below the current bank

When using the x24 lines, the maximum channel span is two banks above and two banks below the master bank containing the instantiated TX PLL. If using the x24 clock lines across all four banks within the tile, the TX PLL must be instantiated in one of the middle banks to comply with the channel span requirements.
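The span rule above can be checked mechanically. This is a hypothetical Python sketch of the rule (bank indexing and names are ours): a TX PLL in a master bank reaches at most two banks in each direction, so only a middle master bank covers all four banks of a tile.

    def x24_reachable_banks(master, banks_per_tile=4):
        """Banks reachable over x24 from a TX PLL in bank `master` (0 = bottom)."""
        return [b for b in range(banks_per_tile) if abs(b - master) <= 2]

    for master in range(4):
        print(master, x24_reachable_banks(master))
    # Only master banks 1 and 2 (the middle banks) reach all four banks.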

Related Links
PLL and Clock Networks
Channel Bonding

1.1.2.1 Bonded Transceiver Channels - Guidelines for VCCR_GXB and VCCT_GXB

Table 2. Voltage Requirements
For transceiver channels that require bonding via the x6/x24 transceiver clock networks, refer to this table for the specific voltage requirements.

Channel Type | Transceiver Link Type | Data Rate | VCCR_GXB/VCCT_GXB (Min / Typical / Max)
GX | Chip-to-chip and backplane | 1 Gbps to 16 Gbps | 1 V / 1.03 V / 1.06 V
GX | Chip-to-chip and backplane | 16 Gbps to 17.4 Gbps | 1.1 V / 1.12 V / 1.14 V
GXT | Chip-to-chip and backplane | > 17.4 Gbps | N/A (bonding is not supported)

For non-bonded transceiver channels, refer to the "Transceiver Power Supply Operating Conditions" table in the Intel Stratix 10 Device Datasheet.

Related Links
Intel Stratix 10 Device Datasheet

1.1.3 GXT Clock Network

Both the L-Tile and H-Tile contain the GXT clock network. The GXT clock network allows an ATX PLL to drive up to six transmitter channels: four in its own bank and two in an adjacent bank. The GXT clock network is used for data rates above 17.4 Gbps. For the L-Tile and H-Tile GXT channel specifications, refer to the "GXT Channels" section for more details.

Note: Intel Stratix 10 L-Tile ES1 does not support the GXT clock network.
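The reach of the GXT clock network, detailed in Figures 9 and 10 below, can be sketched as follows. This is an illustrative Python encoding (channel labels and names are ours) of the six-channel limit: four GXT-capable channels in the ATX PLL's own bank plus two in the adjacent bank.

    def gxt_reach(atx_in_upper_triplet):
        """Channels a GXT-driving ATX PLL can reach, per Figures 9 and 10."""
        own_bank = ["ch0", "ch1", "ch3", "ch4"]  # GXT-capable channel locations
        adjacent = (["ch0 of bank above", "ch1 of bank above"]
                    if atx_in_upper_triplet
                    else ["ch3 of bank below", "ch4 of bank below"])
        return own_bank + adjacent

    print(len(gxt_reach(atx_in_upper_triplet=True)))  # 6 channels maximum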

Figure 9. Top ATX PLL GXT Network Reach
If the ATX PLL is in the upper triplet, its drive span is all four GXT channels within its own bank and channels ch0 and ch1 of the bank above.
(The figure shows two banks, channels Ch 0-Ch 5 each, with the upper-triplet ATX PLL of the lower bank driving the GXT channels of its own bank and the two lowest channels of the bank above.)

Figure 10. Bottom ATX PLL GXT Network Reach
If the ATX PLL is in the bottom triplet, its drive span is all four GXT channels within its own bank and channels ch3 and ch4 of the bank below.
(The figure shows two banks, with the bottom-triplet ATX PLL of the upper bank driving the GXT channels of its own bank and channels ch3 and ch4 of the bank below.)

Related Links
GXT Channels on page 22

1.1.4 Calibration

The transceiver is calibrated at device power-on. The OSC_CLK_1 signal is used for device configuration and by the transceiver calibration logic. OSC_CLK_1 must be driven by a free-running 25 MHz, 100 MHz, or 125 MHz clock source if the transceiver tiles are used. The internal FPGA oscillator cannot be used for transceiver calibration. The clock source must be stable at FPGA device configuration and should continue to run during device operation.
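A board-level checklist item can capture the OSC_CLK_1 requirement above. This is a minimal illustrative Python sketch (names are ours), not part of any Intel tool flow.

    VALID_OSC_CLK_1_MHZ = (25, 100, 125)

    def check_osc_clk_1(freq_mhz, free_running):
        """Validate the OSC_CLK_1 source against the calibration requirements."""
        if not free_running:
            raise ValueError("OSC_CLK_1 must be free-running for transceiver calibration")
        if freq_mhz not in VALID_OSC_CLK_1_MHZ:
            raise ValueError(f"OSC_CLK_1 must be one of {VALID_OSC_CLK_1_MHZ} MHz")

    check_osc_clk_1(125, free_running=True)  # passes silently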

2 Tile Architecture Constraints

2.1 Transceiver Channel Placement

The Intel Stratix 10 product family introduces several transceiver tile variants to support a wide variety of protocol implementations.

Table 3. Channel Types
There are a total of 24 channels available per tile. You can configure them as either GX channels or as a combination of GX and (up to 16) GXT channels, as long as the total does not exceed 24. You can use a GXT channel as a GX channel, but it is then subject to all of the GX channel placement constraints.

Feature | L-Tile Transceivers | H-Tile Transceivers | E-Tile Transceivers
Maximum data rate (chip-to-chip) | GX (2): 17.4 Gbps; GXT (2): 26.6 Gbps | GX: 17.4 Gbps; GXT: 28.3 Gbps | GXE: 57.8 Gbps PAM-4 (Pulse Amplitude Modulation); 30 Gbps NRZ (Non-Return to Zero)
Maximum data rate (backplane) | GX and GXT: 12.5 Gbps | GX: 12.5 Gbps; GXT: 28.3 Gbps | -

Related Links
L-Tile/H-Tile Building Blocks

2.1.1 Possible Combinations of GX and GXT Channels

This section describes the possible combinations of GX and GXT channels for L-Tile ES1 and L-Tile/H-Tile.

Related Links
Usage Model When Driving GXT Channels on page 31

2.1.1.1 Possible Combinations of GX and GXT Channels in H-Tile

Table 4. Combination 1: 4 GXT and 2 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for H-Tile (Chip-to-Chip) | Channel Capability for H-Tile (Backplane)
GX | 2 | 12.5 Gbps | N/A
GXT (3) | 4 | 28.3 Gbps | 28.3 Gbps

(2) Refer to the L-Tile/H-Tile Building Blocks section for further descriptions of GX and GXT channels.
(3) If you use GXT channel data rates, the VCCR_GXB and VCCT_GXB voltages must be set to 1.12 V.

Figure 11. Example Combination 1: 4 GXT and 2 GX Channels
(The figure shows one bank with ch0, ch1, ch3, and ch4 as GXT channels, ch2 and ch5 as GX channels, and the REFCLK0 and REFCLK1 pins.)
Note: You cannot use the ATX PLLs for GX channels when using more than 2 GXT channels per bank.

Table 5. Combination 2: 3 GXT and 3 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for H-Tile (Chip-to-Chip) | Channel Capability for H-Tile (Backplane)
GX | 3 | 12.5 Gbps | N/A
GXT (3) | 3 | 28.3 Gbps | 28.3 Gbps

Figure 12. Example Combination 2: 3 GXT and 3 GX Channels
(The figure shows one bank with three GXT channels and three GX channels, and the REFCLK0 and REFCLK1 pins.)
Note: You cannot use the ATX PLLs for GX channels when using more than 2 GXT channels per bank.

Table 6. Combination 3: 2 GXT and 4 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for H-Tile (Chip-to-Chip) | Channel Capability for H-Tile (Backplane)
GX | 4 | 12.5 Gbps | N/A
GXT (3) | 2 | 28.3 Gbps | 28.3 Gbps

Figure 13. Example Combination 3: 2 GXT and 4 GX Channels
(The figure shows one bank with two GXT channels and four GX channels, and the REFCLK0 and REFCLK1 pins.)

Table 7. Combination 4: 1 GXT and 5 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for H-Tile (Chip-to-Chip) | Channel Capability for H-Tile (Backplane)
GX | 5 | 12.5 Gbps | N/A
GXT (3) | 1 | 28.3 Gbps | 28.3 Gbps

Figure 14. Example Combination 4: 1 GXT and 5 GX Channels
(The figure shows one bank with one GXT channel and five GX channels, and the REFCLK0 and REFCLK1 pins.)
Note: You can place the single GXT channel in channel locations 0, 1, 3, or 4.

2.1.1.2 Possible Combinations of GX and GXT Channels in L-Tile Production

Table 8. Combination 1: 4 GXT and 0 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for L-Tile Production (Chip-to-Chip) | Channel Capability for L-Tile Production (Backplane)
GX | 0 | N/A | N/A
GXT (4) | 4 | 26.6 Gbps | N/A

Figure 15. Example Combination 1: 4 GXT and 0 GX Channels
(The figure shows one bank with ch0, ch1, ch3, and ch4 as GXT channels, and the REFCLK0 and REFCLK1 pins.)
Note: You cannot use the ATX PLLs for GX channels when using more than 2 GXT channels per bank.

Table 9. Combination 2: 3 GXT and 1 GX Channel

Channel Type | Number of Channels per Bank | Channel Capability for L-Tile Production (Chip-to-Chip) | Channel Capability for L-Tile Production (Backplane)
GX | 1 | 12.5 Gbps | 12.5 Gbps
GXT (4) | 3 | 26.6 Gbps | N/A

Figure 16. Example Combination 2: 3 GXT and 1 GX Channel
(The figure shows one bank with three GXT channels and one GX channel, and the REFCLK0 and REFCLK1 pins.)
Note: You cannot use the ATX PLLs for GX channels when using more than 2 GXT channels per bank.

Table 10. Combination 3: 2 GXT and 2 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for L-Tile Production (Chip-to-Chip) | Channel Capability for L-Tile Production (Backplane)
GX | 2 | 12.5 Gbps | 12.5 Gbps
GXT (4) | 2 | 26.6 Gbps | N/A

(4) If you use GXT channel data rates, the VCCR_GXB and VCCT_GXB voltages must be set to 1.12 V.

Figure 17. Example Combination 3: 2 GXT and 2 GX Channels
(The figure shows one bank with two GXT channels and two GX channels, and the REFCLK0 and REFCLK1 pins.)

Table 11. Combination 4: 1 GXT and 3 GX Channels

Channel Type | Number of Channels per Bank | Channel Capability for L-Tile Production (Chip-to-Chip) | Channel Capability for L-Tile Production (Backplane)
GX | 3 | 12.5 Gbps | 12.5 Gbps
GXT (4) | 1 | 26.6 Gbps | N/A

Figure 18. Example Combination 4: 1 GXT and 3 GX Channels
(The figure shows one bank with one GXT channel and three GX channels, and the REFCLK0 and REFCLK1 pins.)
Note: You can place the single GXT channel in channel locations 0, 1, 3, or 4.
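The per-bank combinations in Tables 4 through 11 can be captured in a lookup. This is an illustrative Python sketch; the encoding and names are ours.

    H_TILE_COMBOS = {(4, 2), (3, 3), (2, 4), (1, 5)}  # (GXT, GX) channels per bank
    L_TILE_COMBOS = {(4, 0), (3, 1), (2, 2), (1, 3)}

    def bank_combo_listed(num_gxt, num_gx, tile="H"):
        """True if (GXT, GX) is one of the documented per-bank combinations."""
        combos = H_TILE_COMBOS if tile == "H" else L_TILE_COMBOS
        return (num_gxt, num_gx) in combos

    print(bank_combo_listed(4, 2, tile="H"))  # True: Combination 1 for H-Tile
    print(bank_combo_listed(4, 2, tile="L"))  # False: not a listed L-Tile combination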

2 Tile Architecture Constraints 2.1.2.1 Non-bonded GX Channels Non-bonded channels can be placed anywhere within the transceiver tile. Separate PHY IP cores, TX PLL, and REFCLK sources are required for each tile even if the transceivers are running at the same datarate with the same functionality. 2.1.2.2 Bonded GX Channels Bonding across multiple transceiver tiles is not supported. All bonded channels must be placed within the same transceiver tile. A maximum of 24 channels can be bonded. When PMA bonding is enabled, the channels do not need to be placed contiguously in the transceiver tile. When both PMA and PCS bonding are enabled, the channels must be placed contiguously in transceiver tile and in ascending order. Figure 19. x4 Configuration The figure below shows a way of placing 4 bonded channels. In this case, the logical PCS Channel number 2 must be specified as Physical channel 4 of bank 0. 3 2 1 0 CH5 CH4 CH3 CH2 CH1 CH0 CH5 CH4 CH3 CH2 CH1 CH0 Data CH CH Data CH Data CH PLL PLL PLL PLL Transceiver bank 1 Transceiver bank 0 Logical Channel Physical Channel 21

2 Tile Architecture Constraints Figure 20. Mix and Match GX Channels Design Example Example Intel Stratix 10 L-Tile/H-Tile, with Interlaken, 10GBaseKR, PCIe transceivers instantiated. Tile Transceiver 3 6.25 GHz M x6 Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G Transceiver 2 5.15625 GHz x24 x1 Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G Interlaken 12.5G 10GBASE-KR 10GBASE-KR Transceiver 1, 0.625 GHz x1 x24 1.25G 1.25G 1.25G 1.25G PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 Transceiver 0 2.5 GHz mcgb_aux_clk0 4 GHz M x6 PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 PCIe HIP Gen 1/2/3 x8 Legend Interlaken12.5G 10GBASE-KR PCIe HIP Gen 1/2, 2.5 GHz PCIe HIP Gen 3, 4 GHz 2.1.3 GXT Channels The Intel Stratix 10 GXT channels are supported in L-Tile ES2, L-Tile Production and Intel Stratix 10 H-Tile transceivers. These channels are not available in Intel Stratix 10 L-Tile ES1 transceivers. For more information on different channel types and the datarates supported by them, please refer to table "Channel Types" in the "L-Tile and H-Tile Overview" chapter. 22

Figure 21. Intel Stratix 10 L-Tile GXT Channel Location
(The figure shows one tile with four banks, GXBw through GXBz, each containing channels CH0-CH5 with dedicated REFCLK_GXB*_CHT and REFCLK_GXB*_CHB reference clock pins, x1/x6 clock lines per bank, the x24 line, the PCIe Hard IP x16, and the EMIB. The 6-pack separation within the tile means that x1/x6 lines do not cross bank boundaries. Channels 1 and 4 of each bank are transceiver channels with a CMU PLL.)

Bank | Tile Bottom Left | Tile Middle Left | Tile Top Left | Tile Bottom Right | Tile Middle Right | Tile Top Right
GXBw | GXB1C | GXB1G | GXB1K | GXB4C | GXB4G | GXB4K
GXBx | GXB1D | GXB1H | GXB1L | GXB4D | GXB4H | GXB4L
GXBy | GXB1E | GXB1I | GXB1M | GXB4E | GXB4I | GXB4M
GXBz | GXB1F | GXB1J | GXB1N | GXB4F | GXB4J | GXB4N

Notes:
1. Refer to the table "Channel Types" for GXT channel capabilities.
2. PCIe Hard IP x8 is the maximum number of lanes supported on EAP and initial ES devices.

Figure 22. Intel Stratix 10 H-Tile GXT Channel Location
(The figure shows one tile with four banks, GXBw through GXBz, each containing channels CH0-CH5 with dedicated REFCLK_GXB*_CHT and REFCLK_GXB*_CHB reference clock pins, x1/x6 clock lines per bank, the x24 line, the PCIe Hard IP x16, and the EMIB. The 6-pack separation within the tile means that x1/x6 lines do not cross bank boundaries. Channels 1 and 4 of each bank are transceiver channels with a CMU PLL.)

Bank | Tile Bottom Left | Tile Middle Left | Tile Top Left | Tile Bottom Right | Tile Middle Right | Tile Top Right
GXBw | GXB1C | GXB1G | GXB1K | GXB4C | GXB4G | GXB4K
GXBx | GXB1D | GXB1H | GXB1L | GXB4D | GXB4H | GXB4L
GXBy | GXB1E | GXB1I | GXB1M | GXB4E | GXB4I | GXB4M
GXBz | GXB1F | GXB1J | GXB1N | GXB4F | GXB4J | GXB4N

Notes:
1. Refer to the table "Channel Types" for GXT channel capabilities.
2. PCIe Hard IP x8 is the maximum number of lanes supported on EAP and initial ES devices.

Figure 23. GXT and GX Channel Placement Example for L-Tile
(The figure shows the four banks of an L-Tile: a PCIe Gen3 x16 Hard IP backplane (BP) interface occupying the lower channels of the tile, two 26.6 Gbps JESD chip-to-chip (C2C) channels placed on GXT-capable channels above it, and the REFCLK0/REFCLK1 pins of each bank. Legend: 26.6 Gbps channel; 12.5 Gbps channel; PCIe Gen3 x16; BP = backplane; C2C = chip-to-chip.)

Figure 24. GXT and GX Channel Placement Example for H-Tile
(The figure shows the four banks of an H-Tile with five JESD chip-to-chip (C2C) channels placed at the bottom of the tile, and the REFCLK0/REFCLK1 pins of each bank. Legend: 28.3 Gbps channel; 12.5 Gbps channel; C2C = chip-to-chip.)

Figure 25. GXT and GX Channel Placement Example with PCIe Interface for H-Tile
(The figure shows the four banks of an H-Tile: bank 3 unused; bank 2 with three JESD chip-to-chip (C2C) channels; bank 1 with two JESD C2C channels and two unusable channels; bank 0 with a PCIe Gen3 x4 Hard IP backplane (BP) interface and two unusable channels. Legend: 28.3 Gbps channel; 12.5 Gbps channel; PCIe Gen3 x4; BP = backplane; C2C = chip-to-chip.)

Refer to the Intel Stratix 10 Device Datasheet for more information about performance specifications.

Related Links
L-Tile and H-Tile Overview on page 4
Intel Stratix 10 Device Datasheet

2.1.4 Reference Clock Guidelines for L-Tile and H-Tile

The transmitter PLL and the clock data recovery (CDR) block need an input reference clock source to generate the clocks required for transceiver operation. The input reference clock must be stable and free-running at device power-up for proper PLL calibration. Intel Stratix 10 transceiver PLLs have five possible input reference clock sources, depending on jitter requirements:

- Dedicated reference clock pins
- Receiver input pins
- Reference clock network
- PLL cascade output
- Core clock network (fPLL only)

Note: A core clock network reference clock pin cannot drive CDRs located on multiple L-Tiles/H-Tiles.

Intel recommends using the dedicated reference clock pins and the reference clock network for the best jitter performance. For the best jitter performance, place the reference clock as close as possible to the transmitter PLL. The following protocols require the reference clock to be placed in the same bank as the transmitter PLL:

- OTU2e, OTU2, OC-192, and 10G PON
- 6G and 12G SDI

Note: For optimum performance of GXT channels, the reference clock of the transmitter PLL should come from a dedicated reference clock pin in the same bank.

Figure 26. Input Reference Clock Sources
(The figure shows the reference clock network, the dedicated refclk pin, and RX pins 0 through 5 feeding the input reference clock of the ATX PLL, fPLL, or channel PLL (CMU PLL/CDR).)
Notes: (1) Any RX pin in the same bank can be used as an input reference clock. (2) The output of another PLL can be used as an input reference clock source during PLL cascading. Refer to PLL Cascading Clock Network for more details on PLL cascading. (3) The core clock is present only for the fPLL.

Note: In Intel Stratix 10 devices, the FPGA fabric core clock network can be used as an input reference source for the fPLL only.
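The source options above differ per PLL type. The following illustrative Python map (names are ours, derived from this section and the block diagrams in Figures 3 through 5) summarizes which sources each PLL type can use.

    REFCLK_SOURCES = {
        "ATX PLL": {"dedicated_pin", "rx_pin", "refclk_network", "pll_cascade_output"},
        "fPLL": {"dedicated_pin", "rx_pin", "refclk_network",
                 "pll_cascade_output", "core_clock_network"},
        "CMU PLL": {"rx_pin", "refclk_network"},
    }

    def refclk_ok(pll_type, source):
        """True if the PLL type can take its input reference clock from `source`."""
        return source in REFCLK_SOURCES[pll_type]

    print(refclk_ok("ATX PLL", "core_clock_network"))  # False: fPLL only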

The input reference clock is a differential signal. Intel recommends using the dedicated reference clock pin in the same bank as the transmitter PLL for optimal jitter performance. The input reference clock must be stable and free-running at device power-up for proper PLL operation and PLL calibration. If the reference clock is not available at device power-up, the PLL must be recalibrated when the reference clock becomes available.

Figure 27. Dedicated Reference Clock Pins and Other Reference Clock Sources
In Intel Stratix 10 L-Tile and H-Tile devices, the dedicated reference clock pins and the reference clock network can be used by the transmitter PLLs (ATX PLL and fPLL).
(The figure shows one bank: the ATX PLLs and fPLLs can receive the input reference clock from the reference clock network, the PLL cascading clock network, or a dedicated refclk pin; the CMU PLLs of CH1 and CH4 receive it from the reference clock network.)

Related Links
Input Reference Clock Source
Implementing PLL Cascading

2.1.5 PLL Placement

2.1.5.1 ATX PLL Spacing Requirements

When using multiple ATX PLLs operating at the same VCO frequency, or within 100 MHz of each other, you must observe the spacing requirements listed in the following table.

Table 12. ATX PLL Spacing Requirements

Conditions | L-Tile ES1 | L-Tile/H-Tile Production
Two ATX PLLs providing the serial clock for PCIe/PIPE (PHY Interface for PCI Express) Gen3 | 4 (skip 3 PLLs) | 2 (skip 1 PLL)
ATX PLL to ATX PLL spacing for non-PCIe applications | VCO frequency dependent. Refer to the Intel Stratix 10 L-Tile ES1 Transceiver PHY User Guide for more details. | None for data rates > 17.4 Gbps (GXT). For two ATX PLLs located in the same bank and driving GX channels: 2 PLLs apart (skip 1) for data rates < 17.4 Gbps (GX). For two ATX PLLs located in separate banks and driving GX channels: none.

There are no placement restrictions between two different tiles.

Figure 28. ATX PLL Placement Example
(The figure contrasts two placements of two ATX PLLs, both at fVCO = 10312.5 MHz, each driving 10GE channels: the spacing is acceptable when one PLL site is skipped between them, and the spacing rule is violated when they occupy adjacent PLL sites.)

Related Links
Intel Stratix 10 L- and H-Tile Transceiver PHY User Guide

2.1.5.2 fPLL-ATX PLL Spacing Requirements

Table 13. fPLL-ATX PLL Spacing Requirements
When using an fPLL and an ATX PLL operating at the same VCO frequency, or within 100 MHz of each other, you must observe the spacing requirements listed in this table.

fPLL to ATX PLL Spacing | Spacing Requirement
fPLL to ATX PLL spacing | Skip 1, OR none if the L counter >= 2
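Tables 12 and 13 can be combined into one placement check for production tiles. This is a hypothetical Python encoding (the PLL site indexing and names are ours); it treats PLL locations as consecutive sites and applies the rules only when the two VCO frequencies are within 100 MHz of each other.

    def pll_spacing_ok(kind, sites_apart, fvco_a_mhz, fvco_b_mhz,
                       same_bank=True, driving_gxt=False, l_counter=1):
        """kind: 'ATX-ATX' (Table 12, production GX case) or 'fPLL-ATX' (Table 13)."""
        if abs(fvco_a_mhz - fvco_b_mhz) > 100:
            return True                       # rules apply only within 100 MHz
        if kind == "ATX-ATX":
            if driving_gxt or not same_bank:
                return True                   # no restriction in these cases
            return sites_apart >= 2           # GX, same bank: skip one site
        if kind == "fPLL-ATX":
            return sites_apart >= 2 or l_counter >= 2  # skip 1, or none if L >= 2
        raise ValueError(kind)

    print(pll_spacing_ok("ATX-ATX", 1, 10312.5, 10312.5))                 # False
    print(pll_spacing_ok("fPLL-ATX", 1, 10312.5, 10312.5, l_counter=2))   # True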

Figure 29. fPLL-ATX PLL Placement Example
(The figure shows the fPLL and ATX PLL sites of four banks. If ATX PLL_0, ATX PLL_1, or both run at the same VCO frequency as fPLL_1, this placement is not allowed. If ATX PLL_2 runs at the same VCO frequency as fPLL_1, this placement is OK.)

2.1.5.3 Usage Model When Driving GXT Channels

If the ATX PLL IP is configured as the main ATX PLL (local output), the PLL master clock generation block (MCGB) cannot be used. If the ATX PLL IP is configured as an adjacent ATX PLL (selecting input from below/above), the MCGB in the 3-pack cannot be used. In the same 3-pack as a main or adjacent ATX PLL, the ATX PLL can still be configured to drive the x1 clock lines.

Figure 30. Restrictions for GX ATX PLLs and MCGBs
(The figure shows two banks with their x1, x6, and x24 clock lines: in the lower bank, one ATX PLL is configured as the main ATX PLL and another as an adjacent ATX PLL, together driving the GXT channels; their MCGBs are unavailable. Each channel retains its local CDR or CDR/CMU PLL.)

Related Links
Using the ATX PLL for GXT Channels
GXT Implementation Usage Restrictions for GX ATX PLL and MCGB

2.1.5.4 Simplex Channel Merging

You can merge the following logical instances into a single physical channel:

- RX-only PHY and TX-only PHY instances
- CMU PLL and TX-only PHY instances

Figure 31. Simplex Channel Merging: RX-only PHY and TX-only PHY
(The figure shows two logical Native PHY IP cores, one TX-only on Reconfiguration Interface 1 and one RX-only on Reconfiguration Interface 0, merged onto the TX and RX halves of one physical channel; the merging QSF maps Reconfiguration Interface 0 into Reconfiguration Interface 1.)

Figure 32. Channel Merging: CMU PLL and TX-only PHY Instances
(The figure shows a logical Transceiver PLL IP core (CMU) on Reconfiguration Interface 0 and a TX-only Native PHY IP core on Reconfiguration Interface 1, merged onto one physical channel; the merging QSF maps Reconfiguration Interface 1 into Reconfiguration Interface 0.)

Rules for Merging

- The reconfiguration interfaces (reconfig_*) of both instances to be merged must be driven by the same source.
- QSF assignments are needed to specify which two reconfiguration interfaces you want to merge.

Option 1: Using reconfiguration interface names

    set_instance_assignment -name XCVR_RECONFIG_GROUP 0 -to topdesign:topdesign_inst|<TX only instance name>*ct1_hssi_avmm1_if_inst|inst_ct1_xcvr_avmm1
    set_instance_assignment -name XCVR_RECONFIG_GROUP 0 -to topdesign:topdesign_inst|<RX only instance name>*ct1_hssi_avmm1_if_inst|inst_ct1_xcvr_avmm1

Option 2: Using pin names

    set_instance_assignment -name XCVR_RECONFIG_GROUP 1 -to tx[0]
    set_instance_assignment -name XCVR_RECONFIG_GROUP 0 -to rx[0]

The simplex channels cannot be merged if any of the following options are enabled in one or both simplex instances:

- Altera Debug Master Endpoint (ADME)
- Optional reconfiguration logic
- Embedded reconfiguration streamer
- Shared reconfiguration interface

Related Links
Reconfiguration Interface and Dynamic Reconfiguration

2.1.5.5 TX PLL Restrictions when Using PCIe

Intel recommends that the remaining channels of the tile in L-Tile ES1/L-Tile Production (PIPE) be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2 or Gen3 speeds. Using the ATX PLL to drive these channels helps achieve better performance. Intel Quartus Prime issues a critical warning if the fPLL is used to drive the remaining channels.

2.2 Unsupported Dynamic Reconfiguration Features

The following dynamic reconfiguration features are not supported:

- Reconfiguration from a bonded configuration to a non-bonded configuration, or vice versa
- Reconfiguration from a bonded protocol to another bonded protocol
- Reconfiguration from PCIe (with Hard IP) to PCIe (without Hard IP), or non-PCIe bonded protocol switching
- Master clock generation block (MCGB) reconfiguration
- Switching between two master CGBs
- Serialization factor changes on bonded channels
- TX PLL switching on bonded channels

2.3 Intel Stratix 10 L-Tile Transceiver to H-Tile Transceiver Migration

All of the L-Tile transceiver constraints apply to H-Tile transceivers as well. The H-Tile transceivers have no restrictions beyond those of the L-Tile transceivers, with the exception of the GXT channels. If you plan to use GXT channels in the H-Tile, the VCCR_GXB and VCCT_GXB pins on that tile must be set to 1.12 V.

Note: When migrating from L-Tile to H-Tile transceivers, use the Stratix 10 Early Power Estimator (EPE) tool to validate your regulator sizing.

The placement constraints for the GXT channels, which are available in L-Tile Production and H-Tile transceivers, are described in the GXT Channels section.

Related Links
GXT Channels on page 22
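The GXT supply requirement above, together with Table 2, can be folded into one lookup when planning regulators. This is an illustrative Python sketch (names are ours) that returns the typical VCCR_GXB/VCCT_GXB level.

    def vccr_vcct_gxb_typ(max_rate_gbps, uses_gxt):
        """Typical VCCR_GXB/VCCT_GXB in volts, per Table 2 and section 2.3."""
        if uses_gxt:
            return 1.12                 # GXT data rates require 1.12 V
        if max_rate_gbps <= 16.0:
            return 1.03                 # bonded GX up to 16 Gbps
        return 1.12                     # bonded GX from 16 to 17.4 Gbps

    print(vccr_vcct_gxb_typ(28.3, uses_gxt=True))   # 1.12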

2.4 Thermal Guidelines

Optimal thermal performance can be achieved by reducing the power density within the transceiver tile. Placing many high data rate channels next to each other creates areas of high power density within a tile. Following the general guideline of minimizing power density results in a less complex and cheaper cooling solution for the FPGA. For the best thermal performance, minimize power density by picking transceiver channel locations early on. Follow these guidelines when placing your transceiver channels within a tile:

- Spread out channels as much as possible
- If all channels in a tile are used, intersperse low and high data rate channels
- The middle of the tile has the best thermal performance, followed by the bottom and then the top of each tile, when looking at the Pin Planner

The latest Intel Stratix 10 Early Power Estimator (EPE) contains a Thermal worksheet to help you determine the impact of transceiver placement on your thermal solution requirements. Prior to finalizing your board design, analyze your transceiver channel placement using the Intel Stratix 10 EPE to ensure that it is thermally optimal.

Note: Contact your local FAE to have Intel run a thermal analysis of your board design after you have determined the placement of all transceiver channels.
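As a quick pre-EPE sanity check, adjacent channel power can be scanned for hot spots. This is a hypothetical Python sketch (the metric and names are ours, and it is no substitute for the EPE Thermal worksheet).

    def hottest_pair(channel_power_w):
        """Return (index, combined watts) of the hottest adjacent channel pair."""
        pairs = [(i, channel_power_w[i] + channel_power_w[i + 1])
                 for i in range(len(channel_power_w) - 1)]
        return max(pairs, key=lambda p: p[1])

    powers = [0.3] * 24                  # per-channel estimates for one tile
    powers[10] = powers[11] = 1.2        # two hot channels placed side by side
    print(hottest_pair(powers))          # (10, 2.4): consider separating them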

3 PCIe Guidelines

3.1 PCIe Hard IP

There is one PCIe Hard IP available per transceiver tile.

3.1.1 Channel Placement for PCIe Hard IP

PCIe lane 0 is always mapped to ch0 of the transceiver tile (bank 0, channel 0). The PCIe x1, x2, x4, and x8 configurations always consume a total of eight transceiver channels.

CvP Support

Only the bottom left transceiver tile supports configuration via protocol (CvP).

Figure 33. Transceiver Channel Usage for PCIe x1, x2, x4, x8, and x16
(The figure shows the 24 channels of a tile for each configuration: the x1, x2, x4, and x8 Hard IP configurations occupy channels 0-7, leaving the channels of that group not used as lanes unusable (7, 6, and 4 channels unusable for x1, x2, and x4, respectively) and channels 8-23 usable; the x16 configuration occupies channels 0-15, leaving channels 16-23 usable.)

For L-Tile ES1 and L-Tile Production (PIPE only), any transceiver channel running at a data rate above 6.5 Gbps that shares a tile with an active PCI Express interface that is Gen2 or Gen3 capable and configured with more than 2 lanes (Gen2/3 x4, x8, x16) may observe momentary bit errors during a PCI Express rate change event (PCIe link training both up and down, that is, link down and start of link training). Transceiver channels that share a tile with active PCI Express interfaces that are only Gen1 capable are not affected.
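The channel bookkeeping above can be expressed directly. This is an illustrative Python sketch (names are ours) of how a given Hard IP width maps to consumed and usable tile channels.

    def pcie_hip_channel_usage(lanes):
        """Channels consumed and left usable by the PCIe Hard IP, per Figure 33."""
        assert lanes in (1, 2, 4, 8, 16)
        consumed = 16 if lanes == 16 else 8   # x1-x8 always consume 8 channels
        unusable = consumed - lanes           # consumed but not used as lanes
        usable = list(range(consumed, 24))    # remaining channels of the tile
        return consumed, unusable, usable

    consumed, unusable, usable = pcie_hip_channel_usage(4)
    print(consumed, unusable, usable[0], usable[-1])  # 8 4 8 23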

3.1.2 PLL Placement for PCIe Hard IP

If the PCIe Hard IP is configured as Gen1/Gen2 capable IP, the fPLL is used as the transmitter PLL. If the PCIe Hard IP is configured as Gen3 capable IP, the fPLL is used as the transmitter PLL when running at Gen1/Gen2 speeds, and the ATX PLL is used as the transmitter PLL when running at Gen3 speeds.

Figure 34. PLL Placement for Gen1 and Gen2 x1/x2/x4/x8
(The figure shows channels Ch 0-Ch 15 across the lower banks, each with its PMA and PCS channel; the PCIe Hard IP hard reset controller (HRC) connects to fPLL0 of the bottom bank.)

Figure 35. PLL Placement for Gen1 and Gen2 x16
(The figure shows channels Ch 0-Ch 15, each with its PMA and PCS channel; the PCIe Hard IP HRC connects to fPLL0 of the middle transceiver bank.)

Figure 36. PLL Placement for Gen3 x1/x2/x4/x8
(The figure shows channels Ch 0-Ch 15, each with its PMA and PCS channel; the PCIe Hard IP HRC connects to fPLL0 and, for Gen3, to ATX PLL0 of the bottom bank.)

Figure 37. PLL Placement for Gen3 x16
(The figure shows channels Ch 0-Ch 15, each with its PMA and PCS channel; the PCIe Hard IP HRC connects to an fPLL and, for Gen3, to an ATX PLL of the middle transceiver banks.)

TX PLL Guidelines When Using PCIe

1. The remaining channels of the tile in L-Tile ES1 are recommended to be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2 or Gen3 speeds. Using the ATX PLL to drive these channels helps achieve better performance. Intel Quartus Prime issues a critical warning if the fPLL is used to drive the remaining channels.

Table 14. TX PLL Guidelines When Using PCIe

PCIe Configuration | Recommended PLL Selection for Remaining Channels
PCIe Gen1 (all lane widths) | Any PLL
PCIe Gen2 (x4, x8, x16) (5) | ATX PLL
PCIe Gen3 (x4, x8, x16) (5) | ATX PLL

(5) Quartus will issue a critical warning if an fPLL is used instead of an ATX PLL.

2. When instantiating PIPE interfaces and PCIe Hard IP in the same transceiver tile, be aware of the ATX PLL and fPLL-ATX PLL spacing rules. For more details, refer to the PLL Placement section.

Related Links
PLL Placement on page 29
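Tables 14 and 15 amount to a two-case rule. The following illustrative Python lookup (names are ours) restates it for design checks.

    def recommended_remaining_pll(gen, lanes):
        """Recommended TX PLL for the tile's remaining channels (Tables 14/15)."""
        if gen in (2, 3) and lanes >= 4:
            return "ATX PLL"   # an fPLL here triggers a Quartus critical warning
        return "any PLL"       # Gen1, or narrow Gen2/Gen3 configurations

    print(recommended_remaining_pll(3, 8))   # ATX PLL
    print(recommended_remaining_pll(1, 16))  # any PLL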

3.2 PHY Interface for PCI Express (PIPE)

The PIPE configuration can be used when you want flexible channel placement or need to interface the Intel Stratix 10 PCIe PHY with existing third-party PCIe IP cores.

3.2.1 Channel Placement for PIPE

For L-Tile ES1 and L-Tile Production (PIPE only), any transceiver channel running at a data rate above 6.5 Gbps that shares a tile with an active PCI Express interface that is Gen2 or Gen3 capable and configured with more than 2 lanes (Gen2/3 x4, x8, x16) may observe momentary bit errors during a PCI Express rate change event (PCIe link training both up and down, that is, link down and start of link training). Transceiver channels that share a tile with active PCI Express interfaces that are only Gen1 capable are not affected.

For details on channel placement for PIPE, refer to the section "How to Place Channels for PIPE Configurations" in the Intel Stratix 10 L- and H-Tile Transceiver PHY User Guide.

Related Links
How to Place Channels for PIPE Configurations

3.2.2 PLL Placement for PIPE

When instantiating PIPE interfaces and PCIe Hard IP in the same transceiver tile, be aware of the ATX PLL and fPLL-ATX PLL spacing rules. For more details, refer to the PLL Placement section.

TX PLL Guidelines When Using PCIe

1. Intel recommends that the remaining channels of the tile in L-Tile ES1/L-Tile Production (PIPE only) be driven by the ATX PLL if 4 or more channels of PCIe are used at Gen2 or Gen3 speeds. Using the ATX PLL to drive these channels helps achieve better performance. Intel Quartus Prime issues a critical warning if the fPLL is used to drive the remaining channels.

Table 15. TX PLL Guidelines When Using PCIe

PCIe Configuration | Recommended PLL Selection for Remaining Channels
PCIe Gen1 (all lane widths) | Any PLL
PCIe Gen2 (x4, x8, x16) (6) | ATX PLL
PCIe Gen3 (x4, x8, x16) (6) | ATX PLL

(6) Quartus will issue a critical warning if an fPLL is used instead of an ATX PLL.

2. For details on PLL placement for PIPE, refer to the section "How to Connect TX PLLs for PIPE Gen1, Gen2, and Gen3 Modes" in the Intel Stratix 10 L- and H-Tile Transceiver PHY User Guide.

Related Links
PLL Placement on page 29
How to Connect TX PLLs for PIPE Gen1, Gen2, and Gen3 Modes

4 Document Revision History

Date | Version | Changes
November 2017 | 2017.11.06 | Added a new diagram, "Intel Stratix 10 L-Tile ES2 Production GXT Channel Placement". Updated the "Channel Types" table to include L-Tile channels. Updated the "ATX PLL Spacing Requirements" and "fPLL-ATX PLL Spacing Requirements" tables. Updated the "Thermal Guidelines" section.
January 2017 | 2017.01.13 | Updated the "Mix and Match GX Channels Design Example" diagram: changed "PCIe Gen 1/2/3 x8" to "PCIe HIP Gen 1/2/3 x8", "PCIe Gen 1/2, 2.5 GHz" to "PCIe HIP Gen 1/2, 2.5 GHz", and "PCIe Gen 3, 4 GHz" to "PCIe HIP Gen 3, 4 GHz".
December 2016 | 2016.12.19 | Updated the description of the "TX PLL Restrictions when Using PCIe x16" topic. Updated the description of the "PCIe Hard IP Placement" topic. Stated the restrictions that apply when one or more channels in a bank are used for PCIe/PIPE Gen3. Updated the steps in the "How to Place Channels for PIPE Configurations" topic. Changed the value of the logical PCS channel number from 1 to 0 in the "Logical PCS Channel for PIPE Configuration" table. Added a note that each core clock network reference clock pin cannot drive CDRs located on multiple L-Tiles/H-Tiles. Added the "x4 Configuration" diagram in the "Bonded GX Channels" topic to explain the ascending order of channel placement. Added a new section: GXT Channels Placement. Clarified the spacing requirements and listed them in the "ATX PLL Spacing Requirements" table.
September 2016 | 2016.09.20 | Initial release.