Stitching UVM Testbenches into Integration-Level


Wayne Yun, Advanced Micro Devices, Inc., +1-289-695-1968, Wayne.Yun@amd.com
David Chen, Advanced Micro Devices, Inc., +1-289-695-1162, Dave.Chen@amd.com
Oliven Xie, Advanced Micro Devices, Inc., +86-21-6160-1838x25643, Oliven.Xie@amd.com

Abstract - UVM is designed for reuse. UVM agents, sequences, and scoreboards are widely reused, and many commercial UVM components and test suites are available. These components are successfully reused both vertically and horizontally. However, vertical reuse of sequences must be planned carefully, and not every sequence can be reused automatically. If the need to reuse a sequence is recognized only after it has been implemented, extra effort is generally needed to refine it, and the changes propagate to existing testbenches that have already deployed the sequence. The proposed solution is a new architecture for an integration-level testbench in which sequences from other testbenches are reused. The architecture, named multi-test, stitches lower-level tests together and enables the reuse of a sequence without changing its code, regardless of whether it was planned for reuse. The initial planning and preparation for reuse can therefore be eliminated as well. Another benefit is vertical reuse of UVM tests. Multi-test architecture introduces a new paradigm of reuse that is based on, and goes beyond, textbook UVM, and it reduces overall verification effort further.

I. INTRODUCTION

Reuse is one of the most important methods of handling the ever-growing complexity of verification. Universal Verification Methodology (UVM) [1] [2] is designed for reuse. UVM Components (UVCs) are reused horizontally and vertically [9].

A. UVM Horizontal and Vertical Reuse

Horizontal reuse generally has two scenarios. The first is reusing a UVC from project to project. The second is reusing it between parallel testbenches, for example from one block testbench to another. In horizontal reuse, a UVC is used as it was originally intended.
UVM agents are mostly reused in active mode.

Vertical reuse generally happens at integration-level testbenches. When design blocks, IPs, and clusters are integrated together, verification components from lower-level testbenches are often reused at a higher-level testbench. Vertical reuse usually requires alteration, a consequence of integrating design parts at integration-level testbenches. An interface driven by a UVC at a lower-level testbench could be connected to, and driven by, another design block at higher levels. Typically, the UVC is then changed to passive mode. Sometimes sequences are given additional constraints. There are also layered-sequencer cases in which the sequencers of UVM agents or of lower levels are used as lower-layer sequencers.

UVM reuse has been a big success, especially in the horizontal case. In addition to reuse inside an organization, UVM agents, and even test suites, are offered commercially. UVM reuse boosts verification quality and improves verification efficiency throughout the industry. Vertical reuse is more complicated. Nevertheless, thanks to the design of the UVM classes, most types of UVCs are widely reused. Passive UVCs such as agents, monitors, checkers, and scoreboards are directly reused following standard methodology. Active UVM agents and their sequences are generally reused well. Other stimulus-generating parts, such as sequencers and sequences, need to be planned ahead of reuse. UVM defines virtual sequencers and virtual sequences for reuse, and certain rules have to be observed to make them reusable vertically.

AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. (c) 2017 Advanced Micro Devices, Inc. All rights reserved.

B. Vertical Reuse of UVM Stimuli

In a UVM testbench, stimuli are generated by sequences. There are two categories of sequences: virtual and non-virtual. Virtual sequences can run on any sequencer or standalone, but they cannot generate sequence items. Non-virtual sequences, on the other hand, can generate sequence items, but they have to run on specific types of sequencers. Generated sequence items are passed from sequencers to drivers, and drivers convert them into signal toggles at interfaces. In this chain of stimulus generation, virtual sequences are positioned to be reused vertically: they can start non-virtual sequences to generate sequence items.

To behave correctly at a reusing testbench, a well-defined boundary is needed between virtual sequences and the non-virtual sequences and sequencers below them. The boundary makes it possible to supply a compatible set of sequences and sequencers at another testbench. Other boundaries should be well defined, too, for access to design configurations, design states, resource managers, and so on. If a virtual sequence bypasses any of these boundaries at the IP testbench, the issue shows up only at a testbench reusing it, which may not have the same direct access path as the IP testbench. Fixing these portability issues not only takes extra effort but also delays projects.

These reuse boundaries have to be implemented at every testbench reusing the sequences. Depending on the complexity of the boundaries, the implementations may take significant effort and make reuse less beneficial. The list of virtual sequences is often decided at an early stage of a project, and extra work is frequently required when a new virtual sequence is needed at a later stage. Following textbook UVM reuse [1] [2] [3] [4] [5] [6], virtual sequences, virtual sequencers, and checkers are separated from an IP-level testbench and then integrated together again at the reusing testbench.
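As a minimal sketch of this textbook layering (all class and member names here are hypothetical), a virtual sequence generates no items itself and only starts non-virtual sequences on sequencer handles, which form the reuse boundary that the reusing testbench must supply:

```systemverilog
// Hypothetical illustration of the virtual-sequence boundary: m_bus_sqr must be
// filled in by whichever testbench reuses this sequence.
class ip_virtual_seq extends uvm_sequence;
  `uvm_object_utils(ip_virtual_seq)

  bus_sequencer m_bus_sqr;  // boundary: supplied by the reusing testbench

  function new(string name = "ip_virtual_seq");
    super.new(name);
  endfunction

  task body();
    bus_write_seq wr;       // a non-virtual sequence that produces sequence items
    wr = bus_write_seq::type_id::create("wr");
    wr.start(m_bus_sqr);    // items flow through the supplied sequencer
  endtask
endclass
```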
There could be less effort if they were not separated at all. For vertical reuse of UVM tests and testbenches, no defined methodology is found in textbooks; UVM provides little help for combining testbenches from IP blocks into testbenches for subsystems and systems [8]. The above analysis shows that there is room for an alternative UVM vertical-reuse methodology, even though UVM virtual sequences and passive UVCs are successfully deployed in many cases. Multi-test architecture, detailed below, reuses UVM tests and environments in active mode, simplifies the above scenarios, and provides more flexibility.

II. MULTI-TEST ARCHITECTURE

The name multi-test comes from the fact that multiple lower-level tests are instantiated in an integration-level test. This architecture is not described in UVM textbooks.

A. Multiple Tests in One Simulation

The UVM test is the unique top-level UVM component that a user can control in a simulation. When IP testbenches are simply compiled together, only one IP test can run in each simulation, barring porting issues. However, if a higher-level test is built from lower-level ones, the higher-level test must effectively run all of them simultaneously. Multi-test architecture implements this scheme by instantiating the lower-level tests inside the higher-level test. Logically, the higher-level test has internal structure, and some of its sub-components are lower-level tests.

Figure 1 shows one possible implementation of an SoC test. The SoC test is derived from an SoC test base class, which in turn is directly or indirectly derived from the uvm_test class. The SoC base class handles common aspects of a group of tests, such as instantiating IP tests and configuring them. Derived SoC tests focus on the different scenarios. Figure 2 is pseudo code demonstrating the overall structure.
The UVM kernel creates a test by its class name; the same method should be used by the SoC test base class to maximize compatibility. The handle to an IP test should be cast to a proper class type so that its internal components can be referenced hierarchically, for example to reach a sequencer. The hierarchy of UVM components created by an IP test is a genuine active IP environment: it provides everything an IP sequence needs, in the same way the IP-level testbench does. The IP sequence can run on the same sequencer it ran on at the IP-level testbench. Thus, any IP sequence, whether planned for reuse or not, can be reused at the SoC level. The IP test itself is also reused at the SoC-level testbench.
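For illustration (the class and member names below are hypothetical), a derived SoC test could launch an unmodified IP-level sequence on a sequencer reached through the IP test handle:

```systemverilog
// Hypothetical sketch: an SoC test reuses an IP-level sequence as-is by
// starting it on the sequencer inside the instantiated IP test.
class SOC_SMOKE_TEST extends SOC_TEST_BASE;
  `uvm_component_utils(SOC_SMOKE_TEST)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    ip1_traffic_seq seq;  // an unmodified IP-level sequence
    phase.raise_objection(this);
    seq = ip1_traffic_seq::type_id::create("seq");
    // m_ip1_test was created and cast in the base class; the sequencer is
    // referenced hierarchically through it, exactly as at the IP level.
    seq.start(m_ip1_test.m_env.m_agent.m_sequencer);
    phase.drop_objection(this);
  endtask
endclass
```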

An IP UVM environment might need certain adjustments to adapt to changes surrounding the design it targets. This type of adjustment can be implemented as factory overrides in the SoC test base class.

Figure 1 Multi-Test Architecture, based on UVM

    // Base class of all, or a group of, SoC tests. Instead of creating a
    // uvm_env, it instantiates one IP test for every instance of IP design.
    class SOC_TEST_BASE extends uvm_test;
      // Handles to IP-level tests
      IP1_TEST m_ip1_test;
      IP2_TEST m_ip2_test;
      IP3_TEST m_ip3_test;

      // Names of IP-level tests, used to create the IP test instances
      string m_name_ip1_test;
      string m_name_ip2_test;
      string m_name_ip3_test;

      // Some IP testbench components are replaced using the UVM factory
      extern function void factory_override();

      // UVM build phase
      function void build_phase(uvm_phase phase);
        // Override IP internal components
        factory_override();
        // Create IP tests using the UVM factory
        if (!$cast(m_ip1_test, create_component(m_name_ip1_test, "ip1_test")))
          `uvm_fatal("SOC_TEST_BASE", "Failed to create IP1_TEST")
        if (!$cast(m_ip2_test, create_component(m_name_ip2_test, "ip2_test")))
          `uvm_fatal("SOC_TEST_BASE", "Failed to create IP2_TEST")
        if (!$cast(m_ip3_test, create_component(m_name_ip3_test, "ip3_test")))
          `uvm_fatal("SOC_TEST_BASE", "Failed to create IP3_TEST")
      endfunction
    endclass : SOC_TEST_BASE

    // An SoC test is derived from SOC_TEST_BASE; it focuses on the test
    // scenario rather than the environment structure.
    class SOC_TEST extends SOC_TEST_BASE;
      ...
    endclass

Figure 2 Pseudo Code of Multi-Test Architecture

B. Resolving Driver Conflicts

At an integration-level testbench, design IPs are connected to each other. A connected interface of a design IP is driven by another design IP, as opposed to verification code at its IP-level testbench. The stimulus from verification code has to be removed at this interface to simulate the integrated design correctly. The UVM User Guide [1] defines the passive mode of a UVM agent for this scenario. The sequencer and driver of an agent are not created when the agent is in passive mode. However, the sequencer may still be needed to run IP-level sequences by a test of multi-test architecture. Instead of passive mode, multi-test architecture suggests replacing the driver with a dummy one using a factory override. Figure 3 is a pseudo-code example of overriding a driver.

    // Derive a class from the agent's driver; overload the run_phase task
    // so as not to drive the interface.
    class muted_driver extends agent_driver;
      task run_phase(uvm_phase phase);
        // Drive hi-Z on all output signals
        vif.out_signals = 'z;
        // Complain about any item
        forever begin
          seq_item_port.get_next_item(req);
          `uvm_error("unexpected item", "Unexpected sequence item is received.")
          seq_item_port.item_done();
        end
      endtask
    endclass

    // The UVM factory is used to replace the original driver with the derived one
    function void SOC_TEST_BASE::factory_override();
      ...
      // A type override is used in this example; an instance override is
      // used in Figure 5.
      set_type_override_by_type(agent_driver::get_type(),
                                muted_driver::get_type());
      ...
    endfunction

Figure 3 Mute Driver to Avoid Driving Conflict

C. Redirecting Stimulus

Sometimes stimuli generated by an IP test at one interface need to be redirected to another interface, because the original interface is driven by another design block at the integration-level testbench. Changes of this type require one interface to be driven by multiple verification sources. If the sequencers required by all the sequences from the different IP testbenches are of the same class, or share a base class, these sequences can run on the same sequencer. Otherwise, a more general solution is the layered sequencer structure documented in the UVM User Guide [1]. This group of structural patterns was originally intended to handle multi-layer protocols [10] [11]. Under multi-test architecture, it is repurposed to merge stimuli from multiple verification environments. Figure 4 shows the overall structure of one possible implementation, and Figure 5 is the corresponding pseudo-code example.

Figure 4 A Structure Merging Stimuli from Two Environments (sequence items from Agent_inst_1 are forwarded to the sequencer of Agent_inst_2, whose driver controls the interface)

    // A sequence sending out a sequence item
    class forwarding_seq extends agent_seq;
      agent_item m_item;
      task body();
        start_item(m_item);
        finish_item(m_item);
      endtask
    endclass

    // Derive a driver to redirect sequence items
    class redirecting_driver extends agent_driver;
      // Handle to the sequencer in the agent which controls the interface
      agent_sequencer m_ctrl_sqr;
      task run_phase(uvm_phase phase);
        forwarding_seq m_fwd_seq = new("m_fwd_seq");
        forever begin
          // Get a sequence item
          seq_item_port.get_next_item(req);
          // Pass the sequence item to the sequence
          m_fwd_seq.m_item = req;
          // Run the sequence to send the sequence item
          m_fwd_seq.start(m_ctrl_sqr);
          // Flag done
          seq_item_port.item_done();
        end
      endtask
    endclass

    // Set the UVM factory to create the redirecting driver inside the agent
    function void SOC_TEST_BASE::factory_override();
      set_inst_override_by_type("relative.path.to.driver",
                                agent_driver::get_type(),
                                redirecting_driver::get_type());
    endfunction

Figure 5 Redirecting Stimuli

The work of adapting an IP environment is reusable, especially the adaptation of UVM agents.
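The simpler case mentioned above, where sequences from different IP testbenches target compatible sequencer classes, needs no layering at all: the sequences are simply started on one shared sequencer. A hypothetical sketch (all names are illustrative):

```systemverilog
// Hypothetical sketch: two IP-level sequences that require the same sequencer
// class run back to back on one shared sequencer instance.
class merged_stimulus_seq extends uvm_sequence #(agent_item);
  `uvm_object_utils(merged_stimulus_seq)

  function new(string name = "merged_stimulus_seq");
    super.new(name);
  endfunction

  task body();
    ip1_cfg_seq     s1;  // sequence reused from the IP1 testbench
    ip2_traffic_seq s2;  // sequence reused from the IP2 testbench
    s1 = ip1_cfg_seq::type_id::create("s1");
    s2 = ip2_traffic_seq::type_id::create("s2");
    s1.start(m_sequencer);  // both run on the sequencer this sequence runs on
    s2.start(m_sequencer);
  endtask
endclass
```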

III. INTEGRATION-LEVEL TESTBENCH

Multi-test architecture brings extra flexibility and reuse opportunities to an integration-level testbench while maintaining UVM structure, compatibility, and feel.

A. Test

An integration-level test is very flexible under multi-test architecture. Verification engineers can choose the most efficient composition for the situation.

1) The test can be composed from a set of IP tests. In this case, an SoC scenario is decomposed into a set of IP scenarios. Each IP test is chosen to implement its sub-scenario, with control information from the configuration database. This scheme works best when only minimal synchronization is required among the IP tests.

2) The test can be composed from a set of IP sequences. An SoC test can create IP-level sequences and run them on their original sequencers inside the IP environments under the IP tests. The SoC test can create any type of synchronization among the IP sequences.

3) The test can be composed from new SoC sequences, new IP tests, and new IP sequences. New SoC-level sequences, IP tests, and IP sequences can be defined and created by an SoC test. This scheme is best for customizing IP scenarios.

4) The test can be composed from a mix of existing IP tests, existing IP sequences, new IP tests, new IP sequences, and new SoC sequences. The choice of customization, reuse, or new development can be made independently for every IP test and IP sequence. This flexibility provides a very efficient way to create high-quality tests.

B. Environment

Most portions of the integration-level UVM environment are logically composed from the IP ones. IP environments are customized with UVM factory overrides for integration-level purposes. Any UVC created only at the integration level should be added to the integration-level UVM environment.

C. Synchronization

Synchronization between IP tests can be done in many ways. UVM phasing [12] can be used at a coarse-grained level.
Any type of synchronization can be implemented by an integration-level sequence calling IP sequences.

D. Register Model

Multi-test architecture suggests that the integration-level register model be hierarchical, with a branch for every IP [7]. Each IP test and environment gets a handle to its branch. Any register access request is forwarded to a root map and a register adapter at the integration-level UVM environment.

E. Subsystem Testbench

A group of closely related IPs is often organized as a subsystem testbench in addition to the full SoC testbench. If both the subsystem and the full SoC testbench follow multi-test architecture, the subsystem testbench is only a repackaging of the full SoC one, and the full SoC testbench reuses all the work on IP tests and environments from the subsystem. This relationship enables timely construction of an unplanned subsystem testbench late in a project. Potentially, subsystem and full SoC tests and testbenches can even be generated by a script, which is enabled by the regular adaptation pattern.

F. Testbench Bring-up

To bring up a multi-test testbench, its IP tests could be brought up serially, one IP test after another. The bring-up could also be done in parallel, in groups, or in stages, since the IP tests are independent of each other. This flexibility makes it possible to optimize between resources and schedule.

IV. FRIENDLY IP-LEVEL TESTBENCH

UVM compliance is the only hard requirement for an IP-level testbench to be reused under multi-test architecture. Two best practices of UVM are critical to multi-test architecture:

1) Use the UVM factory to create components and objects. This gives an integration-level testbench the opportunity to customize lower-level testbenches.

2) Every access to the UVM configuration database should be made relative to a UVM component. The hierarchical name of a UVM component changes at an integration-level testbench, so a hard-coded path often cannot be resolved to the correct scope there.

Below are some guidelines for an IP-level testbench. They make the adaptation work at the integration level easier.

1) An IP-level testbench should not override the sequencer of an agent; instead, create a local virtual sequencer, or pass a helper class to sequences. It is simpler to merge stimuli when the same sequencer is used by multiple IP environments; section II.C has more details.

2) An IP environment should accept handles of UVM agents. An integration-level testbench can pass in the handle of an agent created outside the IP environment, and the IP environment should use it instead of creating its own agent. This mechanism reduces testbench complexity, ensures that the same monitored traffic is seen by all IP environments, and simplifies merging stimuli from multiple IP environments.

3) Handle variables for sequencers should be defined at an IP environment, so that IP code can use them to reference sequencers instead of using hard-coded hierarchical references. This provides a convenient and consistent way for an integration-level testbench to replace a sequencer, and it eases merging stimuli.

4) All handles pointing to testbench components should be public, so that they can be used, or changed, by an integration-level testbench.

V. A NEW PARADIGM OF REUSE

Multi-test architecture enables reusing UVM testbenches from the lowest level up to intermediate integration and full-chip levels. It is a new vertical-reuse methodology for UVM verification environments. One general goal of verification environments is to verify as many corner cases as possible at the lowest cost. Multi-test architecture enables new reuse patterns, provides more trade-off options, and optimizes cost.
1) A UVM test following multi-test architecture integrates lower-level tests rather than creating new code. Adapting the IP-level environment follows a regular pattern, and the adapting code can be reused. This is a crucial step toward efficient creation of integration-level testbenches and test cases.

2) Many choices are provided for an integration-level test. An array of IP tests and sequences is available for direct reuse and extension, and the most efficient combination can be chosen to reduce overall effort and improve verification quality.

3) Subsystem and full SoC testbenches can share the work of adapting IP-level tests. This eases the creation of new subsystem testbenches.

4) Since the IP-level test is reused at higher levels, knowledge of it is shared. The higher-level environment becomes familiar to IP engineers too. The learning curve can be shortened, and communication and collaboration between levels can be more efficient.

5) Third-party UVM testbenches can also be reused. Multi-test architecture doesn't require any specific reuse boundary, structure, or component for a testbench to be reused; the UVM classes are the boundary and mechanism for reuse. Every testbench can be reused as long as it observes UVM rules.

6) A testbench of multi-test architecture itself follows the rules of UVM, so it can in turn be reused as an IP-level test in an even higher-level testbench.

7) Reusability is tested by the IP-level tests. An IP test and environment are expected to run in the same way, so everything that works at the IP level should, in theory, work at the integration level.

Multi-test architecture creates a new paradigm of reuse based on UVM. IP-level tests and sequences can be reused without modification rather than being ported or rewritten. Meanwhile, the passive checking capability of the IP level is still available at the integration level.

VI. SUMMARY

Lower-level UVM testbenches can be stitched together into a higher-level one following multi-test architecture. Multi-test architecture is based on UVM and is complementary to textbook reuse. It suggests that the higher-level test instantiate one lower-level test for every instance of a lower-level design, and that driving conflicts be resolved by overriding UVM drivers with muted or redirecting versions of them. UVM stimuli, sequences, and tests are naturally reused in the vertical direction under multi-test architecture. Some benefits are highlighted below.

1) Multi-test architecture at a higher-level testbench enables reuse of lower-level sequences and tests without special coding or testing at the lower-level testbenches.
2) Minimal change, in theory none, is required to lower-level code.
3) Reuse is based on UVM classes and OOP, compatible with any UVM testbench.
4) Creation of a higher-level test is flexible and simple.
5) Subsystem testbenches are easy to construct.
6) The higher-level testbench can bring up its lower-level tests in parallel.
7) The environment is familiar to the owners of the lower-level testbenches, requiring less learning and encouraging faster debug.
8) Lower cost, better execution.

Multi-test architecture was successfully deployed in two subsystem testbenches. IP tests were brought up in parallel. Subsystem tests were easily made from configurable IP tests; most were only different sets of configurations of the IP tests. The time spent in test creation and debug was reduced significantly. Both testbenches ran UVM 1.2 with Synopsys VCS 2015.09.

ACKNOWLEDGMENT

The authors thank all colleagues who participated in discussions, provided feedback, contributed to the implementation of the pilot testbenches, and supported this innovation. Special thanks go to Gord Caruk and the other industry experts who reviewed this paper and helped to improve it. Your effort is much appreciated!
REFERENCES

[1] Accellera, UVM User Guide, v1.1, www.uvmworld.org
[2] Accellera, UVM Reference Guide, v1.1d, www.uvmworld.org
[3] Verification Academy, UVM, verificationacademy.com
[4] Doulos, UVM, www.doulos.com
[5] Synopsys, UVM, www.synopsys.com
[6] Cadence, UVM, www.cadence.com
[7] Wayne Yun, "UVM Register Modelling at the Integration-Level Testbench," DVCon US 2016
[8] Brian Bailey, "What's Next For UVM," http://semiengineering.com/whats-next-for-uvm
[9] Mark Litterick, Jason Sprott, and Jonathan Bromley, "Advanced UVM Tutorial: Taking Reuse to the Next Level," DVCon Europe 2015
[10] Mentor Graphics, "Layering in UVM," Verification Horizons, Volume 7, Issue 3
[11] Janick Bergeron, "Beyond UVM: Creating Truly Reusable Protocol Layering," DVCon US 2013
[12] John Aynsley, "Run-Time Phasing in UVM: Ready for the Big Time or Dead in the Water?" DVCon US 2015