Guidelines for Successful SoC Verification in OVM/UVM
By Moataz El-Metwally, Mentor Graphics
Cairo, Egypt
Abstract:
With the increasing adoption of OVM/UVM, there is a growing
demand for guidelines and best practices to ensure successful SoC
verification. The verification problems themselves have not changed,
but the way they are approached and the way the solutions, i.e. the
verification environments, are structured depend heavily on the
methodology. There are two key categories of SoC verification
guidelines: process, enabled by tools, and methodology. Process
guidelines are about what you need to do and in what order, while
methodology guidelines are about how to do it. This paper first
describes the basic tenets of OVM/UVM, and then summarizes key
guidelines to maximize the benefits of using a state-of-the-art
verification methodology such as OVM/UVM.
THE BASIC TENETS OF OVM/UVM
1. Functionality encapsulation
OVM [1] promotes composition and reuse by encapsulating
functionality in a basic block called ovm_component. This basic
block contains a run task, i.e. a functional block that can consume
time and that acts as an execution thread responsible for implementing
functionality as the simulation progresses.
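As a minimal sketch of this tenet (class and message names here are illustrative, not from the paper), a component's run task is its execution thread and may consume simulation time:

```systemverilog
// Minimal sketch: an ovm_component whose run task acts as the
// execution thread. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class heartbeat_comp extends ovm_component;
  `ovm_component_utils(heartbeat_comp)

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  // run() may consume simulation time; it implements the component's
  // behavior for as long as the simulation runs.
  task run();
    forever begin
      #10ns;
      ovm_report_info(get_type_name(), "still running", OVM_HIGH);
    end
  endtask
endclass
```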
2. Transaction-Level Modeling (TLM)
OVM/UVM uses the TLM standard to describe communication between
verification components in an OVM/UVM environment. Because OVM/UVM
standardizes the way components are connected, components are
interchangeable as long as they provide and require the same
interfaces. One of the main advantages of using TLM is that it
abstracts away the pin and timing details. A transaction, the unit of
information exchange between TLM components, encapsulates an
abstract view of the stimulus that can be expanded by a lower-level
component. One of the pitfalls that can undermine the value of TLM is
adding excessive timing detail by generating transactions and
delivering them on each clock cycle.
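A sketch of this idea (class and field names are illustrative): a pin- and timing-abstract transaction published over a TLM analysis port, with one transaction per completed transfer rather than one per clock cycle:

```systemverilog
// Sketch: a pin- and timing-abstract transaction sent over TLM.
// Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class bus_txn extends ovm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        is_write;
  `ovm_object_utils(bus_txn)
  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass

class bus_monitor extends ovm_component;
  `ovm_component_utils(bus_monitor)
  ovm_analysis_port #(bus_txn) ap;

  function new(string name, ovm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  task run();
    bus_txn t;
    forever begin
      // ... sample pins over several clocks, reconstruct the transfer ...
      t = bus_txn::type_id::create("t");
      ap.write(t); // one TLM write per transfer keeps timing abstract
    end
  endtask
endclass
```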
3. Using sequences for stimulus generation
The transactions need to be generated by an entity in the
verification environment. Relying on a component to generate the
transactions is limiting because it requires changing the
component each time a different sequence of transactions is
needed. Instead, OVM/UVM provides flexibility by introducing
ovm_sequence. An ovm_sequence is a wrapper object around a function
called body(). It is very close to the OOP pattern called
"functor", which wraps a function in an object so it can be passed
as a parameter, although SystemVerilog does not support operator
overloading [1]. When started, an ovm_sequence registers itself with
an ovm_sequencer, an ovm_component that acts as the holder of
different sequences and can connect to other ovm_components. The
ovm_sequence and ovm_sequencer duo provides the flexibility to run
different streams of transactions without having to change the
component instantiation.
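A sketch of such a sequence (names are illustrative; the bus_txn item with addr/data/is_write fields is an assumed stand-in): body() is the wrapped "functor" function, and a different transaction stream is a new sequence object, not a component change:

```systemverilog
// Sketch: stimulus as an ovm_sequence whose body() generates a
// stream of transactions. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class bus_txn extends ovm_sequence_item;
  rand bit [31:0] addr, data;
  rand bit        is_write;
  `ovm_object_utils(bus_txn)
  function new(string name = "bus_txn"); super.new(name); endfunction
endclass

class write_burst_seq extends ovm_sequence #(bus_txn);
  `ovm_object_utils(write_burst_seq)
  function new(string name = "write_burst_seq"); super.new(name); endfunction

  task body();
    bus_txn t;
    repeat (8)
      `ovm_do_with(t, { is_write == 1; }) // create, randomize, send
  endtask
endclass

// Starting the sequence registers it with a sequencer:
//   write_burst_seq seq = write_burst_seq::type_id::create("seq");
//   seq.start(agent.sequencer);
```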
4. Configurability
Configurability, an enabler of productivity and reuse, is a key
element in OVM/UVM. In OVM/UVM, the user can change the behavior of an
already instantiated component by three means: the configuration API,
factory overrides, and callbacks.
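The first two mechanisms can be sketched as follows (component and field names are illustrative; the callback mechanism is the third option, whose registration API is omitted here):

```systemverilog
// Sketch: reconfiguring instantiated components via the configuration
// API and a factory override. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class bus_driver extends ovm_component;
  `ovm_component_utils(bus_driver)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
endclass

class error_injecting_driver extends bus_driver;
  `ovm_component_utils(error_injecting_driver)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
endclass

class base_test extends ovm_test;
  `ovm_component_utils(base_test)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction

  function void build();
    super.build();
    // 1. Configuration API: tweak a field before the target is built.
    set_config_int("env.agent", "is_active", OVM_PASSIVE);
    // 2. Factory override: every bus_driver created through the
    //    factory becomes an error_injecting_driver.
    bus_driver::type_id::set_type_override(error_injecting_driver::get_type());
  endfunction
endclass
```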
5. Layering
Layering is a powerful concept in which each level takes care
of the details specific to its layer. OVM layering can be applied to
components, which can be called hierarchy and composition, to
configuration, and to stimulus. Typically there is a correspondence
between the layering of components and of objects. Layering stimulus,
on the other hand, can reduce the complexity of stimulus
generation.
6. Emphasis on reuse (vertical and horizontal)
All the tenets mentioned above lead to another important goal:
reuse. Extensibility, configurability, and layering facilitate reuse.
Horizontal reuse refers to reusing Verification IPs (VIPs) across
projects, while vertical reuse describes the ability to use
block-level VIPs in cluster- and chip-level verification
environments.
PROCESS GUIDELINES
1. Ordering of development tasks
The natural process for developing an OVM/UVM verification
environment is bottom-up. Blocks are first verified in block-level
environments, and then the integration of the blocks into the SoC is
verified in a chip-level testbench. Some refer to this methodology
as an IP-centric methodology because the blocks are considered IPs
[4]. The focus of block-level verification is to verify the blocks
thoroughly, while chip-level verification focuses on verifying the
integration of the blocks and the application scenarios. A
bottom-up verification approach has several benefits:
- Localization of bugs: bugs are found easily
- It is easier to exercise all the block modes at the block-level
- Confidence in the block-level environments allows them to be reused
in several projects
In this section we describe the recommended order of development
for verification environment elements. This ordering must be kept
in mind when developing executable verification plans.
Table 1: Components Development Order
1. Interfaces
2. Agents
   - Transaction
   - Configuration
   - Agent Skeleton
   - Transactors
   - Basic Sequences
3. Block-Level Subsystem
   - Configuration
   - Virtual Sequencer
   - Initial Sequences/Tests
   - Scoreboards & Protocol Checkers
   - Coverage Model
   - Constrained Random Sequences/Tests
4. Chip Level
   - Integration of Subsystem environments
   - Chip-Level Sequences/Tests
It is worth noting the following:
- Once the transaction fields are defined and implemented, the agent
skeleton can be generated automatically.
- Transactors refer to drivers and monitors.
- The reason for having the scoreboards and protocol checkers early
on is to make sure that what was developed is functioning.
- The coverage model needs to be in place before the constrained
random tests to guide test development and eliminate redundancy. This
is a cornerstone of coverage-driven verification. The coverage model
not only guides the test-writing effort but also gives a metric for
verification progress and closure.
- Each block/subsystem/cluster verification environment and its tests
act as a VIP for that block.
2. Use code and template generators
Whether you rely on scripts or on elaborate OVM template
generators, these generators are key to increasing the productivity
of verification engineers, reducing errors, and increasing code
conformity. Code generators are also used to generate register
models from the specification, thus automating the creation of these
models.
3. Qualify your VIPs
Qualify your VIPs during development and before releasing them.
First, several tools can conduct static checks on your VIP
components for common errors and conformance to coding styles. They
can also provide statistics about the size of your code, checks, and
covergroups.
Second, a simulator can typically provide statistics about
memory consumption and performance bottlenecks of your VIP.
Although SystemVerilog has automatic garbage collection, you can
still have memory leaks if you keep a reference to dynamically
allocated objects somewhere and forget about them.
Third, your VIPs should be robust to user mistakes, whether in
connections or in proper use. You need sanity checks that can
flag a user error early.
Finally, peer review is still beneficial for pointing out issues
that were missed in the other steps.
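One common form of such a sanity check can be sketched as follows (the bus_if interface and component names are illustrative): catch a missing connection at end of elaboration instead of failing obscurely mid-simulation:

```systemverilog
// Sketch: a VIP sanity check that flags a user error early.
// Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

interface bus_if;
  logic clk;
endinterface

class bus_monitor extends ovm_component;
  `ovm_component_utils(bus_monitor)
  virtual bus_if vif; // the user must set this before the run phase

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  function void end_of_elaboration();
    if (vif == null)
      ovm_report_fatal(get_type_name(),
        "virtual interface 'vif' was never connected");
  endfunction
endclass
```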
4. Incremental integration
As described in the introduction, OVM/UVM facilitates
composition and layering. Several components/agents can form an
environment, and two or more environments can form a higher-level
environment. Incremental integration is important to reduce
debugging time.
5. Better regression management and result analysis
The usual scripts that compile and run testcases fall short when
running complex OVM/UVM SoC verification environments. Typical
requirements on run management are keeping track of seeds, log files
of different tests, and execution time, plus the flexibility to run
different groups of tests either on a local machine or on a grid.
Once a regression has run, we end up with data that needs to be
processed to extract useful information such as which tests
passed or failed, common failure messages, which tests were more
efficient, and which seeds produced better coverage.
6. Communication and change management
Communication between verification engineers and specification
owners should be captured in an issue-tracking tool to avoid losing
the information over the course of the project. Verification
engineers also need a mechanism to share what they learn with each
other; wikis serve as good vehicles for knowledge sharing.
Change management is the other crucial element. By change
management we are referring not only to code version management but
also to the way changes in the RTL and in block-level environments
are handled in cluster- or chip-level environments.
METHODOLOGY GUIDELINES
1. CPU modeling
SoCs typically have one or more software-programmable components
such as a microcontroller, microprocessor, or DSP. Processor-driven
verification refers to using either a functional model of the
processor or the RTL model to verify the functionality of the SoC.
This approach is useful for verifying firmware interactions and
certain application scenarios. However, for thorough verification
of subsystems/clusters this approach can be costly in terms of
effort, complexity, and simulation time. This paper proposes a
two-level approach: for the verification of subsystems, use a
pin-accurate and protocol-accurate Bus Functional Model (BFM); this
enables rapid development of the verification environment and
tests, and at the same time gives the verification engineer
flexibility in creating the environment and tests. The BFM usually
comes as a VIP for the specific bus standard that the processor
connects to. While the VIP usually models the standard interface
faithfully, the processor might have extra side-band signals and
interrupts. There are two approaches to this: the VIP can model the
side-band and interrupt-controller behavior in a generic way through
the use of configuration, transactions, and sequences; or the
functionality can be modeled in separate agents for side-band
signals and interrupts. The latter increases the development burden
and requires synchronization between the different agents.
For the verification of firmware interactions, such as
boot-loading or critical application scenarios, the RTL model or a
full functional model can be used to guarantee that the firmware is
validated against the hardware it is going to run on.
2. Environment Reuse
Environments should be self-contained, having knowledge only of
their own components and of global elements, and should communicate
only through the configuration mechanism, TLM connections, or global
events such as a reset event. Following these rules, an environment
at the block-level can be reused at the chip-level, making the
chip-level environment an integration of block-level environments.
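A sketch of such a self-contained environment (the uart_agent stand-in and all names are illustrative): it knows only its own children, and everything it exposes to the outside goes through TLM:

```systemverilog
// Sketch: a self-contained block-level environment. Names are
// illustrative; uart_agent is a stand-in agent.
import ovm_pkg::*;
`include "ovm_macros.svh"

class uart_agent extends ovm_agent;
  `ovm_component_utils(uart_agent)
  ovm_analysis_port #(ovm_sequence_item) ap;
  function new(string name, ovm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction
endclass

class uart_env extends ovm_env;
  `ovm_component_utils(uart_env)
  uart_agent agent;
  ovm_analysis_port #(ovm_sequence_item) ap; // re-exported upward

  function new(string name, ovm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  function void build();
    super.build();
    agent = uart_agent::type_id::create("agent", this);
  endfunction

  function void connect();
    agent.ap.connect(ap); // traffic leaves the env only via TLM
  endfunction
endclass
```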
3. Sequence Reuse
It is important to write sequences with an eye on reusing them. In
OVM/UVM, there are two types of sequences: sequences that send
transactions and sequences that start other sequences on sequencers.
The latter is called a virtual sequence. Below is a further
classification of sequences based on functionality:
- Basic agent sequence: this sequence allows the user to control,
from outside, the fields of the transaction sent by the basic
sequence. The basic agent sequence acts as an interface or API used
by a higher layer, usually the virtual sequence, to randomize or set
the fields of the transactions it sends.
- Register read/write sequences: these sequences write and read
address-mapped registers in the DUT. Two important rules need to be
considered: they should have an API that is independent of the bus
protocol, and they should use the name of the register rather than
its address. A register package can be used to look up the register
address by name. For example, the OVM register package built-in
sequences [5] support this kind of abstraction, and it is expected
that the UVM register package will support these rules as well.
Abiding by these rules makes the sequences reusable and maintainable
because there is no need to update a sequence each time a register
address changes.
- DUT configuration sequences: some verification engineers provide
sequences that abstract the different configurations of the DUT into
enum fields to ease the burden on the test writer. This way the test
writer does not need to know which register to write and with what
value. These sequences are still reusable at the chip-level.
- Virtual sequences on interfaces accessible at the chip-level:
these sequences are reusable from block-level to chip-level; some of
them can be used to verify the integration into the full chip.
- Virtual sequences on internal interfaces that are not visible at
the chip-level: special attention should be paid to sequences
generating stimulus on interfaces that are no longer visible at the
chip-level.
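A register write sequence keyed by name, as described above, might be sketched as follows (the reg_map class is an illustrative stand-in for a real register package such as the OVM register package [5]; only the name-to-address lookup matters here):

```systemverilog
// Sketch: a register write sequence addressed by register name, so
// it survives address-map changes. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class bus_txn extends ovm_sequence_item;
  rand bit [31:0] addr, data;
  rand bit        is_write;
  `ovm_object_utils(bus_txn)
  function new(string name = "bus_txn"); super.new(name); endfunction
endclass

class reg_map; // illustrative stand-in for a register model
  static bit [31:0] addr_of[string];
  static function bit [31:0] lookup(string name);
    return addr_of[name];
  endfunction
endclass

class reg_write_seq extends ovm_sequence #(bus_txn);
  `ovm_object_utils(reg_write_seq)
  string     reg_name;
  bit [31:0] value;
  function new(string name = "reg_write_seq"); super.new(name); endfunction

  task body();
    bus_txn    t;
    bit [31:0] a = reg_map::lookup(reg_name); // name, not hard-coded address
    `ovm_do_with(t, { addr == a; is_write == 1; data == value; })
  endtask
endclass
```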
Although the goals of block- and chip-level testing are different,
some virtual sequences from the block-level can be reused at the
chip-level as integration tests. Interfaces that become internal at
the chip-level can usually be stimulated through some external
interface. To make the last type of virtual sequence reusable at the
chip-level, it is better to plan ahead and abstract the data from
the protocol. For example, in the SoC diagram of Figure 1,
peripherals 1 through N sit on a peripheral bus that might use a
different protocol than the system bus. There are two approaches to
making the sequences reusable:
Use functional abstraction by defining functions in the virtual
sequence that can be overridden, like:
write(register_name, value);
read(register_name, value);
Or rely on a layering technique like ovm_layering [3]. In this
approach, a layering agent sits on top of a lower-level agent and
forwards high-level transactions that are translated by the
low-level agent according to the bus standard. The high-level agent
can be connected to a different low-level agent without any change
to the high-level sequences.
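The functional-abstraction approach can be sketched as follows (all names are illustrative): a virtual sequence exposes overridable write()/read() tasks, and only the transport changes at chip level:

```systemverilog
// Sketch: protocol-neutral test intent via overridable write/read
// tasks in a virtual sequence. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

class block_cfg_vseq extends ovm_sequence #(ovm_sequence_item);
  `ovm_object_utils(block_cfg_vseq)
  function new(string name = "block_cfg_vseq"); super.new(name); endfunction

  // Block-level implementation drives the peripheral bus directly.
  virtual task write(string register_name, bit [31:0] value);
    // ... start a peripheral-bus write sequence ...
  endtask
  virtual task read(string register_name, output bit [31:0] value);
    // ... start a peripheral-bus read sequence ...
  endtask

  task body();
    bit [31:0] status;
    write("CTRL", 32'h1);   // the test intent stays protocol-neutral
    read("STATUS", status);
  endtask
endclass

// At chip level, only the transport changes; body() is reused as-is.
class chip_cfg_vseq extends block_cfg_vseq;
  `ovm_object_utils(chip_cfg_vseq)
  function new(string name = "chip_cfg_vseq"); super.new(name); endfunction
  virtual task write(string register_name, bit [31:0] value);
    // ... translate to a system-bus access instead ...
  endtask
  virtual task read(string register_name, output bit [31:0] value);
    // ... translate to a system-bus access instead ...
  endtask
endclass
```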
Figure 1: Typical SoC Block Diagram
4. Scoreboards
A critical component of self-checking testbenches is the
scoreboard, which is responsible for checking data integrity from
input to output. A scoreboard is a TLM component; care should be
taken not to activate it on a cycle-by-cycle basis but rather at the
transaction level. In OVM/UVM, the scoreboard is usually connected
to at least two analysis ports, one from the monitor(s) on the input
side and the other on the output side; Figure 2 depicts these
connections. A scoreboard's operation can be summarized in the
following equations:
Expected = TF(Input Transaction);
Compare(Actual, Expected);
where TF is the transfer function representing the DUT functionality
from inputs to outputs.
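The equations above can be sketched as a transaction-level scoreboard with two analysis imps (all names are illustrative, and the identity transfer function is a placeholder; a real one may depend on device configuration):

```systemverilog
// Sketch: Expected = TF(input); Compare(actual, expected) as a
// two-imp scoreboard. Names are illustrative.
import ovm_pkg::*;
`include "ovm_macros.svh"

`ovm_analysis_imp_decl(_in)
`ovm_analysis_imp_decl(_out)

class bus_txn extends ovm_sequence_item;
  rand bit [31:0] data;
  `ovm_object_utils(bus_txn)
  function new(string name = "bus_txn"); super.new(name); endfunction
endclass

class my_scoreboard extends ovm_scoreboard;
  `ovm_component_utils(my_scoreboard)
  ovm_analysis_imp_in  #(bus_txn, my_scoreboard) in_ap;
  ovm_analysis_imp_out #(bus_txn, my_scoreboard) out_ap;
  bus_txn expected_q[$];

  function new(string name, ovm_component parent);
    super.new(name, parent);
    in_ap  = new("in_ap", this);
    out_ap = new("out_ap", this);
  endfunction

  // Predictor: apply the transfer function to each input transaction.
  function void write_in(bus_txn t);
    bus_txn exp = new();
    exp.data = t.data; // identity TF as a placeholder
    expected_q.push_back(exp);
  endfunction

  // Comparator: check each actual output against the prediction.
  function void write_out(bus_txn t);
    bus_txn exp;
    if (expected_q.size() == 0) begin
      ovm_report_error(get_type_name(), "unexpected output transaction");
      return;
    end
    exp = expected_q.pop_front();
    if (t.data !== exp.data)
      ovm_report_error(get_type_name(), "data mismatch");
  endfunction
endclass
```

Note that both write_in and write_out execute in the caller's (the monitor's) context, so this sketch spawns no threads of its own.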
Sometimes the operation is described as predictor-comparator,
where the predictor computes the next output (transfer function)
and the comparator checks the actual value versus the predicted one
(compare function). Usually the transfer function is not static but
can change depending on the configuration of the device. In an SoC,
most peripherals have memory-mapped registers that are used for
configuration and status. These devices are usually called
memory-mapped peripherals, and they pose two challenges:
- The DUT transfer function and data-flow might change based on the
configuration
- Status bits should be verified
The common solution to the first challenge is to keep a handle to
the memory-map model and connect an analysis port from the
configuration bus monitor to the scoreboard. On reception of a new
transaction on this analysis port, the scoreboard updates the
peripheral's registerfile model and then uses it to update the
transfer function accordingly. This approach has one disadvantage:
each peripheral scoreboard has to implement the same functionality
and needs to connect to the configuration bus monitor. A better
approach is for the registerfile updates to occur in a central
component on the bus. To eliminate the need for connections to
the bus monitor, the register package can have an analysis port on
each registerfile model. Each scoreboard can then connect to this
registerfile model internally without the need for external
connections. One of the requirements on the UVM register package is
to have an update notification method [6].
The second challenge is status-bit verification. Status bits are
usually modeled in the register model, and the register model can act
as a predictor of the value of the status bits. This requires that
the scoreboard predict changes to the status bits and update the
register model, so that on register reads the value read from the
DUT can be compared against the register model.
There are other aspects to consider when implementing
scoreboards:
- Data-flow analysis: the data flow can change based on
configuration, or data can flow from several inputs towards the
output.
- Scoreboard connection technique: scoreboards can be connected to
monitors in one of two ways: through ovm_imps in the scoreboard, or
through ovm_exports and tlm_analysis_fifos. The latter requires a
thread on each tlm_analysis_fifo to get transactions, while the
former executes in the context of the caller.
- Threaded or thread-less: the scoreboard can have zero or more
threads depending on a number of factors, such as the connection
method, the complexity of synchronization, and the experience of the
developer. As a general rule, verification engineers should avoid
spawning unnecessary threads in the scoreboard.
At the SoC level, there are two approaches to organizing
scoreboards: end-to-end and multi-step [2]. Figure 3 depicts the
difference between the two. The multi-step approach has several
advantages over the end-to-end approach:
- It comes as a by-product of block-level to chip-level reuse.
- The checking task is simpler since it is divided over several
components, each concerned with a specific block.
- It is easy to localize bugs at the block-level since the violating
block's scoreboard will flag the error.
Figure 2: Scoreboard Connection in OVM
Figure 3: End-to-End vs. Multi-Step Scoreboard
CONCLUSION
OVM/UVM is a powerful verification methodology. To maximize the
value achieved by adopting OVM/UVM, there is a need for guidelines.
These guidelines are not only for the methodology deployment but
also for the verification process. This paper has summarized some of
the pitfalls and tradeoffs and provided guidelines for successful
SoC verification. The set of guidelines in this paper can help you
plan your SoC verification environment ahead, avoid pitfalls, and
increase productivity.
REFERENCES
[1] Mark Glasser, Open Verification Methodology Cookbook, Springer, 2009.
[2] Sasan Iman and Sunita Joshi, The e Hardware Verification Language, Springer, 2004.
[3] Rich Edelman et al., "You Are In a Maze of Twisty Little Sequences, All Alike, or Layering Sequences for Stimulus Abstraction," DVCON 2010.
[4] Victor Besyakov et al., Constrained Random Test Environment for SoC Verification using