Spock - Document - 03 - Data Driven Testing
Data Driven Testing
Oftentimes, it is useful to exercise the same test code multiple times, with varying inputs and expected results. Spock’s data driven testing support makes this a first class feature.
Introduction
Suppose we want to specify the behavior of the Math.max method:
class MathSpec extends Specification {
  def "maximum of two numbers"() {
    expect:
    // exercise math method for a few different inputs
    Math.max(1, 3) == 3
    Math.max(7, 4) == 7
    Math.max(0, 0) == 0
  }
}
Although this approach is fine in simple cases like this one, it has some potential drawbacks:
Code and data are mixed and cannot easily be changed independently
Data cannot easily be auto-generated or fetched from external sources
In order to exercise the same code multiple times, it either has to be duplicated or extracted into a separate method
In case of a failure, it may not be immediately clear which inputs caused the failure
Exercising the same code multiple times does not benefit from the same isolation as executing separate methods does
Spock’s data-driven testing support tries to address these concerns. To get started, let’s refactor above code into a data-driven feature method. First, we introduce three method parameters (called data variables) that replace the hard-coded integer values:
class MathSpec extends Specification {
  def "maximum of two numbers"(int a, int b, int c) {
    expect:
    Math.max(a, b) == c
    ...
  }
}
We have finished the test logic, but still need to supply the data values to be used. This is done in a where: block, which always comes at the end of the method. In the simplest (and most common) case, the where: block holds a data table.
Data Tables
Data tables are a convenient way to exercise a feature method with a fixed set of data values:
class MathSpec extends Specification {
  def "maximum of two numbers"(int a, int b, int c) {
    expect:
    Math.max(a, b) == c

    where:
    a | b | c
    1 | 3 | 3
    7 | 4 | 7
    0 | 0 | 0
  }
}
The first line of the table, called the table header, declares the data variables. The subsequent lines, called table rows, hold the corresponding values. For each row, the feature method will get executed once; we call this an iteration of the method. If an iteration fails, the remaining iterations will nevertheless be executed. All failures will be reported.
Data tables must have at least two columns. A single-column table can be written as:
where:
a | _
1 | _
7 | _
0 | _
Isolated Execution of Iterations
Iterations are isolated from each other in the same way as separate feature methods. Each iteration gets its own instance of the specification class, and the setup and cleanup methods will be called before and after each iteration, respectively.
Sharing of Objects between Iterations
In order to share an object between iterations, it has to be kept in a @Shared or static field.
NOTE: Only @Shared and static variables can be accessed from within a where: block.
Note that such objects will also be shared with other methods. There is currently no good way to share an object just between iterations of the same method. If you consider this a problem, consider putting each method into a separate spec, all of which can be kept in the same file. This achieves better isolation at the cost of some boilerplate code.
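For illustration, here is a minimal sketch (the class name, field, and values are hypothetical, not part of the original example) of a @Shared list that is reused by every iteration and referenced from the where: block:

class SharedDataSpec extends Specification {
  // created once and visible to all iterations (and to other feature methods)
  @Shared
  List<Integer> inputs = [1, 7, 0]

  def "maximum of a number and zero is the number itself"() {
    expect:
    Math.max(a, 0) == a

    where:
    a << inputs // only @Shared or static fields may be referenced here
  }
}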
Syntactic Variations
The previous code can be tweaked in a few ways. First, since the where: block already declares all data variables, the method parameters can be omitted.[1] Second, inputs and expected outputs can be separated with a double pipe symbol (||) to visually set them apart. With this, the code becomes:
class MathSpec extends Specification {
  def "maximum of two numbers"() {
    expect:
    Math.max(a, b) == c

    where:
    a | b || c
    1 | 3 || 3
    7 | 4 || 7
    0 | 0 || 0
  }
}
Reporting of Failures
Let’s assume that our implementation of the max method has a flaw, and one of the iterations fails:
maximum of two numbers   FAILED

Condition not satisfied:

Math.max(a, b) == c
    |    |  |  |  |
    |    7  4  |  7
    42         false
The obvious question is: Which iteration failed, and what are its data values? In our example, it isn’t hard to figure out that it’s the second iteration that failed. At other times this can be more difficult or even impossible.[2] In any case, it would be nice if Spock made it loud and clear which iteration failed, rather than just reporting the failure. This is the purpose of the @Unroll annotation.
Method Unrolling
A method annotated with @Unroll will have its iterations reported independently:
@Unroll
def "maximum of two numbers"() {
...
Why isn’t @Unroll the default? One reason why @Unroll isn’t the default is that some execution environments (in particular IDEs) expect to be told the number of test methods in advance, and have certain problems if the actual number varies. Another reason is that @Unroll can drastically change the number of reported tests, which may not always be desirable.
Note that unrolling has no effect on how the method gets executed; it is only an alteration in reporting. Depending on the execution environment, the output will look something like:
maximum of two numbers[0]   PASSED
maximum of two numbers[1]   FAILED

Math.max(a, b) == c
    |    |  |  |  |
    |    7  4  |  7
    42         false

maximum of two numbers[2]   PASSED
This tells us that the second iteration (with index 1) failed. With a bit of effort, we can do even better:
@Unroll
def "maximum of #a and #b is #c"() {
...
This method name uses placeholders, denoted by a leading hash sign (#), to refer to data variables a, b, and c. In the output, the placeholders will be replaced with concrete values:
maximum of 3 and 5 is 5   PASSED
maximum of 7 and 4 is 7   FAILED

Math.max(a, b) == c
    |    |  |  |  |
    |    7  4  |  7
    42         false

maximum of 0 and 0 is 0   PASSED
Now we can tell at a glance that the max method failed for inputs 7 and 4. See More on Unrolled Method Names for further details on this topic.
The @Unroll annotation can also be placed on a spec. This has the same effect as placing it on each data-driven feature method of the spec.
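For example, a sketch of a spec-level annotation (the class name is illustrative); every data-driven feature method in it is unrolled:

@Unroll
class UnrolledMathSpec extends Specification {
  def "maximum of #a and #b is #c"() {
    expect:
    Math.max(a, b) == c

    where:
    a | b || c
    1 | 3 || 3
    7 | 4 || 7
  }
}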
Data Pipes
Data tables aren’t the only way to supply values to data variables. In fact, a data table is just syntactic sugar for one or more data pipes:
...
where:
a << [1, 7, 0]
b << [3, 4, 0]
c << [3, 7, 0]
A data pipe, indicated by the left-shift (<<) operator, connects a data variable to a data provider. The data provider holds all values for the variable, one per iteration. Any object that Groovy knows how to iterate over can be used as a data provider. This includes objects of type Collection, String, Iterable, and objects implementing the Iterable contract. Data providers don’t necessarily have to be the data (as in the case of a Collection); they can fetch data from external sources like text files, databases and spreadsheets, or generate data randomly. Data providers are queried for their next value only when needed (before the next iteration).
Multi-Variable Data Pipes
If a data provider returns multiple values per iteration (as an object that Groovy knows how to iterate over), it can be connected to multiple data variables simultaneously. The syntax is somewhat similar to Groovy multi-assignment but uses brackets instead of parentheses on the left-hand side:
@Shared sql = Sql.newInstance("jdbc:h2:mem:", "org.h2.Driver")

def "maximum of two numbers"() {
  expect:
  Math.max(a, b) == c

  where:
  [a, b, c] << sql.rows("select a, b, c from maxdata")
}
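The maxdata table is assumed to exist before the feature runs; a minimal sketch of preparing it in setupSpec, using the same in-memory H2 connection, might look like this:

def setupSpec() {
  // illustrative only: create and populate the table the data pipe reads from
  sql.execute("create table maxdata (a int, b int, c int)")
  sql.execute("insert into maxdata values (1, 3, 3), (7, 4, 7), (0, 0, 0)")
}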
Data values that aren’t of interest can be ignored with an underscore (_):
...
where:
[a, b, _, c] << sql.rows("select * from maxdata")
Data Variable Assignment
A data variable can be directly assigned a value:
...
where:
a = 3
b = Math.random() * 100
c = a > b ? a : b
Assignments are re-evaluated for every iteration. As already shown above, the right-hand side of an assignment may refer to other data variables:
...
where:
row << sql.rows("select * from maxdata")
// pick apart columns
a = row.a
b = row.b
c = row.c
Combining Data Tables, Data Pipes, and Variable Assignments
Data tables, data pipes, and variable assignments can be combined as needed:
...
where:
a | _
3 | _
7 | _
0 | _
b << [5, 0, 0]
c = a > b ? a : b
Number of Iterations
The number of iterations depends on how much data is available. Successive executions of the same method can yield different numbers of iterations. If a data provider runs out of values sooner than its peers, an exception will occur. Variable assignments don’t affect the number of iterations. A where: block that only contains assignments yields exactly one iteration.
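To make the last point concrete, here is a small sketch (the method and values are made up) of an assignments-only where: block that produces a single iteration:

def "a where block with only assignments runs exactly once"() {
  expect:
  c == 7

  where:
  // no data provider involved, so there is exactly one iteration
  a = 7
  b = 4
  c = Math.max(a, b)
}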
Closing of Data Providers
After all iterations have completed, the zero-argument close method is called on all data providers that have such a method.
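A hypothetical sketch of a provider Spock will close: it is iterable (so it can feed a data pipe) and exposes a zero-argument close method that is invoked once all iterations have completed. The class and file name are made up for illustration:

// illustrative provider: iterates over the lines of a file
class LineProvider implements Iterable<String>, Closeable {
  private final BufferedReader reader

  LineProvider(File file) {
    reader = file.newReader()
  }

  Iterator<String> iterator() {
    reader.iterator() // Groovy provides a line iterator for readers
  }

  void close() {
    reader.close() // called by Spock after the last iteration
  }
}

// usage in a feature method:
// where:
// line << new LineProvider(new File("maxdata.txt"))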
More on Unrolled Method Names
An unrolled method name is similar to a Groovy GString, except for the following differences:
Expressions are denoted with # instead of $ [3], and there is no equivalent for the ${…} syntax.
Expressions only support property access and zero-arg method calls.
Given a class Person with properties name and age, and a data variable person of type Person, the following are valid method names:
def "#person is #person.age years old"() { // property access
def "#person.name.toUpperCase()"() { // zero-arg method call
Non-string values (like #person above) are converted to Strings according to Groovy semantics.
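For instance, a sketch of the assumed Person class shows what that conversion picks up; its toString method determines how #person renders in the unrolled method name:

class Person {
  String name
  int age

  String toString() {
    name // '#person' in an unrolled method name renders as the person's name
  }
}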
The following are invalid method names:
def "#person.name.split(' ')[1]" { // cannot have method arguments
def "#person.age / 2" { // cannot use operators
If necessary, additional data variables can be introduced to hold more complex expressions:
def "#lastName"() { // zero-arg method call
...
where:
person << [new Person(age: 14, name: 'Phil Cole')]
lastName = person.name.split(' ')[1]
}
[2] For example, a feature method could use data variables in its setup: block, but not in any conditions.