Quality in the Test Automation Review Process and Design Review Template
About this document
- Prerequisite knowledge/experience: Software Testing, Test Automation
- Applicable Microsoft products: Visual Studio Team System, .NET
- Intended audience: Software Testers
Definitions and Terms
Test automation – test code written to carry out the execution of a test case in an automated or at least semi-automated fashion
Data-driven testing (DDT) – test cases that are executed multiple times, once for each input in a given set of data
Positive testing – testing normal situations that should not result in any errors or exceptions being thrown
Negative testing – testing failure conditions and/or edge cases where an error or exception is the expected result
Test harness – also known as an automated test framework; consists of a test case repository and a test execution engine
Article Summary
This article discusses the need for writing good test automation and presents guidance documents that can help facilitate an automation review process.
Full Article
Are you automating test cases? If so, then your team should have some sort of process that ensures test automation is written well. Many people argue that since test automation is not shipping code, the code quality level is unimportant.
I would argue strongly against that point of view. Here’s why: sure, the code doesn’t ship to customers, so customers won’t be discovering bugs in it. However, this is the code used to test the code that does ship. If the test code has a bug in it, how do we know it didn’t miss a bug that was in the shipping code? The bottom line is we don’t. The quality of test automation is critical to validating the quality of shipping code. Furthermore, just like the code that is shipped, test code has a maintenance life of its own. Good design, use of design patterns, and refactoring are just as valuable in test code as in shipping code, since someone is going to be modifying or enhancing it somewhere down the line.
For these reasons, the Microsoft.com team (as well as other teams across the company) has a virtual team of engineers focused on test automation with the goal of “increasing test automation efficacy without introducing too much process overhead”. “Increasing test automation efficacy” is extremely broad, so we’ve translated that vision into more specific objectives:
- Introduce a more planned approach to developing test automation
- Increase the Return on Investment of test automation
- Increase the quality of test automation design and code
- Promote sharing of test automation best practices
- Increase awareness of test automation code that is available and reusable
- Decrease maintenance costs of test automation code
- Ensure test automation plans are comprehensive and cover more than just functional testing (e.g., performance, security)
In order to achieve these objectives, the team’s first priority was to develop a test automation review process that would facilitate the way we create test automation code. What we came up with is a process that includes two major milestones: a test automation design review and a test automation code review.
The section below is the Test Automation Design Review Template, which is filled out by the tester designing the automation and then submitted to a review team. It helps the tester cover all the bases of a good design and document their intent, so that the reviewer can easily understand the automation design and provide feedback accordingly. I hope these documents will be helpful in your own automation review process!
Test Automation Design Review Template
Project Name:
Design Author(s):
<<section guidance>>
The purpose of this template is two-fold: first, to get you thinking strategically about your automation design for a new project or component and second, to standardize the documentation approach so it is easier for others to review. It is meant to be filled out before you start writing your automation and prior to asking others to review your design. It contains a list of template sections that will help you structure your automation design and address different aspects of it. Please note that this is only meant to spur high-level thinking about the automation design and in no way should replace the rigorous level of detail that goes into identifying specific test cases and execution scenarios. Since the purpose of filling out this document is primarily for the design review, it is not necessarily expected to be a living document that is kept up-to-date at all times.
The text enclosed in the “section guidance” tags is meant to give you guidance around understanding the purpose of each template section and also assist you in filling it out. The section guidance snippets can be deleted once the section has been completed.
<<end of section guidance>>
1. Test Projects
Questions to answer:
What projects will be created?
What is the intent of each project?
<<section guidance>>
Definition: A standard method of categorizing test code into different projects which gives structure and implies the purpose.
Required Projects:
(More detail is provided in Test Code Layout below)
1. [ProjectName]Tests - contains test methods (no shared or common libraries)
2. [ProjectName]TestLibrary - contains test library code (no test methods)
Optional Projects:
This is a list of projects that have been created for current or past releases. It is used to organize similar pieces of code that need to be separate from the two required projects above.
1. Console app
Purpose - to allow someone to quickly re-run the same test over and over (call the test from the Main.cs file's Main() routine, then simply hit F5). This also creates a sandbox where the user can experiment and modify code temporarily without having to check out any test files or libraries.
Creation - A console app is added to the solution with one simple Main.cs file. This file is checked into source control as "[Main.cs]". Each new enlistment copies it locally to a new file called Main.cs, which is kept local and never checked into source control. The console app project is set as the "Default Startup Project".
2. Web proxy library
Purpose - to abstract out complex code and code otherwise unrelated to the other projects. This allows for better organization and clear boundaries of what code does what.
3. WebPage Library or API
Purpose - to create a programmatic interface for testing a set of webpages that encapsulates common operations frequently used in a number of test cases. Then, instead of updating all test cases that commonly execute the same sequence of steps, the WebPage library wraps this functionality so that breaking changes only need to be updated in one place.
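The WebPage library idea can be sketched as follows; the page, its operations, and the login flow here are all hypothetical, chosen only to show how the wrapper centralizes a sequence that many test cases would otherwise repeat step by step:

```csharp
using System;

// Hypothetical WebPage library class: it wraps the log-in sequence so that a
// breaking change in the page flow is fixed here, in one place, rather than
// in every test case that logs in.
public class AccountPage
{
    public bool LoggedIn { get; private set; }

    public void LogIn(string user, string password)
    {
        // ... drive the real page here (HTTP requests or UI automation) ...
        LoggedIn = true;
    }

    public string OpenOrderHistory()
    {
        if (!LoggedIn)
            throw new InvalidOperationException("Must log in first.");
        // ... navigate and return something verifiable, e.g. the page title ...
        return "Order History";
    }
}

public class PageLibraryDemo
{
    public static void Main()
    {
        var page = new AccountPage();
        page.LogIn("testuser", "not-a-real-password");
        Console.WriteLine(page.OpenOrderHistory());
    }
}
```

Test cases then call `LogIn` and `OpenOrderHistory` instead of scripting the individual steps, so only this class changes when the pages do.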
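For the console app described above, a minimal Main.cs might look like this sketch; the test class and method names are placeholders, not part of the template:

```csharp
using System;

// Stand-in for a test class from the [ProjectName]Tests project; the class
// and method names are illustrative only.
class SearchPageTests
{
    public void SearchReturnsResults()
    {
        Console.WriteLine("Running SearchReturnsResults...");
        // ... exercise the feature and verify the outcome here ...
    }
}

class Program
{
    // Sandbox entry point: call the test under investigation directly,
    // set the console app as the startup project, then just hit F5.
    static void Main()
    {
        var tests = new SearchPageTests();
        tests.SearchReturnsResults();
        Console.WriteLine("Test run completed.");
    }
}
```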
<<end of section guidance>>
2. Test Code Layout
Questions to answer:
How will projects be structured in source?
<<section guidance>>
Definition: A uniform organization scheme that allows a user to quickly identify what code belongs where and how to find it. This applies to test code (code automating test cases), test libraries (common code shared by test cases), and other forms of code.
NOTE: The level of detail here will obviously vary according to what is known up front, but the more detail one can include here the better.
General guidelines:
1. Reusable test code is in a separate project from test cases
2. A folder should only have similar items in it
3. Files should be granular enough that multiple people working on the project at the same time will not need to check out the same file at the same time.
4. The schema should be used by all projects so that users will have a common framework and not have to learn new structures with each new project
Folder structure:
Main directory just for this solution, preferably off $[Project Name]\Main\Test\
Folders: One per project (See below)
Files: One or more solution files, and a ReadMe.Txt or ReadMe.Docx (containing any special instructions for layout, setting up, or using the automation).
Sample Project Layout:
[ProjectName]Tests.csproj
- [Feature1Folder]
- - [Feature1][MethodGroupName1]Tests.cs
- - [Feature1][MethodGroupName2]Tests.cs
- - [Feature1]DataDrivenTests.xls
[ProjectName]TestLibrary.csproj
- Settings.cs or App.config
- [Feature1Folder]
- - [Feature1][MethodGroupName1]Lib.cs
- - [Feature1][MethodGroupName2]Lib.cs
<<end of section guidance>>
3. Automation Architecture
<<section guidance>>
Include architecture diagram of your automation here.
Note: Required for all major releases
<<end of section guidance>>
4. Designing for Code Reuse
Questions to answer:
What existing code do you plan to leverage in your design?
What reusable functionality do you plan to contribute and how will this be shared?
What reusable methods do you plan to add to higher-level test libraries (refer to section guidance)?
How do you plan to structure your Project Test Library (one layer, two layers etc.)?
<<section guidance>>
There are 4 main levels of test code abstraction which facilitate re-use:
- Test Methods
- Project Test Library
- Customer/Adopter Test Libraries
- Shared Test Libraries
Test Methods
These are the individually implemented test cases. Any duplicate or copied code should instead be refactored and moved up to the test library.
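As a sketch of that refactoring, suppose several test methods shared the same order-creation setup; the duplicated code moves into the test library project, and the tests call it instead. All names here are illustrative:

```csharp
using System;

// Lives in [ProjectName]TestLibrary: shared setup that several test methods
// used to carry as copy-pasted code. The implementation is stubbed out.
public static class OrderTestLib
{
    // Create an order with the given number of items and return its id.
    public static int CreateTestOrder(int itemCount)
    {
        // ... a real implementation would call the product's API ...
        return 1000 + itemCount;
    }
}

// Lives in [ProjectName]Tests: the test method now calls the library
// instead of duplicating the setup.
public class OrderTests
{
    public static void SingleItemOrderIsCreated()
    {
        int orderId = OrderTestLib.CreateTestOrder(1);
        if (orderId <= 0) throw new Exception("Order creation failed.");
    }

    public static void Main()
    {
        SingleItemOrderIsCreated();
        Console.WriteLine("Passed");
    }
}
```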
Project Test Library
The Project Test Library is used for our internal testing. There can be multiple levels of abstraction within the test library itself, especially in UI automation, where there is often a logical layer and a physical layer. Functionality that would be useful beyond the internal testing process (e.g., a customer could use it to run tests) should be moved up to the next level of code abstraction, the Customer/Adopter Test Libraries.
Customer/Adopter Test Libraries
The Customer/Adopter Test Libraries can be used by other people to quickly and easily access functionality in the product from a test automation perspective. These libraries should be scrutinized more rigorously since the intent is to give them away externally. It is also beneficial to have some accompanying documentation.
Shared Test Libraries
Shared Test Libraries are automation libraries that are system- and project-agnostic. These DLLs should have no dependencies other than the .NET Framework. Any library code that is generic enough to apply to multiple projects/scenarios should be added here. For example, common utilities such as SQL helper objects and Event Log checking are prime candidates for this level of abstraction.
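A minimal sketch of a shared library method follows. Instead of a SQL or Event Log helper, it uses a simple retry helper as the illustrative example; the point is the shape of the code, which assumes nothing about any particular project and depends only on the framework:

```csharp
using System;

// Sketch of a project-agnostic shared library: no product dependencies,
// only the .NET Framework, so any test project can reference it.
public static class SharedTestLib
{
    // Retry a flaky operation a few times before failing the test.
    public static T Retry<T>(Func<T> operation, int attempts)
    {
        Exception last = null;
        for (int i = 0; i < attempts; i++)
        {
            try { return operation(); }
            catch (Exception ex) { last = ex; }
        }
        throw new Exception("Failed after " + attempts + " attempts.", last);
    }
}

public class SharedLibDemo
{
    public static void Main()
    {
        int calls = 0;
        int result = SharedTestLib.Retry<int>(() =>
        {
            calls++;
            if (calls < 3) throw new Exception("transient");
            return 42;
        }, 5);
        Console.WriteLine(result);
    }
}
```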
<<end of section guidance>>
5. Security Design Considerations
Questions to answer:
How will your design accommodate the security testing necessary for this project? (If none, please explain why…)
<<section guidance>>
The purpose of this section is to document any special design considerations in your automation that are related to testing the security of the product. Also, this section contains tips about security related precautions we need to take in order to keep our testing environment secure.
Secure Testing Environment
Personally Identifiable Information (PII) - there are times where production data must be used for testing purposes. For example, the test might require actual production-like data distribution. In cases such as this, we need to sanitize the data before importing it into our test environment.
<<end of section guidance>>
6. Performance Design Considerations
Questions to answer:
How will your design accommodate performance testing?
<<section guidance>>
Reusing Functional Tests for Performance Testing
Displaying Exception Information
We typically do functional testing from the VSTS IDE, which automatically displays information about any exceptions that occur during execution. However, we often do performance testing from the VS Command Prompt where this information is not automatically displayed. In order for the performance tester to be able to see this information without having to break into a debugger, we should create a new test method specifically for performance testing (preferably prefixed with a naming convention that identifies it as such) that wraps the existing test method that was used for functional testing. The body of the new method should have nothing but a call to the functional test method surrounded by a ‘try’ block. The ‘catch’ block should write the exception to the console (so that the perf tester knows what went wrong) and just re-throw the exception.
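A sketch of that wrapper pattern (the test names and the empty functional body are placeholders):

```csharp
using System;

public class CheckoutTests
{
    // Existing functional test (name and body are placeholders).
    public static void CheckoutTest()
    {
        // ... functional test body ...
    }

    // The wrapper described above: nothing but a call to the functional test
    // inside a try block. The catch writes the exception to the console so a
    // command-prompt perf run shows what went wrong, then re-throws so the
    // test still fails.
    public static void Perf_CheckoutTest()
    {
        try
        {
            CheckoutTest();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
            throw;
        }
    }

    public static void Main()
    {
        Perf_CheckoutTest();
        Console.WriteLine("Perf wrapper ran cleanly.");
    }
}
```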
Performance Measurement & Logging
In performance testing we are usually interested only in the actual execution time of the unit under test. Overhead such as test method setup, creation of the data needed to execute the test, and verification of expected versus actual data / state / behavior should not be included, since it would corrupt the results. If a particular measurement must include some of this overhead, the code should be written in a way that allows the performance tester to easily change where timing begins and ends.
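One way to keep that overhead out of the measurement, sketched with the framework's Stopwatch class (the unit under test here is a trivial stand-in):

```csharp
using System;
using System.Diagnostics;

public class TimingDemo
{
    public static void Main()
    {
        // Setup (not timed): create the data the unit under test needs.
        int[] data = new int[100000];
        for (int i = 0; i < data.Length; i++) data[i] = i;

        // Only the unit under test sits inside the timed region; moving the
        // StartNew/Stop calls is all it takes to change what is measured.
        Stopwatch timer = Stopwatch.StartNew();
        long sum = 0;
        foreach (int n in data) sum += n;
        timer.Stop();

        // Verification (not timed): sum of 0..99999 is 4,999,950,000.
        if (sum != 4999950000L) throw new Exception("Unexpected result.");
        Console.WriteLine("Elapsed ms: " + timer.ElapsedMilliseconds);
    }
}
```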
Extensive performance logging requirements might require a tool other than VSTS for customized logging. One suggestion is to use log4net. It allows multiple outputs such as DB, file, console etc. We may implement a wrapper for this in the future to enable asynchronous logging. Whatever the tool used, there should be a configuration switch to turn logging on or off.
SQL Server
Tables used for testing, whether functional or performance, should have the appropriate indexes. Although the performance measurement results shouldn’t be affected by, for instance, the time it takes to get test data out of a table, it could speed up execution time and therefore allow us to run tests with more load if necessary.
Other Best Practices (Append to the list whenever new information is known)
This should be self-evident, so take it as a reminder: document your code with comments that include troubleshooting information. This helps the performance tester with known ‘gotchas’ and other things that may come up.
Have one central place to edit configurable values like connection strings, SqlCommandTimeOut, I/O paths.
Be aware that test methods that use data source attributes are usually so inefficient that they cannot be reused as load tests. In other words, they are too slow to be able to generate enough load/RPS.
For data driven tests, consider buffering data at client prior to the test run. This helps avoid the overhead of going back and forth between client and data source while performance testing.
If the functional tests call ASPX pages, instrument the pages to have QueryString parameters that can be fed test data for specific scenarios during performance runs.
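The buffering suggestion above can be sketched as follows; LoadRowsFromDataSource is a hypothetical stand-in for the real data access code:

```csharp
using System;
using System.Collections.Generic;

// Sketch of client-side buffering: read all test inputs into memory once,
// before the timed run, instead of fetching a row from the data source on
// every iteration.
public class BufferedDataDrivenTest
{
    public static List<string> LoadRowsFromDataSource()
    {
        // ... a real test would hit the database or spreadsheet here ...
        return new List<string> { "input1", "input2", "input3" };
    }

    public static void Main()
    {
        // Buffer once, up front (untimed overhead).
        List<string> rows = LoadRowsFromDataSource();

        // The measured loop now touches only in-memory data.
        foreach (string row in rows)
        {
            // ... execute the test case with this input ...
            Console.WriteLine("Executed with " + row);
        }
    }
}
```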
<<end of section guidance>>
8. Process Checklist
Steps (mark each upon completion)
[ ] Prepare automation design document using template
[ ] Automation design reviewed by manager
[ ] Automation design reviewed by review team
[ ] All action items from design reviews completed
9. Tracking Info
Date    Name    Document Action (Drafted, Updated, Reviewed, etc.)
About the Authors:
Devin A. Rychetnik is currently working as a Software Development Engineer in Test II for the Windows Marketplace for Mobile team. In addition to testing, his 9 years of experience in software includes development, project management, and security. He is currently finishing a master's degree in Software Development from the Open University of England and is a certified Six Sigma Green Belt and Project Management Professional (PMP).