Basic Information

  • Authors: Jooyong Yi, Shin Hwei Tan, Sergey Mechtaev, Marcel Böhme, Abhik Roychoudhury
  • Publication: EMSE'17
  • Conclusion: In general, as traditional test-suite metrics increase, the reliability of repairs tends to increase. This trend is strongest for statement coverage. These results imply that the traditional test-suite metrics proposed for software testing can also be used in automated program repair to improve the reliability of repairs.

Interesting Points

Correlation between Mutation Testing and Automated Program Repair:

To some extent, automated program repair and mutation testing are very similar. Automated program repair can be viewed as "mutating" the original program, this time in an attempt to find a repair. As in mutation testing, mutants that fail to pass all tests in the provided test-suite are considered buggy (and hence incorrect repairs). This conceptual similarity between mutation testing and automated program repair suggests that the mutation score can plausibly measure the quality of a test-suite not only for mutation testing but also for automated program repair. Just as a higher mutation score is associated with better fault-detection ability in mutation testing, it appears plausible to associate a higher mutation score with a better ability to guide a reliable repair.

There is not only similarity but also duality between mutation testing and automated program repair. As pointed out by Weimer et al. (2013), "our confidence in mutant testing increases with the set of non-redundant mutants considered, but our confidence in the quality of a program repair increases with the set of non-redundant tests." Note that the mutation score measures the non-redundancy of killed mutants, not the non-redundancy of the tests capable of killing mutants. We introduce a new metric, the capable-tests ratio, that measures the non-redundancy of capable tests.
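
To make the two metrics concrete, below is a minimal Python sketch (not the paper's implementation) that computes a mutation score and a capable-tests ratio from a mutant-by-test kill matrix. The exact definition of the capable-tests ratio in the paper may differ in detail; here a "capable test" simply means a test that kills at least one mutant.

```python
# Illustrative sketch (not the paper's code): given a kill matrix where
# kill[m][t] == True iff test t kills mutant m, compute the mutation score
# (non-redundancy of killed mutants) and a capable-tests ratio
# (fraction of tests that kill at least one mutant).
from typing import List

def mutation_score(kill: List[List[bool]]) -> float:
    """Fraction of mutants killed by at least one test."""
    killed = sum(1 for row in kill if any(row))
    return killed / len(kill) if kill else 0.0

def capable_tests_ratio(kill: List[List[bool]]) -> float:
    """Fraction of tests that kill at least one mutant."""
    if not kill or not kill[0]:
        return 0.0
    num_tests = len(kill[0])
    capable = sum(1 for t in range(num_tests) if any(row[t] for row in kill))
    return capable / num_tests

if __name__ == "__main__":
    # Rows are mutants, columns are tests.
    kill = [
        [True,  False, False],   # mutant 0: killed only by test 0
        [True,  True,  False],   # mutant 1: killed by tests 0 and 1
        [False, False, False],   # mutant 2: survives every test
    ]
    print(mutation_score(kill))       # 2/3
    print(capable_tests_ratio(kill))  # 2/3 (test 2 kills nothing)
```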

Measuring Test-Suite Quality and Repair Quality

This paper mainly explores the correlation between test-suite quality and the quality of automated program repair (APR).

Test-suite quality is measured with traditional metrics (namely 1) statement coverage, 2) branch coverage, 3) test-suite size, and 4) mutation score) together with the newly introduced capable-tests ratio, while repair quality is measured by the regression ratio of the generated repairs.
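
As an illustration of how the coverage-based metrics could be computed from per-test coverage data, here is a small Python sketch. The data layout and names are assumptions made for this example, not the paper's tooling (for C subjects, a tool such as gcov would typically produce this kind of per-test coverage information).

```python
# Illustrative sketch: compute test-suite size and statement coverage from
# per-test coverage data. The data layout is an assumption for this example.
from typing import Dict, Set

def statement_coverage(covered_by_test: Dict[str, Set[int]],
                       all_statements: Set[int]) -> float:
    """Fraction of program statements executed by at least one test."""
    covered: Set[int] = set()
    for stmts in covered_by_test.values():
        covered |= stmts
    return len(covered & all_statements) / len(all_statements)

if __name__ == "__main__":
    all_stmts = set(range(1, 11))   # statement IDs 1..10
    per_test = {
        "t1": {1, 2, 3, 4},
        "t2": {3, 4, 5, 6, 7},
    }
    print(len(per_test))                             # test-suite size: 2
    print(statement_coverage(per_test, all_stmts))   # 0.7
```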

RQs and Results

RQ1: Is there a negative correlation between the metrics of a test-suite and the regression ratio of automatically generated repairs? In other words, are generated repairs less likely to cause regressions as test-suite metrics increase?

As the traditional test-suite metrics (statement coverage, branch coverage, test-suite size, and mutation score) increase, the regression ratio of automatically generated repairs generally decreases, showing the promise of using the traditional test-suite metrics to control the regression ratio of automatically generated repairs. The capable-tests ratio does not seem as useful as the traditional metrics in controlling the quality of generated repairs.

RQ2: Which test-suite metric is most strongly correlated with the regression ratio of automatically generated repairs?

In our experiments, statement coverage is, on average, more strongly correlated with regression ratio than other metrics we investigate. Our results suggest that to reduce the regression ratio, increasing statement coverage is more promising than improving the other test-suite metrics.
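
For intuition, the following hedged sketch shows the kind of analysis behind these answers: each sampled test suite's metric value (here, statement coverage) is paired with the regression ratio of the repair generated from it, and a rank correlation is computed over those pairs. The numbers are made up, and the paper's exact statistical procedure and correlation coefficient may differ.

```python
# Hedged illustration of a metric-vs-regression-ratio correlation analysis.
# The data points below are fabricated for demonstration only.
from scipy.stats import kendalltau

statement_coverage = [0.35, 0.48, 0.52, 0.61, 0.70, 0.82, 0.90]
regression_ratio   = [0.60, 0.55, 0.50, 0.40, 0.35, 0.20, 0.15]

tau, p_value = kendalltau(statement_coverage, regression_ratio)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")
# A strongly negative tau indicates that higher statement coverage is
# associated with a lower regression ratio, matching the trend in RQ1/RQ2.
```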

RQ3: Is there a negative correlation between the metrics of a test-suite and the repairability of automated program repair? In other words, should repairability be sacrificed in an attempt to obtain a higher-quality repair via a higher-quality test-suite?

Our experimental results are inconclusive about the correlation between test-suites and repairability. However, we note that increasing a test-suite metric does not always decrease repairability. In some subjects, positive correlations were observed between test-suite metrics and repairability, indicating that as the test-suite metrics increase, repairability tends to increase.

RQ4: Is there a negative correlation between the metrics of a test-suite and repair time? In other words, should more time be spent in an attempt to obtain a higher-quality repair via a higher-quality test-suite?

Our experimental results are inconclusive about the correlation between test-suites and repair time. However, we note that increasing a test-suite metric does not always increase repair time. In some subjects, negative correlations were observed between test-suite metrics and repair time, indicating that as the test-suite metrics increase, repair time tends to decrease.

Different Repair Algorithm: SEMFIX

Our experimental results from SEMFIX generally coincide with our findings from the GENPROG experiment, despite the differences in repair algorithms and fault localization techniques. The traditional test-suite metrics are, overall, negatively correlated with regression ratio, similar to our GENPROG experimental results. In particular, statement coverage is again shown to be most strongly correlated with regression ratio.
