Exploratory Undersampling for Class-Imbalance Learning
Abstract - Undersampling, which uses only a subset of the majority class and is thus very efficient, is a popular method for dealing with class-imbalance problems. Its main deficiency is that many majority class examples are ignored. We propose two algorithms to overcome this deficiency. EasyEnsemble samples several subsets from the majority class, trains a learner using each of them, and combines the outputs of those learners. BalanceCascade trains the learners sequentially, where in each step, the majority class examples that are correctly classified by the currently trained learners are removed from further consideration. Experimental results show that both methods have higher Area Under the ROC Curve, F-measure, and G-mean values than many existing class-imbalance learning methods. Moreover, they have approximately the same training time as undersampling when the same number of weak classifiers is used, which is significantly faster than other methods.
Index Terms - Class-imbalance learning, data mining, ensemble learning, machine learning, undersampling
I. INTRODUCTION
In many real-world problems, the data sets are typically imbalanced, i.e., some classes have many more instances than others. The level of imbalance (the ratio of the majority class size to the minority class size) can be huge. It is noteworthy that class imbalance is emerging as an important issue in designing classifiers.
Imbalance has a serious impact on the performance of classifiers. Learning algorithms that do not consider class imbalance tend to be overwhelmed by the majority class and to ignore the minority class. For example, in a problem with an imbalance level of 99 (i.e., 99 majority class examples for every minority class example), a learning algorithm that minimizes the error rate could decide to classify all examples as the majority class, thereby achieving a low error rate of only 1%. However, every minority class example would be misclassified in this case. In problems where the imbalance level is huge, class imbalance must be carefully handled to build a good classifier.
Class imbalance is also closely related to cost-sensitive learning, another important issue in machine learning. Misclassifying a minority class instance is usually more serious than misclassifying a majority class one. For example, approving a fraudulent credit card application is more costly than declining a credible one. Breiman et al. pointed out that training set size, class priors, cost of errors in different classes, and placement of decision boundaries are all closely connected. In fact, many existing methods for dealing with class imbalance rely on connections among these four components. Sampling methods handle class imbalance by varying the minority and majority class sizes in the training set. Cost-sensitive learning deals with class imbalance by incurring different costs for the two classes and is considered an important class of methods for handling class imbalance. More details about class-imbalance learning methods are presented in Section II.
In this paper, we examine only binary classification problems, which we address by ensembling classifiers built from multiple undersampled training sets. Undersampling is an efficient method for class-imbalance learning. This method uses a subset of the majority class to train the classifier. Since many majority class examples are ignored, the training set becomes more balanced and the training process becomes faster. However, the main drawback of undersampling is that potentially useful information contained in the ignored examples is neglected. The intuition behind our proposed methods is to wisely explore these ignored data while keeping the fast training speed of undersampling.
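For concreteness, the following is a minimal sketch of plain random undersampling for a binary problem; the names (X_min, X_maj), the equal-size choice, and the 1 = minority label convention are illustrative assumptions, not specifics from the paper.

```python
import numpy as np

def undersample(X_min, X_maj, seed=0):
    """Keep every minority example and an equal-sized random majority subset."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_maj), size=len(X_min), replace=False)
    X = np.vstack([X_min, X_maj[idx]])
    y = np.hstack([np.ones(len(X_min)), np.zeros(len(X_min))])  # 1 = minority
    return X, y
```

The subsampling makes training fast and balanced, but everything in X_maj outside the sampled idx is simply discarded, which is exactly the information loss the two proposed methods try to recover.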
We propose two ways to use these data. One straightforward way is to sample several subsets independently from N (the majority class), use these subsets to train classifiers separately, and combine the trained classifiers. Another way is to use the trained classifiers to guide the sampling process for subsequent classifiers: after a number of classifiers have been trained, the majority class examples correctly classified by them are removed from N.
Experiments on 16 UCI data sets show that both methods have higher area under the receiver operating characteristic (ROC) curve (AUC), F-measure, and G-mean values than many existing class-imbalance learning methods.
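For reference, a hedged sketch of the three evaluation measures named above, using scikit-learn; here F-measure is taken as the F1 score on the minority class and G-mean as the geometric mean of the true positive and true negative rates, with 1 denoting the minority class.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

def evaluate(y_true, y_score, threshold=0.5):
    """Compute AUC from scores, plus F-measure and G-mean at a threshold."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    g_mean = np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))  # sqrt(TPR * TNR)
    return {"AUC": roc_auc_score(y_true, y_score),
            "F-measure": f1_score(y_true, y_pred),
            "G-mean": g_mean}
```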
III. EasyEnsemble AND BalanceCascade
As was shown by Drummond and Holte, undersampling is an efficient strategy for dealing with class imbalance. However, its drawback is that it discards much potentially useful data. In this section, we propose two strategies to explore the majority class examples ignored by undersampling: EasyEnsemble and BalanceCascade.
A. EasyEnsemble
Given the minority training set P and the majority training set N, EasyEnsemble is probably the most straightforward way to further exploit the majority class examples ignored by undersampling, i.e., the examples in N that fall outside the sampled subset. EasyEnsemble is shown in Algorithm 1.
Algorithm 1 The EasyEnsemble algorithm
1: {Input: A set of minority class examples P, a set of majority class examples N, |P| < |N|, the number of subsets T to sample from N, and s_i, the number of iterations used to train the AdaBoost ensemble H_i}
2: i ⇐ 0
3: repeat
4: i ⇐ i + 1
5: Randomly sample a subset N_i from N, |N_i| = |P|
6: Learn H_i using P and N_i. H_i is an AdaBoost ensemble with s_i weak classifiers h_{i,j} and corresponding weights α_{i,j}; the ensemble's threshold is θ_i, i.e., H_i(x) = sgn(Σ_{j=1}^{s_i} α_{i,j} h_{i,j}(x) − θ_i)
7: until i = T
8: Output: An ensemble H(x) = sgn(Σ_{i=1}^{T} Σ_{j=1}^{s_i} α_{i,j} h_{i,j}(x) − Σ_{i=1}^{T} θ_i)
The idea behind EasyEnsemble is simple. Similar to Balanced Random Forests, EasyEnsemble generates T balanced subproblems; the output of the i-th subproblem is the AdaBoost ensemble H_i.
The output of EasyEnsemble is a single ensemble, but it looks like an "ensemble of ensembles". It is known that boosting mainly reduces bias, while bagging mainly reduces variance. Several works combine different ensemble strategies to achieve stronger generalization. MultiBoosting combines boosting with bagging/wagging by using boosted ensembles as base learners. Stochastic Gradient Boosting and the Cocktail Ensemble also combine different ensemble strategies. It is evident that EasyEnsemble benefits from the combination of boosting with a bagging-like strategy that has a balanced class distribution.
Both EasyEnsemble and Balanced Random Forests try to use balanced bootstrap samples; however, the former uses the samples to generate boosted ensembles, while the latter uses them to train decision trees randomly. Costing also uses multiple samples of the original training set, but it was initially proposed as a cost-sensitive learning method, while EasyEnsemble is proposed to deal with class imbalance directly. Moreover, the working style of EasyEnsemble is quite different from that of costing. Costing samples the examples with probability in proportion to their costs (rejection sampling). Since this is a probability-based sampling method, no positive example is guaranteed to appear in all the samples (in fact, the probability of a positive example appearing in all the samples is small). In EasyEnsemble, by contrast, all the positive examples definitely appear in every sample. When the minority class is very small, it is important to utilize every minority class example.
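To make Algorithm 1 concrete, here is a minimal sketch in Python, assuming scikit-learn's AdaBoostClassifier as the boosted learner; the paper combines weak classifiers by summing their weighted outputs minus the thresholds, whereas this sketch averages predicted minority-class probabilities as a simpler stand-in.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def easy_ensemble(X_min, X_maj, T=4, n_weak=10, seed=0):
    """Train T AdaBoost ensembles, each on all minority examples plus an
    equally sized random subset of the majority class (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    ensembles = []
    for i in range(T):
        idx = rng.choice(len(X_maj), size=len(X_min), replace=False)
        X = np.vstack([X_min, X_maj[idx]])
        y = np.hstack([np.ones(len(X_min)), np.zeros(len(X_min))])
        ensembles.append(
            AdaBoostClassifier(n_estimators=n_weak, random_state=seed + i).fit(X, y))
    return ensembles

def easy_ensemble_predict(ensembles, X):
    # The paper sums the weak learners' weighted outputs minus the thresholds;
    # averaging predicted minority-class probabilities is a simpler stand-in.
    scores = np.mean([h.predict_proba(X)[:, 1] for h in ensembles], axis=0)
    return (scores >= 0.5).astype(int)
```

Note that every minority example appears in all T training sets, matching the point made above about utilizing the scarce minority class.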
B. BalanceCascade
EasyEnsemble is an unsupervised strategy to explore N, since its subsets are sampled independently of how the classifiers perform. Our second algorithm, BalanceCascade, explores N in a supervised manner: once some classifiers have been trained, a majority class example they already classify correctly is somewhat redundant, so it can be removed from further consideration. BalanceCascade is shown in Algorithm 2.
Algorithm 2 The BalanceCascade algorithm
1: {Input: A set of minority class examples P, a set of majority class examples N, |P| < |N|, the number of subsets T to sample from N, and s_i, the number of iterations used to train the AdaBoost ensemble H_i}
2: i ⇐ 0, f ⇐ (|P|/|N|)^{1/(T−1)}, where f is the false positive rate (the rate of misclassifying a majority class example into the minority class) that each H_i should achieve
3: repeat
4: i ⇐ i + 1
5: Randomly sample a subset N_i from N, |N_i| = |P|
6: Learn H_i using P and N_i. H_i is an AdaBoost ensemble with s_i weak classifiers h_{i,j} and corresponding weights α_{i,j}; the ensemble's threshold is θ_i, i.e., H_i(x) = sgn(Σ_{j=1}^{s_i} α_{i,j} h_{i,j}(x) − θ_i)
7: Adjust θ_i such that H_i's false positive rate is f
8: Remove from N all examples that are correctly classified by H_i
9: until i = T
10: Output: A single ensemble H(x) = sgn(Σ_{i=1}^{T} Σ_{j=1}^{s_i} α_{i,j} h_{i,j}(x) − Σ_{i=1}^{T} θ_i)
This method is called BalanceCascade since it is somewhat similar to the cascade classifier. The majority training set N is shrunk after every node H_i is trained, and every node of the cascade deals with a balanced subproblem (|N_i| = |P|).
BalanceCascade is similar to EasyEnsemble in structure. The main difference between them lies in lines 7 and 8 of Algorithm 2. Line 8 removes the correctly classified majority class examples from N, while line 7 adjusts the ensemble's threshold so that its false positive rate on the current N is exactly f. Since f^{T−1} = |P|/|N|, after T−1 rounds the number of remaining majority class examples is |N| · f^{T−1} = |P|, so the final round trains on what is left of N.
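A minimal sketch of Algorithm 2 along the same lines, again assuming scikit-learn's AdaBoostClassifier and T ≥ 2; the threshold adjustment of line 7 is approximated here by taking a quantile of each node's scores on the remaining majority set, which is one simple way to hit a target false positive rate.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def balance_cascade(X_min, X_maj, T=4, n_weak=10, seed=0):
    """Train T nodes; after each node, drop the majority examples it already
    classifies correctly, keeping only its false positives (Algorithm 2)."""
    rng = np.random.default_rng(seed)
    f = (len(X_min) / len(X_maj)) ** (1.0 / (T - 1))  # target false positive rate
    nodes, maj = [], np.asarray(X_maj).copy()
    for i in range(T):
        idx = rng.choice(len(maj), size=min(len(maj), len(X_min)), replace=False)
        X = np.vstack([X_min, maj[idx]])
        y = np.hstack([np.ones(len(X_min)), np.zeros(len(idx))])
        h = AdaBoostClassifier(n_estimators=n_weak, random_state=seed + i).fit(X, y)
        # Approximate line 7: pick theta_i so that a fraction f of the
        # remaining majority examples still scores above it.
        scores = h.predict_proba(maj)[:, 1]
        theta = np.quantile(scores, 1.0 - f)
        nodes.append((h, theta))
        if i < T - 1:
            maj = maj[scores > theta]  # line 8: keep only the false positives
    return nodes
```

With f chosen as above, each round keeps roughly a fraction f of the surviving majority examples, so the supervised shrinking of N follows the schedule described in the text.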