ID3/C4.5/Gini Index


1 ID3

Select the attribute with the highest information gain.

1.1 Formula

Let $p_i$ be the probability that an arbitrary tuple in D belongs to class $C_i$, estimated by $|C_{i,D}|/|D|$.
Expected information needed to classify a tuple in D:
$$Info(D) = -\sum\limits_{i=1}^n p_i\log_2 p_i$$
Information needed (after using A to split D into v partitions) to classify D:
$$Info_A(D) = \sum\limits_{j=1}^v\frac{|D_j|}{|D|}Info(D_j)$$
So, information gained by branching on attribute A:
$$Gain(A) = Info(D)-Info_A(D)$$
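
As a concrete illustration, here is a minimal Python sketch of these three quantities. The function names (info, info_a, gain) and the row layout are my own choices, not from the original notes; each row is a tuple of attribute values with the class label in the last position.

```python
import math
from collections import Counter

def info(labels):
    """Info(D): expected information (entropy) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_a(rows, i):
    """Info_A(D): weighted entropy after partitioning the rows on column i."""
    n = len(rows)
    partitions = {}
    for row in rows:
        partitions.setdefault(row[i], []).append(row[-1])  # class label is the last field
    return sum(len(p) / n * info(p) for p in partitions.values())

def gain(rows, i):
    """Gain(A) = Info(D) - Info_A(D) for the attribute in column i."""
    return info([row[-1] for row in rows]) - info_a(rows, i)
```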

1.2 Example

| age   | income | student | credit_rating | buys_computer |
|-------+--------+---------+---------------+---------------|
| <=30  | high   | no      | fair          | no            |
| <=30  | high   | no      | excellent     | no            |
| 31…40 | high   | no      | fair          | yes           |
| >40   | medium | no      | fair          | yes           |
| >40   | low    | yes     | fair          | yes           |
| >40   | low    | yes     | excellent     | no            |
| 31…40 | low    | yes     | excellent     | yes           |
| <=30  | medium | no      | fair          | no            |
| <=30  | low    | yes     | fair          | yes           |
| >40   | medium | yes     | fair          | yes           |
| <=30  | medium | yes     | excellent     | yes           |
| 31…40 | medium | no      | excellent     | yes           |
| 31…40 | high   | yes     | fair          | yes           |
| >40   | medium | no      | excellent     | no            |

Class P: buys_computer = "yes"
Class N: buys_computer = "no"
The number of tuples in class P is 9, while the number in class N is 5.

So, $$Info(D) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940$$

For the tuples with age <= 30, the number of class P tuples is 2, while the number of class N tuples is 3.
So, $$Info(D_{\le 30}) = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} = 0.971$$
Similarly,
$$Info(D_{31...40}) = 0$$, $$Info(D_{>40}) = 0.971$$
Then, $$Info_{age}(D) = \frac{5}{14}Info(D_{\le 30}) + \frac{4}{14}Info(D_{31...40}) + \frac{5}{14}Info(D_{>40}) = 0.694$$
Therefore, $$Gain(age) = Info(D) - Info_{age}(D) = 0.246$$
Similarly,
$$Gain(income) = 0.029$$
$$Gain(Student) = 0.151$$
$$Gain(credit\_rating) = 0.048$$
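
As a cross-check, the following sketch recomputes all four gains directly from the table. The tuple-list layout is my own, not the original author's code; the tiny differences in the last decimal come from the text rounding $Info(D)$ and $Info_A(D)$ before subtracting.

```python
import math
from collections import Counter

# The 14 training tuples: (age, income, student, credit_rating, buys_computer)
data = [
    ("<=30",  "high",   "no",  "fair",      "no"),
    ("<=30",  "high",   "no",  "excellent", "no"),
    ("31…40", "high",   "no",  "fair",      "yes"),
    (">40",   "medium", "no",  "fair",      "yes"),
    (">40",   "low",    "yes", "fair",      "yes"),
    (">40",   "low",    "yes", "excellent", "no"),
    ("31…40", "low",    "yes", "excellent", "yes"),
    ("<=30",  "medium", "no",  "fair",      "no"),
    ("<=30",  "low",    "yes", "fair",      "yes"),
    (">40",   "medium", "yes", "fair",      "yes"),
    ("<=30",  "medium", "yes", "excellent", "yes"),
    ("31…40", "medium", "no",  "excellent", "yes"),
    ("31…40", "high",   "yes", "fair",      "yes"),
    (">40",   "medium", "no",  "excellent", "no"),
]

def info(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(rows, i):
    n = len(rows)
    parts = {}
    for row in rows:
        parts.setdefault(row[i], []).append(row[-1])
    info_a = sum(len(p) / n * info(p) for p in parts.values())
    return info([r[-1] for r in rows]) - info_a

for i, name in enumerate(["age", "income", "student", "credit_rating"]):
    print(f"Gain({name}) = {gain(data, i):.3f}")
# Prints roughly 0.247, 0.029, 0.152, 0.048; the 0.246 and 0.151 above
# come from rounding the intermediate Info values to three decimals first.
```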

1.3 Question

What if the attribute's values are continuous? How can we handle that?
1. Find the best split point for A (a Python sketch follows this list):
   - Sort the values of A in increasing order.
   - Typically, the midpoint between each pair of adjacent values is considered as a possible split point: $(a_i + a_{i+1})/2$ is the midpoint between the values of $a_i$ and $a_{i+1}$.
   - The point with the minimum expected information requirement for A is selected as the split point for A.
2. Split:
   - D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point.
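
A minimal sketch of this midpoint search, assuming a plain list of numeric attribute values paired with class labels (the function name best_split_point and the sample values are hypothetical, not from the original notes):

```python
import math
from collections import Counter

def info(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split_point(values, labels):
    """Return (split_point, expected_info) minimising the information requirement for A."""
    pairs = sorted(zip(values, labels))              # sort the values of A in increasing order
    n = len(pairs)
    best_split, best_info = None, float("inf")
    for i in range(n - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue                                 # equal adjacent values yield no new midpoint
        split = (pairs[i][0] + pairs[i + 1][0]) / 2  # midpoint (a_i + a_{i+1}) / 2
        d1 = [lab for v, lab in pairs if v <= split]  # D1: tuples with A <= split-point
        d2 = [lab for v, lab in pairs if v > split]   # D2: tuples with A > split-point
        expected = len(d1) / n * info(d1) + len(d2) / n * info(d2)
        if expected < best_info:
            best_split, best_info = split, expected
    return best_split, best_info

# hypothetical continuous "age" values with their class labels
ages   = [22, 25, 29, 32, 38, 41, 47, 52]
labels = ["no", "no", "no", "yes", "yes", "yes", "yes", "no"]
print(best_split_point(ages, labels))                # ≈ (30.5, 0.451) for this toy data
```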

2 C4.5

C4.5 is a successor of ID3. It normalizes the information gain by the split information, which reduces the bias of information gain towards attributes with many distinct values.

2.1 Formula

$$SplitInfo_A(D) = -\sum\limits_{j=1}^v\frac{|D_j|}{|D|}\times\log_2\frac{|D_j|}{|D|}$$
The gain ratio is then
$$GainRatio(A) = Gain(A)/SplitInfo_A(D)$$
The attribute with the maximum gain ratio is selected as the splitting attribute.
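
A short sketch of the gain ratio in the same row-tuple format used in the earlier examples (function names are my own):

```python
import math
from collections import Counter

def info(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, i):
    """GainRatio(A) = Gain(A) / SplitInfo_A(D) for the attribute in column i."""
    n = len(rows)
    parts = {}
    for row in rows:
        parts.setdefault(row[i], []).append(row[-1])
    info_a = sum(len(p) / n * info(p) for p in parts.values())
    split_info = -sum(len(p) / n * math.log2(len(p) / n) for p in parts.values())
    gain = info([r[-1] for r in rows]) - info_a
    return gain / split_info if split_info > 0 else 0.0
```

On the 14-tuple table above, for example, $SplitInfo_{income}(D) \approx 1.557$, so $GainRatio(income) \approx 0.029 / 1.557 \approx 0.019$.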

3 Gini Index

3.1 Formula

If a data set D contains examples from n classes, the Gini index gini(D) is defined as
$$gini(D) = 1 - \sum\limits_{j=1}^n p_j^2$$
where $p_j$ is the relative frequency of class j in D.
If a split on attribute A partitions D into v subsets $D_1, \dots, D_v$, then
$$gini_A(D) = \sum\limits_{i=1}^v\frac{|D_i|}{|D|}gini(D_i)$$
The reduction in impurity is
$$\Delta gini(A) = gini(D) - gini_A(D)$$
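
A minimal sketch of these quantities, again using the row-tuple layout from the earlier examples (names are my own, not from the original notes):

```python
from collections import Counter

def gini(labels):
    """gini(D) = 1 - sum_j p_j^2 over the class distribution of the labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_a(rows, i):
    """gini_A(D): weighted Gini index after partitioning the rows on column i."""
    n = len(rows)
    parts = {}
    for row in rows:
        parts.setdefault(row[i], []).append(row[-1])
    return sum(len(p) / n * gini(p) for p in parts.values())

def delta_gini(rows, i):
    """Reduction in impurity obtained by splitting on column i."""
    return gini([r[-1] for r in rows]) - gini_a(rows, i)

# On the 14-tuple table above, gini(D) = 1 - (9/14)^2 - (5/14)^2 ≈ 0.459.
```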

4 Summary

ID3 and C4.5 are not well suited to very large training sets, because they must repeatedly sort and scan the training data, which costs more time than many other classification algorithms.
The three measures, in general, return good results, but:
1.Information gain:
-biased towards multivalued attributes
2.Gain ratio:
-tends to prefer unbalanced splits in which one partition is much smaller than the other.
3.Gini index:
-biased towards multivalued attributes
-has difficulty when # of classes is large
-tends to favor tests that result in equal-sized partitions and purity in both partitions.

5 Other Attribute Selection Measures

1. CHAID: a popular decision tree algorithm; its measure is based on the χ² test for independence.
2. C-SEP: performs better than information gain and the Gini index in certain cases.
3. G-statistic: has a close approximation to the χ² distribution.
4. MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred):
   the best tree is the one that requires the fewest bits to both (1) encode the tree and (2) encode the exceptions to the tree.
5. Multivariate splits (partition based on combinations of multiple variables):
   CART finds multivariate splits based on a linear combination of attributes.
Which attribute selection measure is the best?
Most give good results; none is significantly superior to the others.

Author: mlhy

Created: 2015-10-06 二 14:39

Emacs 24.5.1 (Org mode 8.2.10)
