Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood
Abstract
Bayesian networks are a powerful probabilistic representation, and their use for classification has received considerable attention. However, they tend to perform poorly when learned in the standard way. This is attributable to a mismatch between the objective function used (likelihood or a function thereof) and the goal of classification (maximizing accuracy or conditional likelihood). Unfortunately, the computational cost of optimizing structure and parameters for conditional likelihood is prohibitive. In this paper we show that a simple approximation – choosing structures by maximizing conditional likelihood while setting parameters by maximum likelihood – yields good results. On a large suite of benchmark datasets, this approach produces better class probability estimates than naïve Bayes, TAN, and generatively-trained Bayesian networks.
1. Introduction
The simplicity and surprisingly high accuracy of the naïve Bayes classifier have led to its wide use, and to many attempts to extend it. In particular, naïve Bayes is a special case of a Bayesian network, and learning the structure and parameters of an unrestricted Bayesian network would appear to be a logical means of improvement. However, Friedman et al. found that naive Bayes easily outperforms such unrestricted Bayesian network classifiers on a large sample of benchmark datasets. Their explanation was that the scoring functions used in standard Bayesian network learning attempt to optimize the likelihood of the entire data, rather than just the conditional likelihood of the class given the attributes. Such scoring results in suboptimal choices during the search process whenever the two functions favor differing changes to the network. The natural solution would then be to use conditional likelihood as the objective function. Unfortunately, Friedman et al. observed that, while maximum likelihood parameters can be efficiently computed in closed form, this is not true of conditional likelihood. The latter must be optimized using numerical methods, and doing so at each search step would be prohibitively expensive. Friedman et al. thus abandoned this avenue, leaving the investigation of possible heuristic alternatives to it as an important direction for future research. In this paper, we show that the simple heuristic of setting the parameters by maximum likelihood while choosing the structure by conditional likelihood is accurate and efficient.
Friedman et al. chose instead to extend naive Bayes by allowing a slightly less restricted structure (one parent per variable in addition to the class) while still optimizing likelihood. They showed that TAN, the resulting algorithm, was indeed more accurate than naive Bayes on benchmark datasets. We compare our algorithm to TAN and naive Bayes on the same datasets, and show that it outperforms both in the accuracy of class probability estimates, while outperforming naive Bayes and tying TAN in classification error.
2. Bayesian Networks
A Bayesian network encodes the joint probability distribution of a set of discrete variables as a directed acyclic graph, in which each node corresponds to a variable, together with a set of conditional probability tables (CPTs), one per variable, specifying the distribution of that variable given its parents.
2.1 Learning Bayesian Networks
Given an i.i.d. training set, the goal of learning is to find the Bayesian network structure and parameters that best represent the joint distribution of the data.
When the structure of the network is known, this reduces to estimating the CPT parameters, and the maximum-likelihood estimates are simply the observed frequencies of each variable's values given its parents' values in the training data.
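As a concrete illustration of this (not code from the paper), the sketch below estimates a single CPT by maximum likelihood, i.e. as relative frequency counts of the child's values under each parent configuration; the data layout and function names are assumptions made for the example.

```python
from collections import Counter
from itertools import product

def ml_cpt(data, child, parents, arity):
    """Maximum-likelihood CPT for `child` given `parents`.

    data    : list of dicts mapping variable name -> discrete value (0..arity-1)
    child   : name of the child variable
    parents : list of parent variable names
    arity   : dict mapping variable name -> number of possible values
    returns : dict mapping (parent_config, child_value) -> probability
    """
    counts = Counter()
    for row in data:
        config = tuple(row[p] for p in parents)
        counts[(config, row[child])] += 1

    cpt = {}
    for config in product(*(range(arity[p]) for p in parents)):
        total = sum(counts[(config, v)] for v in range(arity[child]))
        for v in range(arity[child]):
            # ML estimate is the observed relative frequency; fall back to a
            # uniform distribution for parent configurations never observed.
            cpt[(config, v)] = counts[(config, v)] / total if total else 1.0 / arity[child]
    return cpt

# Example: estimate P(x2 | x1) from four training examples.
data = [{"x1": 0, "x2": 1}, {"x1": 0, "x2": 1}, {"x1": 0, "x2": 0}, {"x1": 1, "x2": 0}]
print(ml_cpt(data, "x2", ["x1"], {"x1": 2, "x2": 2}))
```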
Since adding an arc never decreases the maximum likelihood achievable on the training data, using the log likelihood as the scoring function can lead to severe overfitting. This problem can be overcome in a number of ways. The simplest one, which is often surprisingly effective, is to limit the number of parents a variable can have. Another alternative is to add a complexity penalty to the log likelihood: for example, the MDL method selects the structure S minimizing MDL(S|D) = (log m / 2)|S| − LL(S|D), where |S| is the number of free parameters in the network, m is the number of training examples, and LL(S|D) is the log likelihood of the data D under S with maximum-likelihood parameters. A third option is the fully Bayesian approach, which maximizes the Bayesian Dirichlet (BD) score, the posterior probability of the structure obtained by integrating out the parameters under Dirichlet priors. In the first two approaches, the parameters of each candidate structure are set to their maximum-likelihood values, as in the known-structure case.
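A minimal sketch of the penalized score described above, assuming the (log m)/2-per-parameter penalty and treating the log likelihood as already computed; the helper names and example numbers are illustrative, not from the paper.

```python
import math

def num_parameters(arity, parents_of):
    """Free parameters |S|: each variable contributes (arity - 1) probabilities
    for every joint configuration of its parents."""
    return sum((arity[v] - 1) * math.prod(arity[p] for p in parents_of[v])
               for v in parents_of)

def mdl_score(log_likelihood, arity, parents_of, m):
    """MDL(S|D) = (log m / 2) * |S| - LL(S|D); smaller is better."""
    return 0.5 * math.log(m) * num_parameters(arity, parents_of) - log_likelihood

# Example: naive Bayes over a binary class y and two binary attributes,
# with a made-up training log likelihood of -350.0 on m = 500 examples.
parents_of = {"y": [], "x1": ["y"], "x2": ["y"]}
arity = {"y": 2, "x1": 2, "x2": 2}
print(mdl_score(-350.0, arity, parents_of, m=500))
```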
2.2 Bayesian Network Classifiers
The goal of classification is to correctly predict the value of a designated discrete class variable given a vector of predictors or attributes.
The naïve Bayes classifier is a Bayesian network where the class has no parents and each attribute has the class as its sole parent. Friedman et al.'s TAN algorithm uses a variant of the Chow and Liu method to produce a network where each attribute has at most one other parent in addition to the class. More generally, a Bayesian network learned using any of the methods described above can be used as a classifier. All of these are generative models in the sense that they are learned by maximizing the log likelihood of the entire data being generated by the model, LL(S|D) = Σ_d log P_S(y_d, x_d), where y_d is the class and x_d the attribute vector of the d-th example.
Since the goal is to predict the class given the attributes, a more direct objective is the conditional log likelihood, CLL(S|D) = Σ_d log P_S(y_d | x_d).
Notice that optimizing this objective would be a form of discriminative learning, because it would focus on correctly discriminating between classes. The problem with this approach is that, unlike LL(S|D), CLL(S|D) does not decompose into a separate term for each variable, and no closed-form solution for the maximizing parameters is known; they must instead be found by numerical optimization.
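To make the distinction concrete, here is a small sketch (not from the paper) contrasting the two objectives; it assumes a fitted model exposing a joint_prob(y, x) method that returns P_S(y, x), and obtains the conditional probability by normalizing the joint over all class values.

```python
import math

def log_likelihood(model, data):
    """Generative objective: LL(S|D) = sum_d log P(y_d, x_d)."""
    return sum(math.log(model.joint_prob(y, x)) for x, y in data)

def conditional_log_likelihood(model, data, classes):
    """Discriminative objective: CLL(S|D) = sum_d log P(y_d | x_d),
    where P(y | x) = P(y, x) / sum_c P(c, x)."""
    cll = 0.0
    for x, y in data:
        evidence = sum(model.joint_prob(c, x) for c in classes)
        cll += math.log(model.joint_prob(y, x) / evidence)
    return cll
```

The normalization over class values is what couples the parameters across variables and removes the cheap closed-form frequency estimates that make maximum likelihood so convenient.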
3. The BNC Algorithm
We now introduce BNC, an algorithm for learning the structure of a Bayesian network classifier by maximizing conditional likelihood. BNC is similar to the hill climbing algorithm of Heckerman et al. except that it uses the conditional log likelihood of the class as the primary objective function. BNC starts from an empty network, and at each step considers adding each possible new arc (i.e., all those that do not create cycles) and deleting or reversing each current arc. BNC pre-discretizes continuous values and ignores missing values in the same way that TAN does.
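The following is a schematic sketch of that search loop, not the authors' implementation: it starts from the empty network, generates every legal one-arc change, refits maximum-likelihood parameters for each candidate, and keeps the change that most improves the conditional log likelihood. The helper functions (legal_neighbors, fit_ml_parameters, conditional_log_likelihood) are assumed to be supplied by the caller.

```python
def bnc_search(variables, data, classes,
               legal_neighbors, fit_ml_parameters, conditional_log_likelihood):
    """Greedy hill climbing over structures scored by conditional log likelihood (CLL).

    legal_neighbors(structure)         -> candidate structures reachable by one
                                          acyclic arc addition, deletion, or reversal
    fit_ml_parameters(structure, data) -> maximum-likelihood CPTs for the structure
    conditional_log_likelihood(structure, params, data, classes) -> CLL score
    """
    structure = {v: [] for v in variables}          # start from the empty network
    params = fit_ml_parameters(structure, data)
    best_score = conditional_log_likelihood(structure, params, data, classes)

    while True:
        best_move = None
        for candidate in legal_neighbors(structure):
            cand_params = fit_ml_parameters(candidate, data)
            score = conditional_log_likelihood(candidate, cand_params, data, classes)
            if score > best_score:                  # keep only strict improvements
                best_move, best_score = (candidate, cand_params), score
        if best_move is None:                       # local optimum of CLL reached
            return structure, params
        structure, params = best_move
```

Refitting maximum-likelihood parameters for every candidate is cheap because the counts decompose per variable, which is exactly what makes this heuristic tractable compared with refitting parameters by conditional likelihood at every search step.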
We consider two versions of BNC. The first avoids overfitting by limiting the number of parents each variable can have, scoring each candidate structure by the conditional log likelihood with parameters set to their maximum-likelihood values.
The second version instead avoids overfitting by adding an MDL-style complexity penalty to the conditional log likelihood, so that structures are scored by the penalized conditional log likelihood.
The goal of both versions is the same: to learn a structure that yields accurate class probability estimates and low classification error.