The Baum-Welch algorithm is commonly used for training Hidden Markov Models because of its numerical stability and its guaranteed convergence to a locally optimal Maximum Likelihood Estimate of the model parameters, even in the presence of incomplete training data. Apache Mahout currently has only a sequential implementation of Baum-Welch, which cannot scale to train over large data sets. This restriction reduces the quality of training and constrains the generalization of the learned model when it is used for prediction.
This project proposes to extend Mahout's Baum-Welch trainer to a parallel, distributed version using the MapReduce programming framework, enabling model fitting over large data sets.

Detailed Description:

Hidden Markov Models (HMMs) are widely used as a probabilistic inference tool for applications generating temporal or spatial sequential data. Their relative simplicity of implementation, combined with their ability to discover latent domain knowledge, has made them very popular in diverse fields such as DNA sequence alignment, gene discovery, handwriting analysis, voice recognition, computer vision, language translation and part-of-speech tagging.

An HMM is defined as a tuple (S, O, Theta), where S is a finite set of unobservable, hidden states emitting symbols from a finite observable vocabulary set O according to a probabilistic model Theta. The parameters of the model Theta are given by the tuple (A, B, Pi), where A is a |S| x |S| stochastic transition matrix over the hidden states; its elements a_(i,j) specify the probability of transitioning from state i to state j. B is a |S| x |O| stochastic symbol emission matrix whose elements b_(s,o) give the probability that symbol o is emitted from hidden state s. The elements pi_(s) of the length-|S| vector Pi give the probability that the system starts in hidden state s. The transitions between hidden states are unobservable and satisfy the Markov property of memorylessness. A minimal sketch of this representation is given below.
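
To make the notation concrete, here is a minimal sketch of how the tuple (A, B, Pi) can be represented in plain Java. The class and field names (SimpleHmmModel, a, b, pi) are illustrative assumptions for this proposal and do not refer to Mahout's actual HmmModel API.

    /**
     * Minimal illustrative container for the HMM parameters Theta = (A, B, Pi).
     * The class and field names are hypothetical and do not refer to Mahout's HmmModel API.
     */
    public class SimpleHmmModel {
      /** A: |S| x |S| stochastic transition matrix, a[i][j] = P(next state j | current state i). */
      public final double[][] a;
      /** B: |S| x |O| stochastic emission matrix, b[s][o] = P(symbol o | hidden state s). */
      public final double[][] b;
      /** Pi: length-|S| initial distribution, pi[s] = P(system starts in hidden state s). */
      public final double[] pi;

      public SimpleHmmModel(double[][] a, double[][] b, double[] pi) {
        this.a = a;
        this.b = b;
        this.pi = pi;
      }
    }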

Rabiner [1] defined three main problems for HMMs:

1. Evaluation: Given the complete model (S, O, Theta) and an observation sequence, determine the probability that the model generated the observed sequence. This is useful for evaluating the quality of the model and is solved using the so-called Forward algorithm (a sketch follows this list).

2. Decoding: Given the complete model (S, O, Theta) and an observation sequence, determine the most likely hidden state sequence that generated the observed sequence. This can be viewed as an inference problem where the model and observed sequence are used to predict the values of the unobservable random variables. The Viterbi algorithm is used for predicting the hidden state sequence.

3. Training: Given the set of hidden states S, the observation vocabulary O and an observation sequence, determine the parameters (A, B, Pi) of the model Theta. This problem can be viewed as a statistical machine learning problem of fitting a model to a large set of training data. The Baum-Welch (BW) algorithm (also called the Forward-Backward algorithm) and the Viterbi training algorithm are commonly used for model fitting.
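
As an illustration of the evaluation problem, here is a minimal sketch of the Forward algorithm over the plain-array model sketched above. It is left unscaled for clarity; a numerically stable implementation (such as Mahout's) would rescale or log-transform the alpha values at each step. All names are illustrative.

    /** Illustrative, unscaled Forward algorithm for the evaluation problem. */
    public final class ForwardSketch {

      /** Returns P(obs | model), where obs holds indices into the observation vocabulary O. */
      public static double observationLikelihood(SimpleHmmModel m, int[] obs) {
        int numStates = m.pi.length;
        // Initialization: alpha_1(i) = pi_i * b_i(o_1)
        double[] alpha = new double[numStates];
        for (int i = 0; i < numStates; i++) {
          alpha[i] = m.pi[i] * m.b[i][obs[0]];
        }
        // Induction: alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
        for (int t = 1; t < obs.length; t++) {
          double[] next = new double[numStates];
          for (int j = 0; j < numStates; j++) {
            double sum = 0.0;
            for (int i = 0; i < numStates; i++) {
              sum += alpha[i] * m.a[i][j];
            }
            next[j] = sum * m.b[j][obs[t]];
          }
          alpha = next;
        }
        // Termination: P(O | Theta) = sum_i alpha_T(i)
        double likelihood = 0.0;
        for (int i = 0; i < numStates; i++) {
          likelihood += alpha[i];
        }
        return likelihood;
      }

      private ForwardSketch() {}
    }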

In general, the quality of HMM training can be improved by employing large training sets, but Mahout currently supports only sequential versions of the HMM trainers, which cannot scale. Of the Viterbi and Baum-Welch training methods, the Baum-Welch algorithm is the more accurate and the better candidate for a parallel implementation, for two reasons:

(1) The BW algorithm is numerically stable and guarantees convergence to a locally optimal Maximum Likelihood Estimate (MLE) of the model parameters Theta. In Viterbi training, the MLE is approximated in order to reduce computation time.


(2) The BW algorithm belongs to the general class of Expectation Maximization (EM) algorithms, which fit naturally into the MapReduce framework
[2], as demonstrated by the existing MapReduce implementation of k-means in Mahout.

Hence, this project proposes to extend Mahout's current sequential implementation of the Baum-Welch HMM trainer to a scalable, distributed version. Since the distributed BW will use the sequential implementations of the Forward and Backward algorithms to compute the alpha and beta factors in each iteration, much of the existing HMM training code will be reused. The parallel formulation of the BW algorithm on MapReduce is described in detail in
[3], which treats it as a specific case of the Expectation-Maximization algorithm; that formulation will be followed in this project.

The BW EM algorithm iteratively refines the model's parameters and consists of two distinct steps in each iteration: Expectation and Maximization. In the distributed case, the Expectation step is computed by the mappers and the reducers, while the Maximization step is handled by the reducers. Starting from an initial Theta^(0), in each iteration i the model parameter tuple Theta^(i) is input to the algorithm, and the result Theta^(i+1) is fed to the next iteration i+1. The iteration stops when a user-specified convergence condition, expressed as a fixed point, is met or when the number of iterations exceeds a user-defined limit. A minimal sketch of this outer loop is given below.
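
The following is a minimal sketch of that outer loop, assuming the SimpleHmmModel container from above. The Iteration interface, the class name, and the convergence test are illustrative stand-ins for the actual MapReduce job submission, HDFS I/O, and stopping criterion, which are not specified here.

    /** Illustrative outer EM loop of the distributed Baum-Welch trainer. */
    public final class BaumWelchLoopSketch {

      /** Hypothetical per-iteration contract: submit one MapReduce job and return Theta^(i+1). */
      public interface Iteration {
        SimpleHmmModel run(SimpleHmmModel current, int iteration) throws Exception;
      }

      public static SimpleHmmModel train(SimpleHmmModel initialModel, Iteration iteration,
                                         int maxIterations, double epsilon) throws Exception {
        SimpleHmmModel current = initialModel;
        for (int i = 0; i < maxIterations; i++) {
          // One MapReduce job per iteration: E step in mappers/combiners/reducers, M step in reducers.
          SimpleHmmModel updated = iteration.run(current, i);
          // Fixed-point style convergence test on the transition parameters.
          if (maxAbsDifference(current, updated) < epsilon) {
            return updated;
          }
          current = updated;
        }
        return current;  // iteration budget exhausted
      }

      /** Largest absolute change in any transition probability between two successive models. */
      private static double maxAbsDifference(SimpleHmmModel oldModel, SimpleHmmModel newModel) {
        double max = 0.0;
        for (int i = 0; i < oldModel.a.length; i++) {
          for (int j = 0; j < oldModel.a[i].length; j++) {
            max = Math.max(max, Math.abs(oldModel.a[i][j] - newModel.a[i][j]));
          }
        }
        return max;
      }

      private BaumWelchLoopSketch() {}
    }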

The Expectation step computes the posterior probability of each latent variable for each observed variable, weighted by the relative frequency of the observed variable in the input split. The mappers process independent training instances and emit expected state transition and emission counts using the Forward and Backward algorithms. The reducers finish the Expectation step by aggregating the expected counts. The input to a mapper consists of (k, v_o) pairs, where k is a unique key and v_o is a string of observed symbols. For each training instance, the mappers emit the same set of keys, corresponding to the following three optimization problems to be solved during the Maximization step, with the associated expected counts packed into a hash-map as the value (a sketch of the per-sequence computation follows this list):

(1) Expected number of times each hidden state occurs at the start of a sequence (for Pi).

(2) Expected number of times each observable symbol is emitted by each hidden state (for B).

(3) Expected number of transitions between each pair of hidden states (for A).
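
A minimal sketch of the per-sequence expected-count computation is given below. It assumes the unscaled Forward and Backward algorithms have already produced the alpha and beta tables for one training instance; the Hadoop Mapper wiring (keys, hash-map values, Context.write calls) is omitted, and all names are illustrative.

    /** Illustrative per-sequence E-step: expected counts that a mapper would emit. */
    public final class ExpectationSketch {

      /**
       * Accumulates the expected counts for one training sequence into the three output arrays.
       * alpha[t][i] and beta[t][i] are assumed to come from the unscaled Forward and Backward
       * algorithms (with beta[T-1][i] = 1).
       */
      public static void accumulateExpectedCounts(SimpleHmmModel m, int[] obs,
                                                  double[][] alpha, double[][] beta,
                                                  double[] piCounts,      // counts for Pi
                                                  double[][] emitCounts,  // counts for B
                                                  double[][] transCounts) // counts for A
      {
        int numStates = m.pi.length;
        int lastT = obs.length - 1;
        // Sequence likelihood P(O | Theta), used to normalize the posteriors.
        double likelihood = 0.0;
        for (int i = 0; i < numStates; i++) {
          likelihood += alpha[lastT][i];
        }
        for (int t = 0; t <= lastT; t++) {
          for (int i = 0; i < numStates; i++) {
            // gamma_t(i): posterior probability of being in state i at time t.
            double gamma = alpha[t][i] * beta[t][i] / likelihood;
            if (t == 0) {
              piCounts[i] += gamma;            // contributes to Pi
            }
            emitCounts[i][obs[t]] += gamma;    // contributes to B
            if (t < lastT) {
              for (int j = 0; j < numStates; j++) {
                // xi_t(i, j): posterior probability of the transition i -> j at time t.
                double xi = alpha[t][i] * m.a[i][j] * m.b[j][obs[t + 1]] * beta[t + 1][j] / likelihood;
                transCounts[i][j] += xi;       // contributes to A
              }
            }
          }
        }
      }

      private ExpectationSketch() {}
    }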

The M step computes the updated Theta^(i+1) from the values generated during the E step. This involves aggregating the values (as hash-maps) for each key corresponding to one of the optimization problems. The aggregation summarizes the statistics needed to compute a subset of the parameters for the next EM iteration. The updated parameters are obtained by normalizing each expected count by the total expected count for the corresponding state, i.e., by computing relative frequencies. The parameters emitted by each reducer are written to HDFS and fed to the mappers in the next iteration. A minimal sketch of this normalization is given below.
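
The following sketch shows that normalization, again on plain arrays and with illustrative names only; it assumes the expected counts have already been aggregated across all training instances.

    /** Illustrative M-step: turning aggregated expected counts into Theta^(i+1). */
    public final class MaximizationSketch {

      /** Normalizes the globally aggregated expected counts into updated model parameters. */
      public static SimpleHmmModel maximize(double[] piCounts, double[][] emitCounts,
                                            double[][] transCounts) {
        return new SimpleHmmModel(normalizeRows(transCounts),  // new A
                                  normalizeRows(emitCounts),   // new B
                                  normalize(piCounts));        // new Pi
      }

      /** Relative frequencies per state: divide each row entry by the row total. */
      private static double[][] normalizeRows(double[][] counts) {
        double[][] result = new double[counts.length][];
        for (int i = 0; i < counts.length; i++) {
          result[i] = normalize(counts[i]);
        }
        return result;
      }

      private static double[] normalize(double[] counts) {
        double total = 0.0;
        for (double c : counts) {
          total += c;
        }
        double[] result = new double[counts.length];
        for (int i = 0; i < counts.length; i++) {
          result[i] = total > 0.0 ? counts[i] / total : 0.0;
        }
        return result;
      }

      private MaximizationSketch() {}
    }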

The project can be subdivided into distinct tasks of programming, testing and documenting the driver, mapper, combiner and reducer, with the Expectation and Maximization steps split between them. For each of these tasks, a new class will be programmed, unit tested and documented within the org.apache.mahout.classifier.sequencelearning.hmm package. Since k-means is also an EM-style algorithm, particular attention will be paid to its code at each step for possible reuse.

A list of milestones, associated deliverables and high-level implementation details is given below.

Timeline: April 26 - August 15.

Milestones:

April 26 - May 22 (4 weeks): Pre-coding stage. Open communication with my mentor, refine the project's plan and requirements, understand the community's code styling requirements, and expand my knowledge of Hadoop and Mahout internals. Thoroughly familiarize myself with the classes within the classifier.sequencelearning.hmm, clustering.kmeans, common, vectorizer and math packages.

May 23 - June 3 (2 weeks): Work on the Driver. Implement, test and document the HmmDriver class by extending the AbstractJob class and reusing code from the KMeansDriver class.

June 3 - July 1 (4 weeks): Work on the Mapper. Implement, test and document the HmmMapper class. The HmmMapper class will include setup() and map() methods. The setup() method will read in the HmmModel and the parameter values obtained from the previous iteration. The map() method will call HmmAlgorithms.forwardAlgorithm() and HmmAlgorithms.backwardAlgorithm() and partially complete the Expectation step.

July 1 - July 15 (2 weeks): Work on the Reducer. Implement, test and document the HmmReducer class. The reducer will complete the Expectation step by summing over all the occurrences emitted by the mappers for the three optimization problems. Reuse code from the HmmTrainer.trainBaumWelch() method if possible. Also, mid-term review.

July 15 - July 29 (2 weeks): Work on the Combiner. Implement, test and document the HmmCombiner class. The combiner will reduce network traffic and improve efficiency by aggregating the values for each of the three keys corresponding to the optimization problems solved during the Maximization stage in the reducers (a sketch of this aggregation follows). Look at the possibility of reusing code from the KMeansCombiner class.
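
A minimal sketch of the kind of aggregation the combiner would perform is given below. It uses plain java.util maps with string keys purely for illustration; the actual implementation would operate on Hadoop Writable types, and the key encoding shown here is an assumption, not Mahout's format.

    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative combiner-style aggregation of partial expected-count maps. */
    public final class CountAggregationSketch {

      /**
       * Element-wise summation of the partial count maps emitted by the mappers on one node
       * for a single key, e.g. maps keyed by an encoded (state, symbol) pair.
       */
      public static Map<String, Double> aggregate(Iterable<Map<String, Double>> partialCounts) {
        Map<String, Double> combined = new HashMap<String, Double>();
        for (Map<String, Double> partial : partialCounts) {
          for (Map.Entry<String, Double> entry : partial.entrySet()) {
            Double previous = combined.get(entry.getKey());
            combined.put(entry.getKey(),
                         previous == null ? entry.getValue() : previous + entry.getValue());
          }
        }
        return combined;
      }

      private CountAggregationSketch() {}
    }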

July 29 - August 15 (2 weeks): Final touches. Test the mapper, reducer, combiner and driver together. Give an example demonstrating the new parallel BW algorithm using the part-of-speech tagging data set also used by the sequential BW
[4]. Tidy up the code, fix loose ends, and finish the wiki documentation.

Additional Information:

I am in the final stages of finishing my Master's degree in Electrical and Computer Engineering at the University of Massachusetts Amherst. Working under the guidance of Prof. Wayne Burleson as part of my Master's research, I have applied the theory of Markov Decision Processes (MDPs) to increase the duration of service of mobile computers. This semester I am involved with two course projects involving machine learning over large data sets. In the Bioinformatics class, I am mining the RCSB Protein Data Bank
[5] to learn the dependence of side chain geometry on a protein's secondary structure, and comparing it with the Dynamic Bayesian Network approach used in
[6]. In another project, for the Online Social Networks class, I am using reinforcement learning to build an online recommendation system by reformulating MDP optimal policy search as an EM problem
[7] and employing MapReduce (extending Mahout) to solve it in a scalable, distributed manner.


I owe much to the open source community, as all my research experiments have only been possible thanks to freely available Linux distributions, performance analyzers, scripting languages and associated documentation. Since joining the Apache Mahout developer mailing list a few weeks ago, I have found the community extremely vibrant, helpful and welcoming. If selected, I feel that the GSoC 2011 project will be a great learning experience for me from both a technical and a professional standpoint, and will also allow me to contribute within my modest means to the overall spirit of open source programming and machine learning.

References:

[1] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, Vol. 77 (1989), pp. 257-286.

[2] C. T. Chu, S. K. Kim, Y. A. Lin, Y. Yu, G. R. Bradski, A. Y. Ng, and K. Olukotun, "Map-Reduce for Machine Learning on Multicore," NIPS (2006), pp. 281-288.

[3] J. Lin and C. Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool, 2010.

[4] http://flexcrfs.sourceforge.net/#Case_Study

[5] http://www.rcsb.org/pdb/home/home.do

[6] T. Harder, W. Boomsma, M. Paluszewski, J. Frellsen, K. E. Johansson, and T. Hamelryck, "Beyond rotamers: a generative, probabilistic model of side chains in proteins," BMC Bioinformatics, June 2010.

[7] M. Toussaint and A. Storkey, "Probabilistic inference for solving discrete and continuous state Markov Decision Processes," ICML, 2006.
