2- You may still have questions, especially regarding where the probabilities in the Expectation step come from. Please have a look at the explanations on this maths stack exchange page.
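
To make this concrete, below is a minimal sketch (my own illustration, standard library only, Python 3.8+) of where the E-step probabilities come from for a single experiment: compute the likelihood of the observed heads/tails under each coin's current bias, then normalise. With a uniform prior over the two coins, the normalised likelihoods are exactly the per-experiment weights used in the Expectation step.

import math

def binom_pmf(k, n, p):
    # probability of k heads in n tosses for a coin with P(heads) = p
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

heads, tosses = 5, 10        # first experiment: 5H, 5T
pA, pB = 0.60, 0.50          # current guesses for coins A and B

lik_A = binom_pmf(heads, tosses, pA)   # P(data | coin A)
lik_B = binom_pmf(heads, tosses, pB)   # P(data | coin B)

# Bayes' rule with a uniform prior over the two coins
weight_A = lik_A / (lik_A + lik_B)     # ~0.45
weight_B = lik_B / (lik_A + lik_B)     # ~0.55

For these numbers the weights come out to roughly 0.45 for coin A and 0.55 for coin B, matching the first experiment in the figure of the Do and Batzoglou paper.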

3- Look at / run this Python code that I wrote; it simulates the solution to the coin-toss problem from the EM tutorial paper in item 1:

P.S. The code may be suboptimal, but it does the job.

import numpy as np
import math

#### E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou* ####

def get_mn_log_likelihood(obs, probs):
    """Return the (log)likelihood of obs, given the probs."""
    # Multinomial Distribution Log PMF
    # ln(pdf) = multinomial coeff + product of probabilities
    # ln[f(x|n, p)] = [ln(n!) - (ln(x1!)+ln(x2!)+...+ln(xk!))] + [x1*ln(p1)+x2*ln(p2)+...+xk*ln(pk)]
    multinomial_coeff_denom = 0
    prod_probs = 0
    for x in range(0, len(obs)):  # loop through state counts in each observation
        multinomial_coeff_denom = multinomial_coeff_denom + math.log(math.factorial(obs[x]))
        prod_probs = prod_probs + obs[x] * math.log(probs[x])

    multinomial_coeff = math.log(math.factorial(sum(obs))) - multinomial_coeff_denom
    likelihood = multinomial_coeff + prod_probs
    return likelihood

# 1st: Coin B, {HTTTHHTHTH}, 5H,5T
# 2nd: Coin A, {HHHHTHHHHH}, 9H,1T
# 3rd: Coin A, {HTHHHHHTHH}, 8H,2T
# 4th: Coin B, {HTHTTTHHTT}, 4H,6T
# 5th: Coin A, {THHHTHHHTH}, 7H,3T
# so, from MLE: pA(heads) = 0.80 and pB(heads) = 0.45

# represent the experiments
head_counts = np.array([5, 9, 8, 4, 7])
tail_counts = 10 - head_counts
experiments = list(zip(head_counts, tail_counts))  # list() so it can be indexed in Python 3

# initialise pA(heads) and pB(heads)
pA_heads = np.zeros(100); pA_heads[0] = 0.60
pB_heads = np.zeros(100); pB_heads[0] = 0.50

# E-M begins!
delta = 0.001
j = 0  # iteration counter
improvement = float('inf')
while improvement > delta:
    expectation_A = np.zeros((5, 2), dtype=float)
    expectation_B = np.zeros((5, 2), dtype=float)
    for i in range(0, len(experiments)):
        e = experiments[i]  # i'th experiment
        ll_A = get_mn_log_likelihood(e, np.array([pA_heads[j], 1 - pA_heads[j]]))  # loglikelihood of e given coin A
        ll_B = get_mn_log_likelihood(e, np.array([pB_heads[j], 1 - pB_heads[j]]))  # loglikelihood of e given coin B

        weightA = math.exp(ll_A) / (math.exp(ll_A) + math.exp(ll_B))  # weight of A, proportional to likelihood of A
        weightB = math.exp(ll_B) / (math.exp(ll_A) + math.exp(ll_B))  # weight of B, proportional to likelihood of B

        expectation_A[i] = np.dot(weightA, e)
        expectation_B[i] = np.dot(weightB, e)

    pA_heads[j + 1] = sum(expectation_A)[0] / sum(sum(expectation_A))
    pB_heads[j + 1] = sum(expectation_B)[0] / sum(sum(expectation_B))

    improvement = max(abs(np.array([pA_heads[j + 1], pB_heads[j + 1]]) - np.array([pA_heads[j], pB_heads[j]])))
    j = j + 1
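
If you run it, the loop should terminate after a handful of iterations, with the estimates landing near the values reported in the Do and Batzoglou figure (roughly 0.80 for coin A and 0.52 for coin B). A quick way to check, using the variables defined above:

print("pA(heads) = %.2f, pB(heads) = %.2f after %d iterations" % (pA_heads[j], pB_heads[j], j))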

Expectation-Maximization in CSharp


This example requires Emgu CV 1.5.0.0

using System.Drawing;
using Emgu.CV.Structure;
using Emgu.CV.ML;
using Emgu.CV.ML.Structure;

...

// several numeric constants below (cluster count, sample count, image size,
// colours, sigma, iteration limits) are example values and can be changed
int N = 4;  // number of clusters
int N1 = (int)Math.Sqrt((double)N);

Bgr[] colors = new Bgr[] {
   new Bgr(0, 0, 255),
   new Bgr(0, 255, 0),
   new Bgr(0, 255, 255),
   new Bgr(255, 255, 0)};

int nSamples = 100;

Matrix<float> samples = new Matrix<float>(nSamples, 2);
Matrix<Int32> labels = new Matrix<int>(nSamples, 1);
Image<Bgr, Byte> img = new Image<Bgr, byte>(500, 500);
Matrix<float> sample = new Matrix<float>(1, 2);

// generate the random training samples, one Gaussian blob per cluster
CvInvoke.cvReshape(samples.Ptr, samples.Ptr, 2, 0);
for (int i = 0; i < N; i++)
{
   Matrix<float> rows = samples.GetRows(i * nSamples / N, (i + 1) * nSamples / N, 1);
   double scale = ((i % N1) + 1.0) / (N1 + 1);
   MCvScalar mean = new MCvScalar(scale * img.Width, scale * img.Height);
   MCvScalar sigma = new MCvScalar(30, 30);
   ulong seed = (ulong)DateTime.Now.Ticks;
   CvInvoke.cvRandArr(ref seed, rows.Ptr, Emgu.CV.CvEnum.RAND_TYPE.CV_RAND_NORMAL, mean, sigma);
}
CvInvoke.cvReshape(samples.Ptr, samples.Ptr, 1, 0);

using (EM emModel1 = new EM())
using (EM emModel2 = new EM())
{
   // first, a quick fit with diagonal covariance matrices and automatic initialisation
   EMParams parameters1 = new EMParams();
   parameters1.Nclusters = N;
   parameters1.CovMatType = Emgu.CV.ML.MlEnum.EM_COVARIAN_MATRIX_TYPE.COV_MAT_DIAGONAL;
   parameters1.StartStep = Emgu.CV.ML.MlEnum.EM_INIT_STEP_TYPE.START_AUTO_STEP;
   parameters1.TermCrit = new MCvTermCriteria(10, 0.01);
   emModel1.Train(samples, null, parameters1, labels);

   // then refine with full covariance matrices, starting from the first model's solution
   EMParams parameters2 = new EMParams();
   parameters2.Nclusters = N;
   parameters2.CovMatType = Emgu.CV.ML.MlEnum.EM_COVARIAN_MATRIX_TYPE.COV_MAT_GENERIC;
   parameters2.StartStep = Emgu.CV.ML.MlEnum.EM_INIT_STEP_TYPE.START_E_STEP;
   parameters2.TermCrit = new MCvTermCriteria(100, 1.0e-6);
   parameters2.Means = emModel1.GetMeans();
   parameters2.Covs = emModel1.GetCovariances();
   parameters2.Weights = emModel1.GetWeights();

   emModel2.Train(samples, null, parameters2, labels);

   #region Classify every image pixel
   for (int i = 0; i < img.Height; i++)
      for (int j = 0; j < img.Width; j++)
      {
         sample.Data[0, 0] = i;
         sample.Data[0, 1] = j;
         int response = (int)emModel2.Predict(sample, null);

         Bgr color = colors[response];

         img.Draw(
            new CircleF(new PointF(i, j), 1),
            new Bgr(color.Blue * 0.5, color.Green * 0.5, color.Red * 0.5),
            0);
      }
   #endregion

   #region draw the clustered samples
   for (int i = 0; i < nSamples; i++)
   {
      img.Draw(new CircleF(new PointF(samples.Data[i, 0], samples.Data[i, 1]), 1), colors[labels.Data[i, 0]], 0);
   }
   #endregion

   Emgu.CV.UI.ImageViewer.Show(img);
}
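
Note the two-stage training: emModel1 is fitted first with diagonal covariance matrices and automatic initialisation, and its means, covariances and weights are then handed to emModel2, which refits with full (generic) covariance matrices starting from an explicit E step and a tighter termination criterion. Seeding the harder optimisation with the cheaper model's solution is a common way to reduce the risk of EM settling into a poor local optimum.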
