What is the Akaike information criterion (AIC)? It is a metric for model selection: the smaller the AIC, the better the model, so one usually picks the model with the smallest AIC. The first sentence is easy to remember; the second is where it gets tricky. I sometimes forget whether a larger or smaller AIC is better, so it helps to understand where the criterion comes from. Before AIC, we need the Kullback–Leibler information, also called the Kullback–Leibler distance. For a given dataset, assume there exists a true model f and a set of candidate models g1, g2, g3, …, gi; the K-L distance measures how much information is lost when a candidate gi is used to approximate f.
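Once a model is fit, AIC is easy to compute. A minimal sketch (my own toy example, not from the reference above): for a least-squares fit with Gaussian errors, AIC reduces to n·ln(RSS/n) + 2k up to an additive constant, where k counts the fitted parameters. Here I compare polynomial fits to data that is truly linear; the quadratic and quintic fits lower the RSS slightly but pay the 2k penalty.

```python
import numpy as np

def aic_least_squares(y, y_pred, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (additive constants dropped)."""
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.size)  # truly linear data

# candidate models g1, g2, ... : polynomials of increasing degree
fits = {}
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    y_pred = np.polyval(coeffs, x)
    fits[degree] = aic_least_squares(y, y_pred, k=degree + 1)

best = min(fits, key=fits.get)  # pick the model with the smallest AIC
```

The point of the penalty term is exactly the rule in the note: among candidates, the smallest AIC balances fit quality against model complexity.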
Why is automatic detection attractive? The volume of seismic data is huge; manual picking depends on the experience of the analyst; and pick quality can be obscured by several factors, such as background and non-stationary noise from diverse sources. The method used by this software is STA/LTA, which is very effective at identifying abrupt amplitude changes: AMP
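The STA/LTA idea can be sketched in a few lines: compute a short-term average (STA) and a long-term average (LTA) of the signal amplitude, and declare a trigger where their ratio exceeds a threshold. This is a generic sketch, not the actual implementation of the software mentioned above; window lengths and the moving-average scheme are my choices.

```python
import numpy as np

def sta_lta(signal, nsta, nlta):
    """Classic STA/LTA characteristic function: ratio of the short-term
    to the long-term moving average of |signal|."""
    amp = np.abs(signal)
    sta = np.convolve(amp, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(amp, np.ones(nlta) / nlta, mode="same")
    lta[lta == 0] = 1e-12  # avoid division by zero in perfectly quiet stretches
    return sta / lta

# toy trace: low-level background with one abrupt amplitude burst
trace = np.full(2000, 0.01)
trace[1000:1050] = 1.0
ratio = sta_lta(trace, nsta=50, nlta=500)
onset = int(np.argmax(ratio > 5.0))  # first sample where the ratio triggers
```

During the quiet stretch STA ≈ LTA so the ratio stays near 1; the burst drives the short window up long before the long window catches up, which is why the method responds so well to sudden amplitude changes.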
1. Cluster Analysis

Clustering is the process of grouping a set of (unlabeled) data objects into multiple groups or clusters such that objects within a cluster have high similarity but are very dissimilar to objects in other clusters. Dissimilarity (and similarity) is typically assessed with a distance measure computed on the attribute values describing the objects.
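As a concrete instance of this definition, here is a minimal sketch of Lloyd's k-means (my own toy code, with Euclidean distance as the dissimilarity measure): alternate between assigning each object to its nearest centroid and moving each centroid to the mean of its assigned objects.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means sketch: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # (n, k) matrix of point-to-centroid Euclidean distances
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):  # leave an empty cluster's centroid where it is
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```

On two well-separated blobs this recovers exactly the grouping the definition describes: small distances within a cluster, large distances between clusters.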
Reference: Fitting a Model by Maximum Likelihood. Maximum likelihood estimation is used to estimate model parameters. First we must choose a model; then, given a dataset, we construct the joint probability function of the observations. Because the data are fixed, this function depends only on the model parameters, and by differentiating it we can find the parameter values that maximize the function's value (the likelihood). Maximum-Likelihood Estimation (MLE) is a statistical technique for estimating model parameters.
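The recipe above can be made concrete with a toy example of my own: assume a normal model with known sigma, write down the log of the joint density as a function of the mean mu, and find the mu that maximizes it. Instead of solving the derivative analytically, this sketch simply evaluates the log-likelihood over a grid, which makes the "function of the parameters, data held fixed" idea visible.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=500)  # the "given dataset"

def log_likelihood(mu, sigma, x):
    """Log of the joint normal density of the sample; with x fixed,
    this is a function of the model parameters only."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# evaluate the likelihood over a grid of candidate means (sigma held at 2.0)
mus = np.linspace(0.0, 6.0, 601)
ll = np.array([log_likelihood(m, 2.0, data) for m in mus])
mu_hat = mus[ll.argmax()]
```

For a normal model the likelihood is quadratic in mu, so setting its derivative to zero gives exactly the sample mean; the grid search lands on the grid point nearest to it.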
For the underlying theory, see this excellent video: the Machine Learning course, Expectation Maximisation. Expectation-maximization is a well-founded statistical algorithm that gets around this problem by an iterative process. First one assumes random components (randomly centered on data points, learned from k-means, or normally distributed around the origin), then alternates between computing each point's probability of belonging to each component (the E-step) and re-estimating the component parameters from those probabilities (the M-step).
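The E-step/M-step loop can be sketched for a two-component 1-D Gaussian mixture (a toy sketch of my own, not code from the course; the initialization here uses the data extremes rather than k-means):

```python
import numpy as np

rng = np.random.default_rng(2)
# two 1-D Gaussian clusters mixed together
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(6.0, 1.0, 300)])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# crude initial components (a real run might seed these from k-means)
mu = np.array([data.min(), data.max()])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])  # mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = np.stack([p * gaussian_pdf(data, m, s)
                     for p, m, s in zip(pi, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate parameters from the responsibilities
    nk = resp.sum(axis=1)
    mu = (resp * data).sum(axis=1) / nk
    sigma = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / nk)
    pi = nk / len(data)
```

After a few dozen iterations the estimated means settle near the true cluster centers (about 0 and 6) and the mixing weights near 0.5 each, which is the iterative process the paragraph describes.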