sklearn.neighbors.LocalOutlierFactor

class sklearn.neighbors.LocalOutlierFactor(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]

Unsupervised Outlier Detection using Local Outlier Factor (LOF)

The anomaly score of each sample is called Local Outlier Factor.
It measures the local deviation of density of a given sample with
respect to its neighbors.
It is local in that the anomaly score depends on how isolated the object
is with respect to the surrounding neighborhood.
More precisely, locality is given by k-nearest neighbors, whose distance
is used to estimate the local density.
By comparing the local density of a sample to the local densities of
its neighbors, one can identify samples that have a substantially lower
density than their neighbors. These are considered outliers.
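
A minimal usage sketch of the default outlier-detection mode (illustrative;
assumes scikit-learn 0.20, with contamination passed explicitly to avoid the
'legacy' deprecation warning — the third sample is the obvious outlier):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.1)
>>> clf.fit_predict(X)
array([ 1,  1, -1,  1])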

Parameters:
n_neighbors : int, optional (default=20)

Number of neighbors to use by default for kneighbors queries.
If n_neighbors is larger than the number of samples provided,
all samples will be used.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional

Algorithm used to compute the nearest neighbors:

  • 'ball_tree' will use BallTree
  • 'kd_tree' will use KDTree
  • 'brute' will use a brute-force search.
  • 'auto' will attempt to decide the most appropriate algorithm
    based on the values passed to the fit method.

Note: fitting on sparse input will override the setting of
this parameter, using brute force.

leaf_size : int, optional (default=30)

Leaf size passed to BallTree or KDTree. This can
affect the speed of the construction and query, as well as the memory
required to store the tree. The optimal value depends on the
nature of the problem.

metric : string or callable, default 'minkowski'

The metric used for the distance computation. Any metric from scikit-learn
or scipy.spatial.distance can be used.

If 'precomputed', the training input X is expected to be a distance
matrix.

If metric is a callable function, it is called on each
pair of instances (rows) and the resulting value recorded. The callable
should take two arrays as input and return one value indicating the
distance between them. This works for SciPy's metrics, but is less
efficient than passing the metric name as a string.

Valid values for metric are:

  • from scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2',
    'manhattan']
  • from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev',
    'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski',
    'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao',
    'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean',
    'yule']

See the documentation for scipy.spatial.distance for details on these
metrics:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html

p : integer, optional (default=2)

Parameter for the Minkowski metric from
sklearn.metrics.pairwise.pairwise_distances. When p = 1, this
is equivalent to using manhattan_distance (l1), and euclidean_distance
(l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
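
A quick check of this equivalence using pairwise_distances directly (not
part of this class's API; shown only to illustrate the p parameter):

>>> import numpy as np
>>> from sklearn.metrics.pairwise import pairwise_distances
>>> X = np.array([[0., 0.], [3., 4.]])
>>> pairwise_distances(X, metric='minkowski', p=1)[0, 1]  # Manhattan: |3| + |4|
7.0
>>> pairwise_distances(X, metric='minkowski', p=2)[0, 1]  # Euclidean: sqrt(9 + 16)
5.0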

metric_params : dict, optional (default=None)

Additional keyword arguments for the metric function.

contamination : 'auto' or float in (0., 0.5), optional (default='legacy')

The amount of contamination of the data set, i.e. the proportion
of outliers in the data set. When fitting, this is used to define the
threshold on the decision function. If 'auto', the decision function
threshold is determined as in the original paper [1]. The default
'legacy' currently behaves like 0.1.

Changed in version 0.20: The default value of contamination will change
from 0.1 in 0.20 to 'auto' in 0.22.

novelty : boolean, default False

By default, LocalOutlierFactor is only meant to be used for outlier
detection (novelty=False). Set novelty to True if you want to use
LocalOutlierFactor for novelty detection. In this case, be aware
that you should only use predict, decision_function and score_samples
on new unseen data and not on the training set.
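
A sketch of the novelty workflow (toy data of my choosing; exact labels
depend on the data, but here [101.1] is clearly far from the training set):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = [[-1.1], [0.2], [0.3], [0.4]]
>>> lof = LocalOutlierFactor(n_neighbors=2, novelty=True,
...                          contamination=0.1).fit(X_train)
>>> lof.predict([[0.25], [101.1]])
array([ 1, -1])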

n_jobs : int or None, optional (default=None)

The number of parallel jobs to run for neighbors search.
None means 1 unless in a joblib.parallel_backend context.
-1 means using all processors. See Glossary
for more details.
Affects only kneighbors and kneighbors_graph methods.

Attributes:
negative_outlier_factor_ : numpy array, shape (n_samples,)

The opposite LOF of the training samples. The higher, the more normal.
Inliers tend to have a LOF score close to 1 (negative_outlier_factor_
close to -1), while outliers tend to have a larger LOF score.

The local outlier factor (LOF) of a sample captures its
supposed ‘degree of abnormality’.
It is the average of the ratio of the local reachability density of
a sample and those of its k-nearest neighbors.
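
For instance, on the toy data used above (values are illustrative and
truncated; the isolated point gets a much more negative score):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.1).fit(X)
>>> clf.negative_outlier_factor_
array([ -0.9821...,  -1.0370..., -73.3697...,  -0.9821...])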

n_neighbors_ : integer

The actual number of neighbors used for kneighbors queries.

offset_ : float

Offset used to obtain binary labels from the raw scores.
Observations having a negative_outlier_factor_ smaller than offset_
are detected as abnormal.
The offset is set to -1.5 (inliers score around -1), except when a
contamination parameter different from 'auto' is provided. In that
case, the offset is defined in such a way that we obtain the expected
number of outliers in training.
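
This relationship can be checked directly (a sketch on the toy data from
above; fit_predict flags exactly the points whose score falls below offset_):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.1)
>>> labels = clf.fit_predict(X)
>>> np.array_equal(labels == -1, clf.negative_outlier_factor_ < clf.offset_)
True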

References

[1] Breunig, M. M., Kriegel, H.-P., Ng, R. T., & Sander, J. (2000, May).
LOF: identifying density-based local outliers. In ACM SIGMOD Record
(Vol. 29, No. 2, pp. 93-104).

Methods

fit(X[, y]) Fit the model using X as training data.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X.
set_params(**params) Set the parameters of this estimator.
__init__(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]
decision_function

Shifted opposite of the Local Outlier Factor of X.

Bigger is better, i.e. large values correspond to inliers.

The shift offset allows a zero threshold for being an outlier.
Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
shifted_opposite_lof_scores : array, shape (n_samples,)

The shifted opposite of the Local Outlier Factor of each input
sample. The lower, the more abnormal. Negative scores represent
outliers, positive scores represent inliers.
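
A small sketch of this sign convention (reusing the toy novelty setup from
above; positive scores mean inlier):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = [[-1.1], [0.2], [0.3], [0.4]]
>>> lof = LocalOutlierFactor(n_neighbors=2, novelty=True,
...                          contamination=0.1).fit(X_train)
>>> lof.decision_function([[0.25], [101.1]]) > 0
array([ True, False])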

fit(X, y=None)[source]

Fit the model using X as training data.

Parameters:
X : {array-like, sparse matrix, BallTree, KDTree}

Training data. If array or matrix, shape [n_samples, n_features],
or [n_samples, n_samples] if metric='precomputed'.

y : Ignored

not used, present for API consistency by convention.

Returns:
self : object
fit_predict

Fits the model to the training set X and returns the labels.

Label is 1 for an inlier and -1 for an outlier according to the LOF
score and the contamination parameter.

Parameters:
X : array-like, shape (n_samples, n_features), default=None

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

y : Ignored

not used, present for API consistency by convention.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and 1 for inliers.
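
On training data, the fraction of points labeled -1 roughly matches the
contamination parameter (exactly on this toy set; a sketch, not a guarantee
for arbitrary data):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> labels = LocalOutlierFactor(n_neighbors=2, contamination=0.25).fit_predict(X)
>>> labels
array([ 1,  1, -1,  1])
>>> np.mean(labels == -1)
0.25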

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and
contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True)[source]

Finds the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors to get (default is the value
passed to the constructor).

return_distance : boolean, optional. Defaults to True.

If False, distances will not be returned.

Returns:
dist : array

Array representing the lengths to points, only present if
return_distance=True

ind : array

Indices of the nearest points in the population matrix.

Examples

In the following example, we construct a NearestNeighbors
instance from an array representing our data set and ask which is
the closest point to [1, 1, 1]

>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))

As you can see, it returns [[0.5]], and [[2]], which means that the
element is at distance 0.5 and is the third element of samples
(indexes start at 0). You can also query for multiple points:

>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')[source]

Computes the (weighted) graph of k-Neighbors for points in X

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors for each sample.
(default is value passed to the constructor).

mode : {'connectivity', 'distance'}, optional

Type of returned matrix: 'connectivity' will return the
connectivity matrix with ones and zeros; with 'distance' the
edges are Euclidean distance between points.

Returns:
A : sparse matrix in CSR format, shape = [n_samples, n_samples_fit]

n_samples_fit is the number of samples in the fitted data.
A[i, j] is assigned the weight of the edge that connects i to j.

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])
predict

Predict the labels (1 inlier, -1 outlier) of X according to LOF.

This method allows you to generalize prediction to new observations
(not in the training set). Only available for novelty detection (when
novelty is set to True).

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and +1 for inliers.

score_samples

Opposite of the Local Outlier Factor of X.

The opposite is taken so that bigger is better, i.e. large values
correspond to inliers.

Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.
The score_samples on training data is available by considering
the negative_outlier_factor_ attribute.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
opposite_lof_scores : array, shape (n_samples,)

The opposite of the Local Outlier Factor of each input sample.
The lower, the more abnormal.
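
Here, score_samples and decision_function differ only by the constant
offset_; a quick consistency sketch (reusing the toy novelty setup from
above):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = [[-1.1], [0.2], [0.3], [0.4]]
>>> lof = LocalOutlierFactor(n_neighbors=2, novelty=True,
...                          contamination=0.1).fit(X_train)
>>> X_new = [[0.25], [101.1]]
>>> np.allclose(lof.score_samples(X_new),
...             lof.decision_function(X_new) + lof.offset_)
True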

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter> so that it’s possible to update each
component of a nested object.

Returns:
self