sklearn.neighbors.LocalOutlierFactor

class sklearn.neighbors.LocalOutlierFactor(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]

Unsupervised Outlier Detection using Local Outlier Factor (LOF)

The anomaly score of each sample is called the Local Outlier Factor.
It measures the local deviation of the density of a given sample with
respect to its neighbors.
It is local in that the anomaly score depends on how isolated the object
is with respect to the surrounding neighborhood.
More precisely, locality is given by the k-nearest neighbors, whose
distances are used to estimate the local density.
By comparing the local density of a sample to the local densities of
its neighbors, one can identify samples that have a substantially lower
density than their neighbors. These are considered outliers.
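
A minimal sketch of the default outlier-detection workflow (the toy
data below is illustrative; the point at 101.1 is the obvious outlier):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = np.array([[-1.1], [0.2], [101.1], [0.3]])
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.25)
>>> clf.fit_predict(X)  # 1 for inliers, -1 for outliers
array([ 1,  1, -1,  1])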

Parameters:
n_neighbors : int, optional (default=20)

Number of neighbors to use by default for kneighbors queries.
If n_neighbors is larger than the number of samples provided,
all samples will be used.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional

Algorithm used to compute the nearest neighbors:

  • 'ball_tree' will use BallTree
  • 'kd_tree' will use KDTree
  • 'brute' will use a brute-force search.
  • 'auto' will attempt to decide the most appropriate algorithm
    based on the values passed to the fit method.

Note: fitting on sparse input will override the setting of
this parameter, using brute force.

leaf_size : int, optional (default=30)

Leaf size passed to BallTree or KDTree. This can
affect the speed of the construction and query, as well as the memory
required to store the tree. The optimal value depends on the
nature of the problem.

metric : string or callable, default 'minkowski'

Metric used for the distance computation. Any metric from scikit-learn
or scipy.spatial.distance can be used.

If 'precomputed', the training input X is expected to be a distance
matrix.

If metric is a callable function, it is called on each
pair of instances (rows) and the resulting value recorded. The callable
should take two arrays as input and return one value indicating the
distance between them. This works for SciPy's metrics, but is less
efficient than passing the metric name as a string.

Valid values for metric are:

  • from scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2',
    'manhattan']
  • from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev',
    'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski',
    'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao',
    'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean',
    'yule']

See the documentation for scipy.spatial.distance for details on these
metrics:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
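
A sketch of the callable form, using an illustrative hand-written
Euclidean distance (passing metric='euclidean' as a string would be
more efficient):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> def pairwise_euclidean(a, b):
...     # a and b are 1-D arrays, one sample each
...     return np.sqrt(np.sum((a - b) ** 2))
>>> lof = LocalOutlierFactor(n_neighbors=3, metric=pairwise_euclidean,
...                          contamination=0.1)
>>> _ = lof.fit(np.random.RandomState(0).randn(20, 2))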

p : integer, optional (default=2)

Parameter for the Minkowski metric from
sklearn.metrics.pairwise.pairwise_distances. When p = 1, this
is equivalent to using manhattan_distance (l1), and euclidean_distance
(l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

metric_params : dict, optional (default=None)

Additional keyword arguments for the metric function.

contamination : float in (0., 0.5), optional (default=0.1)

The amount of contamination of the data set, i.e. the proportion
of outliers in the data set. When fitting, this is used to define the
threshold on the decision function. If 'auto', the decision function
threshold is determined as in the original paper [1].

Changed in version 0.20: The default value of contamination will change
from 0.1 in 0.20 to 'auto' in 0.22. In 0.20, the 'legacy' placeholder
shown in the signature behaves like the 0.1 default.
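
As a sketch of the thresholding effect (illustrative data: 95 Gaussian
inliers plus 5 scattered far-away points), contamination=0.05 flags the
5 lowest-scoring samples, i.e. 5% of the 100 training samples:

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(3)
>>> X = np.vstack([rng.normal(size=(95, 2)),
...                rng.uniform(5, 10, size=(5, 2))])
>>> lof = LocalOutlierFactor(contamination=0.05)
>>> int((lof.fit_predict(X) == -1).sum())
5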

novelty : boolean, default False

By default, LocalOutlierFactor is only meant to be used for outlier
detection (novelty=False). Set novelty to True if you want to use
LocalOutlierFactor for novelty detection. In this case, be aware that
you should only use predict, decision_function and score_samples
on new unseen data and not on the training set.
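
A sketch of the novelty-detection workflow (assumed setup: the training
data are treated as clean, and the far-away query point is the novelty):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(42)
>>> X_train = rng.normal(size=(100, 2))
>>> lof = LocalOutlierFactor(n_neighbors=20, novelty=True,
...                          contamination=0.1)
>>> _ = lof.fit(X_train)
>>> lof.predict(np.array([[0.1, 0.0], [10.0, 10.0]]))  # new data only
array([ 1, -1])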

n_jobs : int or None, optional (default=None)

The number of parallel jobs to run for neighbors search.
None means 1 unless in a joblib.parallel_backend context.
-1 means using all processors. See Glossary
for more details.
Affects only kneighbors and kneighbors_graph methods.

Attributes:
negative_outlier_factor_ : numpy array, shape (n_samples,)

The opposite of the LOF of the training samples. The higher, the more
normal. Inliers tend to have a LOF score close to 1
(negative_outlier_factor_ close to -1), while outliers tend to have a
larger LOF score.

The local outlier factor (LOF) of a sample captures its
supposed ‘degree of abnormality’.
It is the average of the ratio of the local reachability density of
a sample and those of its k-nearest neighbors.

n_neighbors_ : integer

The actual number of neighbors used for kneighbors queries.

offset_ : float

Offset used to obtain binary labels from the raw scores.
Observations having a negative_outlier_factor_ smaller than offset_
are detected as abnormal.
The offset is set to -1.5 (inliers score around -1), except when a
contamination parameter different from 'auto' is provided. In that
case, the offset is defined in such a way that we obtain the expected
number of outliers in training.
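
A sketch of how offset_ and negative_outlier_factor_ combine into the
binary labels; this should reproduce fit_predict on the toy data used
earlier:

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = np.array([[-1.1], [0.2], [101.1], [0.3]])
>>> lof = LocalOutlierFactor(n_neighbors=2, contamination=0.25)
>>> labels = lof.fit_predict(X)
>>> manual = np.where(lof.negative_outlier_factor_ < lof.offset_, -1, 1)
>>> np.array_equal(labels, manual)
True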

References

[1] Breunig, M. M., Kriegel, H.-P., Ng, R. T., & Sander, J. (2000).
LOF: identifying density-based local outliers. In ACM SIGMOD Record
(Vol. 29, No. 2, pp. 93-104). ACM.

Methods

fit(X[, y]) Fit the model using X as training data.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X.
set_params(**params) Set the parameters of this estimator.
__init__(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]
decision_function

Shifted opposite of the Local Outlier Factor of X.

Bigger is better, i.e. large values correspond to inliers.

The shift offset allows a zero threshold for being an outlier.
Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from the training set, it considers the latter in its own
neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
shifted_opposite_lof_scores : array, shape (n_samples,)

The shifted opposite of the Local Outlier Factor of each input
sample. The lower, the more abnormal. Negative scores represent
outliers, positive scores represent inliers.
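
A sketch of the shift on hypothetical data; the identity below follows
from the definition of the offset:

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(0)
>>> lof = LocalOutlierFactor(novelty=True, contamination=0.1)
>>> _ = lof.fit(rng.normal(size=(50, 2)))
>>> X_new = rng.normal(size=(5, 2))
>>> np.allclose(lof.decision_function(X_new),
...             lof.score_samples(X_new) - lof.offset_)
True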

fit(X, y=None)[source]

Fit the model using X as training data.

Parameters:
X : {array-like, sparse matrix, BallTree, KDTree}

Training data. If array or matrix, shape [n_samples, n_features],
or [n_samples, n_samples] if metric='precomputed'.

y : Ignored

not used, present for API consistency by convention.

Returns:
self : object
fit_predict

Fits the model to the training set X and returns the labels.

Label is 1 for an inlier and -1 for an outlier according to the LOF
score and the contamination parameter.

Parameters:
X : array-like, shape (n_samples, n_features), default=None

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

y : Ignored

not used, present for API consistency by convention.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and 1 for inliers.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and
contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True)[source]

Finds the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors to get (default is the value
passed to the constructor).

return_distance : boolean, optional. Defaults to True.

If False, distances will not be returned.

Returns:
dist : array

Array representing the lengths to points, only present if
return_distance=True.

ind : array

Indices of the nearest points in the population matrix.

Examples

In the following example, we construct a NearestNeighbors
instance from an array representing our data set and ask which is
the closest point to [1, 1, 1]:

>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))

As you can see, it returns [[0.5]] and [[2]], which means that the
element is at distance 0.5 and is the third element of samples
(indexes start at 0). You can also query for multiple points:

>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')[source]

Computes the (weighted) graph of k-Neighbors for points in X.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors for each sample.
(default is value passed to the constructor).

mode : {'connectivity', 'distance'}, optional

Type of returned matrix: 'connectivity' will return the
connectivity matrix with ones and zeros, in 'distance' the
edges are Euclidean distance between points.

Returns:
A : sparse matrix in CSR format, shape = [n_samples, n_samples_fit]

n_samples_fit is the number of samples in the fitted data.
A[i, j] is assigned the weight of the edge that connects i to j.

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])
predict

Predict the labels (1 inlier, -1 outlier) of X according to LOF.

This method makes it possible to generalize prediction to new
observations (not in the training set). Only available for novelty
detection (when novelty is set to True).

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and +1 for inliers.

score_samples

Opposite of the Local Outlier Factor of X.

It is the opposite because bigger is better, i.e. large values
correspond to inliers.

Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from the training set, it considers the latter in its own
neighborhood. Also, the samples in X are not considered in the
neighborhood of any point.
The score_samples on training data is available by considering the
negative_outlier_factor_ attribute.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
opposite_lof_scores : array, shape (n_samples,)

The opposite of the Local Outlier Factor of each input sample.
The lower, the more abnormal.
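
A sketch of querying raw scores on new data (novelty=True; illustrative
points, one near the training cloud and one far away):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(1)
>>> lof = LocalOutlierFactor(novelty=True, contamination=0.1)
>>> _ = lof.fit(rng.normal(size=(60, 2)))
>>> scores = lof.score_samples(np.array([[0.0, 0.0], [8.0, 8.0]]))
>>> bool(scores[0] > scores[1])  # the far-away point is more abnormal
True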

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter> so that it's possible to update each
component of a nested object.

Returns:
self
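
For instance (parameter value illustrative):

>>> from sklearn.neighbors import LocalOutlierFactor
>>> lof = LocalOutlierFactor()
>>> _ = lof.set_params(n_neighbors=35)
>>> lof.get_params()['n_neighbors']
35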