sklearn.neighbors.LocalOutlierFactor

class sklearn.neighbors.LocalOutlierFactor(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]

Unsupervised Outlier Detection using Local Outlier Factor (LOF)

The anomaly score of each sample is called the Local Outlier Factor.
It measures the local deviation of the density of a given sample with
respect to its neighbors.
It is local in that the anomaly score depends on how isolated the object
is with respect to the surrounding neighborhood.
More precisely, locality is given by k-nearest neighbors, whose distance
is used to estimate the local density.
By comparing the local density of a sample to the local densities of
its neighbors, one can identify samples that have a substantially lower
density than their neighbors. These are considered outliers.
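
For example, a minimal outlier-detection sketch (the printed scores are
illustrative of the expected magnitudes, not guaranteed output):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[-1.1], [0.2], [101.1], [0.3]]
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.25)
>>> clf.fit_predict(X)  # 1 = inlier, -1 = outlier
array([ 1,  1, -1,  1])
>>> clf.negative_outlier_factor_
array([ -0.98...,  -1.03..., -73.36...,  -0.98...])

The isolated sample at 101.1 has a far more negative score than its
neighbors and is the one flagged as an outlier.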

Parameters:
n_neighbors : int, optional (default=20)

Number of neighbors to use by default for kneighbors queries.
If n_neighbors is larger than the number of samples provided,
all samples will be used.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional

Algorithm used to compute the nearest neighbors:

  • 'ball_tree' will use BallTree
  • 'kd_tree' will use KDTree
  • 'brute' will use a brute-force search.
  • 'auto' will attempt to decide the most appropriate algorithm
    based on the values passed to the fit method.

Note: fitting on sparse input will override the setting of
this parameter, using brute force.

leaf_size : int, optional (default=30)

Leaf size passed to BallTree or KDTree. This can
affect the speed of the construction and query, as well as the memory
required to store the tree. The optimal value depends on the
nature of the problem.

metric : string or callable, default 'minkowski'

Metric used for the distance computation. Any metric from scikit-learn
or scipy.spatial.distance can be used.

If 'precomputed', the training input X is expected to be a distance
matrix.

If metric is a callable function, it is called on each
pair of instances (rows) and the resulting value recorded. The callable
should take two arrays as input and return one value indicating the
distance between them. This works for SciPy's metrics, but is less
efficient than passing the metric name as a string.

Valid values for metric are:

  • from scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2',
    'manhattan']
  • from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev',
    'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski',
    'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao',
    'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean',
    'yule']

See the documentation for scipy.spatial.distance for details on these
metrics:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html

p : integer, optional (default=2)

Parameter for the Minkowski metric from
sklearn.metrics.pairwise.pairwise_distances. When p = 1, this
is equivalent to using manhattan_distance (l1), and euclidean_distance
(l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
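
As a quick numeric check of the equivalences described above (using
SciPy's standalone minkowski function purely for illustration):

>>> from scipy.spatial.distance import minkowski
>>> minkowski([0., 0.], [3., 4.], p=2)  # Euclidean (l2) distance
5.0
>>> minkowski([0., 0.], [3., 4.], p=1)  # Manhattan (l1) distance
7.0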

metric_params : dict, optional (default=None)

Additional keyword arguments for the metric function.

contamination : 'auto' or float in (0., 0.5), optional (default='legacy')

The amount of contamination of the data set, i.e. the proportion
of outliers in the data set. When fitting, this is used to define the
threshold on the decision function. If 'auto', the decision function
threshold is determined as in the original paper [1]. The 'legacy'
default currently behaves like 0.1.

Changed in version 0.20: The default value of contamination will change
from 0.1 in 0.20 to 'auto' in 0.22.

novelty : boolean, default False

By default, LocalOutlierFactor is only meant to be used for outlier
detection (novelty=False). Set novelty to True if you want to use
LocalOutlierFactor for novelty detection. In this case be aware that
you should only use predict, decision_function and score_samples
on new unseen data and not on the training set.
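
A minimal novelty-detection sketch (the training set is assumed clean;
the labels shown are the expected outcome for this toy data):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(0)
>>> X_train = rng.normal(size=(100, 2))     # clean training data
>>> clf = LocalOutlierFactor(novelty=True, contamination=0.1)
>>> _ = clf.fit(X_train)
>>> X_new = np.array([[0., 0.], [4., 4.]])  # one typical, one novel
>>> clf.predict(X_new)                      # only call on unseen data
array([ 1, -1])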

n_jobs : int or None, optional (default=None)

The number of parallel jobs to run for neighbors search.
None means 1 unless in a joblib.parallel_backend context.
-1 means using all processors. See Glossary
for more details.
Affects only kneighbors and kneighbors_graph methods.

Attributes:
negative_outlier_factor_ : numpy array, shape (n_samples,)

The opposite LOF of the training samples. The higher, the more normal.
Inliers tend to have a LOF score close to 1 (negative_outlier_factor_
close to -1), while outliers tend to have a larger LOF score.

The local outlier factor (LOF) of a sample captures its
supposed ‘degree of abnormality’.
It is the average of the ratio of the local reachability density of
a sample and those of its k-nearest neighbors.

n_neighbors_ : integer

The actual number of neighbors used for kneighbors queries.

offset_ : float

Offset used to obtain binary labels from the raw scores.
Observations having a negative_outlier_factor_ smaller than offset_
are detected as abnormal.
The offset is set to -1.5 (inliers score around -1), except when a
contamination parameter different from "auto" is provided. In that
case, the offset is defined in such a way that we obtain the expected
number of outliers in training.
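
As a sketch of that relationship when contamination is a float (the
percentile formulation below mirrors the description above and is
illustrative, not a documented contract):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X = [[0.], [0.1], [0.2], [0.3], [10.]]
>>> clf = LocalOutlierFactor(n_neighbors=2, contamination=0.2)
>>> _ = clf.fit(X)
>>> # offset_ sits at the contamination-quantile of the training scores,
>>> # so about 20% of the training samples fall below it
>>> np.allclose(clf.offset_,
...             np.percentile(clf.negative_outlier_factor_, 100 * 0.2))
True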

References

[1] Breunig, M. M., Kriegel, H. P., Ng, R. T., & Sander, J. (2000, May).
LOF: identifying density-based local outliers. In ACM SIGMOD Record.

Methods

fit(X[, y]) Fit the model using X as training data.
get_params([deep]) Get parameters for this estimator.
kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point.
kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X
set_params(**params) Set the parameters of this estimator.
__init__(n_neighbors=20, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, contamination='legacy', novelty=False, n_jobs=None)[source]
decision_function

Shifted opposite of the Local Outlier Factor of X.

Bigger is better, i.e. large values correspond to inliers.

The shift offset allows a zero threshold for being an outlier.
Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
shifted_opposite_lof_scores : array, shape (n_samples,)

The shifted opposite of the Local Outlier Factor of each input
sample. The lower, the more abnormal. Negative scores represent
outliers, positive scores represent inliers.
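
A short sketch of the sign convention (the boolean output is what one
would expect for this toy data, shown for illustration):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> rng = np.random.RandomState(42)
>>> X_train = rng.normal(size=(100, 2))
>>> clf = LocalOutlierFactor(novelty=True, contamination=0.1)
>>> _ = clf.fit(X_train)
>>> X_new = np.array([[0., 0.], [5., 5.]])
>>> clf.decision_function(X_new) > 0  # positive => inlier
array([ True, False])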

fit(X, y=None)[source]

Fit the model using X as training data.

Parameters:
X : {array-like, sparse matrix, BallTree, KDTree}

Training data. If array or matrix, shape [n_samples, n_features],
or [n_samples, n_samples] if metric='precomputed'.

y : Ignored

not used, present for API consistency by convention.

Returns:
self : object
fit_predict

Fits the model to the training set X and returns the labels.

Label is 1 for an inlier and -1 for an outlier according to the LOF
score and the contamination parameter.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

y : Ignored

not used, present for API consistency by convention.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and 1 for inliers.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and
contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True)[source]

Finds the K-neighbors of a point.
Returns indices of and distances to the neighbors of each point.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors to get (default is the value
passed to the constructor).

return_distance : boolean, optional. Defaults to True.

If False, distances will not be returned.

Returns:
dist : array

Array representing the lengths to points, only present if
return_distance=True

ind : array

Indices of the nearest points in the population matrix.

Examples

In the following example, we construct a NearestNeighbors
class from an array representing our data set and ask which is
the closest point to [1, 1, 1]:

>>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=1)
>>> neigh.fit(samples)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> print(neigh.kneighbors([[1., 1., 1.]]))
(array([[0.5]]), array([[2]]))

As you can see, it returns [[0.5]], and [[2]], which means that the
element is at distance 0.5 and is the third element of samples
(indexes start at 0). You can also query for multiple points:

>>> X = [[0., 1., 0.], [1., 0., 1.]]
>>> neigh.kneighbors(X, return_distance=False)
array([[1],
       [2]]...)
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity')[source]

Computes the (weighted) graph of k-Neighbors for points in X.

Parameters:
X : array-like, shape (n_query, n_features), or (n_query, n_indexed) if metric == 'precomputed'

The query point or points.
If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor.

n_neighbors : int

Number of neighbors for each sample.
(default is value passed to the constructor).

mode : {'connectivity', 'distance'}, optional

Type of returned matrix: 'connectivity' will return the
connectivity matrix with ones and zeros; with 'distance' the
edges are Euclidean distances between points.

Returns:
A : sparse matrix in CSR format, shape = [n_samples, n_samples_fit]

n_samples_fit is the number of samples in the fitted data.
A[i, j] is assigned the weight of the edge that connects i to j.

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import NearestNeighbors
>>> neigh = NearestNeighbors(n_neighbors=2)
>>> neigh.fit(X)
NearestNeighbors(algorithm='auto', leaf_size=30, ...)
>>> A = neigh.kneighbors_graph(X)
>>> A.toarray()
array([[1., 0., 1.],
       [0., 1., 1.],
       [1., 0., 1.]])
predict

Predict the labels (1 inlier, -1 outlier) of X according to LOF.

This method makes it possible to generalize prediction to new
observations (not in the training set). It is only available for
novelty detection (when novelty is set to True).

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
is_inlier : array, shape (n_samples,)

Returns -1 for anomalies/outliers and +1 for inliers.

score_samples

Opposite of the Local Outlier Factor of X.

It is the opposite so that bigger is better, i.e. large values
correspond to inliers.

Only available for novelty detection (when novelty is set to True).
The argument X is supposed to contain new data: if X contains a
point from training, it considers the latter in its own neighborhood.
Also, the samples in X are not considered in the neighborhood of any
point.
The score_samples of the training data is available through the
negative_outlier_factor_ attribute.

Parameters:
X : array-like, shape (n_samples, n_features)

The query sample or samples to compute the Local Outlier Factor
w.r.t. the training samples.

Returns:
opposite_lof_scores : array, shape (n_samples,)

The opposite of the Local Outlier Factor of each input sample.
The lower, the more abnormal.
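
As a consistency sketch, score_samples differs from decision_function
only by the offset (an assumed relation shown for illustration):

>>> import numpy as np
>>> from sklearn.neighbors import LocalOutlierFactor
>>> X_train = [[0.], [0.5], [1.], [99.]]
>>> clf = LocalOutlierFactor(n_neighbors=2, novelty=True,
...                          contamination=0.25)
>>> _ = clf.fit(X_train)
>>> X_new = [[0.2], [50.]]
>>> np.allclose(clf.score_samples(X_new),
...             clf.decision_function(X_new) + clf.offset_)
True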

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects
(such as pipelines). The latter have parameters of the form
<component>__<parameter> so that it’s possible to update each
component of a nested object.

Returns:
self