normalization, standardization and regularization
Normalization
Normalization refers to rescaling real-valued numeric attributes into the range 0 to 1. It is useful to scale the input attributes for a model that relies on the magnitude of values, such as the distance measures used in k-nearest neighbors, and when preparing coefficients in regression.
Normalization transforms each sample (vector) of the data into a vector with unit norm; the samples are handled independently of one another. In practice, each component of the vector is divided by a normalization factor. Commonly used factors are L1, L2, and Max. For a vector of length n, the normalization factor z is computed as follows: for L1, z = |x1| + |x2| + ... + |xn|; for L2, z = sqrt(x1^2 + x2^2 + ... + xn^2); for Max, z = max(xi).
Note: Max is not the same as the infinity norm. The infinity norm first takes the absolute value of every component and then takes the maximum, whereas Max is simply the largest component of the vector, with no absolute-value step.
Remark: the L1 norm is also known as the Manhattan distance (or city-block distance); the L2 norm is also known as the Euclidean distance.
#!/usr/bin/env python
# -*- coding: utf8 -*-
# author: klchang
# Use the sklearn.preprocessing.Normalizer class to normalize data.
from __future__ import print_function
import numpy as np
from sklearn.preprocessing import Normalizer

x = np.array([1, 2, 3, 4], dtype='float32').reshape(1, -1)
print("Before normalization: ", x)

options = ['l1', 'l2', 'max']
for opt in options:
    norm_x = Normalizer(norm=opt).fit_transform(x)
    print("After %s normalization: " % opt.capitalize(), norm_x)
#!/usr/bin/env python
# -*- coding: utf8 -*-
# author: klchang
# Use the sklearn.preprocessing.normalize function to normalize data.
from __future__ import print_function
import numpy as np
from sklearn.preprocessing import normalize

x = np.array([1, 2, 3, 4], dtype='float32').reshape(1, -1)
print("Before normalization: ", x)

options = ['l1', 'l2', 'max']
for opt in options:
    norm_x = normalize(x, norm=opt)
    print("After %s normalization: " % opt.capitalize(), norm_x)
Standardization
Standardization refers to shifting the distribution of each attribute so that it has a mean of zero and a standard deviation of one (unit variance). It is useful to standardize attributes for models that rely on the distribution of attributes, such as Gaussian processes.
# Standardize the data attributes for the Iris dataset.
from sklearn.datasets import load_iris
from sklearn import preprocessing
# load the Iris dataset
iris = load_iris()
print(iris.data.shape)
# separate the data and target attributes
X = iris.data
y = iris.target
# standardize the data attributes
standardized_X = preprocessing.scale(X)
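As a quick sanity check (not part of the original snippet), each column of standardized_X should now have mean approximately 0 and standard deviation approximately 1:

# verify the standardization: per-feature mean ~0 and std ~1
print(standardized_X.mean(axis=0))
print(standardized_X.std(axis=0))

The sklearn.preprocessing.StandardScaler class provides the same transformation through a fit/transform interface, which makes it easy to apply the statistics learned on training data to test data.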
From "机器学习里的黑色艺术: normalization, standardization, regularization" (the black art of machine learning):
Part 1: The big picture
1. Normalization generally means rescaling the data into a required range, usually [0, 1], which removes the effect of the data's scale (units) on modeling. Standardization generally means rescaling the data to zero mean and unit variance. Both normalization and standardization therefore act on the data, removing the bias in feature importance caused by differences in numeric scale. Normalized data also speeds up training and helps the algorithm converge.
2. Regularization, by contrast, adds a penalty term to the cost function. This deliberately blurs the fit, shifting the captured trend from local, fine-grained fluctuations toward the overall trend. Although it relaxes the fit to the training data to some extent, it effectively prevents over-fitting (as illustrated in a figure from the web) and improves the model's accuracy on new data. Regularization therefore acts on the model.
These three terms describe different things.
Part 2: Methods
A summary of the common normalization, standardization, and regularization methods.
Normalization and Standardization
(1) Min-max normalization: x' = (x - min) / (max - min). This is essentially a linear transformation, simple and direct. The drawback is that when new data arrives, the value range may grow and the normalization has to be recomputed.
(2) Logarithmic normalization: x' = log10(x) / log10(x_max), or simply x' = log10(x). The first form, dividing by the log of the maximum, is recommended because it maps the data into [0, 1].
(3) Arctangent normalization: x' = 2·arctan(x) / π, which maps the data into [-1, 1].
(4) Zero-mean normalization, i.e. standardization: x' = (x - mean) / std. (A NumPy sketch of these four transforms follows below.)
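A minimal NumPy sketch of the four transforms above (the sample data and variable names are my own, not from the original post):

import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])

x_minmax = (x - x.min()) / (x.max() - x.min())  # (1) min-max -> [0, 1]
x_log    = np.log10(x) / np.log10(x.max())      # (2) log normalization (requires x > 0)
x_atan   = 2 * np.arctan(x) / np.pi             # (3) arctangent -> (-1, 1)
x_zscore = (x - x.mean()) / x.std()             # (4) zero mean, unit variance
print(x_minmax, x_log, x_atan, x_zscore, sep='\n')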
Whether or not normalization is used, the learning curve of a model looks different, and the model may even converge to a different result. For example, in deep learning the learning curves with and without batch normalization compare as follows: in Figure 1 the blue curve uses batch normalization (BN) and the black dashed curve does not; Figure 2 is a zoomed-in view of the black curve and Figure 3 a zoomed-in view of the blue curve. Reference: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Sergey Ioffe.
Regularization methods
The general form is min_w [ L(w) + λR(w) ], where L is the original loss and R is the regularization term (a small sketch of this cost appears after the list). Common choices are:
- L1 regularization: penalizes the sum of the absolute values of the coefficients.
- L2 regularization: penalizes the sum of the squared coefficients.
- Elastic-net: a mixture of L1 and L2 regularization.
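As a concrete sketch of this general form (the function and variable names below are my own, not from the post), the penalized cost for a linear least-squares model could be written as:

import numpy as np

def penalized_cost(w, X, y, lam, penalty='l2'):
    # Squared-error loss L(w) plus a regularization term R(w), weighted by lam.
    residual = X @ w - y
    loss = 0.5 * np.sum(residual ** 2)
    if penalty == 'l1':
        R = np.sum(np.abs(w))                           # L1: sum of absolute values
    elif penalty == 'l2':
        R = np.sum(w ** 2)                              # L2: sum of squares
    else:
        R = 0.5 * (np.sum(np.abs(w)) + np.sum(w ** 2))  # elastic-net: a 50/50 mix
    return loss + lam * R

# tiny usage example with made-up data
print(penalized_cost(np.array([1.0, -2.0]), np.eye(2), np.array([1.0, 1.0]), lam=0.1, penalty='l1'))

Minimizing this cost over w with lam > 0 pushes the coefficients toward smaller values; scikit-learn's Lasso, Ridge, and ElasticNet estimators implement these three penalties directly.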
From "Differences between normalization, standardization and regularization":
Normalization
Normalization usually rescales features to [0, 1].[1] That is,
x' = (x − min(x)) / (max(x) − min(x))
It is useful when we are confident that there are no anomalies (i.e. outliers) with extremely large or small values. For example, in a recommender system, the ratings given by users are limited to a small finite set such as {1, 2, 3, 4, 5}.
In some situations, we may prefer to map the data to a range like [−1, 1] with zero mean.[2] Then we should choose mean normalization:[3]
x' = (x − mean(x)) / (max(x) − min(x))
In this way, it will be more convenient for us to use other techniques like matrix factorization.
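For example, if the ratings take values from 1 to 5 with an average of 3, mean normalization maps 1 to −0.5 and 5 to 0.5, so the data becomes zero-centered inside [−1, 1].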
Standardization
Standardization is widely used as a preprocessing step in many learning algorithms to rescale the features to zero mean and unit variance.[3]
x' = (x − μ) / σ
Regularization
Different from the feature scaling techniques mentioned above, regularization is intended to address the overfitting problem. By adding an extra penalty term to the loss function, the parameters of the learning algorithm are more likely to converge to smaller values, which can significantly reduce overfitting.
There are mainly two basic types of regularization: the L1 norm (lasso) and the L2 norm (ridge regression).[4]
L1-norm[5]
The original loss function is denoted by f(x), and the new one by F(x).
F(x) = f(x) + λ‖x‖_1
where
‖x‖_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^(1/p)
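For instance, for x = (3, −4), ‖x‖_1 = |3| + |−4| = 7 and ‖x‖_2 = sqrt(3^2 + 4^2) = 5.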
L1 regularization is preferable when we want to train a sparse model: because the absolute value function is not differentiable at 0, the optimization tends to push many coefficients to exactly zero.
L2-norm[5][6]
F(x) = f(x) + λ‖x‖_2^2
L2 regularization is preferred in ill-posed problems for smoothing.
A comparison between L1 and L2 regularization can be found at https://en.wikipedia.org/wiki/Regularization_(mathematics).
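As a rough numerical illustration of the difference (a sketch of my own, using scikit-learn's Lasso and Ridge on synthetic data), the L1 penalty drives many coefficients to exactly zero while the L2 penalty only shrinks them:

import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.datasets import make_regression

# synthetic data where only 3 of the 20 features are informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=3,
                       noise=1.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

print("Coefficients set to zero by Lasso:", np.sum(lasso.coef_ == 0))
print("Coefficients set to zero by Ridge:", np.sum(ridge.coef_ == 0))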
References
https://stats.stackexchange.com/a/10298
https://www.quora.com/What-is-the-difference-between-normalization-standardization-and-regularization-for-data/answer/Enzo-Tagliazucchi?share=c48b6752&srid=51VPj
https://en.wikipedia.org/wiki/Regularization_%28mathematics%29
https://www.quora.com/What-is-the-difference-between-L1-and-L2-regularization-How-does-it-solve-the-problem-of-overfitting-Which-regularizer-to-use-and-when/answer/Kenneth-Tran?share=400c336d&srid=51VPj
https://en.wikipedia.org/wiki/Ridge_regression