• TensorFlow is a general-purpose platform for numerical computation, and it makes training linear models straightforward. Below we use TensorFlow to work through the exercises from Andrew Ng's Deep Learning course; the complete source code is provided.

Linear regression

Multiple linear regression

Logistic regression

Linear regression


#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @author: ranjiewen
# @date: 2017-9-6
# @description: compare scikit-learn and tensorflow, using linear regression data from the Deep Learning course by Andrew Ng.
# @ref: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=DeepLearning&doc=exercises/ex2/ex2.html

import tensorflow as tf
import numpy as np
from sklearn import linear_model

# Read x and y
#x_data = np.loadtxt("ex2x.dat")
#y_data = np.loadtxt("ex2y.dat")
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3 + np.random.rand(100)

# We use scikit-learn first to get a sense of the coefficients
reg = linear_model.LinearRegression()
reg.fit(x_data.reshape(-1, 1), y_data)
print("Coefficient of scikit-learn linear regression: k=%f, b=%f" % (reg.coef_, reg.intercept_))

# Then we apply tensorflow to achieve similar results.
# The structure of tensorflow code can be divided into two parts:

# First part: set up the computation graph
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b
loss = tf.reduce_mean(tf.square(y - y_data)) / 2
# The learning rate needs careful tuning: too large and gradient descent
# diverges, too small and it converges painfully slowly.
optimizer = tf.train.GradientDescentOptimizer(0.07)  # Try 0.1 and you will see non-convergence
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()

# Second part: launch the graph
sess = tf.Session()
sess.run(init)
for step in range(1500):
    sess.run(train)
    if step % 100 == 0:
        print(step, sess.run(W), sess.run(b))
print("Coefficient of tensorflow linear regression: k=%f, b=%f" % (sess.run(W), sess.run(b)))
  • Thought: with TensorFlow, the gradient-descent learning rate alpha needs to be set very carefully: too large a step and training diverges and never converges; too small and you wait forever. The number of iterations also takes careful trial and error. (The sketch below illustrates this with a small learning-rate sweep.)
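To make this concrete, here is a minimal sketch of a learning-rate sweep on the same toy model. The specific alpha values are illustrative assumptions, not recommendations; which ones diverge depends on the scale of your data (on the original ex2 data, where x spans ages 2 to 8, the threshold is much lower than on this [0, 1) toy data).

import tensorflow as tf
import numpy as np

x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

for alpha in [0.01, 0.07, 0.5]:
    # Build each run in its own graph so the sweeps stay independent
    graph = tf.Graph()
    with graph.as_default():
        W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
        b = tf.Variable(tf.zeros([1]))
        loss = tf.reduce_mean(tf.square(W * x_data + b - y_data)) / 2
        train = tf.train.GradientDescentOptimizer(alpha).minimize(loss)
        init = tf.initialize_all_variables()
    with tf.Session(graph=graph) as sess:
        sess.run(init)
        for _ in range(500):
            sess.run(train)
        # A diverging rate shows up as a loss that blows up or becomes NaN
        print("alpha=%.2f -> final loss=%f" % (alpha, sess.run(loss)))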

Multiple linear regression


# -*- coding: utf-8 -*-
"""
Created on Wed Sep 6 19:53:24 2017
@author: Administrator
"""

import numpy as np
import tensorflow as tf
from numpy import mat
from sklearn import linear_model
from sklearn import preprocessing

# Read x and y
#x_data = np.loadtxt("ex3x.dat").astype(np.float32)
#y_data = np.loadtxt("ex3y.dat").astype(np.float32)
x_data = [np.random.rand(100).astype(np.float32), np.random.rand(100).astype(np.float32) + 10]
x_data = mat(x_data).T
y_data = 5.3 + np.random.rand(100)

# We evaluate the x and y by sklearn to get a sense of the coefficients.
reg = linear_model.LinearRegression()
reg.fit(x_data, y_data)
print("Coefficients of sklearn: K=%s, b=%f" % (reg.coef_, reg.intercept_))

# Now we use tensorflow to get similar results.
# Before we feed x_data into tensorflow, we need to standardize it
# in order to achieve better performance in gradient descent;
# if not standardized, the convergence speed would be intolerable.
# Reason: if a feature has a variance that is orders of magnitude larger than others,
# it might dominate the objective function
# and make the estimator unable to learn from other features correctly as expected.
scaler = preprocessing.StandardScaler().fit(x_data)
print(scaler.mean_, scaler.scale_)
x_data_standard = scaler.transform(x_data)

W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x_data_standard, W) + b
loss = tf.reduce_mean(tf.square(y - y_data.reshape(-1, 1))) / 2
optimizer = tf.train.GradientDescentOptimizer(0.3)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)
for step in range(100):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(W).flatten(), sess.run(b).flatten())
print("Coefficients of tensorflow (input should be standardized): K=%s, b=%s" % (sess.run(W).flatten(), sess.run(b).flatten()))
print("Coefficients of tensorflow (raw input): K=%s, b=%s" % (sess.run(W).flatten() / scaler.scale_, sess.run(b).flatten() - np.dot(scaler.mean_ / scaler.scale_, sess.run(W))))
  • Idea: for gradient descent, whether the variables are standardized matters a great deal. In this example one variable is the floor area and the other is the number of rooms, which differ by orders of magnitude; without normalization, the area dominates the objective function and the gradient, making convergence extremely slow. (A closed-form cross-check follows below.)
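As a sanity check, ordinary least squares also has a closed-form solution, so the coefficients can be verified without any gradient descent. A minimal NumPy sketch (not part of the original exercise; it reuses x_data and y_data from the script above):

import numpy as np

X = np.asarray(x_data)                        # raw (unstandardized) features, shape (100, 2)
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append a column of ones for the intercept
coef, res, rank, sv = np.linalg.lstsq(A, np.asarray(y_data).ravel())
print("Closed-form coefficients: K=%s, b=%f" % (coef[:2], coef[2]))

Standardization only changes the path gradient descent takes, not the model itself, so this closed-form answer on raw features should match the de-standardized tensorflow coefficients printed at the end of the script.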

Logistic regression

# -*- coding: utf-8 -*-
"""
Created on Wed Sep 6 20:13:15 2017
Data download: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=DeepLearning&doc=exercises/ex4/ex4.html
@author: Administrator
"""

import tensorflow as tf
import numpy as np
from numpy import mat
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing

# Read x and y
x_data = np.loadtxt("ex4Data/ex4x.dat").astype(np.float32)
y_data = np.loadtxt("ex4Data/ex4y.dat").astype(np.float32)
#x_data = [np.random.rand(100).astype(np.float32), np.random.rand(100).astype(np.float32) + 10]
#x_data = mat(x_data).T
#y_data = 5.3 + np.random.rand(100)

scaler = preprocessing.StandardScaler().fit(x_data)
x_data_standard = scaler.transform(x_data)

# We evaluate the x and y by sklearn to get a sense of the coefficients.
reg = LogisticRegression(C=999999999, solver="newton-cg")  # Set C as a large positive number to minimize the regularization effect
reg.fit(x_data, y_data)
print("Coefficients of sklearn: K=%s, b=%f" % (reg.coef_, reg.intercept_))

# Now we use tensorflow to get similar results.
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = 1 / (1 + tf.exp(-(tf.matmul(x_data_standard, W) + b)))  # sigmoid of the full linear score; note the parentheses around Wx + b
loss = tf.reduce_mean(- y_data.reshape(-1, 1) * tf.log(y) - (1 - y_data.reshape(-1, 1)) * tf.log(1 - y))
optimizer = tf.train.GradientDescentOptimizer(1.3)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)
for step in range(100):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(W).flatten(), sess.run(b).flatten())
print("Coefficients of tensorflow (input should be standardized): K=%s, b=%s" % (sess.run(W).flatten(), sess.run(b).flatten()))
print("Coefficients of tensorflow (raw input): K=%s, b=%s" % (sess.run(W).flatten() / scaler.scale_, sess.run(b).flatten() - np.dot(scaler.mean_ / scaler.scale_, sess.run(W))))

# Problem solved and we are happy. But...
# I'd like to implement the logistic regression from a multi-class viewpoint instead of binary.
# In the machine learning domain, it is called softmax regression.
# In the economics and statistics domain, it is called the multinomial logit (MNL) model,
# proposed by Daniel McFadden, who shared the 2000 Nobel Memorial Prize in Economic Sciences.
print("------------------------------------------------")
print("We solve this binary classification problem again from the viewpoint of multinomial classification")
print("------------------------------------------------")

# As a tradition, sklearn first
reg = LogisticRegression(C=9999999999, solver="newton-cg", multi_class="multinomial")
reg.fit(x_data, y_data)
print("Coefficients of sklearn: K=%s, b=%f" % (reg.coef_, reg.intercept_))
print("A slight difference at first glance. What if we multiply them by 2?")

# Then try tensorflow
W = tf.Variable(tf.zeros([2, 2]))  # first 2 is the number of features, second 2 is the number of classes
b = tf.Variable(tf.zeros([1, 2]))
V = tf.matmul(x_data_standard, W) + b
# tensorflow provides a utility function to calculate the probability of observation n choosing alternative i;
# you can replace it with `y = tf.exp(V) / tf.reduce_sum(tf.exp(V), keep_dims=True, reduction_indices=[1])`
y = tf.nn.softmax(V)

# Encode the y label in a one-hot manner
lb = preprocessing.LabelBinarizer()
lb.fit(y_data)
y_data_trans = lb.transform(y_data)
y_data_trans = np.concatenate((1 - y_data_trans, y_data_trans), axis=1)  # Only necessary for binary classes

loss = tf.reduce_mean(-tf.reduce_sum(y_data_trans * tf.log(y), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(1.3)
train = optimizer.minimize(loss)
init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)
for step in range(100):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(W).flatten(), sess.run(b).flatten())
print("Coefficients of tensorflow (input should be standardized): K=%s, b=%s" % (sess.run(W).flatten(), sess.run(b).flatten()))
print("Coefficients of tensorflow (raw input): K=%s, b=%s" % ((sess.run(W) / scaler.scale_).flatten(), sess.run(b).flatten() - np.dot(scaler.mean_ / scaler.scale_, sess.run(W))))
  • Thoughts:
  • For logistic regression, the loss function is somewhat more complex than for the linear model. First, the sigmoid function maps the linear regression output to a probability between 0 and 1. Then write down the probability (likelihood) of each sample; the probability of all samples together is the product of the per-sample probabilities. To make differentiation easier, we take the logarithm of this product, which preserves monotonicity while turning the product into a sum (the differentiation rule for sums is far simpler than the one for products). Maximum likelihood estimation maximizes the probability of the observed samples; machine learning convention calls the objective a loss, so the loss is defined as the negative log-likelihood, which turns the problem into a minimization. (The short NumPy sketch after this list walks through exactly these steps.)
  • When we say logistic regression we usually mean the binary case, but the same idea extends readily to multiple classes; in machine learning this is called softmax regression. The author's background is in statistics and econometrics, so it is referred to here as the MNL model.
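A small NumPy sketch mirroring that derivation, evaluated at the zero-weight initialization (it reuses x_data_standard and y_data from the logistic regression script; the zero weights are just an illustrative assumption):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = x_data_standard.dot(np.zeros((2, 1))) + 0.0  # linear score at zero weights and bias
p = sigmoid(z).ravel()                           # P(y = 1 | x) after the sigmoid
likelihood = np.where(y_data == 1, p, 1 - p)     # per-sample probability of the observed label
nll = -np.mean(np.log(likelihood))               # negative log-likelihood, i.e. the loss
print("Negative log-likelihood at initialization: %f" % nll)  # log(2) ~ 0.6931 at zero weights

This is the same quantity as the `loss` tensor in the tensorflow code above: the log turns the product of likelihoods into a mean of log terms, and the minus sign converts maximization into minimization.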
