"""Simple tutorial for using TensorFlow to compute a linear regression.

Parag K. Mital, Jan. 2016"""
# %% imports
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# %% Let's create some toy data
plt.ion()  # enable interactive mode
n_observations = 100
fig, ax = plt.subplots(1, 1)
xs = np.linspace(-3, 3, n_observations)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)
ax.scatter(xs, ys)
fig.show()
plt.draw()

# %% tf.placeholders for the input and output of the network. Placeholders are
# graph inputs which we fill in with concrete values when we run the graph.
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
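# A minimal sketch of how they are filled in (the same pattern appears in the
# training loop below): concrete values are supplied through feed_dict at run
# time, e.g.
# sess.run(cost, feed_dict={X: xs, Y: ys})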

# %% We will try to optimize min_(W,b) ||(X*w + b) - y||^2
# The `Variable()` constructor requires an initial value for the variable,
# which can be a `Tensor` of any type and shape. The initial value defines the
# type and shape of the variable. After construction, the type and shape of
# the variable are fixed. The value can be changed using one of the assign
# methods.
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
Y_pred = tf.add(tf.multiply(X, W), b)
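# A minimal sketch of the `assign` mechanism mentioned above (illustrative
# only, not used in training): `assign` returns an op that must be run inside
# a session before the variable's value actually changes.
# reset_W = W.assign(tf.zeros([1]))
# sess.run(reset_W)  # after this, W holds [0.0]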

# %% The loss function measures the squared distance between our observations
# and predictions, averaged over the observations.
cost = tf.reduce_sum(tf.pow(Y_pred - Y, 2)) / (n_observations - 1)
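# For intuition, the same quantity written in plain NumPy (a sketch, not used
# below; y_pred_np stands for the model's predictions evaluated as an array):
# np.sum((y_pred_np - ys) ** 2) / (n_observations - 1)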

# %% If we wanted to add regularization, we could add other terms to the cost;
# e.g. ridge regression has a parameter controlling the amount of shrinkage
# applied to the norm of the weights. The larger the shrinkage, the more robust
# the fit is to collinearity.
# cost = tf.add(cost, tf.multiply(1e-6, tf.global_norm([W])))
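# A common alternative (a sketch, not part of the original tutorial): penalize
# the squared L2 norm of the weights with tf.nn.l2_loss, which computes
# sum(W ** 2) / 2.
# cost = cost + 1e-6 * tf.nn.l2_loss(W)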

# %% Use gradient descent to optimize W, b
# Each run of the optimizer op performs a single step in the negative gradient
# direction.
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
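# minimize() bundles gradient computation and the parameter update; an
# equivalent two-step sketch (not used below) would be:
# opt = tf.train.GradientDescentOptimizer(learning_rate)
# grads_and_vars = opt.compute_gradients(cost)
# optimizer = opt.apply_gradients(grads_and_vars)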

# %% We create a session to use the graph
n_epochs = 1000
with tf.Session() as sess:
    # Here we tell tensorflow that we want to initialize all
    # the variables in the graph so we can use them
    sess.run(tf.global_variables_initializer())

    # Fit all training data
    prev_training_cost = 0.0
    for epoch_i in range(n_epochs):
        for (x, y) in zip(xs, ys):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        training_cost = sess.run(
            cost, feed_dict={X: xs, Y: ys})
        print(training_cost)

        if epoch_i % 20 == 0:
            ax.plot(xs, Y_pred.eval(
                feed_dict={X: xs}, session=sess),
                    'k', alpha=epoch_i / n_epochs)
            fig.show()
            plt.draw()

        # Allow the training to quit if we've reached a minimum
        if np.abs(prev_training_cost - training_cost) < 0.000001:
            break
        prev_training_cost = training_cost
fig.show()
plt.waitforbuttonpress()
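
# %% For comparison (a sketch, not part of the tutorial above): NumPy's polyfit
# gives the closed-form least-squares line through the same data, which can be
# checked against the W and b learned by gradient descent.
# w_ls, b_ls = np.polyfit(xs, ys, 1)
# print('closed-form fit: W=%.3f, b=%.3f' % (w_ls, b_ls))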
