Simple tutorial for using TensorFlow to compute polynomial regression
"""Simple tutorial for using TensorFlow to compute polynomial regression. Parag K. Mital, Jan. 2016""" # %% Imports import numpy as np import tensorflow as tf import matplotlib.pyplot as plt # %% Let's create some toy data plt.ion() n_observations = 100 fig, ax = plt.subplots(1, 1) xs = np.linspace(-3, 3, n_observations) ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations) ax.scatter(xs, ys) fig.show() plt.draw() # %% tf.placeholders for the input and output of the network. Placeholders are # variables which we need to fill in when we are ready to compute the graph. X = tf.placeholder(tf.float32) Y = tf.placeholder(tf.float32) # %% Instead of a single factor and a bias, we'll create a polynomial function # of different polynomial degrees. We will then learn the influence that each # degree of the input (X^0, X^1, X^2, ...) has on the final output (Y). Y_pred = tf.Variable(tf.random_normal([1]), name='bias') for pow_i in range(1, 5): W = tf.Variable(tf.random_normal([1]), name='weight_%d' % pow_i) Y_pred = tf.add(tf.mul(tf.pow(X, pow_i), W), Y_pred) # %% Loss function will measure the distance between our observations # and predictions and average over them. cost = tf.reduce_sum(tf.pow(Y_pred - Y, 2)) / (n_observations - 1) # %% if we wanted to add regularization, we could add other terms to the cost, # e.g. ridge regression has a parameter controlling the amount of shrinkage # over the norm of activations. the larger the shrinkage, the more robust # to collinearity. # cost = tf.add(cost, tf.mul(1e-6, tf.global_norm([W]))) # %% Use gradient descent to optimize W,b # Performs a single step in the negative gradient learning_rate = 0.01 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # %% We create a session to use the graph n_epochs = 1000 with tf.Session() as sess: # Here we tell tensorflow that we want to initialize all # the variables in the graph so we can use them sess.run(tf.initialize_all_variables()) # Fit all training data prev_training_cost = 0.0 for epoch_i in range(n_epochs): for (x, y) in zip(xs, ys): sess.run(optimizer, feed_dict={X: x, Y: y}) training_cost = sess.run( cost, feed_dict={X: xs, Y: ys}) print(training_cost) if epoch_i % 100 == 0: ax.plot(xs, Y_pred.eval( feed_dict={X: xs}, session=sess), 'k', alpha=epoch_i / n_epochs) fig.show() plt.draw() # Allow the training to quit if we've reached a minimum if np.abs(prev_training_cost - training_cost) < 0.000001: break prev_training_cost = training_cost ax.set_ylim([-3, 3]) fig.show() plt.waitforbuttonpress()
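Because the model is linear in its parameters, the same degree-4 fit can also be obtained in closed form with ordinary least squares. The snippet below is not part of the original tutorial; it is only a sketch for sanity-checking the gradient-descent result against NumPy's `polyfit` on the same `xs`, `ys` toy data.

```python
import numpy as np

# Closed-form least-squares fit of a degree-4 polynomial to the toy data.
coeffs = np.polyfit(xs, ys, deg=4)   # coefficients, highest degree first
ys_fit = np.polyval(coeffs, xs)

mse = np.mean((ys_fit - ys) ** 2)
print('closed-form coefficients:', coeffs)
print('closed-form MSE:', mse)
```

The training cost reported by the TensorFlow loop should approach this value (up to the tutorial's division by `n_observations - 1` rather than `n_observations`).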
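The tutorial targets the pre-1.0 TensorFlow API (`tf.mul`, `tf.placeholder`, `tf.initialize_all_variables`), which no longer exists in TensorFlow 2.x. If you only have TF 2 installed, a rough eager-mode equivalent of the same model might look like the sketch below; it is an approximation under that assumption, not the original author's code. Adam is used instead of plain gradient descent because the full-batch gradients from the higher powers of x are large enough to make SGD at the tutorial's learning rate unstable.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

n_observations = 100
xs = np.linspace(-3, 3, n_observations).astype(np.float32)
ys = (np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations)).astype(np.float32)

# One bias plus one weight per power of x, mirroring the graph built above.
bias = tf.Variable(tf.random.normal([1]))
weights = [tf.Variable(tf.random.normal([1])) for _ in range(1, 5)]
params = [bias] + weights

def predict(x):
    # Y_pred = bias + w1*x + w2*x^2 + w3*x^3 + w4*x^4
    y = bias
    for pow_i, w in zip(range(1, 5), weights):
        y = y + w * tf.pow(x, pow_i)
    return y

optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
for epoch in range(1000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(predict(xs) - ys))
    grads = tape.gradient(loss, params)
    optimizer.apply_gradients(zip(grads, params))
    if epoch % 100 == 0:
        print(epoch, float(loss))
```

The loss should settle near the variance of the injected uniform noise; plotting `predict(xs)` against the scatter of `xs`, `ys` gives the same kind of fitted curve as the original session-based loop.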