import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt

# Hyper Parameters
TIME_STEP = 10 # rnn time step
INPUT_SIZE = 1 # rnn input size
LR = 0.02           # learning rate

# show data
steps = np.linspace(0, np.pi * 2, 100, dtype=np.float32) # float32 for converting torch FloatTensor
x_np = np.sin(steps)    # input
y_np = np.cos(steps)    # target
plt.plot(steps, y_np, 'r-', label='target (cos)')
plt.plot(steps, x_np, 'b-', label='input (sin)')
plt.legend(loc='best')
plt.show()


# Define the neural network.
# Every r_out has to be passed through the Linear layer to compute the predicted
# output, so a for loop can be used to calculate the output at each time step.
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.RNN(          # a plain RNN
            input_size=INPUT_SIZE,
            hidden_size=32,         # 32 hidden units in the rnn layer
            num_layers=1,           # number of rnn layers
            batch_first=True,       # input & output have batch size as the first dimension, e.g. (batch, time_step, input_size)
        )
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        # the hidden state is carried across calls, so it keeps being passed in and returned
        # x (batch, time_step, input_size)
        # h_state (n_layers, batch, hidden_size)
        # r_out (batch, time_step, hidden_size)
        r_out, h_state = self.rnn(x, h_state)    # h_state is also an input to the RNN

        outs = []                                # save the prediction at every time step
        for time_step in range(r_out.size(1)):   # calculate the output for each time step
            outs.append(self.out(r_out[:, time_step, :]))
        return torch.stack(outs, dim=1), h_state

        # instead, for simplicity, you can replace the code above with the following
        # r_out = r_out.view(-1, 32)
        # outs = self.out(r_out)
        # outs = outs.view(-1, TIME_STEP, 1)
        # return outs, h_state

        # or even simpler, since nn.Linear accepts inputs of any dimension
        # and returns outputs with the same dimensions except for the last
        # outs = self.out(r_out)
        # return outs, h_state


rnn = RNN()
print(rnn)
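The comments inside forward() point out that the per-time-step loop is not required: nn.Linear is applied to the last dimension of its input, so the 3-D r_out can be fed to the output layer in one call. A minimal sketch of that vectorized variant is shown below; the class name RNNVectorized is only for illustration and is not used by the training loop in this post.

class RNNVectorized(nn.Module):
    def __init__(self):
        super(RNNVectorized, self).__init__()
        self.rnn = nn.RNN(input_size=INPUT_SIZE, hidden_size=32,
                          num_layers=1, batch_first=True)
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        r_out, h_state = self.rnn(x, h_state)   # r_out: (batch, time_step, hidden_size)
        outs = self.out(r_out)                  # Linear maps the last dim 32 -> 1, giving (batch, time_step, 1)
        return outs, h_state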
# choose the optimizer
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all rnn parameters
# choose the loss function
loss_func = nn.MSELoss()

h_state = None      # for the initial hidden state

plt.figure(1, figsize=(12, 5))
plt.ion()           # continuously plot

for step in range(100):
    start, end = step * np.pi, (step + 1) * np.pi   # time range
    # use sin to predict cos
    steps = np.linspace(start, end, TIME_STEP, dtype=np.float32,
                        endpoint=False)   # float32 for converting torch FloatTensor
    x_np = np.sin(steps)
    y_np = np.cos(steps)

    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])   # shape (batch, time_step, input_size)
    y = torch.from_numpy(y_np[np.newaxis, :, np.newaxis])

    prediction, h_state = rnn(x, h_state)   # rnn output
    # !! next step is important !!
    h_state = h_state.data   # repack the hidden state, break the connection from the last iteration

    loss = loss_func(prediction, y)   # calculate loss
    optimizer.zero_grad()             # clear gradients for this training step
    loss.backward()                   # backpropagation, compute gradients
    optimizer.step()                  # apply gradients

    # plotting
    plt.plot(steps, y_np.flatten(), 'r-')
    plt.plot(steps, prediction.data.numpy().flatten(), 'b-')
    plt.draw()
    plt.pause(0.05)

plt.ioff()
plt.show()
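Once the loop finishes, the trained regressor can be tried on a fresh stretch of the sine curve, built exactly like the training batches. The snippet below is a small usage sketch, not part of the original script; it assumes rnn, TIME_STEP and the imports above are still in scope and starts from a zero hidden state.

test_steps = np.linspace(100 * np.pi, 101 * np.pi, TIME_STEP, dtype=np.float32, endpoint=False)
test_x = torch.from_numpy(np.sin(test_steps)[np.newaxis, :, np.newaxis])   # shape (1, TIME_STEP, 1)
with torch.no_grad():                  # no gradients needed for inference
    test_pred, _ = rnn(test_x, None)   # None -> start from a fresh (zero) hidden state
print(test_pred.numpy().flatten())     # predicted cos values for this segment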
