【Deep Learning】Andrew Ng's NetEase Open Course Exercises (class1 week3)
Summary of key points
Python tools used:
- sklearn: data-mining and data-analysis library with a built-in logistic regression (a baseline sketch follows the contour example below)
- matplotlib: plotting library; can draw contour plots, among other things
- Scatter plot: plt.scatter(X[0, :], X[1, :], c=np.squeeze(Y), s=40, cmap=plt.cm.Spectral); s: marker size, cmap: colormap
- Contour plot: first build a grid, compute the prediction at each grid point, then draw:
x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
h = 0.01
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = model(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# xx holds the x-axis values, yy the y-axis values, Z the prediction at each grid point; cmap sets the color scheme
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
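The built-in logistic regression mentioned above makes a handy linear baseline for this week's planar dataset. A minimal sketch, assuming X of shape (2, m) and Y of shape (1, m) as returned by load_planar_dataset further below:
import numpy as np
import sklearn.linear_model
# Assumes X: (2, m) features and Y: (1, m) labels from load_planar_dataset (defined later in this post).
clf = sklearn.linear_model.LogisticRegressionCV()  # cross-validated logistic regression
clf.fit(X.T, Y.ravel())                            # sklearn expects one example per row
LR_predictions = clf.predict(X.T)
print("Logistic regression accuracy: %.1f%%" % (100 * np.mean(LR_predictions == Y.ravel())))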
Key variables:
- m: number of training examples
- n_x: number of input features per training example, i.e. the input layer size
- n_h: hidden layer size
- Superscript [l] in square brackets: layer l
- Superscript (i) in parentheses: the i-th training example
$$
X =
\left[
\begin{matrix}
\vdots & \vdots & \vdots & \vdots \\
x^{(1)} & x^{(2)} & \cdots & x^{(m)} \\
\vdots & \vdots & \vdots & \vdots \\
\end{matrix}
\right]_{(n\_x, m)}
$$
$$
W^{[1]} =
\left[
\begin{matrix}
\cdots & w^{[1] T}_1 & \cdots \\
\cdots & w^{[1] T}_2 & \cdots \\
 & \vdots & \\
\cdots & w^{[1] T}_{n\_h} & \cdots \\
\end{matrix}
\right]_{(n\_h, n\_x)}
$$
$$
b^{[1]} =
\left[
\begin{matrix}
b^{[1]}_1 \\
b^{[1]}_2 \\
\vdots \\
b^{[1]}_{n\_h} \\
\end{matrix}
\right]_{(n\_h, 1)}
$$
$$
A^{[1]}=
\left[
\begin{matrix}
\vdots & \vdots & \vdots & \vdots \\
a^{[1](1)} & a^{[1](2)} & \cdots & a^{[1](m)} \\
\vdots & \vdots & \vdots & \vdots \\
\end{matrix}
\right]_{(n\_h, m)}
$$
$$
Z^{[1]}=
\left[
\begin{matrix}
\vdots & \vdots & \vdots & \vdots \\
z^{[1](1)} & z^{[1](2)} & \cdots & z^{[1](m)} \\
\vdots & \vdots & \vdots & \vdots \\
\end{matrix}
\right]_{(n\_h, m)}
$$
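Getting these shapes right is the main bookkeeping task of the exercise. A quick numpy check with hypothetical sizes (n_x=2, n_h=4, m=400, chosen only for illustration):
import numpy as np
n_x, n_h, m = 2, 4, 400            # hypothetical sizes for illustration
X = np.random.randn(n_x, m)        # each column is one example x^{(i)}
W1 = np.random.randn(n_h, n_x)     # one row per hidden unit
b1 = np.zeros((n_h, 1))            # broadcast across the m columns
Z1 = np.dot(W1, X) + b1
print(Z1.shape)                    # (4, 400), i.e. (n_h, m) as in the Z^{[1]} matrix above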
***
Key formulas for a single-hidden-layer neural network:
- Forward propagation:
$$Z^{[1]}=W^{[1]}X+b^{[1]}$$
$$A^{[1]}=g^{[1]}(Z^{[1]})$$
$$Z^{[2]}=W^{[2]}A^{[1]}+b^{[2]}$$
$$A^{[2]}=g^{[2]}(Z^{[2]})$$
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
- Backward propagation:
dZ2 = A2 - Y
dW2 = 1 / m * np.dot(dZ2, A1.T)
db2 = 1 / m * np.sum(dZ2, axis = 1, keepdims = True)
dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
dW1 = 1 / m * np.dot(dZ1, X.T)
db1 = 1 / m * np.sum(dZ1, axis = 1, keepdims = True)
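These six lines implement the chain-rule gradients of the cross-entropy cost, using the sigmoid output layer and the fact that the derivative of tanh satisfies g'(z) = 1 - a², which is where the (1 - A1²) factor comes from:
$$dZ^{[2]}=A^{[2]}-Y$$
$$dW^{[2]}=\frac{1}{m}dZ^{[2]}A^{[1]T}, \quad db^{[2]}=\frac{1}{m}\sum_{i=1}^{m}dZ^{[2](i)}$$
$$dZ^{[1]}=W^{[2]T}dZ^{[2]}*\left(1-\left(A^{[1]}\right)^{2}\right)$$
$$dW^{[1]}=\frac{1}{m}dZ^{[1]}X^{T}, \quad db^{[1]}=\frac{1}{m}\sum_{i=1}^{m}dZ^{[1](i)}$$
where * denotes element-wise multiplication and the sums run over the columns (one column per example).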
- Cost computation:
$$J = -\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log a^{[2](i)} + \left(1-y^{(i)}\right)\log\left(1-a^{[2](i)}\right)\right)$$
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost = - 1 / m * np.sum(logprobs)
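A hedged refinement that is not part of the original assignment (with numpy imported as np): if A2 saturates to exactly 0 or 1, np.log blows up, so clipping the predictions first avoids NaNs in the cost.
def compute_cost_stable(A2, Y, eps=1e-15):
    # Not in the original assignment: clip predictions away from 0/1 before taking logs.
    m = Y.shape[1]
    A2 = np.clip(A2, eps, 1 - eps)
    logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
    return float(-np.sum(logprobs) / m)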
Single-hidden-layer neural network code:
# Package imports
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
def initialize_parameters(n_x, n_h, n_y):
    np.random.seed(2)  # set a seed so that the output is reproducible even though the initialization is random
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01  # scaled like W1 to keep the initial activations small
    b2 = np.zeros((n_y, 1))
    assert (W1.shape == (n_h, n_x))
    assert (b1.shape == (n_h, 1))
    assert (W2.shape == (n_y, n_h))
    assert (b2.shape == (n_y, 1))
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    return parameters
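A minimal sanity check of the shapes produced by initialize_parameters (the sizes n_x=2, n_h=4, n_y=1 are simply the ones used later in this exercise):
parameters = initialize_parameters(2, 4, 1)
print(parameters["W1"].shape)  # (4, 2)
print(parameters["b1"].shape)  # (4, 1)
print(parameters["W2"].shape)  # (1, 4)
print(parameters["b2"].shape)  # (1, 1)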
def forward_propagation(X, parameters):
    # Retrieve each parameter from the dictionary "parameters"
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    # Implement forward propagation to calculate A2 (probabilities)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)
    assert(A2.shape == (1, X.shape[1]))
    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}
    return A2, cache
def compute_cost(A2, Y, parameters):
    m = Y.shape[1]  # number of examples
    # Compute the cross-entropy cost
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
    cost = - 1 / m * np.sum(logprobs)
    cost = np.squeeze(cost)  # makes sure cost has the dimension we expect, e.g. turns [[17]] into 17
    assert(isinstance(cost, float))
    return cost
def backward_propagation(parameters, cache, X, Y):
    m = X.shape[1]
    # First, retrieve W1 and W2 from the dictionary "parameters".
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    # Retrieve also A1 and A2 from dictionary "cache".
    A1 = cache["A1"]
    A2 = cache["A2"]
    # Backward propagation: calculate dW1, db1, dW2, db2.
    dZ2 = A2 - Y
    dW2 = 1 / m * np.dot(dZ2, A1.T)
    db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))  # tanh'(z) = 1 - tanh(z)^2 = 1 - A1^2
    dW1 = 1 / m * np.dot(dZ1, X.T)
    db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)
    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}
    return grads
def update_parameters(parameters, grads, learning_rate=0.8):
    # Retrieve each parameter from the dictionary "parameters"
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    # Retrieve each gradient from the dictionary "grads"
    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]
    # Gradient descent update for each parameter
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    return parameters
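Each call performs one gradient-descent step; in formula form (α is the learning rate, 0.8 by default here):
$$W^{[l]} := W^{[l]} - \alpha\,dW^{[l]}, \quad b^{[l]} := b^{[l]} - \alpha\,db^{[l]}, \quad l = 1, 2$$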
def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
    np.random.seed(3)
    n_x = X.shape[0]
    n_y = Y.shape[0]
    # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs: "W1, b1, W2, b2, parameters".
    parameters = initialize_parameters(n_x, n_h, n_y)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        A2, cache = forward_propagation(X, parameters)
        # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
        cost = compute_cost(A2, Y, parameters)
        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        grads = backward_propagation(parameters, cache, X, Y)
        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        parameters = update_parameters(parameters, grads)
        # Print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters
def predict(parameters, X):
    # Compute probabilities with forward propagation, then classify to 0/1 using 0.5 as the threshold.
    A2, cache = forward_propagation(X, parameters)
    predictions = A2 > 0.5
    return predictions
X, Y = load_planar_dataset()
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, np.squeeze(Y))
plt.title("Decision Boundary for hidden layer size " + str(4))
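As a hedged follow-up (the exact print statement may differ from the original notebook), the trained model can then be scored on the training set; predictions and Y are both (1, m) arrays of 0/1 values here:
predictions = predict(parameters, X)            # boolean array of shape (1, m)
accuracy = float(np.mean(predictions == Y) * 100)
print("Accuracy: %.1f%%" % accuracy)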
# planar_utils.py
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
def plot_decision_boundary(model, X, y):
    # Set min and max values and give them some padding
    x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
    y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
    h = 0.01
    # Generate a grid of points with distance h between them (the whole region at 0.01 spacing)
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for every grid point
    Z = model(np.c_[xx.ravel(), yy.ravel()])
    # Reshape the predictions back to the grid shape
    Z = Z.reshape(xx.shape)
    # Plot the filled contour and the training examples
    # xx holds the x-axis values, yy the y-axis values, Z the predictions; cmap sets the color scheme
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)  # filled contour
    plt.ylabel('x2')
    plt.xlabel('x1')
    plt.scatter(X[0, :], X[1, :], c=y, cmap=plt.cm.Spectral)
def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(x)
    """
    s = 1 / (1 + np.exp(-x))
    return s
def load_planar_dataset():
    np.random.seed(1)
    m = 400  # number of examples
    N = int(m / 2)  # number of points per class
    D = 2  # dimensionality
    X = np.zeros((m, D))  # data matrix where each row is a single example
    Y = np.zeros((m, 1), dtype='uint8')  # labels vector (0 for red, 1 for blue)
    a = 4  # maximum ray of the flower
    for j in range(2):
        ix = range(N * j, N * (j + 1))
        t = np.linspace(j * 3.12, (j + 1) * 3.12, N) + np.random.randn(N) * 0.2  # theta
        r = a * np.sin(4 * t) + np.random.randn(N) * 0.2  # radius
        X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
        Y[ix] = j
    X = X.T
    Y = Y.T
    return X, Y
def load_extra_datasets():
    N = 200
    noisy_circles = sklearn.datasets.make_circles(n_samples=N, factor=.5, noise=.3)
    noisy_moons = sklearn.datasets.make_moons(n_samples=N, noise=.2)
    blobs = sklearn.datasets.make_blobs(n_samples=N, random_state=5, n_features=2, centers=6)
    gaussian_quantiles = sklearn.datasets.make_gaussian_quantiles(mean=None, cov=0.5, n_samples=N, n_features=2, n_classes=2, shuffle=True, random_state=None)
    no_structure = np.random.rand(N, 2), np.random.rand(N, 2)
    return noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure
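A hedged usage sketch for load_extra_datasets: pick one of the binary datasets (e.g. noisy_moons) and reshape it into the (n_x, m) / (1, m) layout the model above expects.
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
X, Y = noisy_moons                    # each of these is a (data, labels) tuple
X, Y = X.T, Y.reshape(1, Y.shape[0])  # transpose to (n_x, m) and reshape labels to (1, m)
parameters = nn_model(X, Y, n_h=4, num_iterations=10000, print_cost=True)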