DeepLearning.ai - Week 1 - Convolution model - Step by Step
1 - Import Packages
import numpy as np
import h5py
import math
import matplotlib.pyplot as plt
%matplotlib inline
2 - Global Parameter Settings
plt.rcParams["figure.figsize"] = (5.0, 4.0) # 设置figure_size尺寸
plt.rcParams["image.interpolation"] = "nearest" # 设置插入风格
plt.rcParams["image.cmap"] = "gray" # 设置颜色风格
# 动态重载模块,模块修改时无需重新启动
%load_ext autoreload
%autoreload 2
# 随机数种子
np.random.seed(1)
3 - Convolutional Neural Networks
3.1 - Zero-padding
Given an input tensor X and a padding size pad, fill the border of each image with zeros. This is easy to implement with the np.pad function from numpy.
# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
    as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    ### START CODE HERE ### (≈ 1 line)
    # np.pad takes the tensor to pad, the per-dimension (before, after) pad widths,
    # the padding mode, and the constant value to use on each side of each dimension
    X_pad = np.pad(X,
                   ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                   "constant",
                   constant_values=((0, 0), (0, 0), (0, 0), (0, 0)))
    ### END CODE HERE ###

    return X_pad
np.random.seed(1)                 # seed the random number generator
x = np.random.randn(4, 3, 3, 2)   # random input tensor
x_pad = zero_pad(x, 2)            # zero-pad the input tensor x
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Result:
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
Out[7]:
<matplotlib.image.AxesImage at 0x242c35b4a58>
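As an optional quick check (a sketch using the x and x_pad computed above, with pad = 2): the interior of the padded tensor should be exactly the original x, and the added border should be all zeros.
# Optional sanity check (assumes x and x_pad from the cell above, pad = 2)
print(np.allclose(x_pad[:, 2:-2, 2:-2, :], x))   # True: the interior is the original x
print(np.all(x_pad[:, :2, :, :] == 0))           # True: the added border rows are zeros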
3.2 - Single step of convolution
Given an input slice a_slice_prev, compute the result of applying a filter W of the same shape and a bias b. numpy supports element-wise tensor multiplication, so the result is obtained by multiplying element-wise, summing all entries, and adding the bias.
# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
    of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice_prev and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
# Random a_slice_prev and a filter W of the same shape
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Result:
Z = -6.99908945068
3.3 - Convolutional Neural Networks - Forward pass
The output dimensions are related to the input dimensions as follows:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
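As a worked instance of these formulas (using the same numbers as the test cell further below): with $n_{H_{prev}} = n_{W_{prev}} = 4$, $f = 2$, $pad = 2$ and $stride = 2$, we get $n_H = n_W = \lfloor (4 - 2 + 2 \times 2)/2 \rfloor + 1 = 4$, and with 8 filters the output Z has shape $(m, 4, 4, 8)$.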
# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    # m        -- number of examples
    # n_H_prev -- height of the input tensor
    # n_W_prev -- width of the input tensor
    # n_C_prev -- number of channels of the input tensor
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈1 line)
    # f        -- height/width of the (square) filter
    # n_C_prev -- number of filter channels, equal to the number of input channels
    # n_C      -- number of filters, i.e. the number of output channels of this layer
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈2 lines): stride and padding size
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = math.floor((n_H_prev - f + 2 * pad) / stride) + 1
    n_W = math.floor((n_W_prev - f + 2 * pad) / stride) + 1

    # Initialize the output volume Z with zeros, using the shape computed above. (≈1 line)
    Z = np.zeros(shape=(m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev with zeros
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                           # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]               # select ith training example's padded activation
        for h in range(n_H):                     # loop over vertical axis of the output volume
            for w in range(n_W):                 # loop over horizontal axis of the output volume
                for c in range(n_C):             # loop over channels (= #filters) of the output volume
                    # Find the corners of the current "slice" (≈4 lines),
                    # i.e. locate the input window that produces this output position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice, then convolve it with the
                    # correct filter W and bias b to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :],
                                                     W[:, :, :, c], b[:, :, :, c])
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Result:
Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
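The shape of Z can also be checked directly against the dimension formulas above (a one-line check using the Z just computed):
print(Z.shape)   # (10, 4, 4, 8): floor((4 - 2 + 2*2)/2) + 1 = 4 in both spatial dimensions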
4 - Pooling layer
4.1 - Forward Pooling
Implement both MAX-POOL and AVG-POOL. There is no padding, so the output dimensions are related to the input dimensions as follows:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
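For example, pooling a $4 \times 4$ input with $f = 4$ and $stride = 1$ (the setting used in the test cell below) gives $n_H = n_W = \lfloor (4 - 4)/1 \rfloor + 1 = 1$, so each channel collapses to a single value and the output has shape $(m, 1, 1, n_{C_{prev}})$.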
# GRADED FUNCTION: pool_forward

def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer

    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
    """
    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve hyperparameters from "hparameters": pooling window size f and stride
    f = hparameters["f"]
    stride = hparameters["stride"]

    # Define the dimensions of the output using the formulas above
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev

    # Initialize output matrix A with zeros, using the shape computed above
    A = np.zeros((m, n_H, n_W, n_C))

    ### START CODE HERE ###
    for i in range(m):                           # loop over the training examples
        for h in range(n_H):                     # loop on the vertical axis of the output volume
            for w in range(n_W):                 # loop on the horizontal axis of the output volume
                for c in range (n_C):            # loop over the channels of the output volume
                    # Find the corners of the current "slice" (≈4 lines),
                    # i.e. the input window that produces this output position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)       # MAX-POOL: take the maximum of the window
                    elif mode == "average":
                        A[i, h, w, c] = np.average(a_prev_slice)   # AVG-POOL: take the mean of the window
    ### END CODE HERE ###

    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)

    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))

    return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 1, "f": 4} A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
Result:
mode = max
A = [[[[ 1.74481176  1.6924546   2.10025514]]]
 [[[ 1.19891788  1.51981682  2.18557541]]]]

mode = average
A = [[[[-0.09498456  0.11180064 -0.14263511]]]
 [[[-0.09525108  0.28325018  0.33035185]]]]
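A small optional check (a sketch reusing A_prev, hparameters and pool_forward from above): because f = 4 covers the entire 4×4 input here, each max-pool output entry is simply the per-channel maximum over the whole image.
# Optional check (assumes A_prev, hparameters and pool_forward above)
A_max, _ = pool_forward(A_prev, hparameters)
print(np.allclose(A_max[:, 0, 0, :], A_prev.max(axis=(1, 2))))   # True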
5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
5.1 - Convolutional layer backward pass
5.1.1 - Computing dA
For a fixed filter $W_c$ and a given training example, the gradient satisfies:
$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
5.1.2 - Computing dW
$dW_c$ is computed as:
$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
where $a_{slice}$ is the input window that was used to generate the activation $Z_{ij}$.
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
5.1.3 - Computing db
For a fixed filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
db[:,:,:,c] += dZ[i, h, w, c]
def conv_backward(dZ, cache):
    """
    Implement the backward propagation for a convolution function

    Arguments:
    dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward(), output of conv_forward()

    Returns:
    dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
               numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    dW -- gradient of the cost with respect to the weights of the conv layer (W)
          numpy array of shape (f, f, n_C_prev, n_C)
    db -- gradient of the cost with respect to the biases of the conv layer (b)
          numpy array of shape (1, 1, 1, n_C)
    """
    ### START CODE HERE ###
    # Retrieve information from "cache": the layer input, filters, biases and hyperparameters
    (A_prev, W, b, hparameters) = cache

    # Retrieve dimensions from A_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters": stride and padding size
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Retrieve dimensions from dZ's shape
    (m, n_H, n_W, n_C) = dZ.shape

    # Initialize dA_prev, dW, db as all-zero tensors with the shapes of A_prev, W, b
    dA_prev = np.zeros(A_prev.shape)
    dW = np.zeros(W.shape)
    db = np.zeros(b.shape)

    # Pad A_prev and dA_prev with zeros
    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):                           # loop over the training examples
        # Select the ith training example from A_prev_pad and dA_prev_pad
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]
        for h in range(n_H):                     # loop over vertical axis of the output volume
            for w in range(n_W):                 # loop over horizontal axis of the output volume
                for c in range(n_C):             # loop over the channels of the output volume
                    # Find the corners of the current "slice",
                    # i.e. the input window that produced this output position
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the slice from a_prev_pad
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Update gradients for the window and the filter's parameters using the formulas given above
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
                    dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
                    db[:,:,:,c] += dZ[i, h, w, c]
        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad]
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))

    return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Result:
dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
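As an optional numerical sanity check (not part of the assignment), conv_backward can be compared against a finite-difference estimate on a tiny input. This is only a sketch: the small tensors A_small, W_small, b_small and the loss L = sum(Z) are made up for illustration, and since the convolution is linear in A_prev the two values should agree up to floating-point error.
# Optional finite-difference check (hypothetical small tensors; loss L = sum(Z), so dL/dZ is all ones)
np.random.seed(1)
A_small = np.random.randn(1, 3, 3, 1)
W_small = np.random.randn(2, 2, 1, 1)
b_small = np.random.randn(1, 1, 1, 1)
hp = {"pad": 1, "stride": 1}
Z_small, cache_small = conv_forward(A_small, W_small, b_small, hp)
dA_small, _, _ = conv_backward(np.ones_like(Z_small), cache_small)   # analytic dL/dA_prev
eps = 1e-5
A_plus = A_small.copy()
A_plus[0, 1, 1, 0] += eps                                            # perturb one input entry
Z_plus, _ = conv_forward(A_plus, W_small, b_small, hp)
numeric = (np.sum(Z_plus) - np.sum(Z_small)) / eps                   # finite-difference estimate
print(np.isclose(numeric, dA_small[0, 1, 1, 0]))                     # expected True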
5.2 - Pooling layer - backward pass
5.2.1 - Max pooling - backward pass
The create_mask_from_window() function identifies the position of the maximum value in a window, for example:
$$ X = \begin{bmatrix}
1 & 3 \\
4 & 2
\end{bmatrix} \quad \rightarrow \quad M = \begin{bmatrix}
0 & 0 \\
1 & 0
\end{bmatrix}\tag{4}$$
def create_mask_from_window(x):
    """
    Creates a mask from an input matrix x, to identify the max entry of x.

    Arguments:
    x -- Array of shape (f, f)

    Returns:
    mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
    """
    ### START CODE HERE ### (≈1 line)
    # True (i.e. 1) where x equals its maximum, False (i.e. 0) everywhere else
    mask = (x == np.max(x))
    ### END CODE HERE ###

    return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
Result:
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
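A quick optional check using the x and mask just computed: since the mask is 1 only at the position of the maximum, multiplying it with x and summing recovers the maximum itself, which is exactly how the gradient is routed in max-pooling backprop.
# Optional check (assumes x and mask from the cell above)
print(np.sum(mask * x))   # 1.62434536..., i.e. np.max(x)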
5.2.2 - Average pooling - backward pass
In average pooling, every element of the input window contributes equally to the output, so for a known $dZ$ the gradient is distributed equally over the window, for example:
$$ dZ = 1 \quad \rightarrow \quad dZ = \begin{bmatrix}
1/4 & 1/4 \\
1/4 & 1/4
\end{bmatrix}\tag{5}$$
def distribute_value(dz, shape):
    """
    Distributes the input value in the matrix of dimension shape

    Arguments:
    dz -- input scalar
    shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz

    Returns:
    a -- Array of size (n_H, n_W) for which we distributed the value of dz
    """
    ### START CODE HERE ###
    # Retrieve dimensions from shape (≈1 line)
    (n_H, n_W) = shape
    # Compute the value to distribute on the matrix (≈1 line)
    average = dz / (n_H * n_W)
    # Create a matrix where every entry is the "average" value (≈1 line)
    a = np.zeros(shape) + average
    ### END CODE HERE ###

    return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
Result:
distributed value = [[ 0.5 0.5]
[ 0.5 0.5]]
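A quick optional check: because dz is spread evenly over the window, the distributed entries must sum back to dz.
# Optional check (assumes distribute_value above)
print(np.sum(distribute_value(2, (2, 2))))   # 2.0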
5.2.3 - Putting it together: Pooling backward
Implement the pooling backward pass pool_backward(), using an if/elif to support both the "max" and the "average" mode. In the "average" mode, call distribute_value(); in the "max" mode, call create_mask_from_window() and multiply the resulting mask by the corresponding entry of dZ.
def pool_backward(dA, cache, mode = "max"):
    """
    Implements the backward pass of the pooling layer

    Arguments:
    dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
    cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
    """
    ### START CODE HERE ###
    # Retrieve information from cache (≈1 line): the layer input and the hyperparameters
    (A_prev, hparameters) = cache

    # Retrieve hyperparameters from "hparameters" (≈2 lines)
    stride = hparameters["stride"]
    f = hparameters["f"]

    # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
    m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
    m, n_H, n_W, n_C = dA.shape

    # Initialize dA_prev with zeros, same shape as A_prev (≈1 line)
    dA_prev = np.zeros(A_prev.shape)

    for i in range(m):                           # loop over the training examples
        # Select the ith training example from A_prev (≈1 line)
        a_prev = A_prev[i]
        for h in range(n_H):                     # loop on the vertical axis
            for w in range(n_W):                 # loop on the horizontal axis
                for c in range(n_C):             # loop over the channels (depth)
                    # Find the corners of the current "slice" (≈4 lines),
                    # i.e. the input window that produced this position of dA
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Compute the backward propagation in both modes.
                    if mode == "max":
                        # Use the corners and "c" to define the current slice from a_prev (≈1 line)
                        a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
                        # Create the mask from a_prev_slice (≈1 line)
                        mask = create_mask_from_window(a_prev_slice)
                        # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
                        dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]
                    elif mode == "average":
                        # Get the value da from dA (≈1 line)
                        da = dA[i, h, w, c]
                        # Define the shape of the filter as fxf (≈1 line)
                        shape = (f, f)
                        # Distribute it to get the correct slice of dA_prev, i.e. add the distributed value of da. (≈1 line)
                        dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
    ### END CODE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == A_prev.shape)

    return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)

dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Result:
mode = max
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]

mode = average
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0.08485462 0.2787552 ]
[ 1.26461098 -0.25749373]
[ 1.17975636 -0.53624893]]
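An optional consistency check (a sketch using the dA and dA_prev from the cell above): pooling backprop only redistributes gradient within each window, so the total gradient is preserved in both modes.
# Optional check (assumes dA and the dA_prev just computed)
print(np.isclose(np.sum(dA_prev), np.sum(dA)))   # expected True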
6 - References
https://web.stanford.edu/class/cs230/