Course 2, Week 3: TensorFlow Introduction

Introduction to TensorFlow

TensorFlow 2.3 has made significant improvements over its predecessor, some of which you'll encounter and implement here!

By the end of this assignment, you'll be able to do the following in TensorFlow 2.3:

  • Use tf.Variable to modify the state of a variable
  • Explain the difference between a variable and a constant
  • Train a Neural Network on a TensorFlow dataset

Programming frameworks like TensorFlow not only cut down on time spent coding, but can also perform optimizations that speed up the code itself.

1 - Packages

import h5py
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.python.ops.resource_variable_ops import ResourceVariable
import time

1.1 - Checking TensorFlow Version

You will be using v2.3 for this assignment, for maximum speed and efficiency.

tf.__version__

2 - Basic Optimization with GradientTape

The beauty of TensorFlow 2 is in its simplicity. Basically, all you need to do is implement forward propagation through a computational graph. TensorFlow will compute the derivatives for you, by moving backwards through the graph recorded with GradientTape. All that's left for you to do then is specify the cost function and optimizer you want to use!
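For instance, here is a minimal sketch of that idea on a toy computation (the values are arbitrary and only for illustration): GradientTape records the forward pass, and tape.gradient walks backwards through it.

import tensorflow as tf

w = tf.Variable(3.0)
x = tf.constant(2.0)

with tf.GradientTape() as tape:
    # Forward pass: the tape records every operation applied to trainable variables.
    y = w * x + 1.0          # y = 3*2 + 1 = 7

# Moving backwards through the recorded graph gives dy/dw = x = 2.
dy_dw = tape.gradient(y, w)
print(dy_dw.numpy())          # 2.0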

When writing a TensorFlow program, the main object you use and transform is the tf.Tensor. These tensors are the TensorFlow equivalent of Numpy arrays, i.e. multidimensional arrays of a given data type that also carry information about the computational graph.

Below, you'll use tf.Variable to store the state of your variables. A variable can only be created once, since its initial value defines its shape and type. Additionally, the dtype argument of tf.Variable can be set so that the data is converted to that type; if none is specified, the datatype is kept when the initial value is already a Tensor, and otherwise convert_to_tensor decides. It's generally best to specify it directly, so nothing breaks!
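As a small illustration (the values here are arbitrary), the dtype is fixed at creation time and assign changes the value in place:

import numpy as np
import tensorflow as tf

# dtype given explicitly, so the numpy float64 data is converted to float32
v = tf.Variable(np.array([1.0, 2.0, 3.0]), dtype=tf.float32)
print(v.dtype, v.shape)        # <dtype: 'float32'> (3,)

# assign changes the value in place; the shape and dtype stay fixed
v.assign([4.0, 5.0, 6.0])
print(v.numpy())               # [4. 5. 6.]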

Here you'll create TensorFlow Datasets from an HDF5 file, which you can use in place of Numpy arrays to store your datasets. You can think of this as a TensorFlow data generator!

You will use the hand signs dataset, which is composed of images of shape 64x64x3.

train_dataset = h5py.File('datasets/train_signs.h5', "r")
test_dataset = h5py.File('datasets/test_signs.h5', "r")

x_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_x'])
y_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_y'])

x_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_x'])
y_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_y'])

type(x_train)

Since TensorFlow Datasets are generators, you can't directly access their contents unless you iterate over them in a for loop, or explicitly create a Python iterator using iter and consume its elements using next. Also, you can inspect the shape and dtype of each element using the element_spec attribute.
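For example, a quick sketch using the x_train and y_train datasets created above (the 64x64x3 shape follows from the images described earlier):

# Shape and dtype of each element
print(x_train.element_spec)
print(y_train.element_spec)

# Consume one element explicitly with iter/next...
first_image = next(iter(x_train))
print(first_image.shape)               # (64, 64, 3)

# ...or iterate in a for loop (take(3) limits the iteration to 3 elements)
for label in y_train.take(3):
    print(label.numpy())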

The dataset that you'll be using during this assignment is a subset of the sign language digits. It contains six different classes representing the digits from 0 to 5.

unique_labels = set()
for element in y_train:
    unique_labels.add(element.numpy())
print(unique_labels)

You can see some of the images in the dataset by running the following cell.

images_iter = iter(x_train)
labels_iter = iter(y_train)
plt.figure(figsize=(10, 10))
for i in range(25):
    ax = plt.subplot(5, 5, i + 1)
    plt.imshow(next(images_iter).numpy().astype("uint8"))
    plt.title(next(labels_iter).numpy().astype("uint8"))
    plt.axis("off")

There's one more difference between TensorFlow Datasets and Numpy arrays: if you need to transform the elements, you invoke the map method to apply the function passed as an argument to each of them.

def normalize(image):
    """
    Transform an image into a tensor of shape (64 * 64 * 3, )
    and normalize its components.

    Arguments:
    image - Tensor.

    Returns:
    result -- Transformed tensor
    """
    image = tf.cast(image, tf.float32) / 255.0
    image = tf.reshape(image, [-1,])
    return image

new_train = x_train.map(normalize)
new_test = x_test.map(normalize)
new_train.element_spec

2.1 - Linear Function

Let's begin this programming exercise by computing the following equation: Y = WX + b, where W and X are random matrices and b is a random vector.

Exercise 1 - linear_function

Compute WX + b where W, X, and b are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, this is how to define a constant X with the shape (3,1):

X = tf.constant(np.random.randn(3, 1), name="X")

Note that the difference between tf.constant and tf.Variable is that you can modify the state of a tf.Variable but cannot change the state of a tf.constant.
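A quick illustration of that difference (toy values; the commented-out line would raise an error if run):

import tensorflow as tf

v = tf.Variable([1.0, 2.0])
c = tf.constant([1.0, 2.0])

v.assign_add([10.0, 10.0])      # fine: Variables hold mutable state
print(v.numpy())                # [11. 12.]

# c.assign_add([10.0, 10.0])    # AttributeError: constants cannot be modified;
                                # any "change" creates a brand-new tensor instead
c2 = c + 10.0
print(c.numpy(), c2.numpy())    # the original c is unchanged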

You might find the following functions helpful:

  • tf.matmul(..., ...) to do a matrix multiplication
  • tf.add(..., ...) to do an addition
  • np.random.randn(...) to initialize randomly
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
        Initializes X to be a random tensor of shape (3,1)
        Initializes W to be a random tensor of shape (4,3)
        Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- Y = WX + b
    """

    np.random.seed(1)

    """
    Note, to ensure that the "random" numbers generated match the expected results,
    please create the variables in the order given in the starting code below.
    (Do not re-arrange the order).
    """
    # (approx. 4 lines)
    # X = ...
    # W = ...
    # b = ...
    # Y = ...
    # YOUR CODE STARTS HERE
    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.add(tf.matmul(W, X), b)   # matrix multiplication, then add the bias
    # YOUR CODE ENDS HERE
    return Y

result = linear_function()
print(result)

assert type(result) == EagerTensor, "Use the TensorFlow API"
assert np.allclose(result, [[-2.15657382], [ 2.95891446], [-1.08926781], [-0.84538042]]), "Error"
print("\033[92mAll test passed")

2.2 - Computing the Sigmoid

Amazing! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax.

For this exercise, compute the sigmoid of z.

In this exercise, you will cast your tensor to type float32 using tf.cast, then compute the sigmoid using tf.keras.activations.sigmoid.

Exercise 2 - sigmoid

Implement the sigmoid function below. You should use the following:

  • tf.cast("...", tf.float32)
  • tf.keras.activations.sigmoid("...")
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    a -- (tf.float32) the sigmoid of z
    """
    # tf.keras.activations.sigmoid requires float16, float32, float64, complex64, or complex128.
    # (approx. 2 lines)
    # z = ...
    # a = ...
    # YOUR CODE STARTS HERE
    z = tf.cast(z, tf.float32)              # cast z to float32
    a = tf.keras.activations.sigmoid(z)     # apply the sigmoid activation
    # YOUR CODE ENDS HERE
    return a

result = sigmoid(-1)
print ("type: " + str(type(result)))
print ("dtype: " + str(result.dtype))
print ("sigmoid(-1) = " + str(result))
print ("sigmoid(0) = " + str(sigmoid(0.0)))
print ("sigmoid(12) = " + str(sigmoid(12)))

def sigmoid_test(target):
    result = target(0)
    assert(type(result) == EagerTensor)
    assert (result.dtype == tf.float32)
    assert sigmoid(0) == 0.5, "Error"
    assert sigmoid(-1) == 0.26894143, "Error"
    assert sigmoid(12) == 0.9999939, "Error"
    print("\033[92mAll test passed")

sigmoid_test(sigmoid)

2.3 - Using One Hot Encodings

Many times in deep learning you will have a Y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is, for example, 4, then you might have a y vector of labels between 0 and 3 that needs to be converted into a matrix with one column per example.

This is called "one hot" encoding, because in the converted representation, exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In TensorFlow, you can use one line of code: tf.one_hot(labels, depth, axis=0).
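For instance, with C = 4 classes and a small toy label vector (made up for illustration), one call does the whole conversion; axis=0 puts one example per column:

import tensorflow as tf

y = tf.constant([1, 2, 3, 0, 2, 1])          # labels from 0 to C-1
C = 4

Y_one_hot = tf.one_hot(y, depth=C, axis=0)   # shape (C, number of examples)
print(Y_one_hot.numpy())
# [[0. 0. 0. 1. 0. 0.]
#  [1. 0. 0. 0. 0. 1.]
#  [0. 1. 0. 0. 1. 0.]
#  [0. 0. 1. 0. 0. 0.]]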

Exercise 3 - one_hot_matrix

Implement the function below to take one label and the total number of classes C, and return the one-hot encoding as a column-wise matrix. Use tf.one_hot() to do this, and tf.reshape() to reshape your one-hot tensor!

  • tf.reshape(tensor, shape)
# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(label, depth=6):
    """
    Computes the one hot encoding for a single label

    Arguments:
    label -- (int) Categorical labels
    depth -- (int) Number of different classes that label can take

    Returns:
    one_hot -- tf.Tensor A single-column matrix with the one hot encoding.
    """
    # (approx. 1 line)
    # one_hot = ...
    # YOUR CODE STARTS HERE
    # tf.one_hot(label, depth, axis=0) builds the one-hot column: for label 2, the
    # entry at position 2 (counting from 0) along axis 0 is set to 1.
    # shape=[-1] tells tf.reshape to infer this dimension from the data, flattening
    # the result into a vector of length depth.
    one_hot = tf.reshape(tf.one_hot(label, depth, axis=0), shape=[-1])
    # YOUR CODE ENDS HERE
    return one_hot

def one_hot_matrix_test(target):
    label = tf.constant(1)
    depth = 4
    result = target(label, depth)
    print("Test 1:", result)
    assert result.shape[0] == depth, "Use the parameter depth"
    assert np.allclose(result, [0., 1., 0., 0.]), "Wrong output. Use tf.one_hot"
    label_2 = [2]
    result = target(label_2, depth)
    print("Test 2:", result)
    assert result.shape[0] == depth, "Use the parameter depth"
    assert np.allclose(result, [0., 0., 1., 0.]), "Wrong output. Use tf.reshape as instructed"
    print("\033[92mAll test passed")

one_hot_matrix_test(one_hot_matrix)

new_y_test = y_test.map(one_hot_matrix)
new_y_train = y_train.map(one_hot_matrix)

2.4 - Initialize the Parameters

Now you'll initialize a vector of numbers with the Glorot initializer. The function you'll be calling is tf.keras.initializers.GlorotNormal, which draws samples from a truncated normal distribution centered on 0, with stddev = sqrt(2 / (fan_in + fan_out)), where fan_in is the number of input units and fan_out is the number of output units, both in the weight tensor.

To initialize with zeros or ones you could use tf.zeros() or tf.ones() instead.
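As a quick sanity check (the shape here is just the one used for W1 below), you can draw a sample and compare its spread with the sqrt(2 / (fan_in + fan_out)) formula, or create all-zero / all-one tensors instead:

import numpy as np
import tensorflow as tf

initializer = tf.keras.initializers.GlorotNormal(seed=1)
sample = initializer(shape=(25, 12288))       # fan_in + fan_out = 12288 + 25

print(np.std(sample.numpy()))                 # empirical spread of the samples
print(np.sqrt(2.0 / (12288 + 25)))            # nominal stddev from the formula; the measured
                                              # value is slightly smaller due to truncation

zeros = tf.zeros((25, 1))                     # all-zeros alternative
ones = tf.ones((25, 1))                       # all-ones alternative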

Exercise 4 - initialize_parameters

Implement the function below to take in a shape and to return an array of numbers using the GlorotNormal initializer.

  • tf.keras.initializers.GlorotNormal(seed=1)
  • tf.Variable(initializer(shape=(...)))
# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with TensorFlow. The shapes are:
        W1 : [25, 12288]
        b1 : [25, 1]
        W2 : [12, 25]
        b2 : [12, 1]
        W3 : [6, 12]
        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """
    initializer = tf.keras.initializers.GlorotNormal(seed=1)
    # (approx. 6 lines of code)
    # W1 = ...
    # b1 = ...
    # W2 = ...
    # b2 = ...
    # W3 = ...
    # b3 = ...
    # YOUR CODE STARTS HERE
    W1 = tf.Variable(initializer(shape=(25, 12288)), name="W1")
    b1 = tf.Variable(initializer(shape=(25, 1)), name="b1")
    W2 = tf.Variable(initializer(shape=(12, 25)), name="W2")
    b2 = tf.Variable(initializer(shape=(12, 1)), name="b2")
    W3 = tf.Variable(initializer(shape=(6, 12)), name="W3")
    b3 = tf.Variable(initializer(shape=(6, 1)), name="b3")
    # YOUR CODE ENDS HERE
    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}
    return parameters

def initialize_parameters_test(target):
    parameters = target()
    values = {"W1": (25, 12288),
              "b1": (25, 1),
              "W2": (12, 25),
              "b2": (12, 1),
              "W3": (6, 12),
              "b3": (6, 1)}
    for key in parameters:
        print(f"{key} shape: {tuple(parameters[key].shape)}")
        assert type(parameters[key]) == ResourceVariable, "All parameter must be created using tf.Variable"
        assert tuple(parameters[key].shape) == values[key], f"{key}: wrong shape"
        assert np.abs(np.mean(parameters[key].numpy())) < 0.5, f"{key}: Use the GlorotNormal initializer"
        assert np.std(parameters[key].numpy()) > 0 and np.std(parameters[key].numpy()) < 1, f"{key}: Use the GlorotNormal initializer"
    print("\033[92mAll test passed")

initialize_parameters_test(initialize_parameters)

parameters = initialize_parameters()

3 - Building Your First Neural Network in TensorFlow

In this part of the assignment you will build a neural network using TensorFlow. Remember that there are two parts to implementing a TensorFlow model:

  • Implement forward propagation
  • Retrieve the gradients and train the model

Let's get into it!

3.1 - Implement Forward Propagation

One of TensorFlow's great strengths lies in the fact that you only need to implement the forward propagation function and it will keep track of the operations you did to calculate the back propagation automatically.

Exercise 5 - forward_propagation

Implement the forward_propagation function.

Note: use only the TF API.

  • tf.math.add
  • tf.linalg.matmul
  • tf.keras.activations.relu
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']
    # (approx. 5 lines)              # Numpy Equivalents:
    # Z1 = ...                       # Z1 = np.dot(W1, X) + b1
    # A1 = ...                       # A1 = relu(Z1)
    # Z2 = ...                       # Z2 = np.dot(W2, A1) + b2
    # A2 = ...                       # A2 = relu(Z2)
    # Z3 = ...                       # Z3 = np.dot(W3, A2) + b3
    # YOUR CODE STARTS HERE
    Z1 = tf.math.add(tf.linalg.matmul(W1, X), b1)
    A1 = tf.keras.activations.relu(Z1)
    Z2 = tf.math.add(tf.linalg.matmul(W2, A1), b2)
    A2 = tf.keras.activations.relu(Z2)
    Z3 = tf.math.add(tf.linalg.matmul(W3, A2), b3)
    # YOUR CODE ENDS HERE
    return Z3

def forward_propagation_test(target, examples):
    minibatches = examples.batch(2)
    for minibatch in minibatches:
        forward_pass = target(tf.transpose(minibatch), parameters)
        print(forward_pass)
        assert type(forward_pass) == EagerTensor, "Your output is not a tensor"
        assert forward_pass.shape == (6, 2), "Last layer must use W3 and b3"
        assert np.allclose(forward_pass,
                           [[-0.13430887,  0.14086473],
                            [ 0.21588647, -0.02582335],
                            [ 0.7059658,   0.6484556 ],
                            [-1.1260961,  -0.9329492 ],
                            [-0.20181894, -0.3382722 ],
                            [ 0.9558965,   0.94167566]]), "Output does not match"
        break
    print("\033[92mAll test passed")

forward_propagation_test(forward_propagation, new_train)

3.2 - Compute the Cost

All you have to do now is define the loss function that you're going to use. For this case, since we have a classification problem with 6 labels, a categorical cross entropy will work!

Exercise 6 - compute_cost

Implement the cost function below.

  • tf.reduce_mean averages the per-example losses: it sums over the examples and divides by their number.
# GRADED FUNCTION: compute_cost

def compute_cost(logits, labels):
    """
    Computes the cost

    Arguments:
    logits -- output of forward propagation (output of the last LINEAR unit), of shape (6, num_examples)
    labels -- "true" labels vector, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """
    # (1 line of code)
    # cost = ...
    # YOUR CODE STARTS HERE
    # The logits are raw outputs of the last LINEAR unit, so from_logits=True is required.
    # Both tensors are transposed so the class dimension comes last, and the per-example
    # losses are averaged with tf.reduce_mean.
    cost = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(tf.transpose(labels),
                                                                   tf.transpose(logits),
                                                                   from_logits=True))
    # YOUR CODE ENDS HERE
    return cost

Note: the logits here are the raw outputs of the last linear unit, so from_logits=True must be set, and the transposes put the class dimension last, which is what categorical_crossentropy expects.
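As a sanity check outside the graded cell (toy logits and labels, made up for illustration), the loss computed from raw logits with from_logits=True matches a manual softmax-then-cross-entropy computation:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.0]])          # shape (examples, classes)
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])          # one-hot, same shape

# Cross entropy computed directly from raw logits...
loss_from_logits = tf.keras.losses.categorical_crossentropy(labels, logits, from_logits=True)

# ...equals the "manual" version: softmax first, then -sum(y * log(y_hat)).
probs = tf.nn.softmax(logits)
manual = -tf.reduce_sum(labels * tf.math.log(probs), axis=-1)

print(loss_from_logits.numpy())                  # per-example losses
print(manual.numpy())                            # should match the line above
print(tf.reduce_mean(loss_from_logits).numpy())  # mean over examples, as in compute_cost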

def compute_cost_test(target, Y):
    pred = tf.constant([[ 2.4048107,   5.0334096 ],
                        [-0.7921977,  -4.1523376 ],
                        [ 0.9447198,  -0.46802214],
                        [ 1.158121,    3.9810789 ],
                        [ 4.768706,    2.3220146 ],
                        [ 6.1481323,   3.909829  ]])
    minibatches = Y.batch(2)
    for minibatch in minibatches:
        result = target(pred, tf.transpose(minibatch))
        break
    print(result)
    assert(type(result) == EagerTensor), "Use the TensorFlow API"
    assert (np.abs(result - (0.25361037 + 0.5566767) / 2.0) < 1e-7), "Test does not match. Did you get the mean of your cost functions?"
    print("\033[92mAll test passed")

compute_cost_test(compute_cost, new_y_train)

3.3 - Train the Model

Let's talk optimizers. You'll specify the type of optimizer in one line, in this case tf.keras.optimizers.Adam (though you can use others such as SGD), and then call it within the training loop.

Notice the tape.gradient function: this allows you to retrieve the operations recorded for automatic differentiation inside the GradientTape block. Then, calling the optimizer method apply_gradients will apply the optimizer's update rules to each trainable parameter. At the end of this assignment, you'll find some documentation that explains this in more detail, but for now, a simple explanation will do.
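Here is a stripped-down sketch of that pattern with a single made-up parameter and a toy quadratic loss, just to show the mechanics reused in the training loop below:

import tensorflow as tf

w = tf.Variable(5.0)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

for step in range(3):
    with tf.GradientTape() as tape:
        loss = (w - 2.0) ** 2            # toy loss with minimum at w = 2

    # Retrieve d(loss)/dw from the operations recorded inside the tape...
    grads = tape.gradient(loss, [w])

    # ...and let the optimizer apply its update rule to each trainable variable.
    optimizer.apply_gradients(zip(grads, [w]))
    print(step, w.numpy(), loss.numpy())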

Here you should take note of an important extra step that's been added to the batch training process:

  • dataset = dataset.prefetch(8)

What this does is prevent a memory bottleneck that can occur when reading from disk. prefetch() sets aside some data and keeps it ready for when it's needed, by creating a source dataset from your input data, applying a transformation to preprocess it, then iterating over the dataset a specified number of elements at a time. This works because the iteration is streaming, so the data doesn't need to fit into memory.

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001,
          num_epochs=1500, minibatch_size=32, print_cost=True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 10 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    costs = []                                   # To keep track of the cost
    train_acc = []
    test_acc = []

    # Initialize your parameters
    # (1 line)
    parameters = initialize_parameters()

    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    optimizer = tf.keras.optimizers.Adam(learning_rate)

    # The CategoricalAccuracy will track the accuracy for this multiclass problem
    test_accuracy = tf.keras.metrics.CategoricalAccuracy()
    train_accuracy = tf.keras.metrics.CategoricalAccuracy()

    dataset = tf.data.Dataset.zip((X_train, Y_train))
    test_dataset = tf.data.Dataset.zip((X_test, Y_test))

    # We can get the number of elements of a dataset using the cardinality method
    m = dataset.cardinality().numpy()

    minibatches = dataset.batch(minibatch_size).prefetch(8)
    test_minibatches = test_dataset.batch(minibatch_size).prefetch(8)
    # X_train = X_train.batch(minibatch_size, drop_remainder=True).prefetch(8)  # <<< extra step
    # Y_train = Y_train.batch(minibatch_size, drop_remainder=True).prefetch(8)  # loads memory faster

    # Do the training loop
    for epoch in range(num_epochs):

        epoch_cost = 0.

        # We need to reset the object so the accuracy is measured from 0 each epoch
        train_accuracy.reset_states()

        for (minibatch_X, minibatch_Y) in minibatches:

            with tf.GradientTape() as tape:
                # 1. predict
                Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)

                # 2. loss
                minibatch_cost = compute_cost(Z3, tf.transpose(minibatch_Y))

            # We accumulate the accuracy of all the batches
            train_accuracy.update_state(tf.transpose(Z3), minibatch_Y)

            trainable_variables = [W1, b1, W2, b2, W3, b3]
            grads = tape.gradient(minibatch_cost, trainable_variables)
            optimizer.apply_gradients(zip(grads, trainable_variables))
            epoch_cost += minibatch_cost

        # We divide the epoch cost over the number of samples
        epoch_cost /= m

        # Print the cost every 10 epochs
        if print_cost == True and epoch % 10 == 0:
            print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            print("Train accuracy:", train_accuracy.result())

            # We evaluate the test set every 10 epochs to avoid computational overhead
            for (minibatch_X, minibatch_Y) in test_minibatches:
                Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)
                test_accuracy.update_state(tf.transpose(Z3), minibatch_Y)
            print("Test_accuracy:", test_accuracy.result())

            costs.append(epoch_cost)
            train_acc.append(train_accuracy.result())
            test_acc.append(test_accuracy.result())
            test_accuracy.reset_states()

    return parameters, costs, train_acc, test_acc

parameters, costs, train_acc, test_acc = model(new_train, new_y_train, new_test, new_y_test, num_epochs=100)

The numbers you get can be different; just check that your loss is going down and your accuracy is going up!

# Plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
plt.show()

# Plot the train accuracy
plt.plot(np.squeeze(train_acc))
plt.ylabel('Train Accuracy')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
# Plot the test accuracy
plt.plot(np.squeeze(test_acc))
plt.ylabel('Test Accuracy')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
plt.show()

Congratulations! You've made it to the end of this assignment, and to the end of this week's material. Amazing work building a neural network in TensorFlow 2.3!

Here's a quick recap of all you just achieved:

  • Used tf.Variable to modify your variables
  • Trained a Neural Network on a TensorFlow dataset

You are now able to harness the power of TensorFlow to create cool things, faster. Nice!

4 - Bibliography

In this assignment, you were introduced to tf.GradientTape, which records operations for differentiation. Here are a couple of resources for diving deeper into what it does and why:

Introduction to Gradients and Automatic Differentiation:

https://www.tensorflow.org/guide/autodiff

GradientTape documentation:

https://www.tensorflow.org/api_docs/python/tf/GradientTape
