Lab: Advanced Learning Algorithms

C2_Week1

As usual, modify:

plt.style.use('./deeplearning.mplstyle') -> plt.style.use('deeplearning.mplstyle')

The same operation should also be applied to lab_coffee_utils.py, lab_neurons_utils.py and lab_utils_common.py.

Then, install TensorFlow (you may need to run your Anaconda Prompt as an administrator). When running the lab code, as soon as a TensorFlow module is used, Jupyter Notebook keeps reporting "The kernel has died." Checking the prompt's command line reveals one warning and one error:

1. Warning
2. Error

The warning indicates that TensorFlow could run faster by using certain CPU instructions. We can add the following code to the beginning of the first code cell to suppress it:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # 2 filters out INFO and WARNING messages

It is the error that makes the kernel die. It occurs because more than one copy of libiomp5md.dll (the Intel OpenMP runtime) gets loaded. We can add the following code to the beginning of the first code cell to work around it:

import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'  # allow duplicate OpenMP runtimes to coexist
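Note that both environment variables only take effect if they are set before TensorFlow is imported, so a safe pattern is to put them together at the very top of the notebook. A minimal sketch:

import os

# Set both variables BEFORE importing TensorFlow, or they are ignored.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'      # silence the CPU-instruction warning
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'   # tolerate duplicate libiomp5md.dll copies

import tensorflow as tf
print(tf.__version__)  # if this prints, the kernel survived the import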

Now we can run all the labs normally. The labs in C2_Week1 focus on introducing TensorFlow; see Tensorflow to learn more about it.

C2_W1_PracticeLab

Exercise 1 is quite simple; it is almost the same as Create tensorflow model.
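For reference, creating such a model just stacks Dense layers inside a Sequential container. A minimal sketch (the layer sizes here are illustrative; check the lab instructions for the graded architecture):

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

# Stack Dense layers inside a Sequential model.
model = Sequential([
    Input(shape=(400,)),
    Dense(25, activation='sigmoid'),
    Dense(15, activation='sigmoid'),
    Dense(1, activation='sigmoid'),
], name='my_model')
model.summary()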

Exercises 2 and 3 are quite interesting. They both implement forward propagation (i.e., making predictions) using NumPy.

# Non-vectorized: compute each unit's activation one at a time
### START CODE HERE ###
for j in range(units):
    a_out[j] = g(np.dot(a_in, W[:, j]) + b[j])
### END CODE HERE ###

# Vectorized: compute all examples and units at once with a matrix product
### START CODE HERE ###
A_out = g(A_in @ W + b)
### END CODE HERE ###
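To see that the two forms agree, here is a self-contained sketch; the sigmoid g and the random W, b are my own stand-ins, not the lab's data:

import numpy as np

def g(z):
    # sigmoid activation
    return 1 / (1 + np.exp(-z))

def my_dense(a_in, W, b):
    # a_in: (n,), W: (n, units), b: (units,) -> (units,)
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):
        a_out[j] = g(np.dot(a_in, W[:, j]) + b[j])
    return a_out

def my_dense_v(A_in, W, b):
    # A_in: (m, n) -> (m, units); every example and unit in one product
    return g(A_in @ W + b)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), rng.normal(size=3)
x = np.array([1.0, 2.0])

print(my_dense(x, W, b))             # loop version
print(my_dense_v(x[None, :], W, b))  # vectorized version, same numbers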

C2_Week2

Just following its instructions is okay. The last part of the lab Multi-class Classification, which shows what the new features created by the neurons look like, is rather thought-provoking. It reveals that the hidden-layer neurons have learnt something about the problem: in fact, they have partly partitioned the dataset.

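To reproduce that inspection yourself, you can build a Keras sub-model that exposes a hidden layer's activations. A sketch under assumed names (the toy model, layer name, and data here are mine, not the lab's):

import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

# A tiny stand-in network; the lab's model is larger.
model = Sequential([
    Input(shape=(2,)),
    Dense(3, activation='relu', name='hidden'),
    Dense(4, activation='linear'),
])

# A sub-model whose output is the hidden layer's activations.
probe = tf.keras.Model(inputs=model.input,
                       outputs=model.get_layer('hidden').output)

X = np.random.rand(5, 2).astype(np.float32)
print(probe(X).numpy())  # one learnt "new feature" per hidden unit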

See Multiclass classification to know more about ReLU, softmax, and multiclass classification.

C2_W2_PracticeLab

Quite easy:

# Exercise 1
### START CODE HERE ###
e = np.exp(z)
a = e / np.sum(e)
### END CODE HERE ###

# Exercise 2
### START CODE HERE ###
Dense(units=25, activation='relu', input_dim=400),
Dense(units=15, activation='relu'),
Dense(units=10, activation='linear')
### END CODE HERE ###
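One caveat worth knowing: for large entries of z, np.exp overflows. The grader accepts the direct formula above, but a numerically stable variant (my addition, not required by the lab) subtracts max(z) first, which does not change the result since softmax(z) = softmax(z - c) for any constant c:

import numpy as np

def my_softmax(z):
    # Shift by max(z) to avoid overflow in exp; the softmax value is unchanged.
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

print(my_softmax(np.array([1000.0, 1001.0, 1002.0])))  # no overflow warning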

C2_W3_PracticeLab

This lab is still very easy, but it is quite useful. It uses real data and curves to show how to choose a better degree $d$ and regularization parameter $\lambda$ using the training and dev sets.

# Exercise 1: mean squared error
for i in range(m):
### START CODE HERE ###
    err += (yhat[i] - y[i])**2
err /= 2 * m
### END CODE HERE ###

# Exercise 2: classification error
m = len(y)
incorrect = 0.
for i in range(m):
### START CODE HERE ###
    if y[i] != yhat[i]:
        incorrect += 1
cerr = incorrect / m
### END CODE HERE ###

# Exercise 3: a complex model
model = Sequential(
    [
        ### START CODE HERE ###
        Dense(units=120, activation='relu'),
        Dense(units=40, activation='relu'),
        Dense(units=6, activation='linear')
        ### END CODE HERE ###
    ], name="Complex"
)
model.compile(
    ### START CODE HERE ###
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
    ### END CODE HERE ###
)

# Exercise 4: a simple model
model_s = Sequential(
    [
        ### START CODE HERE ###
        Dense(units=6, activation='relu'),
        Dense(units=6, activation='linear')
        ### END CODE HERE ###
    ], name="Simple"
)
model_s.compile(
    ### START CODE HERE ###
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
    ### END CODE HERE ###
)

# Exercise 5: the complex model with L2 regularization
model_r = Sequential(
    [
        ### START CODE HERE ###
        Dense(units=120, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.1)),
        Dense(units=40, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.1)),
        Dense(units=6, activation='linear')
        ### END CODE HERE ###
    ], name=None
)
model_r.compile(
    ### START CODE HERE ###
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
    ### END CODE HERE ###
)
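The lab's central lesson — fit on the training set, pick $d$ and $\lambda$ by dev-set error — fits in a few lines. A minimal sketch on synthetic data (all names and numbers here are mine, not the lab's):

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 4, 80)
y = x**2 + rng.normal(scale=1.0, size=x.size)    # noisy quadratic data

idx = rng.permutation(x.size)                    # train/dev split
tr, dv = idx[:60], idx[60:]

def mse(yhat, y):
    return np.mean((yhat - y)**2) / 2

best_d, best_err = None, np.inf
for d in range(1, 10):
    coeffs = np.polyfit(x[tr], y[tr], d)             # fit on training data only
    err_dv = mse(np.polyval(coeffs, x[dv]), y[dv])   # score on the dev set
    if err_dv < best_err:
        best_d, best_err = d, err_dv
print(best_d, best_err)  # the degree with the lowest dev error wins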

C2_W4_PracticeLab

To finish this lab, pydot is required. Install it using:

pip install pydot
# or
conda install pydot
# Exercise 1: entropy at a node
entropy = 0.
length = len(y)
e_count = np.sum(y) + 0.   # number of positive examples
### START CODE HERE ###
if length != 0 and e_count != 0 and e_count != length:
    p1 = e_count / length
    entropy = -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)
### END CODE HERE ###

# Exercise 2: split the node's examples on one feature
left_indices = []
right_indices = []
### START CODE HERE ###
for i in node_indices:
    if X[i][feature] == 1:
        left_indices.append(i)
    else:
        right_indices.append(i)
### END CODE HERE ###

# Exercise 3: information gain of a split
information_gain = 0
### START CODE HERE ###
length_node = len(y_node) + 0.
if length_node != 0:
    length_left = len(y_left)
    weight_left = length_left / length_node
    h_n = compute_entropy(y_node)
    h_left = compute_entropy(y_left)
    h_right = compute_entropy(y_right)
    information_gain = h_n - weight_left * h_left - (1 - weight_left) * h_right
### END CODE HERE ###

# Exercise 4: pick the feature with the biggest information gain
best_feature = -1
biggest_gain = 0.
### START CODE HERE ###
for feature in range(num_features):
    current_gain = compute_information_gain(X, y, node_indices, feature)
    if current_gain > biggest_gain:
        best_feature = feature
        biggest_gain = current_gain
### END CODE HERE ###
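As a quick sanity check of the entropy formula (the sample labels are mine):

import numpy as np

y = np.array([1, 1, 0, 1, 0])      # 3 positive, 2 negative examples
p1 = np.mean(y)                    # fraction of positive labels
H = -p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1)
print(H)  # ~0.971 bits; entropy peaks at 1.0 when p1 = 0.5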

Some bugs occur when drawing the picture, and I don't know how to fix them... Just commenting out the relevant code is okay.