The same operation should also be applied to lab_coffee_utils.py, lab_neurons_utils.py and lab_utils_common.py.
Then, install TensorFlow; you may need to run your Anaconda prompt as an administrator. When running the lab code, as soon as a TensorFlow module is used, Jupyter Notebook keeps reporting: The kernel has died. Checking the command line of the prompt, there may be one warning and one error:
[Warning: screenshot of the TensorFlow CPU-instructions message]
[Error: screenshot of the libiomp5md.dll conflict message]
For the warning, it only indicates that TensorFlow could run faster by using some CPU instructions that this build does not enable. We can add the following code to the beginning of the first code cell to silence it:
```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
```
It is the error that causes the kernel to die. It occurs because more than one copy of libiomp5md.dll is loaded. We can add the following code to the beginning of the first code cell to work around it:
```python
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'
```
Now we can run all the labs normally. The labs in C2_Week1 focus on introducing TensorFlow; see Tensorflow to learn more about it.
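For a taste of the workflow those labs introduce, here is a minimal, self-contained sketch in the same spirit; the data and layer sizes below are made up for illustration, not taken from the labs:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Toy two-feature data standing in for the labs' coffee-roasting set.
X = np.array([[200., 13.], [220., 14.], [180., 11.], [240., 16.]])
y = np.array([[1.], [1.], [0.], [0.]])

# The pattern the labs teach: stack layers, compile, fit, predict.
model = Sequential([
    Dense(units=3, activation='sigmoid', input_shape=(2,)),
    Dense(units=1, activation='sigmoid'),
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
model.fit(X, y, epochs=10, verbose=0)
print(model.predict(X))
```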
Exercises 2 and 3 are quite interesting. They both implement forward propagation using numpy, that is, making predictions.
```python
# Non-vectorized
### START CODE HERE ###
for j in range(units):
    a_out[j] = g(np.dot(a_in.T, W[:, j]) + b[j])
### END CODE HERE ###
```

```python
# Vectorized
### START CODE HERE ###
A_out = g(A_in @ W + b)
### END CODE HERE ###
```
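To see these snippets in context, here is a self-contained sketch of the surrounding functions. The names my_dense and my_dense_v match the lab, but the sigmoid activation and the toy weights are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def my_dense(a_in, W, b, g):
    # a_in: (n,) input; W: (n, units) weights; b: (units,) biases.
    units = W.shape[1]
    a_out = np.zeros(units)
    for j in range(units):                      # one neuron at a time
        a_out[j] = g(np.dot(a_in, W[:, j]) + b[j])
    return a_out

def my_dense_v(A_in, W, b, g):
    # A_in: (m, n) batch; one matrix product replaces the whole loop.
    return g(A_in @ W + b)

W = np.array([[1., -3.], [2., 4.]])             # toy weights, 2 units
b = np.array([-1., 1.])
print(my_dense(np.array([1., 2.]), W, b, sigmoid))
print(my_dense_v(np.array([[1., 2.]]), W, b, sigmoid))  # same numbers, batched
```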
C2_Week2
Just following its instructions is okay. The last part of the lab Multi-class Classification, which shows what the new features created by the neurons look like, is rather thought-provoking. It reveals that the neurons of the hidden layers have learnt something about the problem. In fact, they have partly partitioned the dataset:
```python
# Exercise 1
### START CODE HERE ###
e = np.exp(z)
a = e / np.sum(e)
### END CODE HERE ###
```
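One caveat worth noting (not required by the lab): np.exp(z) overflows for large z. A standard numerically stable variant subtracts max(z) first, which leaves the result mathematically unchanged:

```python
import numpy as np

def my_softmax_stable(z):
    # Subtracting max(z) changes nothing mathematically (the common
    # factor e^{-max(z)} cancels in the ratio) but keeps exp in range.
    ez = np.exp(z - np.max(z))
    return ez / np.sum(ez)

print(my_softmax_stable(np.array([1000., 1001., 1002.])))  # no overflow
```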
```python
# Exercise 2
### START CODE HERE ###
Dense(units=25, activation='relu', input_dim=400),
Dense(units=15, activation='relu'),
Dense(units=10, activation='linear')
### END CODE HERE ###
```
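To actually look at what the hidden neurons have learnt, one option (my own sketch, not part of the lab) is to build a second model that stops at the first hidden layer and inspect its activations; the random data below just stands in for the lab's digit images, and the weights here are untrained:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Same architecture as the exercise (random weights in this sketch).
model = Sequential([
    Dense(units=25, activation='relu', input_dim=400),
    Dense(units=15, activation='relu'),
    Dense(units=10, activation='linear'),
])

# A probe model that outputs the first hidden layer's activations,
# i.e. the 25 learned features the lab visualizes.
probe = tf.keras.Model(inputs=model.input, outputs=model.layers[0].output)

X = np.random.rand(5, 400).astype(np.float32)  # 5 fake 20x20 images
a1 = probe(X)
print(a1.shape)  # (5, 25): one activation per hidden neuron per sample
```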
C2_W3_PracticeLab
This lab is still very easy, but it is quite useful. It uses real data and learning curves to show how to choose better $d$ and $\lambda$ using the training and dev sets.
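The idea behind choosing $d$ can be stated in a few lines of numpy (a sketch of the principle only; the lab itself uses scikit-learn models and its own plotting utilities): fit each candidate degree on the training set and keep the degree with the lowest dev-set error:

```python
import numpy as np

def eval_mse(y, yhat):
    # Same cost convention as Exercise 1 below: divide by 2m.
    return np.mean((yhat - y) ** 2) / 2

rng = np.random.default_rng(0)
x = np.linspace(0, 4, 60)
y = x**2 + rng.normal(scale=1.0, size=x.shape)   # noisy quadratic
idx = rng.permutation(len(x))
train, dev = idx[:40], idx[40:]                  # train/dev split

best_d, best_err = None, np.inf
for d in range(1, 7):
    coeffs = np.polyfit(x[train], y[train], d)   # fit on training set only
    err = eval_mse(y[dev], np.polyval(coeffs, x[dev]))
    if err < best_err:
        best_d, best_err = d, err
print(best_d, best_err)                          # expect d near 2
```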
```python
# Exercise 1
for i in range(m):
    ### START CODE HERE ###
    err += (yhat[i] - y[i])**2
err /= 2 * m
### END CODE HERE ###
```
```python
# Exercise 2
m = len(y)
incorrect = 0.
for i in range(m):
    ### START CODE HERE ###
    if y[i] != yhat[i]:
        incorrect += 1
cerr = incorrect / m
### END CODE HERE ###
```
```python
# Exercise 3
model = Sequential(
    [
        ### START CODE HERE ###
        Dense(units=120, activation='relu'),
        Dense(units=40, activation='relu'),
        Dense(units=6, activation='linear')
        ### END CODE HERE ###
    ], name="Complex"
)
model.compile(
    ### START CODE HERE ###
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
    ### END CODE HERE ###
)
```
```python
# Exercise 4
model_s = Sequential(
    [
        ### START CODE HERE ###
        Dense(units=6, activation='relu'),
        Dense(units=6, activation='linear')
        ### END CODE HERE ###
    ], name="Simple"
)
model_s.compile(
    ### START CODE HERE ###
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
    ### END CODE HERE ###
)
```
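Choosing $\lambda$ works the same way: build otherwise identical models with different regularization strengths and compare their dev-set errors. A sketch of how $\lambda$ enters a Keras layer (the value 0.1 here is just one candidate to sweep, not the lab's answer):

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2

lambda_ = 0.1  # candidate value; sweep this and compare dev-set error
model_r = Sequential([
    Dense(units=120, activation='relu', kernel_regularizer=l2(lambda_)),
    Dense(units=40, activation='relu', kernel_regularizer=l2(lambda_)),
    Dense(units=6, activation='linear'),
], name="ComplexRegularized")
model_r.compile(
    loss=SparseCategoricalCrossentropy(from_logits=True),
    optimizer=Adam(learning_rate=0.01),
)
```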
C2_W4_PracticeLab

```python
# Exercise 2
left_indices = []
right_indices = []
### START CODE HERE ###
for i in node_indices:
    if X[i][feature] == 1:
        left_indices.append(i)
    else:
        right_indices.append(i)
### END CODE HERE ###
```
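A quick sanity check of the split (my own toy data, not the lab's): wrapping the loop in the lab's split_dataset signature and calling it on a tiny one-hot feature matrix:

```python
import numpy as np

def split_dataset(X, node_indices, feature):
    # Send samples with feature value 1 left, the rest right.
    left_indices, right_indices = [], []
    for i in node_indices:
        if X[i][feature] == 1:
            left_indices.append(i)
        else:
            right_indices.append(i)
    return left_indices, right_indices

X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])    # 4 samples, 2 binary features
print(split_dataset(X, [0, 1, 2, 3], feature=0))  # ([0, 1], [2, 3])
```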