This week's assignment has two parts: the first covers basic usage of Keras, and the second builds a ResNet.
Part 1: Keras – Tutorial
Keras is a high-level API on top of TensorFlow that lets you build neural networks much more efficiently.
First, import the libraries:
```python
import numpy as np
```
Build the model:
```python
def HappyModel(input_shape):
```
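A minimal sketch of one possible body for HappyModel, assuming the CONV → BN → RELU → MAXPOOL → FLATTEN → DENSE architecture the assignment suggests (the layer names conv0, bn0, max_pool, fc are illustrative):

```python
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation, MaxPooling2D, Flatten, Dense
from keras.models import Model

def HappyModel(input_shape):
    # input_shape is the shape of one image, e.g. (64, 64, 3)
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU block
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL, then flatten and classify with a single sigmoid unit
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    return Model(inputs=X_input, outputs=X, name='HappyModel')
```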
Then instantiate the model:
```python
# a minimal completion of the graded line; assumes X_train from the notebook, shape (m, 64, 64, 3)
happyModel = HappyModel(X_train.shape[1:])
```
Choose an optimizer and a loss:
```python
# "happy vs. not happy" is binary classification, so binary cross-entropy is a natural loss
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
Train:
```python
# epochs and batch_size are illustrative choices
happyModel.fit(x=X_train, y=Y_train, epochs=40, batch_size=16)
```
Evaluate/predict:
```python
# assumes the notebook's test split; evaluate returns [loss, accuracy]
preds = happyModel.evaluate(x=X_test, y=Y_test)
```
You can use summary() to see the details of the model:
```python
happyModel.summary()  # prints a layer-by-layer table of output shapes and parameter counts
```
Use plot_model() to render the computation graph:
```python
from keras.utils import plot_model  # import added for completeness
plot_model(happyModel, to_file='HappyModel.png')
```
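Note that plot_model requires the pydot and graphviz packages to be installed; otherwise Keras raises an import error.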
Part 2: Residual Networks
There are two main steps:
- Build the basic ResNet blocks.
- Put the blocks together into a network and use it for image classification.
1 - The problem of very deep neural networks
This part discusses the problems of very deep neural networks: as depth grows, the gradients can vanish or explode, so training converges very slowly. Residual networks effectively mitigate this problem.
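In the course's notation, the key idea is the shortcut connection: a residual block outputs $a^{[l+2]} = g\big(z^{[l+2]} + a^{[l]}\big)$, so learning the identity mapping only requires driving the skipped layers' weights toward zero, and gradients can flow back directly through the shortcut.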
2 - Building a Residual Network
Depending on whether the input and output dimensions match, there are two kinds of blocks:
1. The identity block
In the identity block, the dimensions at the two ends are the same, so the input can be added directly to the output. Here we implement a block whose shortcut skips over three layers.
The basic structure is:
First component of main path:
- The first CONV2D has F1 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.

Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Note that the shortcut addition must use the Keras Add() layer; you cannot simply use the + operator, or you will get an error.
Here f is the size of the convolution kernel; filters is a list giving the depths (numbers of filters) of the three conv layers; stage indicates which major stage of the network the block belongs to and is used to build layer names, which matters later; block identifies the block within a stage, labeled with the letters a, b, c, d, and so on.
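As a quick illustration of the naming convention (the values for stage and block are hypothetical):

```python
stage, block = 2, 'a'
conv_name_base = 'res' + str(stage) + block + '_branch'  # 'res2a_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'     # 'bn2a_branch'
# the first conv layer of this block is then named 'res2a_branch' + '2a' = 'res2a_branch2a'
```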
```python
# GRADED FUNCTION: identity_block
```
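A sketch of the complete function under the spec above; axis=3 is the channels axis for channels-last input, and glorot_uniform(seed=0) provides the seeded initialization:

```python
from keras.layers import Conv2D, BatchNormalization, Activation, Add
from keras.initializers import glorot_uniform

def identity_block(X, f, filters, stage, block):
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    F1, F2, F3 = filters

    X_shortcut = X  # save the input for the shortcut path

    # First component: 1x1 CONV -> BN -> ReLU
    X = Conv2D(F1, (1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component: fxf CONV with 'same' padding -> BN -> ReLU
    X = Conv2D(F2, (f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component: 1x1 CONV -> BN, no ReLU yet
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add() the shortcut (not '+'), then ReLU
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X
```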
2. The convolutional block
When the dimensions at the two ends do not match, a convolution is added on the shortcut path to resize the input; this shortcut convolution has no activation function.
First component of main path:
- The first CONV2D has F1 filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:
- The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:
- The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.

Shortcut path:
- The CONV2D has F3 filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.

Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Compared with the identity block, the new parameter here is s, the stride of the first convolution and of the shortcut convolution.
```python
def convolutional_block(X, f, filters, stage, block, s=2):
```
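A sketch of the complete function; it mirrors the identity block (same imports, same seeded initializer) but strides the first convolution by s and adds a CONV + BN pair on the shortcut path:

```python
def convolutional_block(X, f, filters, stage, block, s=2):
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    F1, F2, F3 = filters

    X_shortcut = X  # save the input for the shortcut path

    # First component: 1x1 CONV with stride s -> BN -> ReLU
    X = Conv2D(F1, (1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component: fxf CONV with 'same' padding -> BN -> ReLU
    X = Conv2D(F2, (f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component: 1x1 CONV -> BN, no ReLU yet
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Shortcut path: 1x1 CONV with stride s resizes the input, no activation
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1',
                        kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Add shortcut and main path, then ReLU
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X
```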
3 - Building your first ResNet model (50 layers)
We now build a 50-layer network organized into 5 stages, with the following structure:
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3).
- Stage 1:
    - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
    - BatchNorm is applied to the channels axis of the input.
    - MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
    - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
    - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
    - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
    - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The Flatten layer doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
Exercise: Implement the ResNet with 50 layers described above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling: AveragePooling2D (see the Keras documentation)
```python
# GRADED FUNCTION: ResNet50
```
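A sketch of the full model under the spec above, reusing identity_block and convolutional_block; the defaults input_shape=(64, 64, 3) and classes=6 follow the notebook's dataset:

```python
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation, MaxPooling2D, AveragePooling2D, Flatten, Dense
from keras.models import Model
from keras.initializers import glorot_uniform

def ResNet50(input_shape=(64, 64, 3), classes=6):
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1: CONV -> BN -> ReLU -> MAXPOOL
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2: one convolutional block (s=1), then 2 identity blocks
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3: one convolutional block (s=2), then 3 identity blocks
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    for b in ['b', 'c', 'd']:
        X = identity_block(X, 3, [128, 128, 512], stage=3, block=b)

    # Stage 4: one convolutional block (s=2), then 5 identity blocks
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    for b in ['b', 'c', 'd', 'e', 'f']:
        X = identity_block(X, 3, [256, 256, 1024], stage=4, block=b)

    # Stage 5: one convolutional block (s=2), then 2 identity blocks
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    for b in ['b', 'c']:
        X = identity_block(X, 3, [512, 512, 2048], stage=5, block=b)

    # Average pool, flatten, and softmax classifier
    X = AveragePooling2D((2, 2), name='avg_pool')(X)
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    return Model(inputs=X_input, outputs=X, name='ResNet50')
```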