cs231n assignment: assignment1 - softmax

GitHub repository: https://github.com/ZJUFangzh/cs231n

Softmax is one of the most commonly used classifiers.

Both softmax and SVM are standard linear classifiers, and softmax is the more widely used of the two.

For more background, see the end of my earlier post, where Andrew Ng's lectures cover softmax.

The data loading and preprocessing steps are the same as in the SVM assignment.

Let's go straight to the derivation of the loss and the gradients.

$$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$$

As you can see, this is just the cross-entropy loss:

$$H(p,q) = - \sum_i y_i \log(\hat{y}_i)$$

There is one practical drawback: the exponential $e^{f_{y_i}}$ can become extremely large, which leads to numerical overflow and unstable computation. The fix is to multiply numerator and denominator by a constant $C$; a common choice is $\log C = -\max_j f_j$:

f = np.array([123, 456, 789]) # example with 3 classes, each score is very large
p = np.exp(f) / np.sum(np.exp(f)) # bad: numeric problem, potential overflow

# instead: shift the values of f so that the highest value is 0:
f -= np.max(f) # f becomes [-666, -333, 0]
p = np.exp(f) / np.sum(np.exp(f)) # safe now, gives the correct result

To be precise, the SVM classifier uses the hinge loss, sometimes also called the max-margin loss, while the softmax classifier uses the cross-entropy loss. The softmax classifier takes its name from the softmax function, which squashes the raw class scores into positive, normalized values that sum to one, so that the cross-entropy loss can be applied. Note that, strictly speaking, "softmax loss" is a misnomer, since softmax is just the squashing function; nevertheless the term is commonly used as shorthand.
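As a tiny illustration of that squashing behavior (a sketch with made-up scores, not part of the assignment code):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])             # made-up raw class scores
shifted = scores - np.max(scores)               # stability shift, as above
probs = np.exp(shifted) / np.sum(np.exp(shifted))

print(probs)          # all entries positive
print(probs.sum())    # sums to 1.0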

For the full derivation of the gradient, see my post: cs231n softmax derivation.

The final result is:

$$\frac{\partial L_i}{\partial W_k} = \left(p_k - \mathbb{1}(k = y_i)\right) x_i, \quad \text{where } p_k = \frac{e^{f_k}}{\sum_j e^{f_j}}$$

Here $W_k$ denotes the $k$-th column of $W$.
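The $W$ gradient is just the score-space gradient $(p - \text{one-hot}(y_i))$ multiplied by $x_i$ via the chain rule. A quick finite-difference sanity check of the score-space part (a sketch with made-up numbers):

import numpy as np

f = np.array([1.0, -2.0, 0.5])   # made-up scores for one example
y = 1                            # made-up correct class index

def loss_of(scores):
    s = scores - np.max(scores)
    p = np.exp(s) / np.sum(np.exp(s))
    return -np.log(p[y])

# analytic gradient with respect to the scores: p - one_hot(y)
p = np.exp(f - np.max(f)) / np.sum(np.exp(f - np.max(f)))
analytic = p.copy()
analytic[y] -= 1

# numerical gradient via central differences
h = 1e-5
numeric = np.zeros_like(f)
for k in range(f.size):
    fp, fm = f.copy(), f.copy()
    fp[k] += h
    fm[k] -= h
    numeric[k] = (loss_of(fp) - loss_of(fm)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))   # should be tiny (~1e-10)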
Softmax implementation

Edit cs231n/classifiers/softmax.py and start with the softmax_loss_naive function, again using explicit loops:

def softmax_loss_naive(W, X, y, reg):
    """
    Softmax loss function, naive implementation (with loops)

    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.

    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength

    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using explicit loops.     #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    (N, D) = X.shape
    C = W.shape[1]
    # loop over every sample
    for i in range(N):
        f_i = X[i].dot(W)
        # shift the scores for numerical stability
        f_i -= np.max(f_i)
        sum_j = np.sum(np.exp(f_i))
        # probability of each class for this sample
        p = lambda k: np.exp(f_i[k]) / sum_j
        loss += -np.log(p(y[i]))
        # apply the softmax gradient formula
        for k in range(C):
            p_k = p(k)
            dW[:, k] += (p_k - (k == y[i])) * X[i]

    loss /= N
    loss += 0.5 * reg * np.sum(W * W)
    dW /= N
    dW += reg * W
    #############################################################################
    #                          END OF YOUR CODE                                 #
    #############################################################################

    return loss, dW
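A quick sanity check of the naive version (a sketch using made-up random data with CIFAR-10-like shapes): with a small random W, all 10 classes get roughly equal probability, so the loss should come out near $-\log(0.1) \approx 2.3$.

import numpy as np

np.random.seed(0)
X_dev = np.random.randn(500, 3073)        # made-up dev set, (N, D)
y_dev = np.random.randint(10, size=500)   # made-up labels, 10 classes
W = np.random.randn(3073, 10) * 0.0001    # small random weights

loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
print(loss, -np.log(0.1))   # the two numbers should be close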

Running the numerical gradient check on the loss and gradient gives:
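The check is produced with the assignment's sparse numerical gradient checker; the call is roughly the following (assuming the grad_check_sparse helper from cs231n/gradient_check.py and the dev split from the notebook):

from cs231n.gradient_check import grad_check_sparse

loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)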

numerical: -0.621593 analytic: -0.621593, relative error: 7.693773e-09
numerical: -2.576505 analytic: -2.576505, relative error: 4.492083e-09
numerical: -1.527801 analytic: -1.527801, relative error: 4.264914e-08
numerical: 1.101379 analytic: 1.101379, relative error: 9.735173e-09
numerical: 2.375620 analytic: 2.375620, relative error: 3.791861e-08
numerical: 3.166961 analytic: 3.166960, relative error: 8.526285e-09
numerical: -1.440997 analytic: -1.440998, relative error: 4.728898e-08
numerical: 0.563304 analytic: 0.563304, relative error: 2.409996e-08
numerical: -2.057292 analytic: -2.057292, relative error: 1.820335e-08
numerical: -0.450338 analytic: -0.450338, relative error: 8.075985e-08
numerical: -0.233090 analytic: -0.233090, relative error: 4.136546e-08
numerical: 0.251391 analytic: 0.251391, relative error: 4.552523e-08
numerical: 0.787031 analytic: 0.787031, relative error: 5.036469e-08
numerical: -1.801593 analytic: -1.801594, relative error: 3.159903e-08
numerical: -0.294108 analytic: -0.294109, relative error: 1.792497e-07
numerical: -1.974307 analytic: -1.974307, relative error: 1.160708e-08
numerical: 2.986921 analytic: 2.986921, relative error: 2.788065e-08
numerical: -0.247281 analytic: -0.247281, relative error: 8.957573e-08
numerical: 0.569337 analytic: 0.569337, relative error: 2.384912e-08
numerical: -1.579298 analytic: -1.579298, relative error: 1.728733e-08

Vectorized softmax

def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized version.

    Inputs and outputs are the same as softmax_loss_naive.
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    #############################################################################
    # TODO: Compute the softmax loss and its gradient using no explicit loops.  #
    # Store the loss in loss and the gradient in dW. If you are not careful     #
    # here, it is easy to run into numeric instability. Don't forget the        #
    # regularization!                                                           #
    #############################################################################
    (N, D) = X.shape
    C = W.shape[1]
    f = X.dot(W)
    # shift the scores row-wise for numerical stability
    f -= np.max(f, axis=1, keepdims=True)
    # softmax probability of every class for every sample
    p = np.exp(f) / np.sum(np.exp(f), axis=1, keepdims=True)
    # y_label is an (N, C) one-hot matrix: each row has a 1 at the correct class, 0 elsewhere
    y_label = np.zeros((N, C))
    y_label[np.arange(N), y] = 1
    # cross entropy
    loss = -1 * np.sum(np.multiply(np.log(p), y_label)) / N
    loss += 0.5 * reg * np.sum(W * W)
    # the gradient formula, written compactly
    dW = X.T.dot(p - y_label)
    dW /= N
    dW += reg * W
    #############################################################################
    #                          END OF YOUR CODE                                 #
    #############################################################################

    return loss, dW

Comparing the runtime of the naive and vectorized versions:

naive loss: 2.357905e+00 computed in 0.091724s
vectorized loss: 2.357905e+00 computed in 0.002995s
Loss difference: 0.000000
Gradient difference: 0.000000
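For reference, numbers like these come from a simple timing harness along these lines (a sketch; the notebook's own cell does essentially this):

import time

tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % np.linalg.norm(grad_naive - grad_vectorized, ord='fro'))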

The softmax functions are now complete; the next step is to tune the two hyperparameters, learning rate and regularization strength:

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save   #
# the best trained softmax classifer in best_softmax.                         #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        loss_hist = softmax.train(X_train, y_train, learning_rate=lr, reg=reg,
                                  num_iters=1500, verbose=True)
        y_train_pred = softmax.predict(X_train)
        y_val_pred = softmax.predict(X_val)
        y_train_acc = np.mean(y_train_pred == y_train)
        y_val_acc = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = [y_train_acc, y_val_acc]
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_softmax = softmax
################################################################################
#                              END OF YOUR CODE                               #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)
lr 1.000000e-07 reg 2.500000e+04 train accuracy: 0.350592 val accuracy: 0.354000
lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.329551 val accuracy: 0.342000
lr 5.000000e-07 reg 2.500000e+04 train accuracy: 0.347286 val accuracy: 0.359000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.328551 val accuracy: 0.337000
best validation accuracy achieved during cross-validation: 0.359000
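With best_softmax selected, the natural last step (as the notebook does next) is to evaluate it on the test set; roughly, assuming the notebook's X_test / y_test split:

y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % test_accuracy)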