

    Course 2 - Improving Deep Neural Networks

    Week 2: Optimization Algorithms in Practice

    cd D:\software\OneDrive\桌面\吴恩达深度学习课后作业\第二部分 改善深层神经网络\第二周 优化算法实战
    

    D:\software\OneDrive\桌面\吴恩达深度学习课后作业\第二部分 改善深层神经网络\第二周 优化算法实战

    import numpy as np
    import matplotlib.pyplot as plt
    import scipy.io
    import math
    import sklearn
    import sklearn.datasets
    
    from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
    from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
    from testCase import *
    
    %matplotlib inline
    plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'
    

    D:\software\OneDrive\桌面\吴恩达深度学习课后作业\第二部分 改善深层神经网络\第二周 优化算法实战\opt_utils.py:76: SyntaxWarning: assertion is always true, perhaps remove parentheses?
      assert(parameters['W' + str(l)].shape == layer_dims[l], layer_dims[l-1])
    D:\software\OneDrive\桌面\吴恩达深度学习课后作业\第二部分 改善深层神经网络\第二周 优化算法实战\opt_utils.py:77: SyntaxWarning: assertion is always true, perhaps remove parentheses?
      assert(parameters['W' + str(l)].shape == layer_dims[l], 1)
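    The warning appears because `assert(x, y)` asserts a two-element tuple, which is always truthy, so the shape check in opt_utils.py never actually fires. opt_utils.py is not reproduced here, but judging from the message, the intended check was presumably against a shape tuple, roughly like this sketch (the second line assumes line 77 was meant to validate b):

    # Hypothetical fix for the two warned lines in opt_utils.py:
    # compare the shape against a tuple instead of asserting a tuple.
    assert parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1])
    assert parameters['b' + str(l)].shape == (layer_dims[l], 1)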

    1. Gradient Descent

    def update_parameters_with_gd(parameters, grads, learning_rate):
        
        L = len(parameters) // 2  # number of layers in the network
        
        # One gradient-descent step on every W[l] and b[l]
        for i in range(L):
            parameters["W"+str(i+1)] = parameters["W"+str(i+1)] - learning_rate*grads["dW"+str(i+1)]
            parameters["b"+str(i+1)] = parameters["b"+str(i+1)] - learning_rate*grads["db"+str(i+1)]
        
        return parameters
    
    parameters, grads, learning_rate = update_parameters_with_gd_test_case()
    parameters = update_parameters_with_gd(parameters, grads, learning_rate)
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    

    W1 = [[ 1.63535156 -0.62320365 -0.53718766]
    [-1.07799357 0.85639907 -2.29470142]]
    b1 = [[ 1.74604067]
    [-0.75184921]]
    W2 = [[ 0.32171798 -0.25467393 1.46902454]
    [-2.05617317 -0.31554548 -0.3756023 ]
    [ 1.1404819 -1.09976462 -0.1612551 ]]
    b2 = [[-0.88020257]
    [ 0.02561572]
    [ 0.57539477]]

    Batch Gradient Descent vs. Stochastic Gradient Descent

    1. (Batch) Gradient Descent:

    X = data_input  
    Y = labels  
    parameters = initialize_parameters(layers_dims)  
    for i in range(0, num_iterations):  
        # Forward propagation  
        a, caches = forward_propagation(X, parameters)  
        # Compute cost.  
        cost = compute_cost(a, Y)  
        # Backward propagation.  
        grads = backward_propagation(a, caches, parameters)  
        # Update parameters.  
        parameters = update_parameters(parameters, grads)
    

    2. Stochastic Gradient Descent (SGD): equivalent to mini-batch gradient descent in which each mini-batch contains only a single training example, i.e., a batch size of 1.

    X = data_input  
    Y = labels  
    parameters = initialize_parameters(layers_dims)  
    for i in range(0, num_iterations):  
        for j in range(0, m):  
            # Forward propagation  
            a, caches = forward_propagation(X[:,j], parameters)  
            # Compute cost  
            cost = compute_cost(a, Y[:,j])  
            # Backward propagation  
            grads = backward_propagation(a, caches, parameters)  
            # Update parameters.  
            parameters = update_parameters(parameters, grads)
    

    What you should remember:

    - The difference between gradient descent, mini-batch gradient descent, and stochastic gradient descent is the number of examples used to perform one update step (a schematic mini-batch loop is sketched below).
    - The learning-rate hyperparameter α has to be tuned.
    - With a well-chosen mini-batch size, mini-batch gradient descent usually outperforms both gradient descent and stochastic gradient descent, especially when the training set is large.
    
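    For comparison with the two loops above, here is a schematic sketch of the mini-batch gradient descent loop in the same pseudocode style (random_mini_batches is the helper implemented in the next part; the other calls are placeholders, as above):

    X = data_input
    Y = labels
    parameters = initialize_parameters(layers_dims)
    for i in range(0, num_iterations):
        # Split the training set into mini-batches of size mini_batch_size
        minibatches = random_mini_batches(X, Y, mini_batch_size)
        for minibatch_X, minibatch_Y in minibatches:
            # Forward propagation on one mini-batch
            a, caches = forward_propagation(minibatch_X, parameters)
            # Compute cost
            cost = compute_cost(a, minibatch_Y)
            # Backward propagation
            grads = backward_propagation(a, caches, parameters)
            # Update parameters
            parameters = update_parameters(parameters, grads)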

    2. Mini-Batch Gradient Descent

    It takes two steps:

    1. Shuffle
    Create a shuffled version of the training set (X, Y).
    Each column of X and Y represents one training example.
    Note that the shuffling is done synchronously between X and Y, so that after shuffling, the i-th column of X is still the example corresponding to the i-th label in Y. The shuffling step ensures that examples end up in different mini-batches at random.
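    A minimal, self-contained sketch of this synchronized shuffle on toy data (the slicing lines are the same shuffling code that appears inside random_mini_batches below):

    import numpy as np

    np.random.seed(0)
    X = np.random.randn(2, 5)            # 2 features, 5 toy examples (one per column)
    Y = np.random.randint(0, 2, (1, 5))  # one label per column

    m = X.shape[1]
    permutation = list(np.random.permutation(m))    # random ordering of the m column indices
    shuffled_X = X[:, permutation]                  # reorder the columns of X
    shuffled_Y = Y[:, permutation].reshape((1, m))  # apply the same ordering to Y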

    2. Partition
    Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64).
    Note that the number of training examples is not always divisible by mini_batch_size, so the last mini-batch may be smaller; this end case is handled separately and is nothing to worry about.
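    The arithmetic behind the partition is simple; a small sketch with toy numbers that happen to match the test case further down (two full mini-batches of 64 plus one of 20):

    import math

    m = 148                 # number of training examples (toy value)
    mini_batch_size = 64

    num_complete = math.floor(m / mini_batch_size)  # 2 full mini-batches of 64
    remainder = m - num_complete * mini_batch_size  # 20 examples left for the last, smaller mini-batch
    print(num_complete, remainder)                  # -> 2 20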

    Exercise

    Implement random_mini_batches.
    The shuffling part has already been coded for you. To help you with the partitioning step, the following code selects the indices of the 1st and 2nd mini-batches:

    first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
    second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]

    def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
        
        np.random.seed(seed)
        m = X.shape[1]            # number of training examples
        mini_batches = []
        
        # Step 1: Shuffle (X, Y) synchronously
        permutation = list(np.random.permutation(m))  # a random permutation of the indices 0..m-1
        shuffled_X = X[:, permutation]
        shuffled_Y = Y[:, permutation].reshape((1, m))
        
        # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case
        num_complete_minibatches = math.floor(m / mini_batch_size)
        
        for k in range(0, num_complete_minibatches):
            mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
            mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)
        
        # Handle the end case (last mini-batch smaller than mini_batch_size)
        if m % mini_batch_size != 0:
            mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)
        
        return mini_batches
    
    X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
    mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
    
    print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
    print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
    print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
    print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
    print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape)) 
    print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
    print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3])) #sanity check 合理性检验 [0][0][0][0:3]?
    

    shape of the 1st mini_batch_X: (12288, 64)
    shape of the 2nd mini_batch_X: (12288, 64)
    shape of the 3rd mini_batch_X: (12288, 20)
    shape of the 1st mini_batch_Y: (1, 64)
    shape of the 2nd mini_batch_Y: (1, 64)
    shape of the 3rd mini_batch_Y: (1, 20)
    mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]

    Note:

    - Shuffling and partitioning are the two steps required to build mini-batches.
    - Powers of two are usually chosen for the mini-batch size, e.g., 16, 32, 64, 128.
    

    3. Momentum

    Exercise: Initialize the velocity. The velocity v is a Python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, i.e., for l = 1, ..., L:

    v["dW" + str(l+1)] = ...  # (numpy array of zeros with the same shape as parameters["W" + str(l+1)])
    v["db" + str(l+1)] = ...  # (numpy array of zeros with the same shape as parameters["b" + str(l+1)])

    # Initialize the velocity dictionary v with zero arrays
    def initialize_velocity(parameters):
        
        L = len(parameters) // 2  # number of layers in the network
        v = {}
        
        for i in range(L):
            v["dW"+str(i+1)] = np.zeros(parameters["W"+str(i+1)].shape) 
            v["db"+str(i+1)] = np.zeros(parameters["b"+str(i+1)].shape) 
        
        return v
    
    parameters = initialize_velocity_test_case()
    v = initialize_velocity(parameters)
    
    print("v[\"dW1\"] = " + str(v["dW1"]))
    print("v[\"db1\"] = " + str(v["db1"]))
    print("v[\"dW2\"] = " + str(v["dW2"]))
    print("v[\"db2\"] = " + str(v["db2"]))
    

    v["dW1"] = [[0. 0. 0.]
    [0. 0. 0.]]
    v["db1"] = [[0.]
    [0.]]
    v["dW2"] = [[0. 0. 0.]
    [0. 0. 0.]
    [0. 0. 0.]]
    v["db2"] = [[0.]
    [0.]
    [0.]]

    Exercise: Implement the parameter update with momentum.
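    For l = 1, ..., L, the rule implemented in the code below is the standard momentum update, where β is the momentum coefficient and α the learning rate:

    $$v_{dW^{[l]}} = \beta\, v_{dW^{[l]}} + (1-\beta)\, dW^{[l]}, \qquad W^{[l]} = W^{[l]} - \alpha\, v_{dW^{[l]}}$$
    $$v_{db^{[l]}} = \beta\, v_{db^{[l]}} + (1-\beta)\, db^{[l]}, \qquad b^{[l]} = b^{[l]} - \alpha\, v_{db^{[l]}}$$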

    def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
        
        L = len(parameters) // 2  # number of layers in the network
        
        for i in range(L):
            # Compute the exponentially weighted average of the gradients
            v["dW"+str(i+1)] = beta * v["dW"+str(i+1)] + (1-beta) * grads["dW"+str(i+1)]
            v["db"+str(i+1)] = beta * v["db"+str(i+1)] + (1-beta) * grads["db"+str(i+1)]
            # Update the parameters using the velocities
            parameters["W"+str(i+1)] = parameters["W"+str(i+1)] - learning_rate * v["dW"+str(i+1)]
            parameters["b"+str(i+1)] = parameters["b"+str(i+1)] - learning_rate * v["db"+str(i+1)]
        
        return parameters, v
    
    parameters, grads, v = update_parameters_with_momentum_test_case()
    parameters,v = update_parameters_with_momentum(parameters, grads, v,beta=0.9,learning_rate=0.01)
    
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    print("v[\"dW1\"] = " + str(v["dW1"]))
    print("v[\"db1\"] = " + str(v["db1"]))
    print("v[\"dW2\"] = " + str(v["dW2"]))
    print("v[\"db2\"] = " + str(v["db2"]))
    

    W1 = [[ 1.62544598 -0.61290114 -0.52907334]
    [-1.07347112 0.86450677 -2.30085497]]
    b1 = [[ 1.74493465]
    [-0.76027113]]
    W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
    [-2.05974396 -0.32173003 -0.38320915]
    [ 1.13444069 -1.0998786 -0.1713109 ]]
    b2 = [[-0.87809283]
    [ 0.04055394]
    [ 0.58207317]]
    v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
    [ 0.05024943 0.09008559 -0.06837279]]
    v["db1"] = [[-0.01228902]
    [-0.09357694]]
    v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
    [-0.03967535 -0.06871727 -0.08452056]
    [-0.06712461 -0.00126646 -0.11173103]]
    v["db2"] = [[0.02344157]
    [0.16598022]
    [0.07420442]]

    4. Adam

    Adam is one of the most effective optimization algorithms for training neural networks. It combines the ideas of RMSProp and Momentum.
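    In the code below, for each layer l, v is a moving average of the gradients, s is a moving average of the squared gradients, both bias-corrected using the step counter t, and ε prevents division by zero (written here for W^{[l]}; the update for b^{[l]} is analogous):

    $$v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1-\beta_1)\, dW^{[l]}, \qquad v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1-\beta_1^{\,t}}$$
    $$s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1-\beta_2)\, \big(dW^{[l]}\big)^2, \qquad s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1-\beta_2^{\,t}}$$
    $$W^{[l]} = W^{[l]} - \alpha\, \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}} + \varepsilon}}$$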

    def initialize_adam(parameters):
        
        L = len(parameters) // 2  # number of layers in the network
        v = {}                    # exponentially weighted average of the gradients
        s = {}                    # exponentially weighted average of the squared gradients
        
        for i in range(L):
            v["dW"+str(i+1)] = np.zeros(parameters["W"+str(i+1)].shape)
            v["db"+str(i+1)] = np.zeros(parameters["b"+str(i+1)].shape)
            s["dW"+str(i+1)] = np.zeros(parameters["W"+str(i+1)].shape)
            s["db"+str(i+1)] = np.zeros(parameters["b"+str(i+1)].shape)
        
        return v,s
    
    parameters = initialize_adam_test_case()
    v,s = initialize_adam(parameters)
    
    print("v[\"dW1\"] = " + str(v["dW1"]))
    print("v[\"db1\"] = " + str(v["db1"]))
    print("v[\"dW2\"] = " + str(v["dW2"]))
    print("v[\"db2\"] = " + str(v["db2"]))
    print("s[\"dW1\"] = " + str(s["dW1"]))
    print("s[\"db1\"] = " + str(s["db1"]))
    print("s[\"dW2\"] = " + str(s["dW2"]))
    print("s[\"db2\"] = " + str(s["db2"]))
    

    v["dW1"] = [[0. 0. 0.]
    [0. 0. 0.]]
    v["db1"] = [[0.]
    [0.]]
    v["dW2"] = [[0. 0. 0.]
    [0. 0. 0.]
    [0. 0. 0.]]
    v["db2"] = [[0.]
    [0.]
    [0.]]
    s["dW1"] = [[0. 0. 0.]
    [0. 0. 0.]]
    s["db1"] = [[0.]
    [0.]]
    s["dW2"] = [[0. 0. 0.]
    [0. 0. 0.]
    [0. 0. 0.]]
    s["db2"] = [[0.]
    [0.]
    [0.]]

    def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
                                    beta1 = 0.9, beta2 = 0.999,  epsilon = 1e-8):
        
        L = len(parameters) // 2  # number of layers in the network
        v_corrected = {}          # bias-corrected first moment estimate
        s_corrected = {}          # bias-corrected second moment estimate
        
        for i in range(L):
            
            # Moving average of the gradients, with bias correction
            v["dW"+str(i+1)] = beta1 * v["dW"+str(i+1)] + (1-beta1) * grads["dW"+str(i+1)]
            v["db"+str(i+1)] = beta1 * v["db"+str(i+1)] + (1-beta1) * grads["db"+str(i+1)]
            v_corrected["dW"+str(i+1)] = v["dW"+str(i+1)] / (1-(beta1)**t)
            v_corrected["db"+str(i+1)] = v["db"+str(i+1)] / (1-(beta1)**t)
            
            # Moving average of the squared gradients, with bias correction
            s["dW"+str(i+1)] = beta2 * s["dW"+str(i+1)] + (1-beta2) * (grads["dW"+str(i+1)]**2)
            s["db"+str(i+1)] = beta2 * s["db"+str(i+1)] + (1-beta2) * (grads["db"+str(i+1)]**2)
            s_corrected["dW"+str(i+1)] = s["dW"+str(i+1)] / (1-(beta2)**t)
            s_corrected["db"+str(i+1)] = s["db"+str(i+1)] / (1-(beta2)**t)
            
            # Update parameters
            parameters["W"+str(i+1)] = parameters["W"+str(i+1)] - learning_rate * (v_corrected["dW"+str(i+1)]/np.sqrt(s_corrected["dW"+str(i+1)]+epsilon))
            parameters["b"+str(i+1)] = parameters["b"+str(i+1)] - learning_rate * (v_corrected["db"+str(i+1)]/np.sqrt(s_corrected["db"+str(i+1)]+epsilon))
    
        return parameters, v, s
    
    parameters, grads, v, s = update_parameters_with_adam_test_case()
    parameters,v,s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
    
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    print("v[\"dW1\"] = " + str(v["dW1"]))
    print("v[\"db1\"] = " + str(v["db1"]))
    print("v[\"dW2\"] = " + str(v["dW2"]))
    print("v[\"db2\"] = " + str(v["db2"]))
    print("s[\"dW1\"] = " + str(s["dW1"]))
    print("s[\"db1\"] = " + str(s["db1"]))
    print("s[\"dW2\"] = " + str(s["dW2"]))
    print("s[\"db2\"] = " + str(s["db2"]))
    

    W1 = [[ 1.63178673 -0.61919778 -0.53561312]
    [-1.08040999 0.85796626 -2.29409733]]
    b1 = [[ 1.74481176]
    [-0.7612069 ]]
    W2 = [[ 0.32648046 -0.25681174 1.46954931]
    [-2.05269934 -0.31497584 -0.37661299]
    [ 1.14121081 -1.09245036 -0.16498684]]
    b2 = [[-0.87785842]
    [ 0.04221375]
    [ 0.58281521]]
    v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
    [ 0.05024943 0.09008559 -0.06837279]]
    v["db1"] = [[-0.01228902]
    [-0.09357694]]
    v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
    [-0.03967535 -0.06871727 -0.08452056]
    [-0.06712461 -0.00126646 -0.11173103]]
    v["db2"] = [[0.02344157]
    [0.16598022]
    [0.07420442]]
    s["dW1"] = [[0.00121136 0.00131039 0.00081287]
    [0.0002525 0.00081154 0.00046748]]
    s["db1"] = [[1.51020075e-05]
    [8.75664434e-04]]
    s["dW2"] = [[7.17640232e-05 2.81276921e-04 4.78394595e-04]
    [1.57413361e-04 4.72206320e-04 7.14372576e-04]
    [4.50571368e-04 1.60392066e-07 1.24838242e-03]]
    s["db2"] = [[5.49507194e-05]
    [2.75494327e-03]
    [5.50629536e-04]]

    5. A Model with Different Optimization Algorithms

    We will use the "moons" dataset to test the different optimization methods.
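    load_dataset comes from opt_utils and is not reproduced here; presumably it wraps sklearn.datasets.make_moons (which is why sklearn.datasets is imported at the top). A rough, hypothetical sketch of building a similar dataset, with illustrative size/noise values:

    import sklearn.datasets

    # Hypothetical stand-in for opt_utils.load_dataset(); the exact
    # n_samples / noise values used by the assignment may differ.
    X, Y = sklearn.datasets.make_moons(n_samples=300, noise=0.2)
    train_X = X.T                  # shape (2, 300): features as rows, examples as columns
    train_Y = Y.reshape((1, -1))   # shape (1, 300): labels as a row vector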

    We have already implemented a 3-layer neural network. You will train it with:
    Mini-batch Gradient Descent: it will call your function:
    - update_parameters_with_gd()
    Mini-batch Momentum: it will call your functions:
    - initialize_velocity() and update_parameters_with_momentum()
    Mini-batch Adam: it will call your functions:
    - initialize_adam() and update_parameters_with_adam()

    train_X, train_Y = load_dataset()
    

    [Figure: scatter plot of the "moons" training dataset]

    def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
              beta1 = 0.9, beta2 = 0.999,  epsilon = 1e-8, num_epochs = 10000, print_cost = True):
        
        L = len(layers_dims)  # number of layers in the network
        costs = []            # to keep track of the cost
        t = 0                 # Adam step counter, used for bias correction
        seed = 10             # incremented each epoch so mini-batches are reshuffled differently
        
        parameters  = initialize_parameters(layers_dims)
        
        if optimizer == "gd":
            pass
        elif optimizer == "momentum":
            v = initialize_velocity(parameters)
        elif optimizer == "adam":
            v,s = initialize_adam(parameters)
        
        for i in range(num_epochs):
            # Define the random mini-batches; increment the seed so the shuffling differs between epochs
            seed = seed + 1
            minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
            
            for minibatch in minibatches:
                
                (minibatch_X, minibatch_Y) = minibatch
                a3,caches = forward_propagation(minibatch_X, parameters)
                cost = compute_cost(a3,minibatch_Y)
                grads = backward_propagation(minibatch_X, minibatch_Y,caches)
                
                if optimizer == "gd":
                    parameters = update_parameters_with_gd(parameters,grads,learning_rate)
                elif optimizer == "momentum":
                    parameters,v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
                elif optimizer == "adam":
                    t = t+1
                    parameters,v,s = update_parameters_with_adam(parameters, grads, v, s,
                                                                   t, learning_rate, beta1, beta2,  epsilon)
        
            if print_cost and i % 1000 == 0:
                print ("Cost after epoch %i: %f" %(i, cost))
            if print_cost and i % 100 == 0:
                costs.append(cost)
        
        # plot the cost
        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epochs (per 100)')
        plt.title("Learning rate = " + str(learning_rate))
        plt.show()
    
        return parameters
    

    (1) Mini-batch gradient descent

    # Mini-batch gradient descent
    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
    
    # Predict
    predictions = predict(train_X, train_Y, parameters)
    
    # Plot decision boundary
    plt.title("Model with Gradient Descent optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5,2.5])
    axes.set_ylim([-1,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
    

    Cost after epoch 0: 0.690736
    Cost after epoch 1000: 0.685273
    Cost after epoch 2000: 0.647072
    Cost after epoch 3000: 0.619525
    Cost after epoch 4000: 0.576584
    Cost after epoch 5000: 0.607243
    Cost after epoch 6000: 0.529403
    Cost after epoch 7000: 0.460768
    Cost after epoch 8000: 0.465586
    Cost after epoch 9000: 0.464518

    [Figure: cost curve for mini-batch gradient descent (learning rate = 0.0007)]

    Accuracy: 0.7966666666666666

    [Figure: decision boundary of the model trained with gradient descent optimization]

    (2) Mini-batch gradient descent with momentum

    Because this example is relatively simple, the gain from using momentum is small, but on more complex problems you would likely see a larger benefit.

    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
    
    # Predict
    predictions = predict(train_X, train_Y, parameters)
    
    # Plot decision boundary
    plt.title("Model with Momentum optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5,2.5])
    axes.set_ylim([-1,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
    

    Cost after epoch 0: 0.690741
    Cost after epoch 1000: 0.685341
    Cost after epoch 2000: 0.647145
    Cost after epoch 3000: 0.619594
    Cost after epoch 4000: 0.576665
    Cost after epoch 5000: 0.607324
    Cost after epoch 6000: 0.529476
    Cost after epoch 7000: 0.460936
    Cost after epoch 8000: 0.465780
    Cost after epoch 9000: 0.464740

    [Figure: cost curve for mini-batch gradient descent with momentum]

    Accuracy: 0.7966666666666666

    [Figure: decision boundary of the model trained with momentum optimization]

    (3) Mini-batch gradient descent with Adam

    # train 3-layer model
    layers_dims = [train_X.shape[0], 5, 2, 1]
    parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
    
    # Predict
    predictions = predict(train_X, train_Y, parameters)
    
    # Plot decision boundary
    plt.title("Model with Adam optimization")
    axes = plt.gca()
    axes.set_xlim([-1.5,2.5])
    axes.set_ylim([-1,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
    

    Cost after epoch 0: 0.690552
    Cost after epoch 1000: 0.185501
    Cost after epoch 2000: 0.150830
    Cost after epoch 3000: 0.074454
    Cost after epoch 4000: 0.125959
    Cost after epoch 5000: 0.104344
    Cost after epoch 6000: 0.100676
    Cost after epoch 7000: 0.031652
    Cost after epoch 8000: 0.111973
    Cost after epoch 9000: 0.197940

    [Figure: cost curve for mini-batch gradient descent with Adam]

    Accuracy: 0.94

    [Figure: decision boundary of the model trained with Adam optimization]

    (4) Summary

    Optimization method   Accuracy   Cost curve
    Gradient descent      79.7%      oscillates
    Momentum              79.7%      oscillates
    Adam                  94%        smoother

    - Momentum usually helps, but given the small learning rate and the overly simple dataset, its impact here is almost negligible.
    - Adam, on the other hand, clearly outperforms mini-batch gradient descent and momentum.

    Some advantages of Adam include:

    • Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
    • It usually works well even with little hyperparameter tuning (except for the learning rate α)