This article builds on a theoretical derivation that I presented in a series of videos; if you are interested, look for the collection『戴森与你聊:神经网络小知识』on my homepage. Based on that derivation, this article implements a simple three-layer fully connected network in code.
The simple three-layer fully connected network to be implemented in this article
The code runs in a Python environment; you can paste the snippets below into a Jupyter notebook and run and debug them cell by cell. Implementing the basic algorithm of a neural network requires a few libraries, so let's import them first:
import numpy as np
import matplotlib.pyplot as plot
Now let's get down to business!
X = np.array([[1,0,0,0],[1,0,1,1],[0,1,0,1],[1,1,1,0],[1,0,0,1]])
print('\nInput shape:\n', X.shape)
y = np.array([[1],[1],[0],[1],[0]])
print('\nGround truth shape:\n', y.shape)
Note: In the earlier derivation (in the videos) we assumed each input is a column vector, whereas here we use a matrix. What does that mean? In the input shape (5, 4) printed above, the first number, 5, is the number of samples, while the second number, 4, is the number of features in each sample. So (5, 4) means the input consists of five samples, each represented by a 1x4 row vector. Keep this distinction firmly in mind, because it determines the order of the indices, and therefore the order of the matrices, in all the matrix multiplications that follow. Also note that the number of samples has nothing to do with the weights we define later! The number of weights depends only on the dimensionality of each individual sample. Why is that?
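To make this concrete, here is a minimal sketch (the names W, X_small and X_large are purely illustrative) showing that the same 4x3 weight matrix works no matter how many samples are fed in, because every sample shares the same weights:
W = np.random.uniform(size=(4, 3))           # weight shape depends only on the 4 features per sample
X_small = np.random.randint(0, 2, (5, 4))    # 5 samples
X_large = np.random.randint(0, 2, (100, 4))  # 100 samples
print(np.dot(X_small, W).shape)  # (5, 3)
print(np.dot(X_large, W).shape)  # (100, 3)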
Suppose we use a simple fully connected network with the following structure: 4 units in the input layer, 3 units in the hidden layer, and 1 unit in the output layer.
numInputNeurons = X.shape[1]
numHiddenNeurons = 3
numOutputNeurons = 1
Note: The indexing convention for the weights here is not the same as the subscript convention used in the derivation. For the weight matrices, the derivation wrote the target unit first and the source unit second; here it is the other way around. Think about why that is (a sketch follows the code below).
weightsInputHidden = np.random.uniform(size=(numInputNeurons,numHiddenNeurons))
print('\nThe shape of weight matrix between the input layer and hidden layer is: ',weightsInputHidden.shape)
weightsHiddenOutput = np.random.uniform(size=(numHiddenNeurons,numOutputNeurons))
print('\nThe shape of weight matrix between the hidden layer and output layer is: ',weightsHiddenOutput.shape)
biasHidden = np.random.uniform(size=(1,numHiddenNeurons))
print('\nThe shape of bias matrix of hidden layer: ',biasHidden.shape)
biasOutput = np.random.uniform(size=(1,numOutputNeurons))
print('\nThe shape of bias matrix of output layer is: ',biasOutput.shape)
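As a hedged illustration of the indexing question raised above (the helper variables batch_out, x_col and single_out are hypothetical): with the convention used in this code, a weight matrix of shape (numInputNeurons, numHiddenNeurons) lets a whole batch of row-vector samples propagate as np.dot(X, weightsInputHidden); the derivation's convention (target unit first, source unit second) is simply the transpose, acting on a single sample written as a column vector.
# Code convention: weightsInputHidden[i, j] connects input unit i to hidden unit j,
# so a batch of row-vector samples propagates as np.dot(X, weightsInputHidden).
batch_out = np.dot(X, weightsInputHidden)         # (5, 4) x (4, 3) = (5, 3)
# Derivation convention: rows indexed by target unit, columns by source unit,
# i.e. the transpose, acting on one sample written as a column vector.
x_col = X[0].reshape(-1, 1)                       # (4, 1)
single_out = np.dot(weightsInputHidden.T, x_col)  # (3, 4) x (4, 1) = (3, 1)
print(np.allclose(batch_out[0].reshape(-1, 1), single_out))  # True: same numbers either way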
Both forward propagation and backpropagation will use the sigmoid function and its derivative, so let's define them first:
def sigmoid(x):
    return 1/(1 + np.exp(-x))
# Derivative of the sigmoid function. By default the argument is assumed to be
# the activation sigmoid(x) already, so the derivative is simply x * (1 - x);
# pass original=True when the argument is the raw (pre-sigmoid) input.
def derivative_sigmoid(x, original = False):
    if original:
        return sigmoid(x) * (1 - sigmoid(x))
    return x * (1 - x)
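A quick sanity check (not part of the original notebook, just an illustrative sketch) confirming that the two branches of derivative_sigmoid agree when used consistently:
z = np.array([-2.0, 0.0, 1.5])
a = sigmoid(z)
print(derivative_sigmoid(a))                 # efficient form: a * (1 - a)
print(derivative_sigmoid(z, original=True))  # recomputes sigmoid first; same values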
5.1 InputLayer --> HiddenLayer
hiddenIn = np.dot(X, weightsInputHidden) + biasHidden
hiddenActivation = sigmoid(hiddenIn)
Note: Pay attention to the order of the matrix multiplication here and analyze it carefully. The key is simply to keep checking that the dimensions match!
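If you want to verify that dimension matching explicitly, a minimal check (the assertions are added here purely for illustration) looks like this:
# X is (5, 4), weightsInputHidden is (4, 3), biasHidden is (1, 3) and broadcasts over the samples,
# so the hidden activations come out as (numSamples, numHiddenNeurons).
assert hiddenIn.shape == (X.shape[0], numHiddenNeurons)
assert hiddenActivation.shape == (X.shape[0], numHiddenNeurons)
print('Hidden activation shape:', hiddenActivation.shape)  # (5, 3)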
5.2 HiddenLayer --> OutputLayer
outputIn = np.dot(hiddenActivation, weightsHiddenOutput) + biasOutput
outputActivation = sigmoid(outputIn)
print('\nPrediction is: ', outputActivation)
Backpropagating the error is the most important step. It breaks down into the following key stages:
6.1 The cost function and its derivative
Error = np.square(y - outputActivation)/2  # quadratic cost per sample, Nx1
E = outputActivation - y                   # derivative of the cost w.r.t. outputActivation, Nx1
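To see why E = outputActivation - y is the right quantity: the cost per output is Error = (y - a)^2 / 2, and its derivative with respect to the activation a is (a - y). A small finite-difference check (illustrative only; eps and the temporary a are made up for this sketch) makes that concrete:
eps = 1e-6
a = outputActivation.copy()
numeric = (np.square(y - (a + eps))/2 - np.square(y - (a - eps))/2) / (2 * eps)
analytic = a - y
print(np.allclose(numeric, analytic))  # True, up to floating-point error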
6.2 BP fundamental equation: error of the output-layer neurons
derivativeOutput = derivative_sigmoid(outputActivation)  # Nx1
deltaOutput = E * derivativeOutput                       # Nx1 Hadamard Nx1 = Nx1
6.3 BP fundamental equation: error of the hidden-layer neurons
derivativeHidden = derivative_sigmoid(hiddenActivation)                      # Nx3
deltaHidden = np.dot(deltaOutput, weightsHiddenOutput.T) * derivativeHidden  # Nx1 x 1x3 Hadamard Nx3 = Nx3
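As with the forward pass, checking the shapes is the quickest way to convince yourself this line is assembled correctly (a small illustrative check, not part of the original notebook):
# deltaOutput is (5, 1) and weightsHiddenOutput.T is (1, 3), so their product is (5, 3),
# which matches derivativeHidden for the element-wise (Hadamard) product.
assert np.dot(deltaOutput, weightsHiddenOutput.T).shape == derivativeHidden.shape
print('deltaHidden shape:', deltaHidden.shape)  # (5, 3)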
6.4 BP fundamental equations: updating the weights and biases
# Learning rate
lr = 0.01
weightsHiddenOutput -= np.dot(hiddenActivation.T, deltaOutput) * lr # 3xN x Nx1 = 3x1
weightsInputHidden -= np.dot(X.T, deltaHidden) * lr # 4xN x Nx3 = 4x3
biasOutput -= np.sum(deltaOutput, axis=0) * lr
biasHidden -= np.sum(deltaHidden, axis=0) * lr
Note: Again, watch the dimension matching above! The dimensions of the network's own parameters are independent of the number of samples; the shapes of the weights and biases, for instance, can never depend on how many samples we have. This is a very useful sanity check for whether we have done things correctly.
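One way to convince yourself that np.dot(hiddenActivation.T, deltaOutput) really is the gradient of the summed cost is a finite-difference check on a single weight. The sketch below is for verification only (total_cost and the *Grad variables are hypothetical helpers); it recomputes the gradient at the current parameters, so it works whether or not the update above has already been applied:
# Cost summed over all samples, as a function of the hidden->output weights.
def total_cost(wHO):
    h = sigmoid(np.dot(X, weightsInputHidden) + biasHidden)
    o = sigmoid(np.dot(h, wHO) + biasOutput)
    return np.sum(np.square(y - o) / 2)
# Analytic gradient of one weight, recomputed at the current parameters.
h = sigmoid(np.dot(X, weightsInputHidden) + biasHidden)
o = sigmoid(np.dot(h, weightsHiddenOutput) + biasOutput)
analyticGrad = np.dot(h.T, (o - y) * derivative_sigmoid(o))[0, 0]
# Numeric gradient of the same weight by central differences.
eps = 1e-6
wPlus, wMinus = weightsHiddenOutput.copy(), weightsHiddenOutput.copy()
wPlus[0, 0] += eps
wMinus[0, 0] -= eps
numericGrad = (total_cost(wPlus) - total_cost(wMinus)) / (2 * eps)
print(numericGrad, analyticGrad)  # the two should agree to several decimal places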
At this point, one complete forward-pass + backward-pass cycle through the network is finished! That was the step-by-step breakdown, and it also amounts to one round of 'training'. A neural network, however, has to be trained many times before its parameters are tuned well enough to do the job, so we now put the whole training process inside a loop and repeat it:
# Define Structure Parameters
numInputNeurons = X.shape[1]
numHiddenNeurons = 3
numOutputNeurons = 1
# Initialize weights and biases with random numbers
weightsInputHidden = np.random.uniform(size=(numInputNeurons,numHiddenNeurons))
print('\nThe shape of weight matrix between the input layer and hidden layer is: ',weightsInputHidden.shape)
weightsHiddenOutput = np.random.uniform(size=(numHiddenNeurons,numOutputNeurons))
print('\nThe shape of weight matrix between the hidden layer and output layer is: ',weightsHiddenOutput.shape)
biasHidden = np.random.uniform(size=(1,numHiddenNeurons))
print('\nThe shape of bias matrix of hidden layer: ',biasHidden.shape)
biasOutput = np.random.uniform(size=(1,numOutputNeurons))
print('\nThe shape of bias matrix of output layer is: ',biasOutput.shape)
# Define useful functions
def sigmoid(x):
    return 1/(1 + np.exp(-x))
# Derivative of the sigmoid function, with a switch between the original form
# (recompute sigmoid) and the efficient form (argument is already the activation)
def derivative_sigmoid(x, original = False):
    if original:
        return sigmoid(x) * (1 - sigmoid(x))
    return x * (1 - x)
# Define training parameters
epochs = 8000
lr = 1
# Start training
for epoch in range(epochs):
    # Forward Propagation
    hiddenIn = np.dot(X, weightsInputHidden) + biasHidden  # Nx4 x 4x3 + Nx3 = Nx3
    hiddenActivation = sigmoid(hiddenIn)  # Nx3
    outputIn = np.dot(hiddenActivation, weightsHiddenOutput) + biasOutput  # Nx3 x 3x1 + Nx1
    outputActivation = sigmoid(outputIn)  # Nx1
    # Error
    Error = np.square(y - outputActivation)/2  # Nx1
    print('\nError in epoch ', epoch, ' is: ', np.mean(Error))
    # Back Propagation
    E = outputActivation - y  # Nx1
    derivativeOutput = derivative_sigmoid(outputActivation)  # Nx1
    # Output --> Hidden
    deltaOutput = E * derivativeOutput  # Nx1 Hadamard Nx1 = Nx1
    # Hidden --> Input
    derivativeHidden = derivative_sigmoid(hiddenActivation)  # Nx3
    deltaHidden = np.dot(deltaOutput, weightsHiddenOutput.T) * derivativeHidden  # Nx1 x 1x3 Hadamard Nx3
    # Update weights
    weightsHiddenOutput -= np.dot(hiddenActivation.T, deltaOutput) * lr  # 3xN x Nx1 = 3x1
    weightsInputHidden -= np.dot(X.T, deltaHidden) * lr  # 4xN x Nx3 = 4x3
    # Update biases
    biasOutput -= np.sum(deltaOutput, axis=0) * lr
    biasHidden -= np.sum(deltaHidden, axis=0) * lr

print('\nTraining Accomplished!\n', outputActivation)
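Once training finishes, the learned weights can be used to score a new input by running just the forward pass. Below is a minimal sketch; the sample X_new is made up purely for illustration:
# Forward pass with the trained parameters on an unseen input.
X_new = np.array([[0, 1, 1, 0]])  # one new 1x4 sample
hiddenNew = sigmoid(np.dot(X_new, weightsInputHidden) + biasHidden)
predictionNew = sigmoid(np.dot(hiddenNew, weightsHiddenOutput) + biasOutput)
print('Prediction for the new sample:', predictionNew)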