Programming Exercise 4:
Neural Networks Learning
In this programming exercise, I implemented a neural network model that performs forward propagation and backpropagation on the pixel-data matrix of the hand-written digits we briefly looked at in the previous exercise.
The files for the exercise were as follows; the ones marked [⋆] are the files that had to be implemented.
ex4.m - Octave/MATLAB script that steps you through the exercise
ex4data1.mat - Training set of hand-written digits
ex4weights.mat - Neural network parameters for exercise 4
submit.m - Submission script that sends your solutions to our servers
displayData.m - Function to help visualize the dataset
fmincg.m - Function minimization routine (similar to fminunc)
sigmoid.m - Sigmoid function
computeNumericalGradient.m - Numerically compute gradients
checkNNGradients.m - Function to help check your gradients
debugInitializeWeights.m - Function for initializing weights
predict.m - Neural network prediction function
[⋆] sigmoidGradient.m - Compute the gradient of the sigmoid function
[⋆] randInitializeWeights.m - Randomly initialize weights
[⋆] nnCostFunction.m - Neural network cost function
* DataSet Example
1. sigmoidGradient.m
function g = sigmoidGradient(z)
%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%evaluated at z
% g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
% evaluated at z. This should work regardless if z is a matrix or a
% vector. In particular, if z is a vector or matrix, you should return
% the gradient for each element.
g = zeros(size(z));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the gradient of the sigmoid function evaluated at
% each value of z (z can be a matrix, vector or scalar).
g = sigmoid(z) .* (1-sigmoid(z));
% =============================================================
end
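A quick sanity check, assuming sigmoid.m from the starter code is on the path: since sigmoid(0) = 0.5, the gradient at z = 0 should come out as exactly 0.25, and it should shrink toward 0 for large positive or negative z.
% Sanity check for sigmoidGradient (printed values are approximate).
sigmoidGradient(0)              % 0.25000
sigmoidGradient([-10 0 10])     % about 4.5e-05   0.25000   4.5e-05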
2. randInitializeWeights.m
function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
%incoming connections and L_out outgoing connections
% W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights
% of a layer with L_in incoming connections and L_out outgoing
% connections.
%
% Note that W should be set to a matrix of size(L_out, 1 + L_in) as
% the first column of W handles the "bias" terms
%
% You need to return the following variables correctly
W = zeros(L_out, 1 + L_in);
% ====================== YOUR CODE HERE ======================
% Instructions: Initialize W randomly so that we break the symmetry while
% training the neural network.
%
% Note: The first column of W corresponds to the parameters for the bias unit
%
epsilon_init = 0.12;
W = rand(L_out, 1+L_in) * 2 * epsilon_init - epsilon_init;
% =========================================================================
end
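As a usage note, the fixed value 0.12 matches the heuristic suggested in the exercise, epsilon_init = sqrt(6) / sqrt(L_in + L_out), which works out to roughly 0.12 for the 400-unit input and 25-unit hidden layer used here. A minimal usage sketch for this exercise's 400-25-10 architecture:
% Illustrative use for the 400-25-10 network of this exercise.
initial_Theta1 = randInitializeWeights(400, 25);   % 25 x 401 matrix
initial_Theta2 = randInitializeWeights(25, 10);    % 10 x 26 matrix
% Every entry lies in (-0.12, 0.12), which breaks the symmetry between hidden units.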
3. nnCostFunction.m
In earlier programming exercises, a CostFunction.m file meant little more than plugging the given formula in, but this nnCostFunction was considerably more involved. Forward propagation and backpropagation both have to be carried out inside a single file, and the gradients with respect to the Theta parameters have to be computed and returned as well. The computation breaks down into three main parts:
1. forward propagation
2. back propagation
3. regularization
Forward propagation is exactly the same as what we did for the neural network in Programming Exercise 3: add a bias unit to the layer, multiply by the corresponding Theta matrix to get the activation input, and map the result through the sigmoid function. A shape-annotated sketch of that pass is given below.
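For reference, here is that pass written out with this exercise's sizes (m = 5000 examples of 20x20 = 400 pixels, 25 hidden units, 10 output labels); it is the same computation that appears inside nnCostFunction further down.
a1 = [ones(m, 1), X];             % 5000 x 401   inputs plus bias column
z2 = a1 * Theta1';                % 5000 x 25    Theta1 is 25 x 401
a2 = [ones(m, 1), sigmoid(z2)];   % 5000 x 26    hidden activations plus bias
z3 = a2 * Theta2';                % 5000 x 10    Theta2 is 10 x 26
h  = sigmoid(z3);                 % 5000 x 10    one hypothesis row per example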
The one part that was a bit different is that, when computing the cost J, the y values first have to be remapped.
Here y is the single digit that a given 20x20-pixel image represents. For y to work correctly inside the neural network, however, it has to be converted to the form the classification model expects, where each label is expressed only with 0s and 1s. So y = 5 has to be mapped to the vector 0000100000, and the function that makes this mapping easy is 'repmat' (see the small example below).
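A tiny example of what that mapping produces (num_labels = 10, two hypothetical labels): comparing each label against the row vector 1..10, with both tiled by repmat, yields one 0/1 row per training example.
% Illustrative one-hot mapping with repmat (num_labels = 10).
y = [5; 10];                                   % two example labels
Y = repmat(1:10, 2, 1) == repmat(y, 1, 10);
% Y(1,:) = 0 0 0 0 1 0 0 0 0 0   (digit 5)
% Y(2,:) = 0 0 0 0 0 0 0 0 0 1   (label 10, which this dataset uses for digit 0)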
Apart from that, the cost is computed in exactly the same way as the cost function of the earlier One-vs-All classification model.
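Written out, the unregularized cost computed here is the usual logistic cost, just summed over all K = 10 output units as well as over the m examples:
$$ J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \Big[ -y_k^{(i)} \log\big(h_\Theta(x^{(i)})_k\big) - \big(1 - y_k^{(i)}\big) \log\big(1 - h_\Theta(x^{(i)})_k\big) \Big] $$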
Backpropagation, too, is not especially hard to code up as long as the underlying idea has been properly absorbed from the lectures and a few YouTube explanations.
You simply follow what the exercise PDF tells you to do: compute the error term at the final layer, then propagate it backwards by multiplying with the transposed Theta matrices to obtain the error terms of the earlier layers. The gradient values fall out of this process.
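Concretely, the per-example quantities from the PDF are the following (writing g' for the sigmoid gradient and ⊙ for the elementwise product; the bias component of δ(2) is dropped before accumulating):
$$ \delta^{(3)} = a^{(3)} - y, \qquad \delta^{(2)} = \big(\Theta^{(2)}\big)^{T} \delta^{(3)} \odot g'\big(z^{(2)}\big), \qquad \Delta^{(l)} \leftarrow \Delta^{(l)} + \delta^{(l+1)} \big(a^{(l)}\big)^{T} $$
The unregularized gradient is then D^(l) = Δ^(l) / m.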
Regularization is also not much different from what we have implemented so far.
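The only detail to watch is that the bias column of each Theta matrix is excluded. The cost picks up the extra term
$$ \frac{\lambda}{2m} \Big( \sum_{j,k} \big(\Theta^{(1)}_{j,k}\big)^{2} + \sum_{j,k} \big(\Theta^{(2)}_{j,k}\big)^{2} \Big) $$
(non-bias columns only), and the gradients become D^(l)_ij = Δ^(l)_ij / m + (λ/m) Θ^(l)_ij for j ≥ 1, with the j = 0 bias column left unregularized.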
In the end there was a fair amount of code to write, but if you follow it step by step, the function does not demand anything complicated beyond the overall flow.
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTION(nn_params, hidden_layer_size, num_labels, ...
% X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be a "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
% variable J. After implementing Part 1, you can verify that your
% cost function computation is correct by verifying the cost
% computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
% Theta1_grad and Theta2_grad. You should return the partial derivatives of
% the cost function with respect to Theta1 and Theta2 in Theta1_grad and
% Theta2_grad, respectively. After implementing Part 2, you can check
% that your implementation is correct by running checkNNGradients
%
% Note: The vector y passed into the function is a vector of labels
% containing values from 1..K. You need to map this vector into a
% binary vector of 1's and 0's to be used with the neural network
% cost function.
%
% Hint: We recommend implementing backpropagation using a for-loop
% over the training examples if you are implementing it for the
% first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
% Hint: You can implement this around the code for
% backpropagation. That is, you can compute the gradients for
% the regularization separately and then add them to Theta1_grad
% and Theta2_grad from Part 2.
%
% Part 1 Coding (cost function & forward propagation)
a1 = [ones(m, 1), X];
z2 = a1 * Theta1';
a2 = sigmoid(z2);
a2 = [ones(m, 1), a2];
z3 = a2 * Theta2';
h = sigmoid(z3);
% Map y to one-hot rows: e.g. y = 5 becomes 0 0 0 0 1 0 0 0 0 0
y = repmat([1:num_labels], m, 1) == repmat(y, 1, num_labels);
J = (-1/m)*sum(sum(y.*log(h) + (1-y).*log(1-h)));
% Regularization part (bias columns excluded)
regTheta1 = Theta1(:, 2:end);
regTheta2 = Theta2(:, 2:end);
reg_term = (lambda/(2*m)) * (sum(sum(regTheta1.^2)) + sum(sum(regTheta2.^2)));
J += reg_term;
% Part 2 Coding (backpropagation)
del1 = zeros(size(Theta1));
del2 = zeros(size(Theta2));
for t = 1:m
  a1t = a1(t, :);                  % input activation for example t (with bias)
  a2t = a2(t, :);                  % hidden-layer activation (with bias)
  a3t = h(t, :);                   % output-layer activation
  yt = y(t, :);                    % one-hot label row
  d3 = a3t - yt;                   % output-layer error
  % hidden-layer error; the bias slot of d2 is discarded when accumulating
  d2 = (Theta2' * d3') .* sigmoidGradient([1; Theta1 * a1t']);
  del1 = del1 + d2(2:end) * a1t;   % accumulate gradient for Theta1
  del2 = del2 + d3' * a2t;         % accumulate gradient for Theta2
end
% Average the accumulated gradients and add the regularization term
% (the bias column is not regularized).
Theta1_grad = 1/m * del1 + (lambda/m) * [zeros(size(Theta1, 1), 1) regTheta1];
Theta2_grad = 1/m * del2 + (lambda/m) * [zeros(size(Theta2, 1), 1) regTheta2];
% -------------------------------------------------------------
% =========================================================================
% Unroll gradients
grad = [Theta1_grad(:) ; Theta2_grad(:)];
end
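For completeness, this is roughly how ex4.m ties the pieces together once nnCostFunction passes checkNNGradients. The snippet below is a sketch of the provided script from memory, not code that had to be written for the assignment; the MaxIter and lambda values are the ones I recall the script using.
% Rough sketch of the training step in ex4.m (not part of the graded code).
load('ex4data1.mat');      % provides X (5000 x 400) and y (5000 x 1)
input_layer_size  = 400;   % 20x20 input images
hidden_layer_size = 25;
num_labels        = 10;
lambda            = 1;
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
options = optimset('MaxIter', 50);
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                                   num_labels, X, y, lambda);
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Recover the weight matrices and check training-set accuracy with predict.m.
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));
pred = predict(Theta1, Theta2, X);
fprintf('Training set accuracy: %f\n', mean(double(pred == y)) * 100);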