# Neuro

Backpropagation Algorithm in C++

By Joao Fabro (joaofabro at gmail.com, fabro at utfpr.edu.br)

Usage:

$ make

$ ./neural xor

(this will open the config file "config.xor" and use the data file "training.xor" to train a feedforward neural network with backpropagation)

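For reference, the heart of the training step is the backpropagation weight update, which uses the momentum and learning-rate parameters described in "config.howto" below. Here is a minimal sketch of that update for sigmoid neurons; the function and variable names are illustrative, not the identifiers used in this repository:

```cpp
// Sketch of the per-weight backpropagation update with momentum.
#include <cmath>

// Assumed activation: the logistic sigmoid.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Error term of an output neuron with sigmoid activation:
// delta = (target - output) * output * (1 - output)
double output_delta(double target, double output) {
    return (target - output) * output * (1.0 - output);
}

// One weight update; prev_dw stores the previous step so that the
// momentum term can reuse it on the next call.
void update_weight(double& w, double& prev_dw, double delta, double input,
                   double learning_rate, double momentum) {
    double dw = learning_rate * delta * input + momentum * prev_dw;
    w += dw;
    prev_dw = dw;
}
```
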
$ ./neural xor xortest

(this will load the SAVED NETWORK from the file "neuralnet.xor" and use the data file "training.xortest" to TEST the trained feedforward neural network)

In order to understand how the parameters of the feedforward neural network should be set, analyze the file "config.howto", detailed below:
#-----------------
2	- Number of inputs in each input pattern (for example, 2 inputs for XOR)
1	- Number of outputs in each output pattern (in this case, one -binary- output)
2	- Number of layers of the neural net (2 layers: one hidden and one output layer)
51	- Number of neurons in the 1st hidden layer
4	- Number of neurons in the 2nd hidden layer... etc.! (the number of these lines depends on the third line, Number of Layers)
0.5	- Momentum parameter
0.3	- Learning-rate parameter
0.0001	- Parameter max_err (training finishes when the maximum error for each pattern is smaller than this value)
#-----------------
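
As a rough illustration, a config file in this layout (bare numbers, one per line, no comments) could be read with plain stream I/O as sketched here; the struct and field names are invented for the example and are not the ones used in this code:

```cpp
#include <fstream>
#include <vector>

// Illustrative container for the values in a "config.*" file,
// in the same order the howto above lists them.
struct NetConfig {
    int num_inputs;
    int num_outputs;
    int num_layers;
    std::vector<int> neurons_per_layer; // one entry per layer
    double momentum;
    double learning_rate;
    double max_err;
};

bool load_config(const char* path, NetConfig& cfg) {
    std::ifstream in(path);
    if (!in) return false;
    in >> cfg.num_inputs >> cfg.num_outputs >> cfg.num_layers;
    cfg.neurons_per_layer.resize(cfg.num_layers);
    for (int& n : cfg.neurons_per_layer) in >> n;
    in >> cfg.momentum >> cfg.learning_rate >> cfg.max_err;
    return static_cast<bool>(in);
}
```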

In order to understand how to prepare the data for training, analyze the file "training.howto", detailed below:

#-----------------
4 	// First line: a single integer giving the total number of training patterns, 4 in this case
2	// From the second line to the end of the file, each pattern is presented in turn. Each pattern
0.0	// starts with its number of inputs (TWO (2) here), followed by the input values, then its number
0.0	// of outputs (ONE (1) here), followed by the output values. Each number must be on its own line.
1	// So the first "XOR" pattern is (0.0,0.0)->(0.0): two inputs, 0.0 and 0.0, and one output, also 0.0.
0.0	// The FIRST training pattern ends here. The next line starts the second, then the third and fourth.
2
0.0
1.0
1
1.0
2
1.0
0.0
1
1.0
2
1.0
1.0
1
0.0
#-----------------
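
In the same spirit, here is a sketch of reading such a training file, following the pattern layout just described (input count, input values, output count, output values); the names are again invented for illustration:

```cpp
#include <fstream>
#include <vector>

// One input/output pair, as laid out in a "training.*" file.
struct Pattern {
    std::vector<double> inputs;
    std::vector<double> outputs;
};

bool load_patterns(const char* path, std::vector<Pattern>& patterns) {
    std::ifstream in(path);
    if (!in) return false;
    int num_patterns = 0;
    in >> num_patterns;                  // first line: total number of patterns
    patterns.resize(num_patterns);
    for (Pattern& p : patterns) {
        int n = 0;
        in >> n;                         // number of inputs of this pattern
        p.inputs.resize(n);
        for (double& v : p.inputs) in >> v;
        in >> n;                         // number of outputs of this pattern
        p.outputs.resize(n);
        for (double& v : p.outputs) in >> v;
    }
    return static_cast<bool>(in);
}
```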

Please note that these files are commented to explain how to build your own config and training files. 
The "real" config and training files shouldn't have any comments, just plain numbers, one number per line.

After training, a "neuralnet" file is created with the same "extension" used in both the "config" and "training" files.
So, after executing "./neural xor", if the training is successful, a file "neuralnet.xor" is created with all the trained weights, as follows:

#-----------------
2
5
1
2
2.552809
2.478980
2
-2.468708
0.932158
2
0.486090
-1.731878
2
-0.111488
-1.068038
2
-0.271861
0.937927
5
4.615818
2.032804
1.405843
1.562611
-0.485905
#-----------------

In this case, the feedforward neural network has 2 layers (first line), 5 neurons on the first hidden layer (second line), and 1 neuron on the last (output) layer (third line).
After that follow, for each of the 5 neurons of the first hidden layer, a weight count (2) and the trained weights themselves, and then the same for each neuron of each subsequent layer
(in this case, just one neuron on the second layer, with 5 weights).

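To make that layout concrete, here is a sketch of loading a saved "neuralnet.*" file and running one forward pass. Sigmoid activations are an assumption of this sketch, and all names are illustrative rather than the identifiers used in this code:

```cpp
#include <cmath>
#include <cstddef>
#include <fstream>
#include <vector>

// net[layer][neuron] holds the incoming weights of one neuron.
using Layer = std::vector<std::vector<double>>;

bool load_network(const char* path, std::vector<Layer>& net) {
    std::ifstream in(path);
    if (!in) return false;
    int num_layers = 0;
    in >> num_layers;                    // first line: number of layers
    std::vector<int> sizes(num_layers);
    for (int& s : sizes) in >> s;        // neurons per layer
    net.assign(num_layers, Layer());
    for (int l = 0; l < num_layers; ++l) {
        net[l].resize(sizes[l]);
        for (auto& weights : net[l]) {
            int n = 0;
            in >> n;                     // weight count for this neuron
            weights.resize(n);
            for (double& w : weights) in >> w;
        }
    }
    return static_cast<bool>(in);
}

// Forward pass, assuming sigmoid units.
std::vector<double> forward(const std::vector<Layer>& net,
                            std::vector<double> act) {
    for (const Layer& layer : net) {
        std::vector<double> next;
        next.reserve(layer.size());
        for (const auto& weights : layer) {
            double sum = 0.0;
            for (std::size_t i = 0; i < weights.size() && i < act.size(); ++i)
                sum += weights[i] * act[i];
            next.push_back(1.0 / (1.0 + std::exp(-sum)));
        }
        act = std::move(next);
    }
    return act;
}
```
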
The files "config.xor_do_not_converge" and "training.xor_do_not_converge" are just an example with parameter set to avoid convergence of the training. Just as another example.

The file "training.xortest2" is also an example, in order to evaluate the "generalization capability" of the network for inputs "a little different" from the trained one.
By executing "./neural xor xortest2" it is possible to see that the trained network has a BIG error for input (0.1,0.1), instead of resulting something close to (0.0) it results 0.894627.....
....really bad generalization......

#----------------
$ ./neural xor xortest2
Argc= 3
Configuration file: config.xor
 Number of Inputs : 2
 Number of Outputs : 1
 Number of Layers of the Net: 2
 Number of Neurons on Layer 0 : 5
 Number of Neurons on Layer 1 : 1
 Momentum: 0.900000
 Learning Rate: 0.200000
 Precision: 0.010000
Number of Neurons for level 0 are 5
Number of Neurons for level 1 are 1
Training Pattern
 In Part 
 In = 0.1
 In = 0.1
 In = 1
 Out Part 
 Out = 0
Remembered Pattern
 In Part 
 In = 0.1
 In = 0.1
 In = 1
 Out Part 
 Out = 0.894627
#----------------