Commit 67adacf5 authored by Joao Fabro

Update README.md

By Joao Fabro (joaofabro at gmail.com, fabro at utfpr.edu.br)

Usage:
$ make
$ ./neural xor
(this opens the config file "config.xor" and uses the data file "training.xor" to train a feedforward neural network with backpropagation)
$ ./neural xor xortest
(this loads the saved network from the file "neuralnet.xor" and uses the data file "training.xortest" to TEST the trained feedforward neural network)
In order to understand how the parameters of the feedforward neural network should be set, analyze the file "config.howto", detailed below:
#-----------------
2 - Number of Inputs of Input Pattern (for example, 2 inputs for XOR)
1 - Number of Outputs of Output Pattern (in this case, one -binary- output)
2 - Number of Layers of the Neural Net (2 layers, one hidden and one output layer)
0.5 - Parameter momentum
0.3 - Parameter learning-rate
0.0001 - Parameter max_err (the training finishes when the maximum error for each pattern is smaller than this parameter)
#-----------------
In order to understand how to prepare the data for training, analyze the file "training.howto", detailed below:
#-----------------
4 // First line, just an integer number, identifying the total number of training patterns, 4 in this case
2 // from the second line to the end of the file, each pattern must be presented. In this case, each
0.0 // training pattern is composed of TWO(2) inputs, and ONE(1) output. Each number must be alone on its own line
1.0
1
0.0
#-----------------
Please note that these files are commented to explain how to build your own config and training files.
The "real" config and training files shouldn't have any comments, just plain numbers, one number per line.
After training, a "neuralnet" file is created, with the same "extension" used in both the "config" and the "training" files.
So, after executing "./neural xor", and if the training is successful, a file "neuralnet.xor" is created, with all the trained weights, as follows:
#-----------------
2
5
1
1.405843
1.562611
-0.485905
#-----------------
In this case, the feedforward neural network has 2 layers (first line), 5 neurons in the first hidden layer (second line), and 1 neuron in the last (output) layer (third line).
After that follow the (2) trained weights of each of the 5 neurons of the first hidden layer, and then the weights of each neuron of each following layer.
The file "training.xortest2" is also an example, in order to evaluate the "generalization" of the trained network.
By executing "./neural xor xortest2" it is possible to see that the trained network has a BIG error for input (0.1,0.1): instead of resulting in something close to 0.0, it outputs 0.894627.
....really bad generalization......
#----------------
$ ./neural xor xortest2
Argc= 3
Configuration file: config.xor
Remembered Pattern
In = 1
Out Part
Out = 0.894627
#----------------