[mlpack-git] (blog) master: new blog post (5c45f60)
gitdub at mlpack.org
Mon Aug 22 14:25:14 EDT 2016
Repository : https://github.com/mlpack/blog
On branch : master
Link : https://github.com/mlpack/blog/compare/0ea3a75b9d629cb6a317962355af2b9e3541a2d8...5c45f6077cf4e82240a426ced3dc0f383a6bf04e
>---------------------------------------------------------------
commit 5c45f6077cf4e82240a426ced3dc0f383a6bf04e
Author: Bang Liu <bang3 at ualberta.ca>
Date: Mon Aug 22 12:25:14 2016 -0600
new blog post
>---------------------------------------------------------------
5c45f6077cf4e82240a426ced3dc0f383a6bf04e
content/blog/BangGsocSummary.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/blog/BangGsocSummary.md b/content/blog/BangGsocSummary.md
index 95de713..7c583f9 100644
--- a/content/blog/BangGsocSummary.md
+++ b/content/blog/BangGsocSummary.md
@@ -31,14 +31,14 @@ After they are merged, they can be found in the same directory under mlpack repo
The first algorithm is the Conventional Neural Evolution (CNE) algorithm. The main reference papers and code for the implementation of CNE include:
- "[Training Feedforward Neural Networks Using Genetic Algorithms](http://www.ijcai.org/Proceedings/89-1/Papers/122.pdf)"
-- "[Evolving Artificial Neural Networks](http://www.cs.bham.ac.uk/~axk/evoNN.pdf)"
+- "[Evolving Artificial Neural Networks](http://www.cs.bham.ac.uk/~axk/evoNN.pdf)"
- [Multineat](http://multineat.com/index.html)
Generally, neural evolution algorithms iteratively evolve a number of neural networks to find one suitable for solving a specific task. So we first define some classes to represent key concepts in neural evolution algorithms, including:
1. **`LinkGene`**: this class defines a link. Basically, a link is defined by the ids of the two neurons it connects, its own id, and its weight. Detailed implementation is in `mlpack/src/mlpack/methods/ne/link_gene.hpp`.
2. **`NeuronGene`**: this class defines a neuron. Basically, a neuron is defined by its id, its neuron type (INPUT, HIDDEN, OUTPUT, BIAS), and its activation function type (SIGMOID, LINEAR, RELU, etc.). Detailed implementation is in `mlpack/src/mlpack/methods/ne/neuron_gene.hpp`.
-3. **`Genome`**: this is a critical class. A genome is the encoding format of a neural network. A neural network contains multiple links and neurons. Thus, a genome contains a vector of link genes and neuron genes. Detailed implementation can be found in `mlpack/src/mlpack/methods/ne/genome.hpp`. A novel idea we made is how we calculate a genome's output given an input vector, which is the `void Activate(std::vector<double>& input)` function in the `Genome` class. Briefly speaking, as neural networks in NE algorithms are not in well-defined layered structure, we assign each neuron a *height* attribute. Input neurons are of height 0. Output neurons are of height 1. Heights of hidden neurons are between 0 and 1. Different neurons with same height value cannot be connected (but a neuron can connect to itself to form a recurrent link). In this way, we can have at least three benefits: first, it makes the activation calculation be quite fast (we just need to loop through all links for once); second, calculation logic of any complex neurl network structure is quite clear: neurons are activated in sequence according to its height: from small (0) to big (1); third, different kind of links can be defined by compare the heights of the two neurons it connected. A FORWARD link is connect a small height neuron to a big height neuron. A BACKWARD link is connect a big height neuron to a small height neuron. And a RECURRENT link is connect a neuron to itself.
+3. **`Genome`**: this is a critical class. A genome is the encoding format of a neural network. A neural network contains multiple links and neurons; thus, a genome contains a vector of link genes and a vector of neuron genes. Detailed implementation can be found in `mlpack/src/mlpack/methods/ne/genome.hpp`. A novel idea of ours is how we calculate a genome's output given an input vector, which is the `void Activate(std::vector`<`double`> `input)` function in the `Genome` class. Briefly speaking, as neural networks in NE algorithms do not have a well-defined layered structure, we assign each neuron a *height* attribute. Input neurons have height 0, output neurons have height 1, and hidden neurons have heights between 0 and 1. Different neurons with the same height cannot be connected (but a neuron can connect to itself to form a recurrent link). This gives us at least three benefits: first, it makes the activation calculation quite fast (we only need to loop through all links once); second, the calculation logic for any complex neural network structure is clear: neurons are activated in order of increasing height, from 0 to 1; third, different kinds of links can be defined by comparing the heights of the two neurons they connect. A FORWARD link connects a lower-height neuron to a higher-height neuron, a BACKWARD link connects a higher-height neuron to a lower-height neuron, and a RECURRENT link connects a neuron to itself.
4. **`Species`**: Basically, a species contains a vector of genomes which will be evolved by NE algorithms, such as the CNE algorithm. Detailed implementation can be found in `mlpack/src/mlpack/methods/ne/species.hpp`.
5. **`Population`**: Basically, a population contains a vector of species. In algorithms such as NEAT, a set of genomes is not just an array of genomes, but is speciated into different species. Thus, we define the class `Population` to organize a vector of species. Detailed implementation can be found in `mlpack/src/mlpack/methods/ne/population.hpp`.
More information about the mlpack-git mailing list