[mlpack-git] (blog) master: new blog post (ef56d5c)

gitdub at mlpack.org gitdub at mlpack.org
Mon Aug 22 15:07:34 EDT 2016


Repository : https://github.com/mlpack/blog
On branch  : master
Link       : https://github.com/mlpack/blog/compare/5c45f6077cf4e82240a426ced3dc0f383a6bf04e...ef56d5c0415a3dc3a318910eefb9a34adb94f843

>---------------------------------------------------------------

commit ef56d5c0415a3dc3a318910eefb9a34adb94f843
Author: Bang Liu <bang3 at ualberta.ca>
Date:   Mon Aug 22 13:07:34 2016 -0600

    new blog post


>---------------------------------------------------------------

ef56d5c0415a3dc3a318910eefb9a34adb94f843
 content/blog/BangGsocSummary.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/blog/BangGsocSummary.md b/content/blog/BangGsocSummary.md
index 7c583f9..b6e01b4 100644
--- a/content/blog/BangGsocSummary.md
+++ b/content/blog/BangGsocSummary.md
@@ -38,7 +38,7 @@ Generally, different neural evolution algorithms are evolving a number of neural
 
 1. **`LinkGene`**: this class defines a link. Basically, a link is defined by the ids of the two neurons it connects, its own id, and its weight. The detailed implementation is in `mlpack/src/mlpack/methods/ne/link_gene.hpp`.
 2. **`NeuronGene`**: this class defines a neuron. Basically, a neuron is defined by its id, its neuron type (INPUT, HIDDEN, OUTPUT, BIAS), and its activation function type (SIGMOID, LINEAR, RELU, etc.). The detailed implementation is in `mlpack/src/mlpack/methods/ne/neuron_gene.hpp`.
-3. **`Genome`**: this is a critical class. A genome is the encoding format of a neural network. A neural network contains multiple links and neurons; thus, a genome contains a vector of link genes and a vector of neuron genes. The detailed implementation can be found in `mlpack/src/mlpack/methods/ne/genome.hpp`. One novel idea is how we calculate a genome's output given an input vector, which is done by the `void Activate(std::vector`<`double`> `input)` function in the `Genome` class. Briefly speaking, since neural networks in NE algorithms do not have a well-defined layered structure, we assign each neuron a *height* attribute. Input neurons have height 0, output neurons have height 1, and hidden neurons have heights between 0 and 1. Two different neurons with the same height cannot be connected (but a neuron can connect to itself to form a recurrent link). This gives us at least three benefits: first, the activation calculation is quite fast (we only need to loop through all links once); second, the calculation logic of any complex neural network structure is clear: neurons are activated in order of their heights, from small (0) to large (1); third, different kinds of links can be defined by comparing the heights of the two neurons they connect: a FORWARD link connects a lower-height neuron to a higher-height neuron, a BACKWARD link connects a higher-height neuron to a lower-height neuron, and a RECURRENT link connects a neuron to itself.
+3. **`Genome`**: this is a critical class. A genome is the encoding format of a neural network. A neural network contains multiple links and neurons; thus, a genome contains a vector of link genes and a vector of neuron genes. The detailed implementation can be found in `mlpack/src/mlpack/methods/ne/genome.hpp`. One novel idea is how we calculate a genome's output given an input vector, which is done by the `Activate` function in the `Genome` class. Briefly speaking, since neural networks in NE algorithms do not have a well-defined layered structure, we assign each neuron a *height* attribute. Input neurons have height 0, output neurons have height 1, and hidden neurons have heights between 0 and 1. Two different neurons with the same height cannot be connected (but a neuron can connect to itself to form a recurrent link). This gives us at least three benefits: first, the activation calculation is quite fast (we only need to loop through all links once); second, the calculation logic of any complex neural network structure is clear: neurons are activated in order of their heights, from small (0) to large (1); third, different kinds of links can be defined by comparing the heights of the two neurons they connect: a FORWARD link connects a lower-height neuron to a higher-height neuron, a BACKWARD link connects a higher-height neuron to a lower-height neuron, and a RECURRENT link connects a neuron to itself. (A minimal sketch of this height-based activation is given after the diff.)
 4. **`Species`**: Basically, a species contains a vector of genomes that will be evolved by NE algorithms, such as the CNE algorithm. The detailed implementation can be found in `mlpack/src/mlpack/methods/ne/species.hpp`.
 5. **`Population`**: Basically, a population contains a vector of species. In algorithms such as NEAT, a set of genomes is not just a flat array of genomes but is speciated into different species. Thus, we define the class `Population` to organize a vector of species. The detailed implementation can be found in `mlpack/src/mlpack/methods/ne/population.hpp`.
 
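To make the `LinkGene` and `NeuronGene` items above more concrete, here is a minimal sketch of what such gene classes could look like. The member names, types, and enum values are illustrative assumptions, not the actual mlpack headers; see `link_gene.hpp` and `neuron_gene.hpp` for the real definitions.

```cpp
// Minimal sketch of the gene classes described above.  Member names and enum
// values are assumptions for illustration; the actual definitions live in
// mlpack/src/mlpack/methods/ne/link_gene.hpp and neuron_gene.hpp.
#include <cstdint>

enum class NeuronType { INPUT, HIDDEN, OUTPUT, BIAS };
enum class ActivationType { SIGMOID, LINEAR, RELU };

struct LinkGene
{
  uint32_t fromNeuronId;  // id of the neuron the link comes from
  uint32_t toNeuronId;    // id of the neuron the link goes to
  uint32_t id;            // the link's own id
  double weight;          // connection weight
};

struct NeuronGene
{
  uint32_t id;                // the neuron's id
  NeuronType type;            // INPUT, HIDDEN, OUTPUT, or BIAS
  ActivationType activation;  // SIGMOID, LINEAR, RELU, ...
  double height;              // 0 for inputs, 1 for outputs, in (0, 1) for hidden
  double input;               // accumulated weighted input for one activation
  double output;              // value after applying the activation function
};
```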

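The height-based activation described for `Genome` (item 3) can be sketched as follows, reusing the structs from the previous sketch. This is an illustration of the idea under assumed names, not the actual `Genome::Activate` implementation; for simplicity it scans the link list once per neuron, whereas the scheme described above permits a single pass over links ordered by the height of their target neuron.

```cpp
// Illustrative sketch of height-based activation (item 3 above), using the
// LinkGene / NeuronGene / NeuronType / ActivationType definitions from the
// previous sketch.  This is an assumption about the approach, not the actual
// Genome::Activate() in genome.hpp.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Genome
{
  std::vector<NeuronGene> neurons;  // assumed kept sorted by ascending height
  std::vector<LinkGene> links;

  // Look up a neuron by its id (illustrative helper).
  NeuronGene& Neuron(const uint32_t id)
  {
    return *std::find_if(neurons.begin(), neurons.end(),
        [id](const NeuronGene& n) { return n.id == id; });
  }

  // Compute the genome's outputs for one input vector.
  void Activate(const std::vector<double>& input)
  {
    // Feed the input vector into the INPUT neurons (height 0); BIAS outputs 1.
    size_t i = 0;
    for (NeuronGene& n : neurons)
    {
      if (n.type == NeuronType::INPUT)
        n.output = input[i++];
      else if (n.type == NeuronType::BIAS)
        n.output = 1.0;
    }

    // Visit the remaining neurons in order of increasing height.  Since two
    // distinct neurons of the same height are never connected, every FORWARD
    // link's source has already produced its output; BACKWARD and RECURRENT
    // links simply reuse the output from the previous activation.
    for (NeuronGene& n : neurons)
    {
      if (n.type == NeuronType::INPUT || n.type == NeuronType::BIAS)
        continue;

      n.input = 0.0;
      for (const LinkGene& l : links)
        if (l.toNeuronId == n.id)
          n.input += l.weight * Neuron(l.fromNeuronId).output;

      // Apply the neuron's activation function (sigmoid/linear shown here).
      n.output = (n.activation == ActivationType::LINEAR) ?
          n.input : 1.0 / (1.0 + std::exp(-n.input));
    }
  }
};
```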

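Finally, the containment hierarchy of items 4 and 5 looks roughly like this in sketch form (again with assumed member names; see `species.hpp` and `population.hpp` for the actual classes):

```cpp
// Sketch of the containment hierarchy: a Population holds Species, and each
// Species holds the Genomes that are evolved together.  Member names are
// assumptions; see species.hpp and population.hpp for the actual classes.
#include <vector>

struct Species
{
  std::vector<Genome> genomes;  // genomes evolved by an NE algorithm (e.g. CNE)
};

struct Population
{
  std::vector<Species> species;  // genomes speciated into groups, as in NEAT
};
```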

