[mlpack-git] [mlpack/mlpack] NeuralEvolution - implemented gene, genome (#686)

Keon Kim notifications at github.com
Tue Jun 7 14:47:27 EDT 2016


> +    // Construct neuron id: index dictionary.
> +    std::map<unsigned int, unsigned int> neuronIdToIndex;
> +    for (unsigned int i=0; i<NumNeuron(); ++i) {
> +      neuronIdToIndex.insert(std::pair<unsigned int, unsigned int>(aNeuronGenes[i].Id(), i));
> +    }
> +
> +    // Activate layer by layer.
> +    for (unsigned int i=0; i<aDepth; ++i) {
> +      // Loop links to calculate neurons' input sum.
> +      for (unsigned int j=0; j<aLinkGenes.size(); ++j) {
> +        aNeuronGenes[neuronIdToIndex.at(aLinkGenes[j].ToNeuronId())].aInput +=
> +          aLinkGenes[j].Weight() * aNeuronGenes[neuronIdToIndex.at(aLinkGenes[j].FromNeuronId())].aActivation;
> +      }
> +
> +      // Loop neurons to calculate neurons' activation.
> +      for (unsigned int j=aNumInput; j<aNeuronGenes.size(); ++j) {

Maybe using the same strategy as the [sigmoid function in sparse_autoencoder](https://github.com/mlpack/mlpack/blob/637809fec8d341829e4cd122cf5a385e5e219c9b/src/mlpack/methods/sparse_autoencoder/sparse_autoencoder_function.hpp#L74) would be faster?
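Just to illustrate what I mean (a hypothetical helper, not the exact code from sparse_autoencoder), the vectorized form there is roughly:

```cpp
#include <armadillo>

// Hypothetical sketch: apply the logistic function to all activations at
// once with Armadillo, instead of looping over the neurons one by one.
inline void Sigmoid(const arma::vec& input, arma::vec& output)
{
  output = 1.0 / (1.0 + arma::exp(-input));
}
```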

Unrelated to this: activation functions like relu, sigmoid, etc. are implemented many times across the codebase. I think we could put them in core? They are also implemented in the [artificial neural net code](https://github.com/mlpack/mlpack/blob/d2e353468b8fce9fc1ee46799860f3860c4c8db9/src/mlpack/methods/ann/layer/base_layer.hpp); see the sketch below for what I have in mind.
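Not proposing an exact API, but a shared header in core could look something along the lines of the activation classes the ann layers already use (just a sketch, the names and location are made up):

```cpp
// Hypothetical shared activation header (e.g. somewhere under mlpack/core/),
// modeled loosely on the activation function classes used by the ann layers.
#include <cmath>
#include <armadillo>

class LogisticFunction
{
 public:
  // Scalar version.
  static double Fn(const double x)
  {
    return 1.0 / (1.0 + std::exp(-x));
  }

  // Vectorized version (works for arma::vec, arma::mat, ...).
  template<typename InputType, typename OutputType>
  static void Fn(const InputType& x, OutputType& y)
  {
    y = 1.0 / (1.0 + arma::exp(-x));
  }
};
```

Then the neuroevolution code, sparse_autoencoder, and the ann layers could all reuse the same implementation instead of each defining their own.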


---
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/686/files/3c8aa62b951f029b3883e9baef1ea556ef5af2d3#r66129966

