[mlpack-git] [mlpack/mlpack] NeuralEvolution - implemented gene, genome (#686)

Keon Kim notifications at github.com
Tue Jun 7 14:49:12 EDT 2016


> +
> +      // Loop neurons to calculate neurons' activation.
> +      for (unsigned int j = aNumInput; j < aNeuronGenes.size(); ++j) {
> +        double x = aNeuronGenes[j].aInput;  // TODO: consider bias. Difference?
> +        aNeuronGenes[j].aInput = 0;
> +
> +        double y = 0;
> +        switch (aNeuronGenes[j].Type()) { // TODO: revise the implementation.
> +          case SIGMOID:                   // TODO: more cases.
> +            y = sigmoid(x);
> +            break;
> +          case RELU:
> +            y = relu(x);
> +            break;
> +          default:
> +            y = sigmoid(x);

Maybe using the same strategy as the sigmoid function in [sparse_autoencoder](https://github.com/mlpack/mlpack/blob/637809fec8d341829e4cd122cf5a385e5e219c9b/src/mlpack/methods/sparse_autoencoder/sparse_autoencoder_function.hpp#L74) would be faster?
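For reference, the sigmoid there is computed as a single vectorized Armadillo expression rather than element by element. A minimal sketch of that strategy (written as a free function here for illustration; in the linked file it is a member of `SparseAutoencoderFunction`):

```cpp
#include <mlpack/core.hpp>

// Vectorized sigmoid in the style of the linked sparse_autoencoder code:
// one Armadillo expression over the whole matrix, so Armadillo's expression
// templates can fuse the operations, instead of calling std::exp() on each
// element inside a hand-written loop.
inline void Sigmoid(const arma::mat& x, arma::mat& output)
{
  output = 1.0 / (1.0 + arma::exp(-x));
}
```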

Unrelated to this: activation functions like ReLU, sigmoid, etc. are implemented in several places. Could we put them in core?
They are also implemented in [artificial neural net](https://github.com/mlpack/mlpack/blob/d2e353468b8fce9fc1ee46799860f3860c4c8db9/src/mlpack/methods/ann/layer/base_layer.hpp).
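If they were consolidated, small classes with static methods would keep call sites cheap. A hypothetical sketch, assuming names that are illustrative only and not existing mlpack API:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical shared activation functions for core; the class and method
// names below are assumptions for illustration, not existing mlpack API.
class LogisticFunction
{
 public:
  //! Sigmoid: 1 / (1 + e^(-x)).
  static double Fn(const double x) { return 1.0 / (1.0 + std::exp(-x)); }
};

class RectifierFunction
{
 public:
  //! ReLU: max(0, x).
  static double Fn(const double x) { return std::max(0.0, x); }
};
```

The switch in the quoted code could then dispatch to `LogisticFunction::Fn(x)` and `RectifierFunction::Fn(x)` instead of duplicating the math.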

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/686/files/3c8aa62b951f029b3883e9baef1ea556ef5af2d3#r66130281

