[mlpack-git] [mlpack/mlpack] NeuralEvolution - implemented gene, genome (#686)

Marcus Edel notifications at github.com
Tue Jun 7 16:44:32 EDT 2016


> +
> +      // Loop neurons to calculate neurons' activation.
> +      for (unsigned int j = aNumInput; j < aNeuronGenes.size(); ++j) {
> +        double x = aNeuronGenes[j].aInput;  // TODO: consider bias. Difference?
> +        aNeuronGenes[j].aInput = 0;
> +
> +        double y = 0;
> +        switch (aNeuronGenes[j].Type()) { // TODO: revise the implementation.
> +          case SIGMOID:                   // TODO: more cases.
> +            y = sigmoid(x);
> +            break;
> +          case RELU:
> +            y = relu(x);
> +            break;
> +          default:
> +            y = sigmoid(x);

I think, since our network structure could be somewhat sparse:


```
x0-----------------|
      |            |
      |---h0^0-----|---h0^1---|
x1----             |          |------o0
                   |          |
x2----------------------------|
```

it isn't that easy to reuse the activation function of the sparse autoencoder. However, what we should do here is reuse the ann activation functions: https://github.com/mlpack/mlpack/tree/master/src/mlpack/methods/ann/activation_functions

so instead of `y = sigmoid(x);` we can write `y = LogisticFunction::fn(x)` or `y = RectifierFunction::fn(x)`
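A rough sketch of what the dispatch could look like with that style of activation class. The `LogisticFunction`/`RectifierFunction` structs below are minimal stand-ins mirroring the static `fn()` interface of the real classes under `methods/ann/activation_functions/`, and the `ActivationType` enum is assumed from the quoted code:

```cpp
#include <algorithm>
#include <cmath>

// Minimal stand-ins for mlpack's LogisticFunction and RectifierFunction,
// which expose a static fn() computing the activation for a single value.
struct LogisticFunction
{
  static double fn(const double x) { return 1.0 / (1.0 + std::exp(-x)); }
};

struct RectifierFunction
{
  static double fn(const double x) { return std::max(0.0, x); }
};

// Activation-type enum assumed from the quoted loop.
enum ActivationType { SIGMOID, RELU };

// The switch from the quoted code, dispatching to the ann-style activation
// classes instead of local sigmoid()/relu() helpers.
double Activate(const ActivationType type, const double x)
{
  switch (type)
  {
    case RELU:
      return RectifierFunction::fn(x);
    case SIGMOID:
    default:
      return LogisticFunction::fn(x);
  }
}
```

With this shape, adding another case only means including the corresponding activation header and adding one `case` label.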

---
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/686/files/3c8aa62b951f029b3883e9baef1ea556ef5af2d3#r66149003

