[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)
notifications at github.com
Tue Oct 20 04:20:28 EDT 2015
After some experiments, I found that not all of the layers of ann are suitable for SparseAutoencoder; some need modification (for example, you cannot just call the dropout layer's Forward function, you need to call an activation function like sigmoid or ReLU after calling Forward). I would like to open two new folders:
1 : SparseAutoencoder/layers
2 : SparseAutoencoder/activation_functions
to collect the usable layers and activation functions; the implementation will be based on the layers and activation functions of ann.
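The dropout caveat above can be sketched as follows. This is a minimal, self-contained illustration, not mlpack's actual API: `DropoutForward`, its signature, and the seed parameter are all hypothetical. The point is that the dropout pass only masks and rescales, so a nonlinearity such as the sigmoid must be applied explicitly afterwards.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical dropout forward pass: masks each input with probability
// `ratio` and rescales the survivors by 1 / (1 - ratio) (inverted dropout).
// Note that no activation function is applied here.
std::vector<double> DropoutForward(const std::vector<double>& input,
                                   const double ratio,
                                   const unsigned seed)
{
  std::mt19937 rng(seed);
  std::bernoulli_distribution keep(1.0 - ratio);
  const double scale = 1.0 / (1.0 - ratio);

  std::vector<double> output(input.size());
  for (std::size_t i = 0; i < input.size(); ++i)
    output[i] = keep(rng) ? input[i] * scale : 0.0;
  return output;
}

// The activation must be applied as a separate, explicit step.
double Sigmoid(const double x)
{
  return 1.0 / (1.0 + std::exp(-x));
}
```

Usage would be `Sigmoid(DropoutForward(x, 0.5, seed)[i])` for each element, i.e. dropout first, activation second.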
Besides, I think the "GetNewFeatures" function of SparseAutoencoder should call the activation function of the hidden layer rather than the Sigmoid.
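Something like the following sketch is what I have in mind: the hidden-layer activation becomes a template parameter instead of a hard-coded sigmoid. The functor names mirror mlpack's `Fn`-style activation functions, but this `GetNewFeatures` signature is simplified for illustration and is not the real one.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative activation functors in mlpack's static-Fn style.
struct LogisticFunction
{
  static double Fn(const double x) { return 1.0 / (1.0 + std::exp(-x)); }
};

struct RectifierFunction
{
  static double Fn(const double x) { return x > 0.0 ? x : 0.0; }
};

// Hypothetical GetNewFeatures: applies whichever activation the
// autoencoder's hidden layer was built with, instead of always Sigmoid.
template<typename HiddenActivation>
std::vector<double> GetNewFeatures(const std::vector<double>& preActivations)
{
  std::vector<double> features(preActivations.size());
  for (std::size_t i = 0; i < preActivations.size(); ++i)
    features[i] = HiddenActivation::Fn(preActivations[i]);
  return features;
}
```

With this, `GetNewFeatures<RectifierFunction>(...)` stays consistent with a ReLU hidden layer, while `GetNewFeatures<LogisticFunction>(...)` reproduces the current sigmoid behavior.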
I will also remove LazyLogisticFunction and remove the check for LogisticFunction, as zoq mentioned.