[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)

Marcus Edel notifications at github.com
Tue Sep 29 09:13:20 EDT 2015


> + *   using SAEF = nn::SparseAutoencoderFunction;
> + *
> + *   size_t const Features = 16*16;
> + *   arma::mat data = arma::randu<arma::mat>(Features, 10000);
> + *
> + *   SAEF encoderFunction(data, Features, Features / 2);
> + *   const size_t numIterations = 100; // Maximum number of iterations.
> + *   const size_t numBasis = 10;
> + *   optimization::L_BFGS<SAEF> optimizer(encoderFunction, numBasis, numIterations);
> + *
> + *   arma::mat parameters = encoderFunction.GetInitialPoint();
> + *
> + *   // Train the model.
> + *   Timer::Start("sparse_autoencoder_optimization");
> + *   const double out = optimizer.Optimize(parameters);
> + *   Timer::Stop("sparse_autoencoder_optimization");

I would go with the trainer class. The trainer class is not bound to the network architecture; e.g., we can put a bias term in the first layer, or in the first and second layers, etc. It could also eventually store the results and keep track of the optimization process using arbitrary performance functions.

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/451/files#r40668663


More information about the mlpack-git mailing list