[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)
notifications at github.com
Tue Sep 22 22:47:46 EDT 2015
> + * using SAEF = nn::SparseAutoencoderFunction;
> + *
> + * size_t const Features = 16*16;
> + * arma::mat data = arma::randu<arma::mat>(Features, 10000);
> + *
> + * SAEF encoderFunction(data, Features, Features / 2);
> + * const size_t numIterations = 100; // Maximum number of iterations.
> + * const size_t numBasis = 10;
> + * optimization::L_BFGS<SAEF> optimizer(encoderFunction, numBasis, numIterations);
> + *
> + * arma::mat parameters = encoderFunction.GetInitialPoint();
> + *
> + * // Train the model.
> + * Timer::Start("sparse_autoencoder_optimization");
> + * const double out = optimizer.Optimize(parameters);
> + * Timer::Stop("sparse_autoencoder_optimization");
The API here is the way Siddharth originally wrote it, using the standard mlpack optimizers to optimize the weights of the network. But maybe it would be a better idea to make this class work with the `Trainer` class in `src/mlpack/methods/ann/`? Kind of like the examples in `convolutional_network_test.cpp` and `feedforward_network_test.cpp`:
Trainer<SAEF> trainer(autoencoder, ...);
This would help make all of the ANN-related code in mlpack have a unified interface. I'd be interested in zoq's comments on this too, since I don't know his plans for what the API there will eventually look like.