<p>In <a href="https://github.com/mlpack/mlpack/pull/451#discussion_r40668663">src/mlpack/methods/ann/sparse_autoencoder_function.hpp</a>:</p>
<pre style='color:#555'>&gt; + *   //using SAEF = nn::SparseAutoencoderFunction;
&gt; + *
&gt; + *   size_t const Features = 16*16;
&gt; + *   arma::mat data = randu&lt;mat&gt;(Features, 10000);
&gt; + *
&gt; + *   SAEF encoderFunction(data, Features, Features / 2);
&gt; + *   const size_t numIterations = 100; // Maximum number of iterations.
&gt; + *   const size_t numBasis = 10;
&gt; + *   optimization::L_BFGS&lt;SAEF&gt; optimizer(encoderFunction, numBasis, numIterations);
&gt; + *
&gt; + *   arma::mat parameters = encoderFunction.GetInitialPoint();
&gt; + *
&gt; + *   // Train the model.
&gt; + *   Timer::Start(&quot;sparse_autoencoder_optimization&quot;);
&gt; + *   const double out = optimizer.Optimize(parameters);
&gt; + *   Timer::Stop(&quot;sparse_autoencoder_optimization&quot;);
</pre>
<p>I would go with the trainer class. The trainer class is not bound to the network architecture: for example, we can put a bias term in the first layer only, or in both the first and second layers. We can also eventually store the results and keep track of the optimization process using arbitrary performance functions.</p>
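<p>A minimal sketch of the decoupling described above. The names <code>Trainer</code> and <code>ToyOptimizer</code> are hypothetical, not mlpack APIs: the trainer only depends on an optimizer and a user-supplied performance callback, so the same trainer works regardless of how the network layers (bias terms, etc.) are arranged, and it can record the performance after optimization for later inspection.</p>
<pre>
#include &lt;functional&gt;
#include &lt;iostream&gt;
#include &lt;utility&gt;
#include &lt;vector&gt;

// Hypothetical trainer, decoupled from the network architecture: it only
// needs an optimizer and a performance (objective) callback.
template&lt;typename OptimizerType&gt;
class Trainer
{
 public:
  Trainer(OptimizerType&amp; optimizer,
          std::function&lt;double(const std::vector&lt;double&gt;&amp;)&gt; performance)
    : optimizer(optimizer), performance(std::move(performance)) { }

  // Run the optimizer and record the performance of the resulting
  // parameters, so the optimization process can be tracked.
  double Train(std::vector&lt;double&gt;&amp; parameters)
  {
    const double objective = optimizer.Optimize(parameters);
    history.push_back(performance(parameters));
    return objective;
  }

  const std::vector&lt;double&gt;&amp; History() const { return history; }

 private:
  OptimizerType&amp; optimizer;
  std::function&lt;double(const std::vector&lt;double&gt;&amp;)&gt; performance;
  std::vector&lt;double&gt; history;
};

// Toy stand-in for optimization::L_BFGS: one exact gradient-descent step
// on f(x) = sum(x_i^2), whose minimum is at zero.
struct ToyOptimizer
{
  double Optimize(std::vector&lt;double&gt;&amp; parameters)
  {
    double objective = 0.0;
    for (double&amp; p : parameters)
    {
      p -= 0.5 * 2.0 * p;  // Step size 0.5 on gradient 2x sends x to 0.
      objective += p * p;
    }
    return objective;
  }
};

int main()
{
  ToyOptimizer optimizer;
  Trainer&lt;ToyOptimizer&gt; trainer(optimizer,
      [](const std::vector&lt;double&gt;&amp; p) { return p[0] * p[0]; });

  std::vector&lt;double&gt; parameters{ 2.0 };
  const double objective = trainer.Train(parameters);

  std::cout &lt;&lt; objective &lt;&lt; " " &lt;&lt; trainer.History().size() &lt;&lt; std::endl;
  return 0;
}
</pre>
<p>The same <code>Trainer</code> would accept any optimizer exposing an <code>Optimize(parameters)</code> method, and any performance function, without knowing anything about the layers being trained.</p>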
