[mlpack-svn] [MLPACK] #345: Sparse Autoencoder Module

MLPACK Trac trac at coffeetalk-1.cc.gatech.edu
Fri Apr 11 13:10:04 EDT 2014


#345: Sparse Autoencoder Module
----------------------------+-----------------------------------------------
  Reporter:  siddharth.950  |        Owner:     
      Type:  enhancement    |       Status:  new
  Priority:  major          |    Milestone:     
 Component:  mlpack         |   Resolution:     
  Keywords:                 |     Blocking:     
Blocked By:                 |  
----------------------------+-----------------------------------------------

Comment (by rcurtin):

 Hello Siddharth,

 Thanks for the contribution.  We talked in IRC about some things that can
 be improved, and I'm going to list them (plus a few more) here because
 otherwise I'll forget about all this:

  * The sparseautoencoder.cpp program should be adapted into a test using
 the Boost Unit Test Framework, maybe with a smaller dataset.
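
 Something like this would be a reasonable starting point (I'm guessing at
 the header location, the constructor signature, and GetInitialPoint() here;
 adjust to whatever the class actually provides):

 {{{
 #include <mlpack/core.hpp>
 #include <mlpack/methods/sparse_autoencoder/sparse_autoencoder_function.hpp>

 #include <boost/test/unit_test.hpp>

 using namespace mlpack;  // Plus whatever namespace the class ends up in.

 BOOST_AUTO_TEST_SUITE(SparseAutoencoderTest);

 // Sanity check: the objective at the initial point is finite and
 // nonnegative (every term of the sparse autoencoder objective is >= 0).
 BOOST_AUTO_TEST_CASE(SparseAutoencoderInitialObjectiveTest)
 {
   // A small random dataset: 10 dimensions, 50 points.
   arma::mat data = arma::randu<arma::mat>(10, 50);

   // Hypothetical constructor: (data, visibleSize, hiddenSize, lambda, beta,
   // rho).
   SparseAutoencoderFunction f(data, 10, 5, 0.0001, 3, 0.01);

   const double objective = f.Evaluate(f.GetInitialPoint());
   BOOST_REQUIRE_GE(objective, 0.0);
 }

 BOOST_AUTO_TEST_SUITE_END();
 }}}

 This is meant to live in src/mlpack/tests/ next to the other suites, which
 already set up the Boost test module.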

  * We should add a few simple tests with tiny, trivial datasets.  This makes
 debugging a lot easier if someone later makes a change that breaks the
 tests.
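
 For example, a central-difference gradient check on a handful of points is
 easy to debug when it fails.  Assuming the Evaluate()/Gradient() signatures
 that mlpack's optimizers expect, and the same hypothetical constructor as
 above:

 {{{
 BOOST_AUTO_TEST_CASE(SparseAutoencoderTinyGradientTest)
 {
   // Three 2-dimensional points; small enough to work through by hand.
   arma::mat data("0.1 0.2 0.3; 0.4 0.5 0.6");

   SparseAutoencoderFunction f(data, 2, 2, 0.0001, 3, 0.01);
   arma::mat parameters = f.GetInitialPoint();

   arma::mat gradient;
   f.Gradient(parameters, gradient);

   // Compare each analytic gradient entry to a central-difference estimate.
   const double eps = 1e-5;
   for (size_t i = 0; i < parameters.n_elem; ++i)
   {
     const double orig = parameters(i);
     parameters(i) = orig + eps;
     const double plus = f.Evaluate(parameters);
     parameters(i) = orig - eps;
     const double minus = f.Evaluate(parameters);
     parameters(i) = orig;

     BOOST_REQUIRE_SMALL(gradient(i) - (plus - minus) / (2 * eps), 1e-4);
   }
 }
 }}}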

  * The member `data` in SparseAutoencoderFunction can probably be `const
 arma::mat&`, since I don't think it's being modified.  If it's just
 `arma::mat`, like it is now, then the data matrix is copied when the
 function is created, which is something we'd like to avoid.
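
 Roughly like this (keeping the current member name):

 {{{
 #include <mlpack/core.hpp>

 class SparseAutoencoderFunction
 {
  public:
   // Hold a reference to the dataset instead of copying it.
   SparseAutoencoderFunction(const arma::mat& data /* , ... */) : data(data)
   { /* ... */ }

  private:
   // Reference to the training data; never modified and never copied.  The
   // caller has to keep the matrix alive for as long as this object is used.
   const arma::mat& data;
 };
 }}}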

  * Some comments about what sparse autoencoders do would be nice; it's not a
 good assumption that users already know.  It would be good to add a comment
 describing the algorithm and maybe a reference or two.
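
 Something along these lines, for instance (the exact wording doesn't matter
 much, but the objective function and a reference do):

 {{{
 /**
  * SparseAutoencoderFunction evaluates the objective of a sparse autoencoder:
  * a neural network with a single hidden layer, trained to reconstruct its
  * own input, with a weight decay term and a KL-divergence penalty that
  * pushes the average activation of each hidden unit towards a small target
  * value rho.  The function being minimized is
  *
  *   (1 / 2m) sum_i || h(x_i) - x_i ||^2
  *     + (lambda / 2) (|| w1 ||^2 + || w2 ||^2)
  *     + beta * sum_j KL(rho || rho_hat_j),
  *
  * where rho_hat_j is the mean activation of hidden unit j over the dataset,
  * so the hidden-layer activations end up forming a sparse encoding of the
  * data.  See, for example, Andrew Ng's CS294A lecture notes on sparse
  * autoencoders for a full derivation.
  */
 }}}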

  * sparse_autoencoder_function.cpp:34,35: these can be arma::vec objects,
 not arma::mat objects.  The same applies for b1 and b2.

  * sparse_autoencoder_function.cpp:39,40: these can be combined onto one
 line, and `range` can be `const double`.
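
 That is, something like this (your initialization at those lines may differ;
 the point is just the types):

 {{{
 // Bias terms are column vectors, so arma::vec is the natural type.
 arma::vec b1 = arma::zeros<arma::vec>(hiddenSize);
 arma::vec b2 = arma::zeros<arma::vec>(visibleSize);

 // 'range' is computed once and never changes, so make it const and fold the
 // two lines into one.  (I'm assuming the usual sqrt(6 / (fan-in + fan-out +
 // 1)) choice here; keep whatever formula you already have.)
 const double range = std::sqrt(6.0 / (hiddenSize + visibleSize + 1));
 }}}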

  * sparse_autoencoder_function.cpp:53: why not work directly with the
 `parameters` object instead of having w1, w2, b1, and b2?  The code might
 get uglier, but it will be faster, and you can add comments to it so that
 someone reading it can understand which parts of the parameter vector are
 w1, w2, b1, and b2.  Ideally operations like `join_cols()` or
 `shed_cols()` or similar should be avoided, since they will incur copies.
 Sometimes they can't be avoided, but I think in this case they can be.
 Alternatively, store four separate parameter objects instead of one, if you
 like.

  * sparse_autoencoder_function.cpp:72: you can make all these `const
 size_t`.  That gives the compiler better hints on how to optimize
 expressions that use those variables.

  * sparse_autoencoder_function.cpp:80: this extraction is going to be
 slow; it will be quicker to just use submatrix expressions to access those
 parts of the parameter vector.

 Most of these same comments apply to the Gradient() function.  Basically,
 working with the parameters as separate objects (w1, w2, b1, and b2) is
 going to make things slow, since they are all being extracted from the
 parameters matrix and then put back into it.
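
 To illustrate, here is one possible approach (just a sketch; any layout
 works as long as it's documented and read through submatrix views):

 {{{
 // One possible layout: 'parameters' is a (2 * hiddenSize + 1) x
 // (visibleSize + 1) matrix, with w1 and w2^T stacked in the first
 // visibleSize columns, b1 in the last column, and b2^T in the last row:
 //
 //   | w1   b1 |
 //   | w2'  .  |
 //   | b2'  .  |
 //
 // submat() returns a view into 'parameters', so nothing is extracted or
 // copied when Evaluate() or Gradient() reads one of the blocks.
 const size_t h = hiddenSize;
 const size_t v = visibleSize;

 // Hidden-layer activations of all points at once: sigmoid(w1 * data + b1).
 const arma::mat hiddenLayer = 1.0 / (1.0 + arma::exp(-(
     parameters.submat(0, 0, h - 1, v - 1) * data +
     arma::repmat(parameters.submat(0, v, h - 1, v), 1, data.n_cols))));
 }}}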

 There are probably more comments I could make, but let's improve these
 things first and then continue.  I'd suggest starting with the tests, so
 that as you change your Evaluate() and Gradient() implementations you can
 keep testing them and know they still work correctly.

-- 
Ticket URL: <http://trac.research.cc.gatech.edu/fastlab/ticket/345#comment:3>
MLPACK <www.fast-lab.org>
MLPACK is an intuitive, fast, and scalable C++ machine learning library developed at Georgia Tech.

