[mlpack-git] [mlpack] Create a function to visualize the features learned by sparse autoencoder (#465)

Ryan Curtin notifications at github.com
Tue Oct 27 09:22:26 EDT 2015

> +      {
> +        continue;
> +      }
> +      arma::mat reshapeMat(squareRows, squareRows);
> +      arma::mat const weights = input.row(k);
> +      std::copy(std::begin(weights),
> +                std::end(weights),
> +                std::begin(reshapeMat));
> +      double const max = arma::abs(input.row(k)).max();
> +      if(max != 0.0)
> +      {
> +        reshapeMat /= max;
> +      }
> +      output.submat(i*(offset), j*(offset),
> +                    i*(offset) + squareRows - 1,
> +                    j*(offset) + squareRows - 1) = reshapeMat;

I think it should be possible to avoid using extra memory for `reshapeMat`.  Note that the declaration of `reshapeMat` allocates memory, the assignment `arma::mat const weights = input.row(k)` makes a copy of the row of the input matrix, and the `std::copy()` call then copies that copied row a second time.  But realistically, this could all be done without allocating extra memory or making more than two passes over the row (one to find the max, one to copy to the output matrix).

Here's my first pass (I haven't tried or compiled it or anything... it should just be a step in the right direction.  It's probably worth figuring out some simple tests before changing this.):

// Find the maximum element in this row.
const double max = arma::abs(input.row(k)).max();
// Now, copy the elements of the row to the output submatrix.
const arma::uword minRow = i * offset;
const arma::uword minCol = j * offset;
const arma::uword maxRow = i * offset + squareRows - 1;
const arma::uword maxCol = j * offset + squareRows - 1;
// Only divide by the max if it's not 0.
if (max != 0.0)
  output.submat(minRow, minCol, maxRow, maxCol) = arma::reshape(input.row(k), squareRows, squareRows) / max;
else
  output.submat(minRow, minCol, maxRow, maxCol) = arma::reshape(input.row(k), squareRows, squareRows);

The only other thought I have here is that memory accesses are much faster when you access columns instead of rows, because Armadillo matrices are column-major.  Transposing has its own cost, so it may or may not end up faster overall to transpose the `input` matrix first and copy columns instead.  Given the use of this function (probably only once, after an autoencoder is trained), it's probably not worth putting too much time into optimizing for speed, but it may be something to think about later.  It would probably still be a good thing to refactor this expression to avoid copies, though.
