[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)

stereomatchingkiss notifications at github.com
Sat Dec 5 02:35:46 EST 2015


Ok, I found out the problem: we need a new "LinearLayer" customized for the SparseAutoencoder. The problem is this part:

    arma::mat klDivGrad = beta * (-(rho / rhoCap) + (1 - rho) / (1 - rhoCap));
    klDivGrad.elem(arma::find_nonfinite(klDivGrad)).zeros();
    diff2 = parameters.submat(l1, 0, l3 - 1, l2 - 1) * delOut +
              arma::repmat(klDivGrad, 1, data.n_cols);
    arma::mat delHid;
    hiddenLayerFunc.Backward(hiddenLayer, diff2, delHid);
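
For context, klDivGrad above is just the derivative of the KL-divergence sparsity penalty with respect to the mean hidden activations (rhoCap). A minimal standalone sketch of how rhoCap and that term could be computed with Armadillo follows; the function name and the hiddenActivations argument are placeholders for illustration, not mlpack API:

    #include <armadillo>

    // Sketch: hiddenActivations is hiddenSize x nData (e.g. sigmoid outputs).
    arma::mat KlDivGradient(const arma::mat& hiddenActivations,
                            const double rho, const double beta)
    {
      // rhoCap: average activation of each hidden unit over the batch.
      arma::mat rhoCap = arma::sum(hiddenActivations, 1) /
          hiddenActivations.n_cols;

      // d/d(rhoCap) of beta * KL(rho || rhoCap)
      //   = beta * (-rho / rhoCap + (1 - rho) / (1 - rhoCap)).
      arma::mat klDivGrad = beta * (-(rho / rhoCap) + (1 - rho) / (1 - rhoCap));

      // Zero out non-finite entries (rhoCap exactly 0 or 1).
      klDivGrad.elem(arma::find_nonfinite(klDivGrad)).zeros();
      return klDivGrad;
    }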

We would need to do some special calculations if this is a sparse autoencoder: first we need rho, and second we need rhoCap from the performance function; the other parts are already handled by FNN (please correct me if I am wrong). The good news is that we do not need to modify the ann code. The hidden layer could then be constructed like this:

    LinearLayer<RMSPROP, RandomInitialization>
            hiddenLayer(visibleSize, hiddenSize, rho, {-range, range});  
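
Here is one possible shape for such a layer, just to make the idea concrete; the class name, members, and the Forward() hook are my assumptions, not existing mlpack code:

    // Hypothetical sparse-aware layer; names and layout are assumptions.
    template<typename OptimizerType, typename WeightInitRule>
    class SparseLinearLayer
    {
     public:
      SparseLinearLayer(const size_t inSize, const size_t outSize,
                        const double rho) :
          weights(outSize, inSize), rho(rho), rhoCap(outSize, 1)
      {
        // WeightInitRule would initialize `weights` here.
      }

      // Standard linear forward step, plus caching the average activation
      // of each hidden unit over the current batch.
      void Forward(const arma::mat& input, arma::mat& output)
      {
        output = weights * input;
        rhoCap = arma::sum(output, 1) / output.n_cols;
      }

      //! Let the performance function read the cached average activations.
      const arma::mat& RhoCap() const { return rhoCap; }

      arma::mat& Weights() { return weights; }

     private:
      arma::mat weights; // Connection weights.
      double rho;        // Target average activation.
      arma::mat rhoCap;  // Mean activation per hidden unit, updated per pass.
    };

Whether rhoCap should be cached from the linear output here or from the following activation layer is left open in this sketch.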

The performance function should be able to accept a reference to the RhoCap of the hiddenLayer:

    SparseErrorFunction(hiddenLayer.Weights(), outputLayer.Weights(), hiddenLayer.RhoCap(), 
                        beta, lambda)
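
The performance function could then hold that reference and read the cached averages when it computes the sparsity term, roughly like this (again only a sketch, with the weight arguments omitted for brevity; member and method names are assumptions):

    // Sketch: the error function keeps a reference to the layer's rhoCap.
    class SparseErrorFunction
    {
     public:
      SparseErrorFunction(const arma::mat& rhoCap, const double rho,
                          const double beta, const double lambda) :
          rhoCap(rhoCap), rho(rho), beta(beta), lambda(lambda) { }

      // Sparsity part of the cost; reconstruction and weight-decay terms
      // (using lambda) are omitted here.
      double SparsityCost() const
      {
        return beta * arma::accu(rho * arma::log(rho / rhoCap) +
            (1 - rho) * arma::log((1 - rho) / (1 - rhoCap)));
      }

     private:
      const arma::mat& rhoCap; // Read from the hidden layer after Forward().
      double rho, beta, lambda;
    };

That way the performance function never recomputes the average activations; it just reads what the hidden layer cached during the last forward pass.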

Any suggestions?

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/451#issuecomment-162158379

