[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)

Marcus Edel notifications at github.com
Sun Jan 31 20:18:36 EST 2016


Thanks for the contribution. I made a couple of changes:

- moved the main sparse autoencoder into a separate folder in 4ad39f8651da0
- minor formatting and comment fixes in 896937d03c5eb32a4e980
- modified the sparse autoencoder test in 443ecdcd35d, so that it uses the SparseAutoencoder class you already implemented. Since we already have a gradient test for each activation function, I removed the gradient sparse autoencoder test.

Let me know if I messed anything up.

Since we have this nice SparseAutoencoder class, it should be easy to provide a backwards-compatibility layer for the 2.x.x releases. I'll go ahead and write the necessary code if nobody else would like to do it.

We should also think about a test case that tests the code in combination with an optimizer. I ran into a couple of problems when I tested the code with the existing trainer class; I solved those issues in f34ae33e2ccdaca68dc7. Another test could check the ability to work with additional layers. We only test the standard sparse autoencoder model structure (input layer, hidden layer, output layer). Which is fine, since the former code used this static model structure, but since we build the sparse autoencoder using the ann modules, we have the ability to add a number of interesting layers, e.g. a Dropout layer.

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/451#issuecomment-177687178

