[mlpack-git] [mlpack] improve speed of SparseAutoencoder and make it more flexible (#451)

Ryan Curtin notifications at github.com
Fri Dec 4 09:38:14 EST 2015


I think serialization for the ANN code would be great, but I don't know whether @zoq already has plans for it.

To the best of my understanding, you should be able to implement a sparse autoencoder as an FNN just like you said.

> I don't think that would work; I wrote the reasons here. Either the ANN layers need some changes, with type_traits + static_assert to tell users "this layer is a bad choice", or we create a new folder to collect the layers that are meaningful for an autoencoder (but that would generate duplicate code).

I remember you writing that; I just wanted to make sure it was the only reason.  I think there are a couple of ways to go here: type_traits with a static_assert could be a good way to warn users (a rough sketch of that follows below), or you could provide template typedefs so that the user is generally not building their own autoencoder, but instead just using a pre-existing `SparseAutoencoder` typedef, i.e.

```c++
using SparseAutoencoder = FNN<...>;
// With the FNN class, maybe other types of autoencoders are easy too and can
// be provided in the future.
using DenoisingAutoencoder = FNN<...>;
```
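
To make that mechanism concrete, here is a minimal, self-contained sketch; the `Network` class, the layer structs, and the aliases are placeholders to illustrate the idea, not mlpack's actual `FNN` interface or its template parameters.

```c++
// Placeholder network template standing in for FNN<...>; the real class
// takes the layer configuration (and more) as template parameters.
template<typename HiddenLayerType, typename OutputLayerType>
class Network { /* ... network implementation ... */ };

// Hypothetical layer types.
struct SparseSigmoidLayer { };
struct SigmoidLayer { };
struct NoisyInputLayer { };

// Ready-made aliases the library could ship.
using SparseAutoencoder    = Network<SparseSigmoidLayer, SigmoidLayer>;
using DenoisingAutoencoder = Network<NoisyInputLayer, SigmoidLayer>;

// A user who wants something custom just instantiates the template directly.
using MyCustomAutoencoder  = Network<SigmoidLayer, SigmoidLayer>;
```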

Do you think that's a better solution?  It gives users the flexibility to do whatever they want by defining their own `FNN<...>` for a custom autoencoder, while users who just want a standard sparse autoencoder can use the `SparseAutoencoder` typedef (or whatever other autoencoder typedefs end up being available).
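
For the other option, a type trait plus `static_assert` to warn users off unsuitable layers, a minimal self-contained sketch could look like this; again, the trait, layer, and network names are illustrative rather than actual mlpack API.

```c++
#include <type_traits>

// Hypothetical layer types.
struct SigmoidLayer { };  // a reasonable choice for an autoencoder
struct SoftmaxLayer { };  // assume this one doesn't make sense here

// Trait marking which layers are meaningful inside an autoencoder.
template<typename LayerType>
struct IsAutoencoderLayer : std::false_type { };

template<>
struct IsAutoencoderLayer<SigmoidLayer> : std::true_type { };

// The network rejects unsuitable layers at compile time with a clear message.
template<typename HiddenLayerType>
class AutoencoderNetwork
{
  static_assert(IsAutoencoderLayer<HiddenLayerType>::value,
      "this layer is a bad choice for an autoencoder");
  // ... rest of the network ...
};

// AutoencoderNetwork<SigmoidLayer> ok;   // compiles
// AutoencoderNetwork<SoftmaxLayer> bad;  // fails with the message above
```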

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/451#issuecomment-161982116