[mlpack-svn] [MLPACK] #345: Sparse Autoencoder Module
MLPACK Trac
trac at coffeetalk-1.cc.gatech.edu
Wed Apr 16 16:03:30 EDT 2014
#345: Sparse Autoencoder Module
----------------------------+-----------------------------------------------
Reporter: siddharth.950 | Owner:
Type: enhancement | Status: new
Priority: major | Milestone:
Component: mlpack | Resolution:
Keywords: | Blocking:
Blocked By: |
----------------------------+-----------------------------------------------
Comment (by rcurtin):
Ok, I've added the sparse autoencoder to trunk/ in r16432 and r16433.
Subsequent commits were my modifications, which were generally very minor.
I changed the way Sigmoid() is called to avoid extra matrix copies. If
I've done anything stupid, let me know so I can revert the changes.
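To illustrate the copy-avoiding pattern (this is a hypothetical sketch, not mlpack's actual code: mlpack uses Armadillo matrices, while this standalone example uses a flat std::vector so it compiles without any dependencies), the idea is to apply the sigmoid through a non-const reference rather than returning a fresh matrix by value:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// In-place version: mutates the caller's matrix directly, so no
// temporary matrix is allocated.
void SigmoidInPlace(std::vector<double>& m)
{
  for (double& x : m)
    x = 1.0 / (1.0 + std::exp(-x));
}

// By-value version, for contrast: the caller pays for a copy of the
// argument plus the returned matrix (modulo move/RVO optimizations).
std::vector<double> Sigmoid(std::vector<double> m)
{
  for (double& x : m)
    x = 1.0 / (1.0 + std::exp(-x));
  return m;
}
```

Both compute the same values; the in-place form simply avoids materializing a second matrix when the caller no longer needs the pre-activation values.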
One question before I think we are done with this:
http://deeplearning.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity
has a picture of an autoencoder (great resource by the way; thanks for the
link). The layers are the input layer, L_1, the hidden layer, L_2, and
the output layer, L_3.
In the parameters to the sparse autoencoder constructor, we specify
visibleSize and hiddenSize. It seems clear that hiddenSize is the number
of nodes in layer L_2, and based on the diagram in the page I linked to,
it seems like visibleSize represents the size of L_1, which is always the
same as L_3.
Does it make sense to have the number of nodes in L_3 different from the
number of nodes in L_1? If not, should we modify the code to get
visibleSize from the input data's dimensionality?
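A sketch of what that change could look like (again hypothetical, not mlpack's actual interface: the Matrix struct and the constructor signature here are stand-ins, assuming the usual column-major layout with one data point per column, so the row count is the input dimensionality):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for an Armadillo-style matrix: column-major storage with one
// data point per column, so n_rows is the dimensionality of the input.
struct Matrix
{
  std::size_t n_rows;
  std::size_t n_cols;
  std::vector<double> values; // n_rows * n_cols entries, column-major.
};

class SparseAutoencoder
{
 public:
  // visibleSize is inferred from the data rather than passed as a
  // separate argument, since L_3 must match L_1 in size anyway.
  SparseAutoencoder(const Matrix& data, std::size_t hiddenSize)
    : visibleSize(data.n_rows), hiddenSize(hiddenSize)
  { }

  std::size_t VisibleSize() const { return visibleSize; }
  std::size_t HiddenSize() const { return hiddenSize; }

 private:
  std::size_t visibleSize; // Size of L_1 (and therefore L_3).
  std::size_t hiddenSize;  // Size of L_2.
};
```

Dropping the explicit visibleSize parameter also removes the possibility of the user passing a value that disagrees with the data's dimensionality.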
--
Ticket URL: <http://trac.research.cc.gatech.edu/fastlab/ticket/345#comment:12>
MLPACK <www.fast-lab.org>
MLPACK is an intuitive, fast, and scalable C++ machine learning library developed at Georgia Tech.
More information about the mlpack-svn mailing list