[mlpack] Doubt regarding functioning of Recurrent network and Feed Forward Network
Nikhil Yadala
nikhil.yadala at gmail.com
Mon Mar 21 10:42:02 EDT 2016
Hi Abhinav,
I also worked on the implementation of this ANN module. I'll
share what I have understood; Marcus may correct me if I'm wrong.
Yes, the feed-forward procedure used here is the same as the one
used in the FFN class. Training an RNN is almost the same, except
that each layer takes two inputs: a recurrent parameter (the hidden
state carried over from the previous time step) and the input vector
at time t.
Nikhil Yadala,
B.Tech CSE.
On Mon, Mar 21, 2016 at 7:12 PM, Abhinav Gupta <abhinavgupta440 at gmail.com>
wrote:
> I got the first one. Silly doubt.
>
> On Mon, Mar 21, 2016 at 7:07 PM, Abhinav Gupta <abhinavgupta440 at gmail.com>
> wrote:
>
>> Hi Marcus,
>> I have a few doubts:
>> - In mlpack/core/optimizers/sgd/sgd_impl.hpp, what is the value of the
>> variable `function`? I was following mlpack/tests/recurrent_network_tests.cpp,
>> line 95.
>> - While training the network we follow the same procedure
>> (net.Train(optimizer)), so I'm not able to figure out how the process
>> differs between feed-forward and recurrent networks. I read in the comments
>> that RecurrentLayer has a recurrentParameter instead of an inputParameter,
>> but how does that create the difference? Could you please point me in the
>> right direction?
>>
>> Thanks,
>> Abhinav
>>
>
>
> _______________________________________________
> mlpack mailing list
> mlpack at cc.gatech.edu
> https://mailman.cc.gatech.edu/mailman/listinfo/mlpack
>
>