<div dir="ltr"><div><div><div><div>Hi Marcus, Ryan,<br><br></div>I
have gone through the complete ANN code, but I don't get the exact idea
of how the RNN is implemented in mlpack, and I have a few queries.<br><br>Could you tell me what the variables inputSize, outputSize, and seqOut specify?<br><br></div>How
is the output taken from the network? Are we taking an output after every
time step, or do we take the output only at the end of the input
(time) sequence?<br><br></div>Also, as per what I understand regarding
BPTT, each time subsequence (typically k = 3, i.e. at t-1, t, t+1) is considered
one layer, and the rest is all similar to an FFN, with the constraint that
the weight matrix is the same at every layer. But I don't understand how
BPTT is implemented in rnn.hpp. (If this is not the way it is implemented
here, could you please direct me to a link where I could get a better
understanding of what BPTT does and how it does it?)<br><br></div><div>Regarding
the project proposal, I am planning to implement a bidirectional deep
RNN, so that there is no need to code the double-layer BRNN explicitly,
and also to spend much less time implementing the convolutional
auto-encoder, as cnn.hpp does almost the same thing; the only tweak
that has to be done is to hard-code the output layer to the inputs (am I
right?). Could you please give your views on these?<br><br><br></div><div>Thanks,<br></div>Nikhil Yadala.</div>
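P.S. To make my BPTT question concrete, here is a minimal scalar sketch of truncated BPTT as I currently understand it. This is plain standalone C++, not mlpack's actual rnn.hpp code; the names `TruncatedBPTT` and `Grads` are just mine for illustration:

```cpp
#include <cmath>
#include <vector>

// Scalar RNN: h[t+1] = tanh(w * h[t] + u * x[t]), prediction y = v * h[T].
// Loss L = 0.5 * (y - target)^2, taken only at the end of the sequence.
struct Grads { double dw, du, dv; };

// Forward pass storing every hidden state, then truncated BPTT over the
// last k steps. Gradients for w and u are accumulated across time steps,
// because the same weights are shared at every unrolled layer.
Grads TruncatedBPTT(const std::vector<double>& x, double target,
                    double w, double u, double v, int k)
{
  const int T = (int) x.size();
  std::vector<double> h(T + 1, 0.0);       // h[0] is the initial state.
  for (int t = 0; t < T; ++t)
    h[t + 1] = std::tanh(w * h[t] + u * x[t]);

  const double y = v * h[T];
  const double dy = y - target;            // dL/dy.

  Grads g{0.0, 0.0, dy * h[T]};            // dL/dv = (y - target) * h[T].
  double dh = dy * v;                      // dL/dh[T].
  // Walk backwards through at most k steps (the truncation window).
  for (int t = T - 1; t >= 0 && t >= T - k; --t)
  {
    const double dpre = dh * (1.0 - h[t + 1] * h[t + 1]); // tanh'(pre-activation).
    g.dw += dpre * h[t];                   // Shared-weight accumulation.
    g.du += dpre * x[t];
    dh = dpre * w;                         // Propagate gradient to h[t].
  }
  return g;
}
```

The point I want to confirm is the accumulation into `g.dw` and `g.du`: the same `w` and `u` receive gradient contributions from every step in the window, which is the weight-sharing constraint I described above.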