<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">Hello Amitrajit,</div><div class=""><br class=""></div><div class=""></div><blockquote type="cite" class=""><div class="">I shall certainly look through the new references I find on the reading lists</div><div class="">before filling out my application. If there are any resources in particular that</div><div class="">you would like me to take note of, do mention them.</div></blockquote><div class=""><br class=""></div><div class="">Here are some papers for the Neuroevolution idea. A good theoretical</div><div class="">understanding of what these models do and why they work is necessary to</div><div class="">implement them well.</div><div class=""><br class=""></div><div class="">- HyperNEAT-GGP:</div><div class=""><span class="Apple-tab-span" style="white-space:pre">	</span><a href="http://nn.cs.utexas.edu/downloads/papers/hausknecht.gecco12.pdf" class="">http://nn.cs.utexas.edu/downloads/papers/hausknecht.gecco12.pdf</a></div><div class="">- NEAT:</div><div class=""><span class="Apple-tab-span" style="white-space:pre">	</span><a href="http://nn.cs.utexas.edu/?stanley:ec02" class="">http://nn.cs.utexas.edu/?stanley:ec02</a></div><div class="">- CMA-ES:</div><div class=""><span class="Apple-tab-span" style="white-space:pre">	</span><a href="http://image.diku.dk/igel/paper/NfRLUES.pdf" class="">http://image.diku.dk/igel/paper/NfRLUES.pdf</a></div><div class="">- CoSyNE:</div><div class=""><span class="Apple-tab-span" style="white-space:pre">	</span><a href="ftp://ftp.cs.utexas.edu/pub/neural-nets/papers/gomez.ecml06.ps" class="">ftp://ftp.cs.utexas.edu/pub/neural-nets/papers/gomez.ecml06.ps</a></div><div class="">- Multi-Objective Neuroevolution in Super Mario Bros.: </div><div class=""><span class="Apple-tab-span" style="white-space:pre"> 
</span><a href="http://www.diva-portal.org/smash/get/diva2:676807/FULLTEXT01.pdf" class="">http://www.diva-portal.org/smash/get/diva2:676807/FULLTEXT01.pdf</a></div><div class=""><br class=""></div><div class="">And here are some papers for the 'We need to go deeper' idea.</div><div class=""><br class=""></div><div class="">- Going Deeper with Convolutions:</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span><a href="http://arxiv.org/abs/1409.4842" class="">http://arxiv.org/abs/1409.4842</a></div><div class="">- Selective Search for Object Recognition:</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span><a href="http://koen.me/research/pub/uijlings-ijcv2013-draft.pdf" class="">http://koen.me/research/pub/uijlings-ijcv2013-draft.pdf</a></div><div class="">- Scalable Object Detection using Deep Neural Networks (multi-box):</div><div class=""><span class="Apple-tab-span" style="white-space:pre"> </span><a href="http://arxiv.org/abs/1312.2249" class="">http://arxiv.org/abs/1312.2249</a></div><div class=""><br class=""></div><div class="">And here are some papers on neural network models. 
</div><div class=""><br class=""></div><div class="">Restricted Boltzmann Machines (RBM)</div><div class="">- <a href="https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf" class="">https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf</a></div><div class="">- <a href="http://deeplearning.net/tutorial/rbm.html" class="">http://deeplearning.net/tutorial/rbm.html</a></div><div class=""><br class=""></div><div class="">Deep Belief Networks (DBN)</div><div class="">- <a href="http://www.cs.toronto.edu/~rsalakhu/papers/science.pdf" class="">http://www.cs.toronto.edu/~rsalakhu/papers/science.pdf</a></div><div class="">- <a href="http://deeplearning.net/tutorial/DBN.html" class="">http://deeplearning.net/tutorial/DBN.html</a></div><div class=""><br class=""></div><div class="">Radial Basis Function Networks (RBFN)</div><div class="">- <a href="http://www.cc.gatech.edu/~isbell/tutorials/rbf-intro.pdf" class="">http://www.cc.gatech.edu/~isbell/tutorials/rbf-intro.pdf</a></div><div class=""><br class=""></div><div class="">Bidirectional Recurrent Networks (BRN)</div><div class="">Note: mlpack already provides an implementation of recurrent networks</div><div class="">- <a href="http://www.di.ufpe.br/~fnj/RNA/bibliografia/BRNN.pdf" class="">http://www.di.ufpe.br/~fnj/RNA/bibliografia/BRNN.pdf</a></div><div class=""><br class=""></div><div class="">Convolutional Auto-Encoders (CAE)</div><div class="">- <a href="http://people.idsia.ch/~masci/papers/2011_icann.pdf" class="">http://people.idsia.ch/~masci/papers/2011_icann.pdf</a></div><div class=""><br class=""></div><div class="">Hopfield Neural Networks (HNN)</div><div class="">- <a href="http://page.mi.fu-berlin.de/rojas/neural/chapter/K13.pdf" class="">http://page.mi.fu-berlin.de/rojas/neural/chapter/K13.pdf</a></div><div class=""><br class=""></div><div class="">Keep in mind that you don't have to implement all of these models; a good project</div><div class="">will select a handful of architectures and implement them with 
tests and</div><div class="">documentation. Writing good tests is often the hardest part, so keep that in</div><div class="">mind when you create your project timeline.</div><div class=""><br class=""></div><div class=""></div><blockquote type="cite" class=""><div class="">I was wondering whether a connectionist approach would be better with regard to</div><div class="">implementing the Neuroevolution algorithms when dealing with Augmenting</div><div class="">Topologies. I would like your views on the matter.</div></blockquote><div class=""><br class=""></div><div class="">Basically it's a for performance reasons, but you can mimic a connectionist</div><div class="">model, by simply setting the weights in the LinearLayer to zero , so that</div><div class="">unit_11^(0) is only connected with unit_11^(1) and unit_12^(1)</div><div class="">and not with unit_13^(1). You can also implement a special Layer to get this</div><div class="">done even more easily.</div><div class=""><br class=""></div><div class=""><div class=""></div><blockquote type="cite" class=""><div class="">Also, would you like to see a basic implementation of CNE, using the existing</div><div class="">mlpack neural networks, as a warm-up task? I really look forward to contributing</div><div class="">to mlpack.</div></blockquote><div class=""><br class=""></div><div class="">Contributing is not a requirement for an application. Anyway, If you like to do</div><div class="">that as warm-up task, I'm here to help you out. 
Keep in mind that you have to</div><div class="">write a test, before I can merge anything in.</div></div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">Marcus</div><div class=""><br class=""></div><div class=""><br class=""></div><div><blockquote type="cite" class=""><div class="">On 07 Mar 2016, at 19:40, Amitrajit Sarkar <<a href="mailto:aaiijmrtt@gmail.com" class="">aaiijmrtt@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class=""><div class=""><div class=""><div class="">Hello Marcus,<br class=""><br class=""></div>I agree: each of these projects requires a lot of background study. However, my undergrad research work has been focused on neural networks and deep learning for over a year now. Hence I am already familiar with the concepts appearing on the Ideas page, as well as those previously mentioned in the mailing list, having implemented several myself. I shall certainly look through the new references I find on the reading lists before filling out my application. If there are any resources in particular that you would like me to take note of, do mention them.<br class=""><br class=""></div>I built mlpack from source, tried the tutorials, and started deciphering the source code. I understand that neural networks in mlpack use armadillo matrices for efficiency, a vectorized approach. I was wondering whether a connectionist approach would be better with regard to implementing the Neuroevolution algorithms when dealing with Augmenting Topologies. I would like your views on the matter.<br class=""><br class="">Also, would you like to see a basic implementation of CNE, using the existing mlpack neural networks, as a warm-up task? 
I really look forward to contributing to mlpack.<br class=""></div></div><div class=""><br class="">Regards,<br class=""></div>Amitrajit.<br class=""></div><div class="gmail_extra"><br class=""><div class="gmail_quote">On Mon, Mar 7, 2016 at 5:38 AM, Marcus Edel <span dir="ltr" class=""><<a href="mailto:marcus.edel@fu-berlin.de" target="_blank" class="">marcus.edel@fu-berlin.de</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word" class=""><div class="">Hello Amitrajit,</div><div class=""><br class=""></div><div class="">sorry for the slow response.</div><span class=""><div class=""><br class=""></div><div class=""><div class=""></div><blockquote type="cite" class=""><div class=""><font color="#5856d6" class="">I am especially interested in:</font></div><div class=""><font color="#5856d6" class=""><br class=""></font></div><div class=""><font color="#5856d6" class="">Neuroevolution Algorithms,</font></div><div class=""><font color="#5856d6" class="">Essential Deep Learning Modules,</font></div><div class=""><font color="#5856d6" class="">We Need To Go Deeper - Google LeNet.</font></div></blockquote></div><div class=""><br class=""></div></span><div class=""><div class="">I might suggest that you narrow your focus because each of these projects has a</div><div class="">significant amount of background knowledge that is necessary.</div></div><div class=""><br class=""></div><div class="">To learn more about each of the projects than what has been listed on the Ideas</div><div class="">page, take a look at the mailing list archives:</div><div class=""><br class=""></div><div class=""><a href="https://mailman.cc.gatech.edu/pipermail/mlpack/" target="_blank" class="">https://mailman.cc.gatech.edu/pipermail/mlpack/</a></div><span class=""><div class=""><br class=""></div><div class=""></div><blockquote type="cite" class=""><div class="">However, others are 
already working on the warmup tasks listed alongside the</div><div class="">projects. Are there any other tasks that I could try?</div></blockquote><div class=""><br class=""></div></span><div class="">Don't worry, contributing is not a requirement for an application. So if you</div><div class="">don't find anything that you think you can do, that's not necessarily a problem.</div><div class="">However, I'll see if I can add some more "easy" issues in the next couple of</div><div class="">days. On the other side, you are always welcome to just poke around the library</div><div class="">and try to fix any problems you find, or improve the speed of various parts.</div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">Marcus</div><div class=""><br class=""></div><div class=""><blockquote type="cite" class=""><div class=""><div class="h5"><div class="">On 06 Mar 2016, at 08:39, Amitrajit Sarkar <<a href="mailto:aaiijmrtt@gmail.com" target="_blank" class="">aaiijmrtt@gmail.com</a>> wrote:</div><br class=""></div></div><div class=""><div class=""><div class="h5"><div dir="ltr" class=""><div class=""><div class=""><div class="">Hi,<br class=""><br class="">I am Amitrajit Sarkar, a CS undergrad from Jadavpur University, India. I have been working on machine learning for over a year now. I even have my own neural networks <a href="https://github.com/aaiijmrtt/NET" target="_blank" class="">library</a>, which I wrote from scratch while trying to understand existing theories. 
I am very eager to contribute to mlpack for GSoC 2016, as almost all the projects excite me equally.<br class=""></div><br class=""></div>I am especially interested in:<br class=""><br class=""><a href="https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#neuroevolution-algorithms" target="_blank" class="">Neuroevolution Algorithms,</a><br class=""><a href="https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#essential-deep-learning-modules" target="_blank" class="">Essential Deep Learning Modules,</a><br class=""><a href="https://github.com/mlpack/mlpack/wiki/SummerOfCodeIdeas#we-need-to-go-deeper---googlenet" target="_blank" class="">We Need To Go Deeper - Google LeNet.</a><br class=""></div><div class=""><br class=""></div><div class="">I have implemented basic neuroevolution algorithms <a href="https://github.com/aaiijmrtt/LEARNING" target="_blank" class="">here</a>, and several deep learning modules <a href="https://github.com/aaiijmrtt/NET" target="_blank" class="">here</a>. I am certain that I can take up the tasks. However, others are already working on the warmup tasks listed alongside the projects. Are there any other tasks that I could try? I have a lot of experience with research work, and am a skilled coder.<br class=""><br class=""></div><div class="">I am attaching my CV for reference. You may find more about my interests on my <a href="http://aaiijmrtt.github.io/" target="_blank" class="">blog</a>.<br class=""></div><div class=""><br class="">Cheers,<br class=""></div>Amitrajit.<br class=""></div>
</div></div><span class=""><cv.pdf></span>_______________________________________________<br class="">mlpack mailing list<br class=""><a href="mailto:mlpack@cc.gatech.edu" target="_blank" class="">mlpack@cc.gatech.edu</a><br class=""><a href="https://mailman.cc.gatech.edu/mailman/listinfo/mlpack" target="_blank" class="">https://mailman.cc.gatech.edu/mailman/listinfo/mlpack</a><br class=""></div></blockquote></div><br class=""></div></blockquote></div><br class=""></div>
</div></blockquote></div><br class=""></body></html>