<div dir="ltr">Hi Ryan,<br><br>My name is Srinivas and I'm a Masters student at the Supercomputer Education Research Center (India). <br><br>I've worked with different frameworks like OpenMP, MPI, Hadoop and CUDA . I also have good knowledge in C++, JAVA and Python.<br><br>I'm interested in the "Parallel Stochastic Optimization Methods" project that you have offered. <br><br>I
realized what you are actually looking for is to fine-tune the
implementation of SGD for multi-core rather than extend it to
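To make sure I've understood the goal, here is a rough sketch of the kind of shared-memory scheme I have in mind: Hogwild!-style lock-free updates (Niu et al.) parallelized with OpenMP. This is only an illustration of the idea; none of the names below come from mlpack's API, and the least-squares objective is just a stand-in:

// Sketch of Hogwild!-style lock-free parallel SGD for least-squares
// regression. Illustrative only: Hogwild! is Niu et al.'s scheme, and
// nothing here is mlpack API.
#include <cstddef>
#include <random>
#include <vector>

// Each thread performs `iterations` SGD steps on the shared weights `w`
// with no locks. Races on `w` are tolerated when gradients are sparse;
// that tolerance is what makes the scheme fast on multi-core machines.
void ParallelSGD(std::vector<double>& w,
                 const std::vector<std::vector<double>>& X,
                 const std::vector<double>& y,
                 const std::size_t iterations,
                 const double stepSize)
{
  #pragma omp parallel
  {
    // Per-thread RNG so sample choices don't need synchronization.
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<std::size_t> pick(0, X.size() - 1);

    for (std::size_t t = 0; t < iterations; ++t)
    {
      const std::size_t i = pick(rng);

      // Residual for the squared loss: (w . x_i - y_i).
      double r = 0.0;
      for (std::size_t j = 0; j < w.size(); ++j)
        r += w[j] * X[i][j];
      r -= y[i];

      // Lock-free gradient step on the shared weight vector.
      for (std::size_t j = 0; j < w.size(); ++j)
        w[j] -= stepSize * r * X[i][j];
    }
  }
}

The appeal of this scheme is that it avoids the locking overhead that usually kills multi-core SGD throughput, at the cost of a small amount of update noise.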
I came across a paper by Léon Bottou titled "Stochastic Gradient Descent Tricks".

Link to paper: http://research.microsoft.com/pubs/192769/tricks-2012.pdf
Link to code: https://github.com/npinto/bottou-sgd

It offers a nice set of recommendations for implementing SGD.
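For instance, one trick that stood out to me (if I'm reading the paper right) is to decay the gain as gamma_t = gamma0 / (1 + gamma0 * lambda * t) when optimizing an L2-regularized objective. A minimal sketch of what that could look like, where all names are mine and not mlpack's:

// Sketch of Bottou's suggested decaying gain schedule, as I understand it:
// gamma_t = gamma0 / (1 + gamma0 * lambda * t) for L2-regularized objectives.
// Names are illustrative, not mlpack API.
#include <cstddef>
#include <vector>

double Gain(const double gamma0, const double lambda, const std::size_t t)
{
  return gamma0 / (1.0 + gamma0 * lambda * static_cast<double>(t));
}

// One regularized SGD step with the decaying gain:
// w <- w - gamma_t * (lambda * w + grad).
void Step(std::vector<double>& w,
          const std::vector<double>& grad,
          const double gamma0,
          const double lambda,
          const std::size_t t)
{
  const double gammaT = Gain(gamma0, lambda, t);
  for (std::size_t j = 0; j < w.size(); ++j)
    w[j] -= gammaT * (lambda * w[j] + grad[j]);
}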
Would it be a good idea to use these recommendations to implement and fine-tune SGD for mlpack?

I would like to work on mlpack as a GSoC project. Please let me know what you think about this idea.

Thanks in advance.

Srinivas.K

Master's Student,
Department of Computational and Data Science,
Supercomputer Education Research Center,
Indian Institute of Science, India

--
its now or never