[mlpack] Parallel Stochastic Optimization Methods

Lokesh Jain f2012827 at pilani.bits-pilani.ac.in
Fri Mar 11 08:10:57 EST 2016


Hi

I am Lokesh Jain, a 4th-year undergraduate B.E. (Hons.) Computer Science and
M.Sc. (Hons.) Mathematics student at BITS Pilani, India. I am interested in
the project Parallel Stochastic Optimization Methods. I have worked on
projects using OpenMP, Pthreads, MPI, and CUDA. I have a question about the
kind of parallelism expected in the algorithm: in Stochastic Gradient
Descent there is a dependency between weight updates, since the weights to
be updated for one training example depend on the updated weights from the
previous training example. So how do we parallelize the algorithm over the
iterations of the training set? Or do we parallelize over the step size,
with different threads running the algorithm for different step sizes?
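To make the dependency concrete, here is a minimal sketch of the sequential
SGD loop I have in mind (plain C++ on a toy least-squares problem; the names
and values are only illustrative, not mlpack code):

    // Minimal sketch of the sequential dependency in SGD (illustrative only):
    // each update reads the weight produced by the previous update, so the
    // iterations over training examples cannot be trivially run in parallel.
    #include <cstdio>
    #include <vector>

    int main()
    {
      // Toy 1-D least-squares data: y = 2 * x.
      std::vector<double> x = {1.0, 2.0, 3.0, 4.0};
      std::vector<double> y = {2.0, 4.0, 6.0, 8.0};

      double w = 0.0;           // Single weight.
      const double step = 0.05; // Step size.

      for (int epoch = 0; epoch < 100; ++epoch)
      {
        for (size_t i = 0; i < x.size(); ++i)
        {
          // Gradient of (w * x_i - y_i)^2 with respect to w.
          const double grad = 2.0 * (w * x[i] - y[i]) * x[i];

          // This update uses the w written by the previous example,
          // which is the dependency I am asking about.
          w -= step * grad;
        }
      }

      std::printf("learned w = %f (expected ~2.0)\n", w);
      return 0;
    }

Each iteration of the inner loop reads the weight written by the previous
iteration, which is why I am unsure how to split that loop across threads.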

Thank You
Regards
Lokesh Jain
M.Sc. (Hons.) Mathematics | B.E. (Hons.) Computer Science Engineering

Birla Institute of Technology & Science, Pilani
Pilani Campus, Pilani 333031, Rajasthan, India
Email Address: f2012827 at pilani.bits-pilani.ac.in
Phone: +91 9772050107