[mlpack-git] [mlpack/mlpack] parallel sgd (#603)

Stephen Tu notifications at github.com
Thu Apr 7 14:03:19 EDT 2016


Several high-level points:

I think you should provide the option for a Hogwild!-style implementation as well; that is generally what people mean when they talk about parallel SGD. However, to do this correctly, one should also provide support for sparse gradients. In fact, the sparse setting is where you actually expect parallel SGD to win: when gradients are fully dense, the current approach is probably the way to go, but its speedups will be inherently limited.
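For concreteness, here is a minimal sketch of the kind of thing I have in mind, using OpenMP and following the existing DecomposableFunctionType convention (NumFunctions(), Gradient()). The sparse-gradient overload of Gradient() is an assumption on my part, not something the current interface provides:

```c++
#include <armadillo>
#include <random>

// Hogwild!-style SGD sketch: every thread applies lock-free updates to the
// shared iterate.  Assumes FunctionType exposes NumFunctions() and a
// (hypothetical) sparse-gradient overload of Gradient().
template<typename FunctionType>
void HogwildSGD(FunctionType& function,
                arma::vec& iterate,
                const double stepSize,
                const size_t maxIterations)
{
  #pragma omp parallel
  {
    // Per-thread RNG, so threads don't contend on a shared generator.
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<size_t> pick(0, function.NumFunctions() - 1);

    #pragma omp for
    for (size_t i = 0; i < maxIterations; ++i)
    {
      // Gradient of a single randomly chosen term; for sparse problems this
      // touches only a handful of coordinates.
      arma::sp_vec gradient;
      function.Gradient(iterate, pick(rng), gradient);

      // Update only the nonzero coordinates, with no locking at all.
      // Occasional races are deliberately tolerated: when gradients are
      // sparse, collisions are rare and convergence still holds (Niu et
      // al., "Hogwild!", NIPS 2011).
      for (auto it = gradient.begin(); it != gradient.end(); ++it)
        iterate[it.row()] -= stepSize * (*it);
    }
  }
}
```

The sparsity requirement should be clear from the inner loop: with dense gradients every update writes every coordinate, threads collide constantly, and the lock-free trick buys you nothing.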

Also, echoing what Ryan mentioned, the parallel-averaging case here can be implemented by reusing the existing optimizer(s).
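Roughly, something like this, with the data split into one sub-problem per thread (the `shards` vector here is a stand-in for whatever partitioning scheme you choose):

```c++
#include <vector>
#include <mlpack/core.hpp>
#include <mlpack/core/optimizers/sgd/sgd.hpp>

// Parameter-averaging sketch built directly on the existing SGD class:
// each thread runs an unmodified SGD instance on its own shard, and the
// resulting iterates are averaged (Zinkevich et al., "Parallelized
// Stochastic Gradient Descent", NIPS 2010).
template<typename FunctionType>
void AveragingSGD(std::vector<FunctionType>& shards,
                  arma::mat& iterate,
                  const double stepSize,
                  const size_t maxIterations)
{
  // Every thread starts from the same initial iterate.
  std::vector<arma::mat> locals(shards.size(), iterate);

  #pragma omp parallel for
  for (size_t t = 0; t < shards.size(); ++t)
  {
    mlpack::optimization::SGD<FunctionType> sgd(shards[t], stepSize,
        maxIterations);
    sgd.Optimize(locals[t]);
  }

  // Average the independently optimized parameters.
  iterate.zeros();
  for (const arma::mat& local : locals)
    iterate += local;
  iterate /= (double) shards.size();
}
```

Nothing new has to be written at the optimizer level; all of the parallelism lives in the driver.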

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/603#issuecomment-207030510

