[mlpack-git] [mlpack/mlpack] parallel sgd (#603)
notifications at github.com
Thu Apr 7 14:03:19 EDT 2016
Several high-level points: I think you should provide the option for a Hogwild-style implementation as well, since that is generally what people mean when they think of parallel SGD. To do this correctly, one should also provide support for sparse gradients; that is in fact the case where you actually expect parallel SGD to win. When gradients are fully dense, I think the current approach you have is probably the way to go, but its speedups will be inherently limited.
Also, echoing what Ryan mentioned, the parallel-averaging case here can be implemented by reusing the existing optimizer(s).