[mlpack-git] [mlpack] Fix drop out layer (#463)

Marcus Edel notifications at github.com
Wed Oct 21 07:22:37 EDT 2015


I'd like to point out that we only need to initialize the scale parameter if we load an already-trained network and use the model for prediction only. If we train the model, the scale parameter is set in the else branch of the forward pass. However, you pointed out an important case that we should take care of. One problem I see with the proposed solution is that a change to the probability of setting a value to zero (the ratio) has no effect if we initialize the scale parameter in the constructor's initialization list. So instead of using the initialization list or the else branch of the forward pass, we could set the scale at the beginning of the forward pass, as sketched below. Does this sound reasonable? Maybe there is a better solution?
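
To make that concrete, here is a minimal sketch of such a forward pass, loosely modeled on the dropout layer; the `rescale` flag, the `Forward` signature, and the member names are illustrative assumptions, not the exact mlpack code:

```cpp
#include <armadillo>

class Dropout
{
 public:
  Dropout(const double ratio = 0.5, const bool rescale = true) :
      ratio(ratio), rescale(rescale), scale(0.0) { }

  void Forward(const arma::mat& input,
               arma::mat& output,
               const bool deterministic)
  {
    // Proposed change: set the scale at the beginning of every forward pass
    // instead of in the constructor's initialization list or the training
    // branch. A loaded, prediction-only model then gets a valid scale, and
    // a later change of `ratio` is picked up on the next pass.
    scale = 1.0 / (1.0 - ratio);

    if (deterministic)
    {
      // Prediction mode: optionally rescale the activations.
      if (rescale)
        output = input * scale;
      else
        output = input;
    }
    else
    {
      // Training mode: zero each activation with probability `ratio` and
      // scale the survivors to keep the expected activation unchanged.
      mask = arma::randu<arma::mat>(input.n_rows, input.n_cols);
      mask.transform([this](double val) { return val > ratio ? 1.0 : 0.0; });
      output = (input % mask) * scale;
    }
  }

  // Modify the probability of setting a value to zero.
  double& Ratio() { return ratio; }

 private:
  double ratio;    // Probability of setting an activation to zero.
  bool rescale;    // Whether to rescale in deterministic (prediction) mode.
  double scale;    // 1 / (1 - ratio); recomputed every forward pass.
  arma::mat mask;  // Dropout mask, redrawn on each training pass.
};
```

With this placement, setting `layer.Ratio() = 0.3;` and running another forward pass uses the new scale immediately, which the initialization-list approach could not guarantee.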

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/463#issuecomment-149859563

