[mlpack-git] [mlpack/mlpack] Modeling LSH For Performance Tuning (#749)
notifications at github.com
Wed Aug 24 11:23:59 EDT 2016
> + double gammaChain =
> + - 2.0 * alpha * std::pow(k, beta) * std::log(x) * std::pow(x, gamma);
> + // 3x1 column vector (in matrix form).
> + gradient(0, 0) += error * alphaChain;
> + gradient(1, 0) += error * betaChain;
> + gradient(2, 0) += error * gammaChain;
> + }
> + // Return the average of each gradient after the summation is complete.
> + gradient(0, 0) /= ((double) M);
> + gradient(1, 0) /= ((double) M);
> + gradient(2, 0) /= ((double) M);
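For reference, these chain terms are consistent with a per-point model of the form f = alpha * k^beta * x^gamma under a squared-error cost (my reading of the quoted patch, not something it states). A quick finite-difference sketch of the gamma term, with made-up parameter values:

```cpp
#include <cmath>

// Assumed model: f(alpha, beta, gamma) = alpha * k^beta * x^gamma, with
// per-point cost (y - f)^2. Then d(cost)/d(gamma) = error * gammaChain,
// where error = (y - f) and gammaChain is exactly the expression in the patch.
double Model(double alpha, double beta, double gamma, double k, double x)
{
  return alpha * std::pow(k, beta) * std::pow(x, gamma);
}

double Cost(double alpha, double beta, double gamma, double k, double x,
            double y)
{
  const double e = y - Model(alpha, beta, gamma, k, x);
  return e * e;
}

// Analytic d(cost)/d(gamma), mirroring error * gammaChain from the patch.
double GammaGrad(double alpha, double beta, double gamma, double k, double x,
                 double y)
{
  const double error = y - Model(alpha, beta, gamma, k, x);
  const double gammaChain =
      -2.0 * alpha * std::pow(k, beta) * std::log(x) * std::pow(x, gamma);
  return error * gammaChain;
}
```

Checking this against a central difference of `Cost` in gamma agrees to several digits, so the chain terms themselves look right.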
In `objectivefunction`, I initialize the point randomly, and I never seed the random generator, so each run starts from a different point. I'm not sure whether that also plays a role here.
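To rule the starting point in or out, one option is to fix the seed before drawing the initial point so runs are reproducible. A minimal sketch with a hypothetical helper (not the actual `objectivefunction` code; if I remember right, mlpack also wraps seeding in `math::RandomSeed()`):

```cpp
#include <cstdlib>

// Hypothetical helper: draw one initial coordinate in [0, 1] from a fixed
// seed, so repeated runs start the optimization from the same point.
double RandomInitial(unsigned seed)
{
  std::srand(seed);
  return (double) std::rand() / RAND_MAX;
}
```

With a fixed seed, a run that diverges to NaN will diverge the same way every time, which makes the bug much easier to isolate.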
$ bin/mlpack_lshmodel -r iris.csv -p 0.5 -v
$ bin/mlpack_lshmodel -r iris.csv -p 0.6 -v
The first should converge to real values, while the second should "converge" to NaN after 10000 iterations (which I imagine is the default maximum number of iterations).
Both the cost function and the gradients include logarithms, so I suspect we pass a non-positive value to a logarithm somewhere in there. I'll investigate that and let you know.
I think that's also a bug in L_BFGS - shouldn't the iterations stop once the objective becomes NaN?
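The per-iteration check I have in mind is just this (illustrative only, not the actual L_BFGS source):

```cpp
#include <cmath>

// Illustrative termination test: once the objective is NaN, no further line
// search can recover, so the optimizer should stop rather than burn through
// the remaining iteration budget.
bool ShouldTerminate(double objective)
{
  return std::isnan(objective);
}
```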