[mlpack-git] [mlpack/mlpack] Modeling LSH For Performance Tuning (#749)

Yannis Mentekidis notifications at github.com
Wed Aug 24 11:23:59 EDT 2016


> +
> +    double gammaChain =
> +      - 2.0 * alpha * std::pow(k, beta) * std::log(x) * std::pow(x, gamma);
> +
> +    // 3x1 column vector (in matrix form).
> +    gradient(0, 0) += error * alphaChain;
> +    gradient(1, 0) += error * betaChain;
> +    gradient(2, 0) += error * gammaChain;
> +  }
> +
> +  // Return the average of each gradient after the summation is complete.
> +  gradient(0, 0) /= ((double) M);
> +  gradient(1, 0) /= ((double) M);
> +  gradient(2, 0) /= ((double) M);
> +}
> +
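
For reference, here's a sketch of the full gradient loop the quoted lines come from, assuming the fitted model is `f(k, x) = alpha * k^beta * x^gamma` with a mean-squared-error objective over `M` samples. The `alphaChain` and `betaChain` forms are inferred from the quoted `gammaChain`, and the names (`Gradient`, `xVals`, `theta`) are illustrative rather than the actual signatures in this PR:
```
#include <armadillo>
#include <cmath>
#include <cstddef>

void Gradient(const arma::vec& y,      // observed values y_i
              const arma::vec& xVals,  // predictor values x_i
              const double k,          // fixed model input
              const arma::mat& theta,  // 3x1: (alpha, beta, gamma)
              arma::mat& gradient)     // 3x1 output
{
  const double alpha = theta(0, 0);
  const double beta = theta(1, 0);
  const double gamma = theta(2, 0);
  const size_t M = y.n_elem;

  gradient.zeros(3, 1);
  for (size_t i = 0; i < M; ++i)
  {
    const double x = xVals(i);
    const double f = alpha * std::pow(k, beta) * std::pow(x, gamma);
    const double error = y(i) - f;

    // d(error^2)/d(param) = 2 * error * (-df/dparam); the -2 is folded
    // into each chain term.
    const double alphaChain = -2.0 * std::pow(k, beta) * std::pow(x, gamma);
    const double betaChain =
        -2.0 * alpha * std::pow(k, beta) * std::log(k) * std::pow(x, gamma);
    const double gammaChain =
        -2.0 * alpha * std::pow(k, beta) * std::log(x) * std::pow(x, gamma);

    gradient(0, 0) += error * alphaChain;
    gradient(1, 0) += error * betaChain;
    gradient(2, 0) += error * gammaChain;
  }

  // Average the accumulated gradient over the M samples.
  gradient /= (double) M;
}
```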

In `objectivefunction`, I initialize the starting point randomly. I'm not sure whether that also plays a role, since I never seed the random generator.
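
If the unseeded generator turns out to matter, seeding it once at startup would at least make runs reproducible while we debug. A minimal sketch using mlpack's existing `RandomSeed()` helper (where exactly the call belongs is open):
```
#include <ctime>
#include <mlpack/core.hpp>

int main()
{
  // A fixed seed gives reproducible debugging runs; swap in
  // std::time(NULL) to restore randomized behavior.
  mlpack::math::RandomSeed(42);

  // ... rest of the program ...
  return 0;
}
```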

Try using
```
$ bin/mlpack_lshmodel -r iris.csv -p 0.5 -v
```
and
```
$ bin/mlpack_lshmodel -r iris.csv -p 0.6 -v
```
The first should converge to some real values, while the second should "converge" to NaN after 10000 iterations (which I imagine is the default maximum number of iterations).
Both the cost function and the gradient include logarithms, so I suspect we pass a negative (or zero) value into a logarithm somewhere in there. I'll investigate and let you know.
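
While chasing that down, a loud guard around the logarithms would pinpoint the culprit instead of letting NaN propagate silently. A debugging sketch (`SafeLog` is hypothetical, not part of the PR):
```
#include <cassert>
#include <cmath>

// Wrapper for std::log() that fails fast: a non-positive argument yields
// NaN or -inf, which is exactly the silent failure suspected here.
inline double SafeLog(const double x)
{
  assert(x > 0.0 && "std::log() received a non-positive argument");
  return std::log(x);
}
```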

I think that's also a bug in L_BFGS: shouldn't the iterations stop if the objective becomes NaN?
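
Something along these lines inside the optimizer's iteration loop would do it (an illustrative sketch, not the actual mlpack L_BFGS source):
```
#include <cmath>
#include <mlpack/core.hpp>

// Returns false once the objective degenerates; the optimizer's main loop
// could then break out instead of iterating to the maximum.
inline bool ObjectiveIsFinite(const double objective)
{
  if (!std::isfinite(objective))  // catches NaN and +/-inf alike
  {
    mlpack::Log::Warn << "L-BFGS: objective is no longer finite; "
        << "terminating." << std::endl;
    return false;
  }
  return true;
}
```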

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/749/files/57c9d5e634d7d3d7e2ca1618353fe37d9e23b34a#r76078481