[mlpack-git] [mlpack/mlpack] Approximate Neighbor Search for Dual tree algorithms. (#684)

sumedhghaisas notifications at github.com
Mon Jun 13 07:28:21 EDT 2016


> @@ -150,6 +150,21 @@ class NearestNeighborSort
>        return DBL_MAX;
>      return a + b;
>    }
> +
> +  /**
> +   * Return the given value relaxed.
> +   *
> +   * @param value Value to relax.
> +   * @param epsilon Relative error (non-negative).
> +   *
> +   * @return double Value relaxed.
> +   */
> +  static inline double Relax(const double value, const double epsilon)
> +  {
> +    if (value == DBL_MAX)
> +      return DBL_MAX;
> +    return (1 / (1 + epsilon)) * value;

Okay, so as I understand it, for KNN we have a strong preference, so let's go ahead with the current range. I agree with option 4 for KFN. For users who understand KFN but not approximation (which will be most of the users of the command-line tool, and is the case with a couple of my friends), the percentage option will be the clearest choice. I mean the option suggested by @rcurtin: if the user wants furthest neighbors that are at least x% of the distance of the true furthest neighbor, they just pass in x/100. This way the concept of epsilon stays consistent from KNN to KFN, and users won't have to work through this distinction when using the executable.
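
Just to make that concrete, here is a rough sketch of what I have in mind for the furthest-neighbor side (purely illustrative on my part, not what the patch currently does): the user-facing value is e = x / 100 in (0, 1], and the guarantee it encodes is d_approx >= e * d_true.

  /**
   * Sketch only: relax a furthest-neighbor pruning bound, where e = x / 100
   * is the fraction of the true furthest distance the user will accept.
   * Raising the bound to value / e prunes more nodes while still
   * guaranteeing that every reported distance is at least e times the true
   * furthest distance.
   */
  static inline double Relax(const double value, const double e)
  {
    if (value == 0)
      return 0;
    return (1 / e) * value;
  }

Internally that is the same style of bound relaxation as the KNN Relax() quoted above, just scaled in the other direction, since here the bound has to grow rather than shrink.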

---
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/684/files/07879a2cc79b35b10d7fae687d6e27ad90a9f2d7#r66776567