[mlpack-git] [mlpack] Mean shift clustering (#388)

Tue Feb 10 13:24:14 EST 2015

I'm sorry for the slow response on this.  It took me some time to do some reading and some thinking.

First, some thoughts on the generalizability of mean shift clustering.  In the original paper by Fukunaga and Hostetler, the technique is derived for radially symmetric kernels, but could be re-derived for kernels satisfying a handful of conditions (the kernel is finite, the kernel decays to 0 as its inputs tend to infinity, the integral over the input space of the kernel is 1, the integral over the input space of the absolute value of the kernel is finite; see page 1 of the Fukunaga and Hostetler paper).  In my view, it is ideal to provide a general enough implementation for all kernels, but because we have not worked out the precise restrictions for non-radially-symmetric kernels and because the original formulation assumed radial symmetry, I think it is reasonable to assume a radially symmetric kernel.

For this paragraph, assume that we're using the Euclidean distance (I'll come back to this assumption later).  I'm also going to fold the bandwidth term into the kernel K().  The derivation of the gradient of the density estimate is pretty simple (doing my best with the Unicode here, and hoping I haven't made some stupid error):

∇f(x) = ∑ K'(∥ x - x_i ∥) d/dx(∥ x - x_i ∥)
      = ∑ K'(∥ x - x_i ∥) d/dx((∥ x - x_i ∥^2)^(1/2))
      = ∑ K'(∥ x - x_i ∥) (∥ x - x_i ∥^2)^(-1/2) (x - x_i)

This is somewhat more complex than the nicer formulation in the paper you linked, which assumes that the kernel uses the squared distance ∥ x - x_i ∥^2 (for which the gradient is just 2(x - x_i)).  But we can't just plug in the squared Euclidean distance to every kernel, because the squared Euclidean distance does *not* satisfy the triangle inequality and is not a metric.  I haven't worked out the details, but this probably causes some stability issues with mean shift, and almost certainly would break the dual-tree implementation of mean shift, which will require the triangle inequality.

The Gaussian kernel is a nice exception, because it squares its input and thus ends up using the squared distance, yielding the nice expression which is easier to implement and calculate.

I said earlier I would revisit the assumption of the Euclidean distance.  Given the derivation above, it's pretty easy to plug in any d(x, x_i) in the place of ∥ x - x_i ∥ and then the gradient can be derived for that specific distance function.  I think it would be nice to generalize across possible distances, but this involves deriving the gradient for each of the distance metrics mlpack currently has implemented.  It would be nice if the mean shift implementation was sufficiently general to accept any distance metric (and then depend on its Gradient() function), but I don't see a reason to require it before accepting this contribution.

On the other hand, I do think it is absolutely necessary that the kernel type is parameterizable.  Implementing a Gradient() function in the various kernels which are shift-invariant is a good idea, but please don't implement dk(s)/ds, because this is not the actual gradient.  Instead, please implement the true gradient with respect to the input parameter (which is the distance between points), and use the long, ugly expression above which doesn't assume a squared distance.

Then, what we can do is use template specializations and SFINAE to catch those cases where the kernel uses the squared distance.  The GaussianKernel squares its input distance -- so it uses the squared distance.  You could add to the KernelTraits class and use SFINAE to specialize the gradient calculation for the GaussianKernel and other kernels that use the squared distance... something kind of like this:

```
// This is not necessarily the best design, just an example of SFINAE.
// (The function name and the regular arguments here are placeholders.)
template<typename KernelType, typename MetricType> // MetricType if you want to make that generic too.
void ShiftPoint(/* ...regular arguments..., */
    typename boost::disable_if_c<
        kernel::KernelTraits<KernelType>::UsesSquaredDistance>::type* junk = 0)
{
  // Slow, general implementation.
}

// This overload is selected when KernelTraits reports that the kernel uses
// the squared distance.
template<typename KernelType, typename MetricType>
void ShiftPoint(/* ...regular arguments..., */
    typename boost::enable_if_c<
        kernel::KernelTraits<KernelType>::UsesSquaredDistance>::type* junk = 0)
{
  // Super-fast implementation!
}
```

What do you think of this approach?  I think it's very important to provide generalized code where possible, and I think this is one of those instances.

---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/pull/388#issuecomment-73753554