Saturday, September 29, 2012

Improve the performance of your SVM

Though I am not very keen on differential geometry (others aren't either, but they claim to be doing research in the field), I find it amusing to read a little of it when it is used alongside kernel methods, and especially when you can improve the behavior of an SVM with it.

Amari and Wu are responsible for the following method. The idea is that, in order to increase class separability, we need to enlarge the spatial resolution around the decision boundary in feature space. Take, for instance, the Riemannian distance along the manifold
$$
ds^2 = \sum_{i,j} g_{i,j} dx_i dx_j
$$
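If I recall the paper correctly, $g_{i,j}$ here is the metric that the kernel embedding induces on input space, and it can be computed directly from the kernel; for the Gaussian RBF kernel it reduces to a constant multiple of the identity:
$$
g_{i,j}(\mathbf{x}) = \left. \frac{\partial^2 K(\mathbf{x},\mathbf{x}')}{\partial x_i \, \partial x'_j} \right|_{\mathbf{x}'=\mathbf{x}}, \qquad K(\mathbf{x},\mathbf{x}') = e^{-\frac{\|\mathbf{x}-\mathbf{x}'\|^2}{2\sigma^2}} \;\Rightarrow\; g_{i,j}(\mathbf{x}) = \frac{\delta_{i,j}}{\sigma^2}
$$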
We need it to be large along the boundary $f(\mathbf{x})=0$ and small between points of the same class. In practice, the boundary is not known, so we use the points that we know are closest to it: the support vectors. A conformal transformation of the metric does the job
$$
\tilde{g}_{i,j}(\mathbf{x}) = \Omega (\mathbf{x}) g_{i,j} (\mathbf{x})
$$

This is very difficult to realize in practice, so we consider a quasi-conformal transformation that induces a similar map by directly modifying the kernel
$$
\tilde{K}(\mathbf{x_1},\mathbf{x_2}) = c(\mathbf{x_1}) c(\mathbf{x_2}) K(\mathbf{x_1},\mathbf{x_2})
$$
where $c(\mathbf{x})$ is a positive function that can be built from the data as
$$
c(\mathbf{x}) = \sum_{i \in SV} h_i e^{-\frac{\| \mathbf{x} - \mathbf{x}_i\|^2}{2\tau^2}}
$$
where $h_i$ is a parameter of the $i$-th support vector.

Thus, if you first train an SVM with a standard kernel, then compute $c(\mathbf{x})$ from its support vectors and build a new kernel with the previous expressions, the retrained SVM will typically behave better.
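To make the recipe concrete, here is a rough sketch in Python of how I would wire it up with scikit-learn's SVC and a precomputed kernel. The toy data (make_moons), the uniform weight $h_i = h$, and the values of $\sigma$ and $\tau$ are just illustrative choices of mine, not the paper's settings:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)  # toy data

# Step 1: train a standard SVM with the primary kernel
sigma = 1.0
svm1 = SVC(kernel="precomputed").fit(rbf_kernel(X, X, sigma), y)
sv = X[svm1.support_]  # support vectors of the first pass

# Step 2: c(x) built from the support vectors; tau and the (here uniform)
# weights h_i are hyperparameters that would need tuning
tau, h = 1.0, 1.0
def c(Z):
    d2 = ((Z[:, None, :] - sv[None, :, :]) ** 2).sum(-1)
    return (h * np.exp(-d2 / (2.0 * tau ** 2))).sum(axis=1)

# Step 3: quasi-conformally rescaled kernel K~(x1, x2) = c(x1) c(x2) K(x1, x2)
def modified_kernel(X1, X2):
    return np.outer(c(X1), c(X2)) * rbf_kernel(X1, X2, sigma)

# Retrain on the new kernel
svm2 = SVC(kernel="precomputed").fit(modified_kernel(X, X), y)

# Prediction on new points needs the kernel between test and training data:
# svm2.predict(modified_kernel(X_test, X))
```

With a precomputed kernel you are free to plug in any modification of $K$ without touching the optimizer; the price is that you must also supply the test-versus-train kernel matrix at prediction time, as in the last comment above.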

The authors report higher classification accuracy and fewer support vectors than with standard kernels.

Check out the paper:
http://www.dcs.warwick.ac.uk/~feng/papers/Scaling%20the%20Kernel%20Function.pdf
