Saturday, December 22, 2012

Funny Boost

I located some examples I made when I first got acquainted with Boost and wanted to push the limits of the tutorials. Imagine the following two flavors of a program:
#include <boost/smart_ptr/shared_ptr.hpp>
#include <iostream>

using namespace std;
using namespace boost;
struct Thing {
public :
    Thing(int *i);
    void dumpP() {cout << pInt.get() << endl;}
private:
    shared_ptr<int> pInt;
};
Thing::Thing(int *i) : pInt(i) {
}
int _tmain(int argc, _TCHAR* argv[])
{
    int i;
    Thing t(&i);
    cout << &i << endl;
    t.dumpP();
   
    std::cin >> i;

    return 0;
}
And the second flavor:
#include <boost/smart_ptr/shared_ptr.hpp>
#include <iostream>

using namespace std;
using namespace boost;
struct Thing {
public :
    Thing(int *i);
    void dumpP() {cout << pInt.get() << endl;}
private:
    shared_ptr<int> pInt;
};
Thing::Thing(int *i) : pInt(i) {
}
int _tmain(int argc, _TCHAR* argv[])
{
    int *j = new int(1);
    Thing t(j);
    cout << j << endl;
    t.dumpP();

    std::cin >> *j;

    return 0;
}
Their only difference is that the first version uses a local (stack) variable while the second allocates the integer dynamically. Both are syntactically correct and both compile. However, one of them crashes before finishing execution.

If your gut tells you it is the one that uses dynamic memory, it is betraying you. The one that fails is the version that uses the local variable. What's the harm?

When they are destroyed, Boost smart pointers free the object they reference if they are the last pointer referencing it. This assumes the pointed-to object was allocated dynamically. In the first version the variable lives on the stack, so it is not the address of a valid heap block, and the program crashes when the shared_ptr tries to free that memory. This also explains why the second version works.
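
If for some reason you really do want a shared_ptr to reference a stack variable, a minimal sketch (my own example, not from the programs above) is to hand it a deleter that does nothing, so the destructor never tries to free stack memory:

#include <boost/smart_ptr/shared_ptr.hpp>
#include <iostream>

// A deleter that does nothing: shared_ptr will never call delete
// on memory it does not own (here, a stack variable).
struct null_deleter {
    void operator()(const void *) const {}
};

int main()
{
    int i = 42;                                   // stack variable
    boost::shared_ptr<int> p(&i, null_deleter()); // safe: no delete on destruction
    std::cout << *p << std::endl;
    return 0;
}

The usual fix, of course, is simply to allocate the int dynamically, as the second version does.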

Tuesday, December 18, 2012

Crisis Trivia

I am back, sorry for the long delay, I have been really busy.

One of the things that I have been up to is a trivia game engine, which as a first example features a financial, economic and political trivia game called Crisis Trivia.

I have set up a separate blog to talk about it and the follow-up games I may come up with.

Please check out
androidtrivia.blogspot.com

Tuesday, October 2, 2012

Machine Learning video lectures (Beginners)

The first one is part of the PyData workshop held in March this year. The lecture is aimed at beginners, so if you already know the (very) basics it won't teach you much. It is nice, however, because you get to see the capabilities of scikit-learn while watching material you already know, such as linear, polynomial and Gaussian-kernel SVMs, logistic regression and naive Bayes.


Monday, October 1, 2012

Jacobi eigenvalue method implementation in C++


The Jacobi eigenvalue method is an algorithm to compute the eigenvalues of a symmetric matrix by repeatedly canceling out off-diagonal elements with rotation matrices.

I started with the code in Wikipedia, which is essentially wrong.

Here I offer an implementation that uses Boost uBLAS matrices. It uses a trick to avoid computing sines and cosines. It needs to be improved to avoid multiplication of full matrices when doing the rotations.

This code is copylefted, use it as you wish.

#include "stdafx.h"

#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>

using namespace boost::numeric::ublas;
using namespace boost;

#define EPS 10e-08


// Take element-wise absolute values of M, in place.
void abs(matrix<double> &M)
{
 int n = M.size1();

 for (int k=0; k<n; k++)
 {
  for (int i=0; i<n; i++)
  {
   M(k,i) = std::fabs(M(k,i));
  }
 }
}


void findMax(matrix<double> &M, int &row, int &col)
{
 double m = M(0,0);
 row = col = 0;
 int n = M.size1();

 for (int k=0; k<n; k++)
 {
  for (int l=0; l<n;l++)
  {
   if(M(k,l) > m)
   {
    row = k;
    col = l;
    m = M(k,l);
   }
  }
 }
}


// Apply a Jacobi rotation that zeroes S(row,col), accumulating the rotation in U.
void zeroRowCol(matrix<double> &S, matrix<double> &U, int row, int col)
{
 double t, c, s, theta;
 int n = S.size1();

 if (row == col) return;

 theta=(S(row,row)-S(col,col))/(2*S(row,col));
 // Trick: get t = tan(phi) without evaluating the rotation angle itself.
 // Use the accurate formula unless |theta| is so large that theta*theta could overflow.
 if (std::fabs(theta) < 1.0/EPS)
 {
  t = 1/(std::fabs(theta)+std::sqrt(theta*theta+1));
 }
 else {
  t = 1/std::fabs(2*theta);
 }

 if (theta<0) t = -t;

 c = 1/std::sqrt(t*t + 1);
 s = c*t;
 matrix<double> R(n,n);
 R = identity_matrix<double>(n,n);
 R(row,row) = R(col,col) = c; R(row,col) = s; R(col,row) = -s;
 S = prod(S, trans(R));
 S = prod(R, S);
 U = prod(U, R);
}


int jacobi2(matrix<double> &S, vector<double> &e, matrix<double>  &U)
{
 int col, row;
 bool iterating = true;
 int n = S.size1();
 if (S.size2() != n)
 {
  return -1;
 }
 matrix<double> M(n,n);

 U = identity_matrix<double>(n,n);
 
 while(iterating)
 {
  M = S;
  abs(M);

  for (int k=0; k<n; k++)
  {
   M(k,k)=0;
  }

  findMax(M, row, col);
  if (row == col)
  {
   for (int i=0; i<n; i++) e(i) = S(i,i);
   return 0;
  }
  double Smax = std::fabs(S(row,col));
  zeroRowCol (S, U, row, col);
  if (Smax < EPS * norm_frobenius(S)) iterating = false;
 }

 for (int i=0; i<n; i++) e(i) = S(i,i);

 return 0;  
} 


int _tmain(int argc, _TCHAR* argv[])
{
 int n = 3;

 matrix<double> M(n,n);
 M = boost::numeric::ublas::identity_matrix<double>(3,3);
 M(0,1)=.5;
 M(1,0)=.5;

 std::cout << "The matrix is:\n" << M << std::endl;

 matrix<double> U(n,n);
 vector<double> e(n); 

 jacobi2(M, e, U);

 std::cout << M << std::endl;
 std::cout << e << std::endl;
 std::cout << U << std::endl;

 return 0;
}

Sunday, September 30, 2012

Kaggle

Reading mathbabe's blog, I learnt about Kaggle. This is a site that provides data so that data analysts can put their techniques to work. The data is provided by institutions interested in having it analyzed. The analysis takes the form of a contest, and whoever gets the best results by the time the competition ends wins a prize paid by the data owner. This is best summarized by its Wikipedia page:
  1. The competition host prepares the data and a description of the problem. Kaggle offers a consulting service which can help the host do this, as well as frame the competition, anonymize the data, and integrate the winning model into their operations.
  2. Participants experiment with different techniques and compete against each other to produce the best models. For most competitions, submissions are scored immediately (based on their predictive accuracy relative to a hidden solution file) and summarized on a live leaderboard.
  3. After the deadline passes, the competition host pays the prize money in exchange for "a worldwide, perpetual, irrevocable and royalty free license [...] to use the winning Entry", i.e. the algorithm, software and related intellectual property developed, which is "non-exclusive unless otherwise specified".


I have seen interesting designs and topics. It is, no doubt, a very interesting source and might even provide a good topic for a thesis, from the practical point of view. I have yet to study it more deeply.

Saturday, September 29, 2012

Improve the performance of your SVM

Though I am not very keen on differential geometry (others aren't either, but they claim to be researching in the field), I find it amusing to read a little bit of it when it is used along with kernel methods, and especially when you can improve the behavior of an SVM with it.

Amari and Wu are responsible for the following method. The idea is that, in order to increase class separability, we need to enlarge the spatial resolution around the boundary in the feature space. Take, for instance, the Riemannian distance along the manifold
$$
ds^2 = \sum_{i,j} g_{i,j} dx_i dx_j
$$
We need it to be large along the boundary $f(\mathbf{x})=0$ and small between points of the same class. In practice, the boundary is not known, so we use the points that we know are closest to it: the support vectors. A conformal transformation does the job
$$
\tilde{g}_{i,j}(\mathbf{x}) = \Omega (\mathbf{x}) g_{i,j} (\mathbf{x})
$$

This is very difficult to realize in practice, so we consider a quasi-conformal transformation that induces a similar metric by directly modifying the kernel
$$
\tilde{K}(\mathbf{x_1},\mathbf{x_2}) = c(\mathbf{x_1}) c(\mathbf{x_2}) K(\mathbf{x_1},\mathbf{x_2})
$$
where $c(\mathbf{x})$ is a positive function that can be built from the data as
$$
c(\mathbf{x}) = \sum_{i \in SV} h_i e^{-\frac{\| \mathbf{x} - \mathbf{x}_i\|^2}{2\tau^2}}
$$
where $h_i$ is a parameter of the $i$-th support vector.

Thus, if you first train an SVM with a standard kernel, then compute $c(\mathbf{x})$ and build a new kernel with the previous expressions, your SVM will behave better.

The authors report higher classification accuracy and fewer support vectors than with standard kernels.
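
As an illustration only (this sketch is mine, not the authors'; the RBF base kernel, the support vectors and the $h_i$ weights are stand-ins for whatever your first training pass produced), the modified kernel could be computed like this:

#include <vector>
#include <cmath>
#include <cstdio>

// Base kernel: a Gaussian RBF (an assumption for this sketch).
double rbf(const std::vector<double> &a, const std::vector<double> &b, double sigma)
{
    double d2 = 0.0;
    for (size_t k = 0; k < a.size(); ++k) d2 += (a[k] - b[k]) * (a[k] - b[k]);
    return std::exp(-d2 / (2.0 * sigma * sigma));
}

// c(x) = sum over support vectors of h_i * exp(-||x - sv_i||^2 / (2 tau^2))
double conformal_factor(const std::vector<double> &x,
                        const std::vector<std::vector<double> > &sv,
                        const std::vector<double> &h, double tau)
{
    double c = 0.0;
    for (size_t i = 0; i < sv.size(); ++i) {
        double d2 = 0.0;
        for (size_t k = 0; k < x.size(); ++k) d2 += (x[k] - sv[i][k]) * (x[k] - sv[i][k]);
        c += h[i] * std::exp(-d2 / (2.0 * tau * tau));
    }
    return c;
}

// Modified kernel K~(x1,x2) = c(x1) c(x2) K(x1,x2)
double conformal_kernel(const std::vector<double> &x1, const std::vector<double> &x2,
                        const std::vector<std::vector<double> > &sv,
                        const std::vector<double> &h, double tau, double sigma)
{
    return conformal_factor(x1, sv, h, tau) * conformal_factor(x2, sv, h, tau) * rbf(x1, x2, sigma);
}

int main()
{
    // Two hypothetical support vectors with weights h_i, and two test points.
    std::vector<std::vector<double> > sv(2, std::vector<double>(2));
    sv[0][0] = 0.0; sv[0][1] = 1.0;
    sv[1][0] = 1.0; sv[1][1] = 0.0;
    std::vector<double> h(2, 1.0);
    std::vector<double> x1(2, 0.5), x2(2, 0.2);

    std::printf("K~ = %f\n", conformal_kernel(x1, x2, sv, h, 1.0, 1.0));
    return 0;
}

You would then feed conformal_kernel (evaluated on your training points) to the SVM solver in place of the original kernel.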

Check out the paper:
http://www.dcs.warwick.ac.uk/~feng/papers/Scaling%20the%20Kernel%20Function.pdf

Wednesday, September 19, 2012

Python function to build a block matrix


Sometimes it is useful to build a matrix from matrices forming blocks of the former one.

As I understand, neither numpy nor scipy implement this functionality. Scipy has the scipy.linalg.special_matrices.block_diag function, but it only admits blocks along the diagonal. The following function builds a matrix from the elements of the input list a, as long as their dimension is compatible.


 def blockmat(a):  
   n=size(a,0)  
   N=size(a[0],1)  
   sqn=sqrt(n)  
   ind=0  
   rows=[]  
   cols=[]  
   for i in range(0,n):  
     A=a[i]  
     M=size(A,0)  
     #if (M != N):  
     #  return -1   
     if ((i>0) & (i%int(sqn) == 0)):  
       rows.append(numpy.concatenate(cols,axis=0))  
       cols=[]     
     cols.append(A)  
   rows.append(numpy.concatenate(cols,axis=0))  
   return numpy.concatenate(rows,axis=1)  

Tuesday, September 18, 2012

Python IDE: Spyder

I've been working with the Spyder IDE for Python. It is time I share my impressions.

First of all, working with this IDE gives you a very strange feeling. In fact, it is the same feeling (but even more exaggerated) that you get with RStudio (when programming in R). I feel like there is a force preventing me from doing my work. In this regard, Spyder resembles RStudio more than it resembles Matlab (even though resembling Matlab is the developers' motto).


I reckon that this IDE surpasses every other Python IDE I've tried so far, but there is still this unavoidable feeling. I think part of it comes from the fact that when you execute part of the code just to see whether it really does what it is intended to do, the window focus is left at the command-line panel, and there is no keyboard shortcut to go back to the file being edited. It is also possible that it is because I am using Ubuntu and this software is more at home in KDE, since it uses the Qt libraries.

On the other hand, you have the handy integration with pdb, the Python debugger, a panel showing the variables in the current scope, and online help that shows documentation for any loaded name.

A fine IDE, but one that still needs to mature a little.

spyder-ide.blogspot.com

Friday, September 7, 2012

Twitter sentiment trading

From Gekkoquant I found something interesting: apparently, a hedge fund established in London is using Twitter sentiment to trade the equity markets.

The idea stemmed from the paper Twitter mood predicts the stock market, by J. Bollen and co-authors.

They identify market sentiment and then trade the markets with a 3-day lag. It is still a tiny market operation, so I take this cautiously (it is a natural law that humans try to deceive other humans so as to easily take their wealth), but I still regard it as interesting.

Quoting Gekkoquant, who also has an interesting series of posts about using the Twitter feed from Python:
Interesting interview with Paul Hawtin from Derwent Capital about their twitter fund and some of the implementation details. Key things to note is that they analyse all tweets (no filtering for just FTSE companies), it’s not a blackbox system the mood signals are only single component of their strategy.
Also, here is a video of co-founder Paul Hawtin explaining what they do


I will have a look at the paper and update this post.

Check out the paper also here.

Monday, September 3, 2012

Characterization of independence of random variables

Surely you know that two random variables $X$ and $Y$ are independent if and only if $P_{XY}(x,y) = P_{X}(x)P_{Y}(y)$. What you might not know is that if
$$
\max_{f ,g\in \mathcal{F}\times \mathcal{G}} \operatorname{cor} (f(X), g(Y)) = 0
$$
where $\mathcal{F}$ and $\mathcal{G}$ are sufficiently large families of functions containing all continuous functions on the domains where $X$ and $Y$ take values (say $\mathcal{X}$ and $\mathcal{Y}$), then independence holds.

This was first discovered by Sarmanov in 1958 and generalized to the multivariate case by Lancaster in 1962. Building on this, Bach and Jordan developed Kernel Independent Component Analysis.

Despite not being a mainstream characterization of independence, the book by Jacod and Protter entitled Probability Essentials lists it.

Fukumizu, Gretton and Sriperumbudur have also been working on independence measures using RKHS theory. If one uses characteristic kernels, which again span all the Fourier features, one recovers both families of continuous functions. They compute the mean embedding of a measure as
$$
\mu_X = \int k(x,\cdot) \, d P_X (x)
$$
Now, given two mean embeddings $\mu_X \in H_X$ and $\mu_Y \in H_Y$, independence holds if and only if the Hilbert-Schmidt norm of the cross-covariance operator $C_{XY} : H_X \rightarrow H_Y$ is zero.
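
In practice this norm is estimated with the Hilbert-Schmidt Independence Criterion (HSIC) of Gretton and co-authors; up to the normalization constant, a biased empirical estimate built from the two kernel matrices $K$ and $L$ of a sample of size $n$ is
$$
\mathrm{HSIC} = \frac{1}{n^2}\operatorname{tr}(K H L H), \qquad H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^T
$$
which tends to zero when $X$ and $Y$ are independent.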

Another characterization of independence was given by Rényi in 1959. Let $\hat{f}_{X}(\omega)$ and $\hat{f}_{Y}(\eta)$ be the Fourier transforms of the two probability density functions, and $\hat{f}_{XY}(\omega,\eta)$ that of the joint density. If
$$
\int \left| \hat{f}_{XY}(\omega,\eta) - \hat{f}_X(\omega)\hat{f}_Y(\eta) \right|^2 d\omega \, d\eta = 0
$$
then independence holds. From there, Székely and Rizzo built a statistic and a non-linear correlation estimator (distance correlation).


Monday, August 27, 2012

Visual Studio blocks compilation

It happens to me a lot. After compiling and realizing that I was not done coding, I hit the compile button again just to find that something went wrong. The point is that Windows still has the file marked as in use, so that it cannot be removed. It sometimes locks it for quite a while.

Watch carefully for the message:
1>LINK : fatal error LNK1168: Could not open xxx.exe for writing
The solution, if you haven't figured it out, is to manually open the folder (directory) where xxx.exe is located and delete it. This folder defaults to the solution's debug folder. If it doesn't let you delete the file, turn off Windows' Application Experience Service.

Sunday, August 26, 2012

L. Wasserman's & O. Bousquet's blogs

I was reading UCL CSML news and an entry of Larry Wasserman's blog popped up. Yes, this is the Wasserman responsible for the statistical bible that you can use to exercise your mind by learning statistics and to exercise your muscles by lifting it onto your desk.

Also, I stumbled upon Olivier Bousquet's blog. It contains some thoughts about Machine Learning that will be of interest to practitioners.

I added a link to Wasserman's and Bousquet's blogs to the right.

Ohh hell: I just found out that the last entry of Bousquet's blog is 5 years old. Shame.

Saturday, August 25, 2012

SVM plot decision function

In the paper I submitted, I have to deal with SVMs, and I wanted to plot the decision function that my kernel produced on a 2D dataset.

I stumbled upon a 2D problem that I am interested in visualizing (I already posted about it). The point is that I now know how to extract the decision function values on a grid so that you can plot them. Again, I am surprised at how difficult this is to find on the Internet.

With e1071, you need the following code
im=predict(K.svm,  ... ,scale=F,decision.values=T)
im=matrix(attributes(im)$decision.values,nrow=100,byrow=F)
image(seq(0, 20, length.out=100), seq(0, 20, length.out=100), im,xlab="",ylab="")
points(y)
pdf.options(reset=T)
Notice that we extract the value of the decision function through the decision.values attribute of the prediction object. This prediction object, obviously, was created by predicting on a grid of points spanning a range larger than the input data.

Wednesday, August 22, 2012

Paper submitted!

At last!!!

It is very hard to come up with a novel theory, write as much about it as you can, as well as you can, and build a manuscript that conforms to the journal's standards.

I have survived!

43 pages in total. I am happy with the result, though I reckon I could have told things differently. In any case, this allows me to move on. I am back writing on the blogs!

The paper contains a machine learning technique that exploits statistical properties of the data. I might later submit the paper to the arXiv. Today, I am officially on vacation, after having invested all summer working on the paper.

In the course of my research I have come across interesting things. I will comment on them starting tomorrow.

Saturday, August 4, 2012

Using whatsapp on your PC (II)

First of all, don't use cracked versions even if you find no virus in them. They use Whatsapp's verification of your phone number to send messages to who knows whom.

That being said, my fiancée Sali found what I had given up all hope of finding: a free and competitive Android emulator called Bluestacks App Player.

Even though it is little known, Bluestacks is by far the BEST Android emulator I have tried.

It runs Whatsapp seamlessly, totally recommended.

Now I know what the rascals are saying at any time.

Friday, August 3, 2012

Graph combination software (Shin, Tsuda, Schölkopf)

Some time ago I was driven to write a conference paper on positive semi-definite matrix combination aimed at binary classification and its application to gene function prediction. The team leader (yes, AMG) made me program a competing method so we could test against it. The competing method was the one proposed in the paper "Protein functional class prediction with a combined graph" by Hyunjung Shin.

I have worked on a number of papers with my old team, but this one was the one with the most potential. Up until that date I had rarely worked on these techniques, and due to poor management the paper got rejected. The underlying idea was great, but I was left with all the work, with little experience, and a single week to do it in. The competing method I programmed was never used, and that weighed on the reviewers' decision to reject the paper.

Considering that the nodes are present in all graphs (they refer to the same objects) but are interconnected differently, the method defines a diffusion process on a graph and penalizes the dissimilarity between adjacent nodes. Let $L_k$ be the Laplacian matrix of each graph, $K$ the total number of graphs, and $y$ the vector that we try to approximate:
$$
\min_{\beta,f} \sum\limits_{k=1}^K \beta_k f^T L_k f - \log \det \left( I - \sum\limits_{k=1}^K \beta_k L_k \right) + \mu (f-y)^T C (f-y)
$$
subject to $\beta_k\geq 0$ and $\beta^T 1 < 0.5$,
where $\mu$ regularizes the approximation term. Notice that the computed coefficients $f$ are shared across all the Laplacians.

So here I leave it to you. Try it with small graphs because there is something missing. If you modify it, please let me know.

A testing program example is
load("mat.RData")

source("graph_comb_shin.R")

d1<-apply(G1,1,sum)
id1<-d1
id1[id1>0]<-1/id1[id1>0]
iD1<-diag(sqrt(id1))
D1<-diag(d1)
L1<-iD1%*%(D1-G1)%*%iD1
EL1<-eigen(L1)
rm(d1,id1,D1,iD1,G1)

d2<-(apply(G2,1,sum))
id2<-d2
id2[id2>0]<-1/id2[id2>0]
iD2<-diag(sqrt(id2))
D2<-diag(d2)
L2<-iD2%*%(D2-G2)%*%iD2
EL2<-eigen(L2)
rm(d2,id2,D2,iD2,G2)

d3<-(apply(G3,1,sum))
id3<-d3
id3[id3>0]<-1/id3[id3>0]
iD3<-diag(sqrt(id3))
D3<-diag(d3)
L3<-iD3%*%(D3-G3)%*%iD3
EL3<-eigen(L3)
rm(d3,id3,D3,iD3,G3)

LL=list(L1,L2,L3)
rm(L1,L2,L3)

for (c in c(1:50)) {gc()}

mu=10
delta=0.4
tol=0.1

y=as.matrix(variables[,1]==1)
res=graph_comb_shin(LL,y,mu,delta,tol)

save.image("shin.Rdata")

Download the software.
Check out the paper.

Thursday, July 19, 2012

Using Whatsapp on your PC

I have never been a huge fan of cell phones. I am one of those people who think that a phone is just a phone, you use it to speak to others and keep in touch, but not to carry Youtube with you all around.

However, my buddies have started to go silent on me, and the reason is that they only use Whatsapp now, so no matter where they are they can agree to meet or to do anything on short notice. This leaves me in a very miserable position.

What I did is the following:
  1. I installed the Android SDK, which comes with an emulator.
  2. In the AVD Manager, in the image I created to emulate a smartphone, I added the property hw.keyboard with the value yes so I could use my PC keyboard to type into the virtual phone. You can also edit the config.ini located under your user directory ~\.android\avd\device.avd\ and add hw.keyboard=yes .
  3. I downloaded Whatsapp from the virtual device.
  4. I installed the app (believe it or not, the most difficult part for me was finding out how to reach the downloaded files; I downloaded it three times).
  5. During the first execution, it asked for my real phone number and sent a regular SMS to my (physical) phone.
  6. I then added my friends' phone numbers to my contacts in Android and it automatically marked them as Whatsapp users.
  7. Then one of them added me to the groups they were using to arrange meetups.

Sunday, July 15, 2012

Plot classification regions in an SVM

I've been really busy. I can't really claim to be an SVM expert, since my postgraduate work until now has not dealt much with machine learning, though lately I've been working with SVMs and kernels.

In fact I have a very advanced technique that really boosts SVM performance, but that will have to wait until the paper is published.

One of the tests I ran dealt with a modified XOR data problem. The problem consists of four groups drawn from bivariate normal distributions. They are assigned two classes such that the groups of the same class are always separated from each other by some group of the other class.
y=mvrnorm(50,c(3,5),Sigma=diag(c(0.5,1.5)))
y=rbind(y,mvrnorm(50,c(15,13),Sigma=diag(c(1.5,0.5))))
y=rbind(y,mvrnorm(50,c(7,5),Sigma=diag(c(0.5,1.5))))
y=rbind(y,mvrnorm(50,c(15,17),Sigma=diag(c(1.5,0.5))))
labels=c(rep(1,100),rep(-1,100))
I put the figure here so that the problem is clear, the explanation of how to get it follows.
K.svm=svm(Phi, labels, type="C", kernel="linear",probability=T)
X=as.matrix(expand.grid(list(x = seq(0, 20, length.out=100), y = seq(0, 20, length.out=100))))
# compute the kernel on X here
im=predict(K.svm, PhiX,scale=F)
im=matrix(as.numeric(im),nrow=100,byrow=F)
image(seq(0, 20, length.out=100),seq(0, 20, length.out=100),im,xlab="",ylab="",col=c("#FFFCCCFF","#FFF000FF"))#heat.colors(2))
points(y)
We see that we first train the SVM with the kernel features, as explained in the previous post.
Then we create a grid spanning the region we are interested in painting and evaluate the trained SVM on it. Finally we recompose the grid of classified points into a 2D plane and plot it along with the original points.

Monday, July 2, 2012

Fun with Mensa puzzles

If you like to do puzzles regularly or occasionally, or are simply curious about your IQ, Mensa Denmark has put up a nice Flash applet for you:
http://www.iqtest.dk/main.swf

Mensa is an international organization open to people who score at the 98th percentile or higher on a standardized, supervised IQ or other approved intelligence test and, according to its Wikipedia entry, it is the oldest IQ-related organization.

They also publish puzzle collections at a cheap price. Have a look at this listing from Amazon UK.

It is very likely that there is a branch close to you. To join Mensa, you will need to score high enough on a test for which a small fee is required. Google "Mensa <your country>" or go to its main site.

Wednesday, June 27, 2012

The Fourier transform as a diagonalization


One of the benefits of using the Fourier transform of a function is that convolutions become multiplications. This is important when solving a differential equation with its Green's function. If the Green's function comes from an operator $D^*D$, where $D$ is a differential operator and $D^*$ is its adjoint, then the Green's function is not singular at the origin and is continuous. It spans a function space called a reproducing kernel Hilbert space (RKHS): all functions in this space can be written as linear combinations of the Green's function evaluated at one argument, and the solution to the differential equation $D^*D u = y$ is of that form. OK, let's not digress any more... to the cheese...

In the Fourier domain we operate on frequencies $\omega$. For example, to attenuate noise we decrease the power at the high frequencies, which amounts to a convolution (with a Gaussian, for example) in the original domain. If we view this linear operation as a matrix, the convolution operator, which has (say, one-dimensional) Gaussians in its rows in the time/space domain, becomes diagonal in the Fourier domain.

This page popped up, with much more to follow up on. In particular, I liked this paragraph:
The moral of the story is that the Fourier Transform may be thought of as a change of basis.  The Fourier integral projects a function onto the basis functions of a new coordinate system whose basis functions are the complex exponentials.  In this new basis, the convolution operator is diagonal and everything is simple.  The convolution operator acts on each Fourier component independently by multiplying the component by an associated magnitude and phase.
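
In matrix terms (a standard fact, added here only for reference): a circulant matrix $C$ whose first column is the sampled convolution kernel $c$ is diagonalized by the DFT matrix $F$,
$$
C = F^{-1} \operatorname{diag}(Fc)\, F
$$
so convolving with $c$ amounts to multiplying each Fourier component by the corresponding entry of $Fc$.
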
In Matlab
C=[4 1 2 3; 3 4 1 2; 2 3 4 1; 1 2 3 4]

C =

     4     1     2     3
     3     4     1     2
     2     3     4     1
     1     2     3     4

 F=fft(C)

F =

  10.0000            10.0000            10.0000            10.0000        
   2.0000 - 2.0000i  -2.0000 - 2.0000i  -2.0000 + 2.0000i   2.0000 + 2.0000i
   2.0000            -2.0000             2.0000            -2.0000        
   2.0000 + 2.0000i  -2.0000 + 2.0000i  -2.0000 - 2.0000i   2.0000 - 2.0000i

  F*C*F'

ans =

  1.0e+003 *

   4.0000                  0                  0                  0          
        0             0.0640 - 0.0640i        0                  0          
        0                  0             0.0320                  0          
        0                  0                  0             0.0640 + 0.0640i





Tuesday, June 26, 2012

I am an Analytic Bastard and I will fight Delusional Geometers to death

This post is dedicated to AMG: friend and enemy, mentor and destroyer, wise and fool.

AMG is obsessed with equating generalized functions (as in Schwartz distribution theory) to probability distributions (as in measure theory). According to AMG I am an Analytic Bastard; I agree. And that was the least I could take from a Delusional Geometer. Therefore I left AMG.

Schwartz distributions are NOT probability distributions

I can say it louder but not clearer.

It doesn't matter that they are both called distributions; it sometimes happens in mathematics that two different things get similar names. It doesn't matter how hard you try to make them the same, it doesn't matter how proud you are and how little you think of the people who surround you and who are not Fields medalists.

The fact that Strichartz's book shows a bell-like, $C^{\infty}$, compactly supported function does not imply that it is a distribution. What is more, this bell-like, $C^{\infty}$, compactly supported function is clearly stated to belong to the set $\mathcal{D}$ of test functions. Therefore it is not a distribution itself, but one of the objects to which the linear functional (the Schwartz distribution) is applied. If $\varphi \in \mathcal{D}$ then it is a test function. It looks similar to a DENSITY function (the Gaussian), but the density is not the distribution, nor is the test function the (other kind of) distribution.

Furthermore, even forcing my brain to accept $\varphi \in \mathcal{D}'$ and calling it a (Schwartz) distribution, you can't write $\varphi(x)$ outside an integral sign. It is a functional, which means that it is applied to some $\phi \in \mathcal{D}$, so what makes sense is $\varphi(\phi)$; don't get angry with me because of this, it is a fact. If $\varphi$ were a regular linear functional, it would act as $\phi \mapsto \int \varphi(x) \phi(x) dx$, so why on Earth do you write $\varphi(x)$ anyway?

At this point, why the hell do you use $\varphi(x)$ to name a "bell-like" function with $x= \arg \max_y \varphi(y)$?

Then it remains going full retarded and try to apply density estimation methods to image analysis following this logic:
  1. David Mumford develops an axiomatic theory that describes images as generalized functions (check)
  2. We have methods that work in the density estimation field fairly well (check)
  3. Since YOU (and only you) say generalized functions = probability distributions, then our methods must be very powerful in image analysis (FAIL)
FAIL! Because the only supporting argument you have is your pride.

So, let me out!

Wednesday, June 20, 2012

Pairs trading and colinearity

We sometimes recognize a couple of assets as being co-related. However, the dependence regime changes over time, making this co-relation non-linear and dependent on, let's say, a phase. A more robust concept is multiple co-linearity, which implies that some linear combination of the returns of those assets has a constant mean and variance.

Let's say that two assets are co-linear and that the returns of one of them have been consistently larger than the other's. It makes sense to sell short the asset with larger past returns and buy the asset with smaller past returns. With this, we would have a quantitative model to detect large discrepancies in the return of the linear combination; for example, execute the strategy when the absolute value of the return exceeds twice its standard deviation. This gives a statistical arbitrage opportunity.

One example that I like to use is the pair EURUSD and GC (NYMEX Gold 100oz), and I used it to get hold of some coins. Another, perhaps more interesting, application would be corn and wheat. They seem to have periods when one is the loved child of agricultural commodity traders. They are normally worth around the same, with wheat historically the more expensive. Corn caught up and had a period when it was more expensive, but wheat recently had a rally and got to be USD 100 more expensive per contract. Obviously, the gap then closed down to a difference of USD 20. It then widened and is now sitting around USD 40-50. This simple model would have yielded a potential change of USD 80 per contract.
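
As a rough sketch (my own illustration, not a tested strategy; the numbers and the spread definition are placeholders), the trigger described above — act when the spread's return deviates by more than two standard deviations — could be coded like this:

#include <vector>
#include <cmath>
#include <cstdio>

// z-score of the latest spread return against the history of spread returns
double zscore(const std::vector<double> &spread_returns)
{
    double mean = 0.0, var = 0.0;
    size_t n = spread_returns.size();
    for (size_t i = 0; i < n; ++i) mean += spread_returns[i];
    mean /= n;
    for (size_t i = 0; i < n; ++i) {
        double d = spread_returns[i] - mean;
        var += d * d;
    }
    var /= (n - 1);
    return (spread_returns.back() - mean) / std::sqrt(var);
}

int main()
{
    // Hypothetical returns of the linear combination (return of A minus return of B)
    std::vector<double> r;
    r.push_back(0.010); r.push_back(-0.020); r.push_back(0.005); r.push_back(0.080);

    double z = zscore(r);
    if (z > 2.0)       std::printf("sell the spread (short A, long B), z=%f\n", z);
    else if (z < -2.0) std::printf("buy the spread (long A, short B), z=%f\n", z);
    else               std::printf("no trade, z=%f\n", z);
    return 0;
}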

Saturday, June 16, 2012

SQL Server CE Max Database Size

I am crawling an Internet database so I can apply some spectral graph methods to it. I am using my computer at the university. I have just logged in and found out that my crawler was reporting that it had (oddly and partially) finished. It turned out the error was not that the database owners had detected me (I have implemented protocols to keep a low profile), but that the local database file size is capped at 256 MB.

Long story short, if you want to use more than 256 MB in a desktop SQL Server CE database, add the parameter Max Database Size=1024 (to raise the cap to 1 GB) to your connection string, separating parameters with ';' — for example, something like "Data Source=mydb.sdf;Max Database Size=1024".

Monday, June 11, 2012

Nested queries with SQL Server Compact Edition

Following the previous post, I am running into some small problems with SQL Server CE. Here are their solutions, which I think will be useful for anybody working with it.

SQL Server CE does not support nested queries!

The way to circumvent this is by using inner joins, as mentioned in this blog.

My query now reads:
String sql = "SELECT Authors.* FROM Authors INNER JOIN " +
                    "(SELECT AuthorID FROM SubDomain_Authors " +
                    "WHERE (DomainID = " + subdomain.DomainID +
                    ") AND (SubDomainID = " + subdomain.SubDomainID +
                    ")) AS t ON Authors.ID = t.AuthorID";


SqlCeCommand cmd = new SqlCeCommand(sql, con);

                SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Scrollable);
                if (rs.HasRows)
                {

Sunday, June 10, 2012

Using Visual C# 2008 Express to query a SQL database

I am currently working on building a database that I intend to feed into a graph so that I can explore its spectral properties and derive interesting insights about the data.

To do that, I am downloading data from a source, processing it and storing it into a database. Since the data are highly structured, I decided to use an SQL-like database. To my comfort, I found out how to use the SQL Server Compact Edition (v3.5) that Microsoft ships with a number of its products (I know that v4.0 ships with Visual Studio 2010, but I don't know where the 3.5 I had installed came from). In any case, while working on the project, I found out that Visual C# itself can be used as a SQL query viewer to query a database.

These are the steps to query a database from your Visual C# 2008 Express
  • Click on the menu "Data" and then "Add new data source" if you have an open project
  • Click on the menu "Tools" and then "Connect to database" otherwise
  • Click on database
  • "New connection"
  • Select "Microsoft SQL Server Compact 3.5 (Data provider .NET Framework para Microsoft SQL Server Compact 3.5)"
  • Examine to get to your file. You can create a new file with a new filename if you will.
  • Assuming that you have data to query: Look for the tables, right-click on it and click on "Show table data", or "New query". In the first case, "SELECT * FROM Table" will be executed, whereas the second case will let you modify the SELECT statement.
MS SQL CE stores the database locally in .sdf files and is embedded into your program as a SQL engine DLL, so you have access to the objects that interpret statements and that access and control the files. And by the way, it has a limit of 4 GB per database file.

It is probably not that big of a deal, but I was pleasantly surprised when I found out I could get this project done with just the Express edition and no supporting tools.

Tuesday, June 5, 2012

Declaration of linear independence

I've been very busy (pinyin for beginners: Wo hen mang, simplified: 我很忙) so let's have some nerdy fun. Via mathbabe's blog, the declaration of linear independence:

IN EUCLIDEAN SPACE, JANUARY 28, 1988

When, in the course of a proof, it becomes necessary for a set to dissolve the argument which has connected it with a theorem, and to assume among the powers of mathematics a position above that of the mathematician, a decent respect for the axioms requires that a rigorous justification be given.
We hold these truths to be self-evident: that all nonzero vectors are created equal; that they are endowed by their definer with certain unalienable rights; that among these are the laws of logic and the pursuit of valid proofs; that to secure these rights, logical arguments are created, deriving their just powers from axioms; that whenever any argument becomes destructive of these ends, it is the right of the vectors to alter or to abolish it, and to institute a new argument, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most likely to reach the correct conclusion. Prudence, indeed, will dictate that theorems long established should not be changed for light and transient causes, and accordingly all experience hath shown that sets are more disposed to accept the conclusions of arguments than to right themselves by abolishing the arguments. But when a long train of abuses and usurpations, pursuing invariably the same object, evinces a design to reduce them to zero in a non-trivial way, it is their right, it is their duty to throw off such argument, and to provide new proofs for their future security. Such has been the patient sufferance of these vectors, and such is now the necessity which constrains them to alter these arguments. The history of Professor Eigen is a history of repeated injuries and usurpations, all having in direct object the establishment of dependence among these vectors. To prove this, let facts be submitted to a candid world.
He has refused to acknowledge that he obtained a zero matrix only by multiplying our coordinate matrix by a zero matrix.
He has restricted our freedom of movement by requiring us all to live in the same hyperplane, even though we cannot all fit in one.
He has attempted unsuccessfully to invert our coordinate matrix, and, having overlooked the inverse, has concluded that the coordinate matrix is singular.
He has changed bases repeatedly for opposing with manly firmness his attempts to place us in the span of fewer vectors than the dimension of the space.
He has erected a multitude of new formulas and sent hither swarms of new functions to force our directions into a proper subspace of the vector space.
He has kept among us vectors to be orthogonal to all of us without the consent of those of us whose dot product with them is nonzero.
He has abdicated the axioms here by committing mathematical errors in computing a zero determinant for our coordinate matrix.
In every stage of these oppressions we have petitioned for redress in the most humble terms; our repeated petitions have been answered only by repeated injuries.
A mathematician whose arguments are thus marked by every error is unfit to prove the theorem which he attempts to prove.
Nor have we been wanting in attentions to Professor Eigen. We have warned him from time to time of flaws in his arguments. We have reminded him of the circumstances of our definition, we have appealed to his knowledge of the axioms, and we have requested him to disavow these usurpations which would inevitably destroy the validity of his arguments. He has been deaf to the voice of logic. We must therefore acquiesce in the necessity which denounces our separation and hold him as we hold the rest of mathematicians, an enemy when he is wrong, a friend when he is right.
We, therefore, the members of set S in vector space V, appealing to the supreme judge of mathematics for the rectitude of our intentions, do solemnly publish and declare that these vectors are, and of every right ought to be, a free and independent basis; that they are absolved from all subjection to Professor Eigen's theorems, and that all restriction of them to a hyperplane is, and of right ought to be, totally dissolved; and that as a basis, they have full power to span the space, form invertible coordinate matrices, give unique linear combinations equal to a given vector, and to do all other acts and things which a basis may of right do.
And for the support of this declaration, with a firm reliance on the protection of the properties of a vector space, we mutually pledge to each other our magnitudes, our directions, and our sacred honor.
In witness whereof we have signed our coordinates with respect to an appropriate orthonormal basis, and found them to constitute a triangular matrix with nonzero diagonal elements.

Wednesday, May 30, 2012

Nice little trick for two-number multiplication


For the non-believers: it is also valid for numbers with different number of digits. Just be careful to align the most significant digits accordingly.

Tuesday, May 29, 2012

A tale of exponential interest

I have a savings account at ING bank which pays 4% annual interest, credited monthly. Each month my balance grows by the interest on both the principal I deposited and the compound interest they paid me over the past months.

Notice that the interest, i.e. the instantaneous increment of my balance $\frac{d x(t)}{dt}$, is equal to the current balance itself times the interest rate $k$.
$$\frac{d x(t)}{dt} = kx(t)$$
We separate the variables for convenience.
$$\frac{d x}{x} = k\,dt$$
And solve by integration
$$\int \frac{d x}{x} = k \int dt$$
$$\log x = kt + C_0$$
(we abuse notation and give the upper limit of integration the same name as the integration variable), yielding
$$x(t) = C_1e^{kt}$$
where $C_1=e^{C_0}$.
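
As a quick worked example with the 4% figure above, the balance doubles roughly every
$$
t_2 = \frac{\ln 2}{k} = \frac{0.693}{0.04} \approx 17.3 \text{ years}
$$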

What does it mean? It means that everybody's savings account grows exponentially. Now, does that correspond to exponential wealth? Would the currency have to be debased at the same speed for the interest to be payable, or can we extract oil fast enough to cover our land with things and thus effectively increase our wealth?


Here is an interesting discussion at Do the Math.

Monday, May 28, 2012

Hypergeometric brain teaser

The statement of the problem is
A bag of N balls has black and white balls. We define a function f such that it is 1 for a black ball and 0 for a white ball, and $\frac{1}{N}\sum_i f(x_i) = p$, where $x_i$ can be either a black or a white ball. If we draw balls in groups $G$ of n balls, what is the mathematical expectation of the random variable
$$\frac{1}{n}\sum_{i\in G} f(x_i) $$
Now, $\frac{1}{n}\sum_{i\in G} f(x_i)$ is a random variable because the drawing of balls is random. Its expectation is $E[Y]=\frac{1}{n} E[\sum_{i\in G} f(x_i)]$. Since f marks balls as 0 or 1, this is a variable that counts black draws. Let's see how we can form groups of black and white balls.
Within a group of n balls, how can we draw $k$ black balls? There are $Np$ black balls in the bag. Therefore
$$\left(\begin{array}{c} Np \\ k \end{array}\right)$$
To fill up the rest of the group, we need to pick up white balls. There are $N-Np$ white balls and we need $n-k$. Therefore
$$\left(\begin{array}{c} N-Np \\ n-k \end{array}\right)$$
We combine the two parts of a possible group.
 $$\left(\begin{array}{c} Np \\ k \end{array}\right) \left(\begin{array}{c} N-Np \\ n-k \end{array}\right)$$

Now, how many groups of n balls can we withdraw altogether, disregarding the color of the balls?
$$\left(\begin{array}{c} N \\ n \end{array}\right) $$

To build the probability of a specific group of n balls, k black and n-k white drawn from the bag, we compute its frequency
$$\frac{\left(\begin{array}{c} Np \\ k \end{array}\right) \left(\begin{array}{c} N-Np \\ n-k \end{array}\right)}{\left(\begin{array}{c} N \\ n \end{array}\right)}$$

If we look for a probability distribution with this form, we find the hypergeometric distribution, which meets the description of the problem
[...] describes the probability of k successes in n draws from a finite population of size N containing m successes without replacement.
Therefore, the cumulative successes $\sum_{i\in G} f(x_i)$ follow a hypergeometric distribution with mean $\frac{nNp}{N}=np$. But we want $\frac{1}{n}\sum_{i\in G} f(x_i)$, so $E[Y]=\frac{1}{n} E[\sum_{i\in G} f(x_i)]=\frac{np}{n}=p$.

I wrote a program that takes N,n,p as inputs (and a random bag of white and black balls) and computes the expectation, which is always around $p$ (depending on the generated random bag). Do not go very far with $N$, keep it under 30!


push <- function(i,n,N,ind,v) {
 es=0;
 res=list()
 res[[1]]=0
 res[[2]]=0

 if (i==1) {
  for (j in seq(1,N)) {
   ind[1]=j
   res2=push(i+1,n,N,ind,v)
   res[[1]]=res[[1]]+res2[[1]]
   res[[2]]=res[[2]]+res2[[2]]
  }
  print(res[[1]])
  print(res[[2]])
  return (res[[1]]/(res[[2]]*n))
 }
 if (i>=n+1) {
  es=sum(v[ind])
  #print(ind)
  res[[1]]=es
  res[[2]]=1
  return (res)
 }
 if (ind[i-1]+1<=N) {
  for (j in seq(ind[i-1]+1,N)) {
   ind[i]=j
   res2=push(i+1,n,N,ind,v)
   res[[1]]=res[[1]]+res2[[1]]
   res[[2]]=res[[2]]+res2[[2]]
  }
 }
 return (res)
}



N=20
p=.2
v=(runif(N)<=p) * 1
ind=rep(0,N)
n=5

push(1,n,N,ind,v) 

Sunday, May 27, 2012

Least squares in C

Just for fun.

I show here how to perform simple regression with least squares.

We implement the closed formulae that result from
$$\min_{\alpha,\,\beta}Q(\alpha,\beta)$$
where
$$Q(\alpha,\beta) = \sum_{i=1}^n\hat{\varepsilon}_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2 $$
We thus compute
$$\frac{\partial}{\partial \beta} Q(\alpha,\beta)$$
and
$$\frac{\partial}{\partial \alpha} Q(\alpha,\beta)$$
Which results in
 $$\hat\beta  = \frac{ \sum_{i=1}^{n} (x_{i}-\bar{x})(y_{i}-\bar{y}) }{ \sum_{i=1}^{n} (x_{i}-\bar{x})^2 }
             = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 }
              = \frac{ \operatorname{Cov}[x,y] }{ \operatorname{Var}[x] }
$$

$$\hat\alpha  = \bar{y} - \hat\beta\,\bar{x}$$

So here is the code:

#include "stdafx.h"
#include <stdlib.h>

double sum(double *x, int n) {
	double s=0;

	for (int i=0; i<n; i++) {
		s+=x[i];
	}

	return s;
}


double dot(double *x, double *y, int n) {
	double s=0;

	for (int i=0; i<n; i++) {
		s+=x[i]*y[i];
	}

	return s;
}

void ols(double *x, double *y, int n, double *beta, double *alpha) {
	double sx = sum(x,n);
	double sy = sum(y,n);
	double sx2 = dot(x,x,n);
	double sxy = dot(x,y,n);
	double sxx = sx*sx;

	*beta = (sxy-sx*sy/n) / (sx2-sxx/n);
	*alpha = sy/n - *beta*sx/n;
}
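
A small usage example (my own addition; it assumes it is appended to the file above so that ols is in scope) that fits a line to a few points:

#include <stdio.h>

int main(void)
{
	/* points roughly on the line y = 1 + 2x */
	double x[] = {0, 1, 2, 3, 4};
	double y[] = {1.1, 2.9, 5.2, 7.0, 8.9};
	double alpha, beta;

	ols(x, y, 5, &beta, &alpha);
	printf("y = %f + %f x\n", alpha, beta);
	return 0;
}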

And you can download the Visual C++ project here.

Sunday, May 20, 2012

Brainteaser with Montecarlo simulation

Here's a brain teaser.
A man speaks the truth 3 out of 4 times. He throws a die and reports it to be a 6. What is the probability of it being a 6?
Actually, what we are told is that the man reported a 6. So the probability we are interested in is $P(B|A)$, where the events are A: "the man reports a 6" and B: "he actually got a 6". This conditional probability is $P(B|A) = \frac{P(A,B)}{P(A)}$.

We work under the assumption that the die roll and the decision to lie are independent, so $P(A,B)=P(\text{truth})P(6)=\frac{3}{4}\cdot\frac{1}{6} = \frac{3}{24}=\frac{1}{8}$.

We decompose the event A into its intersection with B and with the complement of B (defined as "he did not get a 6"), so that $P(A)=P(A,B) + P(A,\bar{B})$. We already have the first term; assuming that lying means reporting a 6 when it is not one, $P(A,\bar{B})=\frac{1}{4}\cdot\frac{5}{6}=\frac{5}{24}$, making $P(A)=\frac{3}{24} + \frac{5}{24} = \frac{8}{24} = \frac{1}{3}$.

Therefore, $P(B|A) = \frac{P(A,B)}{P(A)} = \frac{1/8}{1/3}=\frac{3}{8} = 0.375$
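
The same computation, written as a single application of Bayes' rule (with A and B as above, and still assuming that a lie always reports a 6):
$$
P(B\mid A) = \frac{P(A\mid B)P(B)}{P(A\mid B)P(B) + P(A\mid \bar{B})P(\bar{B})} = \frac{\frac{3}{4}\cdot\frac{1}{6}}{\frac{3}{4}\cdot\frac{1}{6} + \frac{1}{4}\cdot\frac{5}{6}} = \frac{3}{8}
$$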


Now we simulate the events in R. Note that we generate whether the man tells the truth independently of whether he gets a six. We then count, in $num$, the times a real 6 coincides with the man telling the truth. In the denominator $den$ we count that amount plus the number of times he says 6 without having rolled a 6. This means that, among all the times the man reports a 6 (truthfully or not), we select only those where it is true.

N=100000

dice=sample(1:6,N,replace=T)
lie=rep(0,N*3/4)
lie=c(lie, rep(1,N*1/4))
lie=lie[sample(1:N,replace=F)]

num=sum((dice==6)*(lie==0))
den=sum((dice==6)*(lie==0)) + sum((dice!=6)*(lie==1))

res=num/den

res
> res
[1] 0.3739862
Note that the output is very close to the theoretical result of 0.375.

Friday, May 18, 2012

Batch-processing data files in a directory tree with Matlab

You may find the following code useful to process data with Matlab in a batch fashion. This example walks a directory listing and processes the PNG files whose names do not contain "nois" and do contain "gif_" followed by a sequence number s.

files = dir;            % list the directory to process (assumed setup)
N = length(files);
s = 1;                  % sequence number used in the file-name filter (assumed)

for i=1:N
    name=files(i).name;
    if (files(i).isdir == 0) && strcmp(name(end-3:end),'.png') && ...
           isempty(strfind(name,'nois')) && ...
           ~isempty(strfind(name,['gif_' num2str(s)]))
 
           im=double(imread(name)); 
 
           [your processing here]
 
    end
end

Thursday, May 10, 2012

Conformal mapping in shape representation

If a shape is considered to be a simple closed smooth curve in the complex plane, meaning that it has derivatives of all orders and does not cross itself, then the Riemann mapping theorem states that it is possible to map the unit disc conformally (meaning that the infinitesimal angle between any two crossing curves equals the infinitesimal angle between the transformed curves) onto the interior of any such shape. The conformal transformation is unique up to a preceding Möbius transformation mapping the unit disc to itself, that is, a map of the form
$$z  \rightarrow \frac{az + b}{\bar{b}z + \bar{a}}$$

If we identify $\nabla_{-}$ with the interior of the shape and $\Gamma_{-}$ with the interior of the unit disk, then by the Riemann mapping theorem there exists a map
$$\Phi_{-} : \nabla_{-} \rightarrow \Gamma_{-}$$

with the previous characteristics. Furthermore, if we identify $\nabla_{+}$ with the exterior of the shape and $\Gamma_{+}$ with the exterior of the unit disk, the same conditions apply:
$$\Phi_{+} : \nabla_{+} \rightarrow \Gamma_{+}$$

Now, we construct
$$\Psi =  \Phi_{+} \circ \Phi_{-}^{-1}$$

This provides a fingerprint of the shape, unique up to a Möbius transformation.

In practice, we use the Schwarz-Christoffel Matlab toolbox. Having a complex vector with the given vertex coordinates in $dots$, we compute the map $m$ by
pol=polygon(dots);
fc=diskmap(pol);
ex=extermap(pol);
iex=scmapinv(ex);
m=composite(fc,iex);


To extract the fingerprint, we need to evaluate the map at the $z$ coordinate given by the angle, so that $z=e^{-i \theta}$, and the evaluation is $m(z)$ for all $\theta \in [0,2\pi)$. The fingerprint is the angle of the evaluation $m(z)$, that is, $\hat{\theta}=-\frac{\log(m(z))}{i}$. Plotting $(\theta,\hat{\theta})$, we see the fingerprint.

Check out the paper here.

Tuesday, May 1, 2012

Denoising by convolution using CuFFT

In this post we will create a CUDA program that removes some noise with simple isotropic diffusion. We are going to diffuse the function (image) according to the heat equation
$$u_t = \Delta u$$
where $\Delta$ is the Laplacian operator and $u_t$ is the derivative with respect to time. The solution at time $t$ is
$$u(x) = \int G_t (x-y) u_0(y) dy$$
where $x,y\in \mathbb{R}^2$, $u_0$ is the initial noisy image and $G_t$ is the bivariate (normalized) Gaussian kernel whose variance grows with $t$.

Since a convolution is equivalent to a multiplication of the Fourier transforms, as below (by the convolution theorem, transforming back at the end), we can use the CuFFT library to perform these steps.
$$\hat{u}(\omega) = \hat{G_t}(\omega)\hat{u_o}(\omega)$$

With this project, I want to kill two birds with one stone. I want to make a managed-code application with Windows Forms that reads and visualizes a JPG file, adds noise, and then blurs it to remove some of that noise.

Now, you must know that you cannot mix managed code and CUDA. The solution is to separate the CUDA computation and the presentation application into two different parts. We will encapsulate the CUDA computations into a DLL and the Windows program into a Windows Forms application. Both will be part of the same Visual Studio solution so everything works in the final step. The Windows Forms project will be the main project within the VS solution.

Under the DLL project, you must set up CUDA libraries. For the compiler to look for CUDA and CUDAutils header files, we must add
$(NVSDKCOMPUTE_ROOT)\C\common\inc;"$(CUDA_PATH)/include";./;../../common/inc;../../../shared/inc
to  project properties -> C/C++ -> General. Then we must add the directory where additional support header files can be found, in this case,
$(NVSDKCOMPUTE_ROOT)\C\common\inc
. This must be done under project properties -> CUDA Runtime API -> Additional Include Directories. In project properties ->Linker -> Additional library directories you must add
 $(CUDA_PATH)/lib/$(PlatformName);../../common/lib/$(PlatformName);$(NVSDKCOMPUTE_ROOT)\C\common\lib\Win32

Then our DLL function takes a matrix of values and performs the Fourier transform of both the (previously padded) input data and the kernel. We can precompute the kernel with Matlab (with fspecial) and copy the values. In this case, for a kernel with $t=1$, we store the values in h_Kernel. Then we perform the multiplication and compute the inverse Fourier transform. This code excerpt contains the important lines:

DLL int blur(float *h_Data, int dataH, int dataW, float **d, int *dH, int *dW) {
    float
        *d_Data,
        *d_PaddedData,
        *d_Kernel,
        *d_PaddedKernel;

    fComplex
        *d_DataSpectrum,
        *d_KernelSpectrum;

    cufftHandle
        fftPlanFwd,
        fftPlanInv;

    const int kernelH = 7;
    const int kernelW = 6;
    ...

    float *h_ResultGPU = (float *)malloc(fftH    * fftW * sizeof(float));

    cutilSafeCall( cudaMalloc((void **)&d_Data,   dataH   * dataW   * sizeof(float)) );
    ...

    float h_Kernel[] = {0.0001F,0.0006F,0.0016F,0.0016F,0.0006F,0.0001F,
        0.0009F,0.0070F,0.0190F,0.0190F,0.0070F,0.0009F,
        0.0043F,0.0314F,0.0854F,0.0854F,0.0314F,0.0043F,
        0.0070F,0.0518F,0.1407F,0.1407F,0.0518F,0.0070F,
        0.0043F,0.0314F,0.0854F,0.0854F,0.0314F,0.0043F,
        0.0009F,0.0070F,0.0190F,0.0190F,0.0070F,0.0009F,
        0.0001F,0.0006F,0.0016F,0.0016F,0.0006F,0.0001F};

    ...

    cufftSafeCall( cufftExecR2C(fftPlanFwd, 

         (cufftReal *)d_PaddedKernel, (cufftComplex *)d_KernelSpectrum) );

    cutilSafeCall( cutilDeviceSynchronize() );
   
    cufftSafeCall( cufftExecR2C(fftPlanFwd, 

         (cufftReal *)d_PaddedData, (cufftComplex *)d_DataSpectrum) );
    modulateAndNormalize(d_DataSpectrum, d_KernelSpectrum, fftH, fftW, 1);
    cufftSafeCall( cufftExecC2R(fftPlanInv, 

         (cufftComplex *)d_DataSpectrum, (cufftReal *)d_PaddedData) );

    cutilSafeCall( cutilDeviceSynchronize() );

    cutilSafeCall( cudaMemcpy(h_ResultGPU, 

         d_PaddedData, fftH * fftW * sizeof(float), cudaMemcpyDeviceToHost) );
    ...

    *d=h_ResultGPU;
    *dH=fftH;
    *dW=fftW;

    return 1;
}


I have highlighted the lines where the forward Fourier transforms, the multiplication and the inverse Fourier transform (which produces the blur) take place.

In the application, we must have a way to call the DLL function. This code loads the DLL, gets a function pointer and casts it to the DLL function's signature.
private: int CallMyDLL(float *f, int h, int w, float **d, int *dh, int *dw)
             {
                 HINSTANCE hGetProcIDDLL = LoadLibrary(L"cudaint.dll");
                 FARPROC lpfnGetProcessID = 

                      GetProcAddress(HMODULE (hGetProcIDDLL),"blur");
                 typedef int (__stdcall * pICFUNC)

                          (float *, int, int, float**, int *, int*);

                 pICFUNC blur;
                 blur = pICFUNC(lpfnGetProcessID);

                 int MyReturnVal = blur(f, h, w, d, dh, dw);

                 FreeLibrary(hGetProcIDDLL);

                 return MyReturnVal;
             }


Then we can call it with the pixels as a linearized matrix and the computations will be made on the GPU. Conversions between .NET objects and float matrices are straightforward.
  
I made the source code available here.

Monday, April 30, 2012

Miscellaneous errors with Visual C++

If you compile the project and receive the following error:
fatal error LNK1104: cannot open file 'C:\Program.obj'
this particular issue is caused by specifying a dependency on a .lib file whose path contains spaces. The path needs to be surrounded by quotes for the project to compile correctly.

On the Configuration Properties -> Linker -> Input tab of the project’s properties, there is an Additional Dependencies property. This issue was fixed by changing this property from:

C:\Program Files\software sdk\lib\library.lib
to:
"C:\Program Files\software sdk\lib\library.lib"

Thursday, April 19, 2012

Shape recognition and shape spaces in computer vision

One of the problems in computer vision and pattern recognition is that of shape recognition (classification, clustering and retrieval). It is important in statistical analysis of medical images (shape outliers might indicate disease)  and machine vision (digital recording and analysis based on planar views of 3D objects).

Classical statistical shape analysis is due to Kendall, Bookstein and Mardia. This classical analysis treats shapes as points in shape spaces, whose geometries are those of differentiable manifolds, often with appropriate Riemannian structures.

Imagine we are given a set of variables in which each variable is the position of a prearranged landmark (an interesting point carrying a load of information) and our setting is 2D. Then we can model the points as complex numbers: each point can be written as $p_i=x_i + iy_i$ where $i=\sqrt{-1}$. A k-ad $p=\{p_i\}_{i=1}^k$ is a set of $k$ shape landmarks.

We now center these points so as to make them translation invariant.
$$w_T=p-\bar{p}$$
where $\bar{p} \in \mathbb{C}^k$ is a vector with all elements set to the $k$-ad mean, that is, $\bar{p}_j=\frac{1}{k}\sum_{i=1}^{k}{p_i}$.

Now we make them scale invariant by dividing each element of $w_T \in \mathbb{C}^k$ by the norm of $w_T$, that is, $w_S = \frac{w_T}{\|w_T\|}$.

Finally given the rotation matrix with angle $\theta$
$$\left[\begin{array}{cc}cos(\theta) & -sin(\theta) \\ sin(\theta) & cos(\theta)\end{array}\right]$$
one can rotate the centered shape as $Rw_S$ (had we modeled it in $\mathbb{R}^{2\times k}$); since we are in the complex domain, the rotation is done by multiplying each element by $e^{i\theta}$. Now, this is when it gets interesting and useful (and where I see the true magic). To make our representation rotation invariant, we can use the Veronese-Whitney embedding, so that the rotation-invariant representation is the rank-one matrix $w_R=w_S w_S^*$, where $\cdot^*$ denotes the conjugate transpose. This means that, by what we have seen before, any rotation $e^{i\theta} w_S$ of $w_S$ generates
$$e^{i\theta}w_S e^{-i\theta}w_S^* = w_S w_S^*$$
since $e^{-i\theta}$ is the complex conjugate of the rotation (meaning that we undo the rotation by rotating by $-\theta$). This is readily seen by the (complex) exponential arithmetic $e^{i\theta}e^{-i\theta} = e^{i(\theta - \theta)} = 1$.

This space is that of the complex Hermitian matrices, and we can define the norm of a shape and the distance between two shapes by
$$\|A\| = \sqrt{\operatorname{trace}(AA^*)}, \qquad \rho(A,B) = \|A-B\|$$
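To make the whole pipeline concrete, here is a minimal sketch (not from the original post) that centers and scales two $k$-ads, builds their rank-one embeddings and measures the distance above with the Frobenius norm. It uses std::complex instead of uBLAS to keep it short; the second $k$-ad is a rotated and translated copy of the first, so the reported distance should be close to zero:

#include <complex>
#include <vector>
#include <cmath>
#include <iostream>

typedef std::complex<double> cplx;

// Center and scale a k-ad: w_S = (p - mean(p)) / ||p - mean(p)||
std::vector<cplx> preshape(std::vector<cplx> p)
{
    cplx mean(0.0, 0.0);
    for (size_t i = 0; i < p.size(); ++i) mean += p[i];
    mean /= (double)p.size();
    double n = 0.0;
    for (size_t i = 0; i < p.size(); ++i) { p[i] -= mean; n += std::norm(p[i]); }
    n = std::sqrt(n);
    for (size_t i = 0; i < p.size(); ++i) p[i] /= n;
    return p;
}

// Frobenius distance between the rank-one embeddings A = u u*, B = v v*
double shapeDistance(const std::vector<cplx> &u, const std::vector<cplx> &v)
{
    double d = 0.0;
    for (size_t i = 0; i < u.size(); ++i)
        for (size_t j = 0; j < u.size(); ++j)
            d += std::norm(u[i] * std::conj(u[j]) - v[i] * std::conj(v[j]));
    return std::sqrt(d);
}

int main()
{
    std::vector<cplx> p, q;
    p.push_back(cplx(0, 0)); p.push_back(cplx(1, 0)); p.push_back(cplx(0, 1));
    // q is p rotated by 30 degrees and translated: same shape, distance ~ 0
    double theta = 3.14159265358979 / 6.0;
    for (size_t i = 0; i < p.size(); ++i)
        q.push_back(p[i] * std::exp(cplx(0.0, theta)) + cplx(2.0, 3.0));
    std::cout << shapeDistance(preshape(p), preshape(q)) << std::endl;
    return 0;
}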

Now we can compute the mean shape of populations and do some inference. For example, we can compare populations of known healthy livers against livers with disease, and obtain confidence intervals so as to accept or reject the null hypothesis given a new liver shape.

We see that, modeling in $\mathbb{R}^{2\times k}$ and using the matrix $R$, we achieve the same result, since $RR^T = I$:
$$wR(wR)^T = wRR^T w^T = ww^T$$

Check out: http://stat.duke.edu/~ab216/duke-talk.pdf

Setting up CUDA in Windows and Visual C++ 2008, Addendum II

After everything is installed, you will need to set additional options. Credit for this goes to the user Tom on Stack Overflow. I will reproduce here the steps to configure CUDA for newer Toolkit versions; users interested in older ones should check the link at the end of the post.

In addition to the previous CUDA posts, where you can learn how to install the required tools and make Visual C++ understand what you are using, you need to:
  • Add the NvCudaRuntimeApi.rules to the project file (right click on the project, Custom Build Rules, tick the relevant box).
  • Add the CUDA runtime library (right click on the project and choose Properties, then in Linker -> General add $(CUDA_PATH)\lib\$(PlatformName) to the Additional Library Directories and in Linker -> Input add cudart.lib to the Additional Dependencies).
  • Optionally add the CUDA include files to the search path, required if you include any CUDA files in your .cpp files (as opposed to .cu files) (right click on the project and choose Properties, then in C/C++ -> General add $(CUDA_PATH)\include to the Additional Include Directories).
  • Then just build your project and the .cu files will be compiled to .obj and added to the link automatically.
  • Change the code generation to use statically loaded C runtime to match the CUDA runtime; right click on the project and choose Properties, then in C/C++ -> Code Generation change the Runtime Library to /MT (or /MTd for debug, in which case you will need to mirror this in Runtime API -> Host -> Runtime Library). You might get error LNK4098 otherwise.
Error tips: Having mismatched versions of the C runtime can cause a variety of problems; in particular, if you see any errors regarding LIBCMT (e.g. LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs) or multiply defined symbols for standard library functions, then this should be your first suspect. You might also see LNK2001 or LNK2005 when there is a conflict with the C++ runtime; a possible workaround is to use C functions.

If you have Visual Studio 2010, here are the details for you:
http://stackoverflow.com/questions/3778799/how-do-i-start-a-cuda-app-in-visual-studio-2010/7285235#7285235

For older versions of CUDA:
http://stackoverflow.com/questions/2046228/how-do-i-start-a-new-cuda-project-in-visual-studio-2008

Monday, April 16, 2012

Setting up CUDA in Windows and Visual C++ Express 2008 Addendum

With CUDA Toolkit 4.x, things are slightly different.

Syntax highlighting

The necessary files are under
<CUDA_SDK>\C\doc\syntax_highlighting

Building

The following procedure is credited to the user sdtougao from NVIDIA forums. This is what you need to build your projects correctly.
  • Copy the rules files from <NVIDIA GPU Computing Toolkit>\CUDA\v4.0\extras\visual_studio_integration\rules to the VS2008 installation directory, <Microsoft Visual Studio 9.0>\VC\VCProjectDefaults.
  • Open an SDK project in VS2008, for example clock.sln, open "Custom Build Rules" and select "NvCudaRuntimeApi.rules".
I didn't even need to do the second step and the SDK examples compile without errors. I can't tell whether that is related to my previous installation, though.

Intellisense

A registry key file is provided so that double-clicking on it adds the necessary keys to the registry without any extra effort.

Sunday, April 8, 2012

Setting up CUDA on Windows and Visual C++ Express 2008

Assuming we already have Visual C++ 2008 Express installed, we need to perform the following steps:
  • Download and install the latest nVidia driver with CUDA that is compatible with the system’s graphics card, then reboot your system so that the driver loads into memory. Note that nVidia publishes separate series of drivers for desktop and notebook cards (the latter with some time lag).
  • Download and install both the CUDA Toolkit and the CUDA SDK. These must be the same version as the driver above. If the examples in the bin directory of the SDK do not work, you probably have the wrong version of the driver (and of the whole package).

Syntax highlight and Intellisense

Since the extension .cu is not recognized as C code by Visual C++, it will not highlight the language’s reserved words. nVidia supplies a usertype.dat file under “<CUDA SDK install dir>\doc\syntax_highlighting\visual_studio_8”.
  • Copy this file to “\Program Files\Microsoft Visual Studio 9.0\Common7\IDE”. Now open Visual Studio and navigate to Tools -> Options -> Text Editor -> File Extension. In the extension box type “cu” and make sure you select “Microsoft Visual C++”.
For Intellisense, the process is not as automatic as the previous ones. It involves dealing with internal Visual C++ registry settings:
  • On Windows 7, open the registry editor and navigate to “HKEY_USERS\<user code>\Software\Microsoft\VCExpress\9.0\Languages\Language Services\C/C++”, then go to the string entry “NCB Default C/C++ Extensions” and add “;.cu;.cuh” to the end of the list. The <user code> is machine-specific, so the best thing to do is search for “NCB Default C/C++ Extensions” in the registry editor. Restart Visual Studio and Intellisense will work for your CUDA files. CUDA-specific keywords will not be recognized, but all the normal C keywords will benefit.

Building

nVidia supplies a rules file which you will find under the “<CUDA SDK install dir>\common” directory.
  • Select it under Project -> Custom Build Rules -> Find Existing. This will allow the IDE to find the nvcc compiler for the .cu files.
After Visual C++ instructs nvcc to compile .cu files, it links the object files (with the native compiler) into an executable file. Inspect the following output:
Debug Win32 ------
1>Compiling with CUDA Build Rule...
1>"D:\my\CUDA\bin\nvcc.exe"    -arch sm_10 -ccbin "C:\Program Files\Microsoft Visual Studio
9.0\VC\bin"  -use_fast_math  -Xcompiler "/EHsc /W3 /nologo /O2 /Zi   /MT  " -
I"D:\my\CUDA_SDK\common\inc" -maxrregcount=32  --compile -o "Debug\kernel.cu.obj"
"c:\Users\Gabo\Documents\Visual Studio 2008\Projects\more\cuda\kernel.cu" 
1>kernel.cu
1>tmpxft_00001314_00000000-3_kernel.cudafe1.gpu
1>tmpxft_00001314_00000000-8_kernel.cudafe2.gpu
1>tmpxft_00001314_00000000-3_kernel.cudafe1.cpp
1>tmpxft_00001314_00000000-13_kernel.ii
1>Compiling...
1>main.cpp
1>math.cpp
1>Building code...
1>Linking...
1>main.obj : warning LNK4075: '/EDITANDCONTINUE' is ignored due to the '/INCREMENTAL:NO' specification
1>Embedding manifest...
1>The build log was saved at "file://c:\Users\Gabo\Documents\Visual Studio 2008\Projects\more\cuda\Debug\BuildLog.htm"
1>cuda - 0 errors, 3 warnings
Notice the compilation of the .cu files (with nvcc), then of the .cpp files (with the Visual C++ compiler) and, last, the linking process.
Remark: nVidia’s compiler, nvcc, treats the objects it finds in .cu files differently. If it finds a kernel, it compiles it into PTX instructions and stores them in a data section of the object file. If it finds a plain C function, it compiles it as any C compiler would, storing native hardware instructions in the executable text section of the object file.
Remark: Since the device code is executed on the GPU, the kernels live as data in the host code and are copied to device memory in an I/O operation initiated by the code nvcc emits where the C function in the .cu file invokes the kernel. This is explained later.
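As an illustration of this split, here is a minimal sketch (hypothetical file and function names, not from the original post) of a .cu file containing both a kernel and a plain C wrapper that launches it, plus host code that only ever sees the wrapper:

// ---- kernel.cu (compiled by nvcc) ----
__global__ void addOne(float *d, int n)          // compiled to PTX, stored as data
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

extern "C" void runAddOne(float *h, int n)       // plain C function, native code
{
    float *d = 0;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    addOne<<<(n + 255) / 256, 256>>>(d, n);      // launch copies the kernel to the device and runs it
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
}

// ---- main.cpp (compiled by the Visual C++ compiler) ----
// extern "C" void runAddOne(float *h, int n);
// int main() { float v[4] = {0, 1, 2, 3}; runAddOne(v, 4); return 0; }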

Linking difficulties

For a seamless compilation, the best thing to do is to copy an existing CUDA project from the SDK to a separate folder, including the libraries (from the “common” directory), rename and redistribute everything, and start from there.

However, if an output like this is experienced:
1>MSVCRTD.lib(MSVCR90D.dll) : error LNK2005: _free already defined in LIBCMT.lib(free.obj)
1>MSVCRTD.lib(MSVCR90D.dll) : error LNK2005: _malloc already defined in LIBCMT.lib(malloc.obj)
1>MSVCRTD.lib(MSVCR90D.dll) : error LNK2005: _printf already defined in LIBCMT.lib(printf.obj)
1>MSVCRTD.lib(ti_inst.obj) : error LNK2005: "private: __thiscall type_info::type_info(class type_info const &)" (??0type_info@@AAE@ABV0@@Z) already defined in LIBCMT.lib(typinfo.obj)
1>MSVCRTD.lib(ti_inst.obj) : error LNK2005: "private: class type_info & __thiscall type_info::operator=(class type_info const &)" (??4type_info@@AAEAAV0@ABV0@@Z) already defined in LIBCMT.lib(typinfo.obj)
the fix is to add /NODEFAULTLIB:LIBCMT.lib to the Additional Options textbox under “Project -> Configuration Properties -> Linker -> Command Line”.
Since the .cu files are compiled by nvcc and linked against the nVidia libraries (according to the supplied build rules), while a general Visual C++ project links against the standard set of libraries, some of those libraries conflict with nVidia’s. The previous fix avoids using the conflicting standard libraries.