This SIGIR 2006 poster shows that learning a ranking function, given pairwise preferences over a known set of document ids, can be formalized as a specialized probit regression problem. The idea comes from this 2003 paper, which also explained how to solve this specialized probit regression problem using existing statistical software. (Though, in the case of Web page ranking, I guess no existing statistical software can be employed.)
For each query, the relevance of document $i$ is denoted by $r_i$, and given some pairs $(i, j)$ in a preference set $\mathcal{P}$, each meaning document $i$ is preferred over document $j$, the model posits $P(i \succ j) = F(r_i - r_j)$ for some cumulative distribution function $F$.

We can estimate $r = (r_1, \ldots, r_n)$ by maximizing the likelihood function

$$L(r) = \prod_{(i,j) \in \mathcal{P}} F(r_i - r_j).$$

If we assume that the relevance weights are distributed normally with mean $r_i$ and variance $\sigma^2$, then the cumulative density function becomes $\Phi$, the normal cumulative distribution, and the likelihood becomes:

$$L(r) = \prod_{(i,j) \in \mathcal{P}} \Phi\!\left(\frac{r_i - r_j}{\sqrt{2}\,\sigma}\right).$$
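As a concrete illustration of the maximum-likelihood step, here is a minimal sketch, not the poster's actual implementation: it fits per-document relevance scores from toy preference pairs by maximizing the sum of log-probit terms with an off-the-shelf optimizer. The document count, preference pairs, $\sigma = 1$, and the small L2 penalty are all my own assumptions.

```python
# Sketch of the probit preference model: estimate relevance scores r by
# maximizing sum of log Phi((r_i - r_j) / (sqrt(2) * sigma)) over pairs
# (i, j) where document i was preferred over document j.
# All concrete numbers here are illustrative assumptions, not from the poster.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_relevance(n_docs, prefs, sigma=1.0):
    """prefs: list of (i, j) pairs meaning document i beat document j."""
    prefs = np.asarray(prefs)
    winners, losers = prefs[:, 0], prefs[:, 1]

    def objective(r):
        diffs = (r[winners] - r[losers]) / (np.sqrt(2) * sigma)
        # Negative log-likelihood, plus a tiny L2 penalty: scores are only
        # identified up to a constant shift, so this pins down the optimum.
        return -np.sum(norm.logcdf(diffs)) + 1e-3 * np.dot(r, r)

    res = minimize(objective, np.zeros(n_docs), method="L-BFGS-B")
    return res.x

# Toy example: doc 0 beats doc 1, doc 1 beats doc 2, doc 0 beats doc 2.
scores = fit_relevance(3, [(0, 1), (1, 2), (0, 2)])
print(scores)
```

With consistent preferences like these, the fitted scores come out ordered the same way as the observed wins, which is exactly what maximizing the probit likelihood should produce.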
However, my point is that this method is hard to implement in a search engine: because it works directly with document ids and clicks on documents, there is a huge number of parameters (equal to the number of query-document pairs), and, relative to that, far too little training data.