# A Dynamic Bayesian Network Click Model for Web Search Ranking


## Revision as of 19:08, 8 November 2011

One of the most common click models in Web search, known as the *position model*, is based on the position bias of the displayed ranked results. Under this model, it is assumed that the chance of a click decreases towards the lower ranks on result pages due to the reduced visual attention from the user. A more recent click model, referred to as the *cascade model* of user behaviour, assumes that the user scans search results from top to bottom and eventually stops because either their information need is satisfied or their patience is exhausted.
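The two baseline models above can be sketched as simple click-probability computations. This is an illustrative sketch, not code from the paper; the relevance values and function names are hypothetical.

```python
# Illustrative sketch of the two baseline click models (not from the paper).

def position_model_ctr(rels, exam_probs):
    """Position model: P(C_i = 1) = P(E_i = 1) * r_i, where the
    examination probability depends only on the position."""
    return [e * r for e, r in zip(exam_probs, rels)]

def cascade_model_ctr(rels):
    """Cascade model: the user scans top-down and clicks at most once,
    so P(C_i = 1) = r_i * prod_{j < i} (1 - r_j)."""
    ctrs, reach = [], 1.0
    for r in rels:
        ctrs.append(reach * r)
        reach *= 1.0 - r  # the user reaches position i+1 only if no click so far
    return ctrs
```

For example, `cascade_model_ctr([0.6, 0.4, 0.2])` discounts each later position by the probability that an earlier document was already clicked, which is exactly the dependence on previous documents that the position model lacks.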

The benefit of the cascade model over the position model is its ability to explain clicks with respect to the relevance of the preceding documents; as a result, the latter model has shown state-of-the-art performance over the former. However, the cascade model makes the strong assumption that there is only one click per search; hence, it cannot explain abandoned searches or searches with multiple clicks. Moreover, neither of these models distinguishes the perceived relevance from the actual relevance. The perceived relevance is the relevance of a document as judged by the user from how it is presented on the result page; the actual relevance is the relevance of the document as judged by the user after clicking on it and seeing its content.

A Dynamic Bayesian Network (DBN) model is proposed in this paper in order to study the user's browsing and click behaviour, and eventually to infer the relevance of the documents. The proposed model addresses the issues with the above models through the following assumptions about the user's click and browsing behaviour:

- The user makes a linear traversal through the results and decides whether to click based on the perceived relevance of the document.
- The user chooses to examine the next document if they are unsatisfied with the clicked document (based on the actual relevance).
- A click does not necessarily mean that the user is satisfied with the clicked document. With respect to this, the proposed model attempts to distinguish the perceived relevance and the actual relevance.
- There is no limit on the number of clicks that a user can make during a search.
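The browsing assumptions above can be sketched as a small simulation. This is a sketch of the listed assumptions, not the paper's code; the parameter names `attract` and `satisfy` are hypothetical stand-ins for per-document perceived and actual relevance.

```python
import random

def simulate_session(attract, satisfy, rng=random.random):
    """Simulate one query session under the assumptions listed above.
    attract[i]: probability the user clicks position i when examining it
    satisfy[i]: probability a click at position i satisfies the user
    (hypothetical parameters; illustrative only)."""
    clicks = []
    examining = True                  # the user always examines the top result
    for a, s in zip(attract, satisfy):
        if not examining:             # a satisfied user stops scanning
            clicks.append(0)
            continue
        clicked = rng() < a           # click iff attracted (perceived relevance)
        clicks.append(int(clicked))
        if clicked and rng() < s:     # satisfaction (actual relevance) ends the session
            examining = False
    return clicks
```

Note that a session can contain several clicks: a click that does not satisfy the user leads to further examination, matching the last two assumptions.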

The documents ranked on a result list of a given query are presented as a sequence in the DBN. For a given position [math]i[/math], there is an observed variable [math]C_i[/math] indicating whether there was a click at position [math]i[/math]. There are three hidden binary variables defined for each position [math]i[/math] as follows:

- [math]E_i[/math]: whether the user examined the document at position [math]i[/math].
- [math]A_i[/math]: whether the user was attracted by the document at position [math]i[/math] (i.e. perceived relevance).
- [math]S_i[/math]: whether the user was satisfied by the document at position [math]i[/math] (i.e. actual relevance).

These variables model examination, perceived relevance, and actual relevance, respectively. The Expectation-Maximization algorithm is used to find the maximum-likelihood estimates of the perceived-relevance and actual-relevance variables, and the forward-backward algorithm is used to compute the posterior probabilities of the remaining hidden variables.
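Under these definitions the click marginals admit a simple forward recursion. The sketch below is a simplification, not the paper's full inference: it assumes an unsatisfied user always continues to the next position, whereas the actual model is fit with EM and forward-backward as described above.

```python
def click_marginals(attract, satisfy):
    """Forward pass for P(C_i = 1) under the simplified chain
        P(C_i = 1)     = P(E_i = 1) * a_i
        P(E_{i+1} = 1) = P(E_i = 1) * (1 - a_i * s_i),
    assuming the user always continues unless satisfied by a click.
    `attract` and `satisfy` are hypothetical per-position parameters."""
    p_exam, out = 1.0, []             # the top result is always examined
    for a, s in zip(attract, satisfy):
        out.append(p_exam * a)        # a click needs examination and attraction
        p_exam *= 1.0 - a * s         # scanning stops only after a satisfying click
    return out
```

This recursion shows why observed click rates decay with rank even when lower-ranked documents are relevant: the examination probability shrinks at every position where a satisfying click could have occurred.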

Three types of experiments are conducted in the paper to validate the DBN model and to compare it with the existing models. First, the authors evaluate the click model in terms of the predicted click-through rate at position 1. Second, they use the predicted relevance as a feature in a ranking function. In the last set of experiments, they use the predicted relevance as supplementary information to train a ranking function.

The empirical results from experiments on the click logs of a commercial search engine indicate that the DBN model can accurately explain the observed clicks. They show that a ranking function learned with the predicted relevance comes close to the quality of a function trained with a large amount of editorial data, and that combining both types of information can lead to an even more accurate ranking function.