SIGIR 2010 Paper Reading


In this post, I would like to talk about several interesting papers from SIGIR 2010. Note that this only reflects my own view of this scientific work and is not necessarily correct or thorough.

  • On Statistical Analysis and Optimization of Information Retrieval Effectiveness Metrics
    This paper is more theoretical than practical. The main contribution is the argument that the optimal ranking problem should be factorized into two distinct yet interrelated stages: a relevance prediction stage and a ranking decision stage. The paper shows that a number of IR metrics (e.g., Average Precision, DCG, Reciprocal Rank) can be decomposed into these two stages, so the overall strategy is to directly optimize the decomposed metrics. The authors show improved performance over simple language models. However, the paper does not compare against Learning to Rank techniques, where these metrics are also optimized. In all, this is an interesting paper for those who really work on ad-hoc retrieval (a minimal sketch of the two-stage idea appears after this list).
  • Evaluating and Predicting Answer Quality in Community QA
    This paper is straightforward. The authors want to predict the best answers (in their words, “answer quality”) on Community QA sites. They first used a number of subjective features obtained from Amazon Mechanical Turk and found it difficult to do so. Then they used a set of automatically extracted features (mostly meta-information features) and showed improved performance. The work is simple and indeed related to my work in SIGIR 2009. They still do not answer the question of whether a so-called “best answer” is really the true best answer among all answers to the corresponding question. Moreover, classification approaches are not compared to retrieval-based methods in this paper.
  • Segmentation of Multi-Sentence Questions: Towards Effective Question Retrieval in cQA Services
    This is another paper on QA. The work extends a good deal of previous work. For example, for “question detection”, the authors propose a one-class SVM method to obtain the training dataset (a rough sketch of this step appears after this list). In addition, they propose a graph-based method to segment questions into multiple sub-questions. Overall, the authors show that their method gives a significant boost to question matching and retrieval compared to traditional bag-of-words methods. They also show that sequential pattern mining and syntactic pattern mining can improve the performance of question detection. One thing that is not clear is which retrieval model the authors used in the paper.
  • Multi-Style Language Model for Web Scale Information Retrieval
    This paper is interesting and makes two main points. First, it shows the significant gap between the query language model and the document language model, and demonstrates that the anchor and title language models are much closer to queries. The second point is how to estimate a language model over an open vocabulary, namely an infinite vocabulary. The problem for an open-vocabulary language model is how to assign probability mass to unseen terms and how to adjust the mass of seen terms (a simple illustration appears after this list). The paper presents a simple method with closed-form expressions. This “smoothed” language model is then embedded in a multi-component language model that utilizes multiple fields of a document.
  • Mining the Blogosphere for Top News Stories Identification
    This paper is straightforward and interesting. The problem addressed is to rank news stories for a given day according to the blogosphere; here, the authors treat the “date” as the query. The overall approach falls into the language modeling framework. To estimate how relevant blog posts are to the query date, the authors use a clustering method to group blog posts into topics and estimate the query language model from these clusters. The news headline language model is estimated with standard Dirichlet smoothing, and the distance between the two language models is computed via KL-divergence (a sketch of this scoring step appears after this list). The authors also propose two heuristics to identify the importance of news stories. In all, the paper is well written and well organized. However, it is not clear why the authors did not use a multiple-document representation for a blog instead of a clustering algorithm. In addition, several important parameters are tuned manually, for example, the spread of a news story, which prevents the system from being used in real applications.
  • Serendipitous Recommendations via Innovators
    This paper reveals an interesting yet not heavily explored area in recommender systems: the “surprise” of recommendations. The author argues that a recommender that achieves high accuracy may not help users much, since most recommended items are popular items that users would discover anyway. If a recommender wants to show something really interesting, it should surface items that users may not find without any help. Therefore, the author proposes to use “time” as a measure of the success of a recommendation. However, the algorithm proposed in the paper is not very intuitive. Anyway, I think it is still an interesting paper and worth reading.
  • Temporal Diversity in Recommender Systems
    This paper is simple and easy to follow. Its main point is to demonstrate the temporal dynamics of recommender systems, especially on Netflix. One “obvious” observation is that users lose patience when they see the same recommendations over time. Therefore, the authors argue that diversity should be taken into account by recommenders.
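
To make the first paper’s two-stage view concrete, here is a minimal sketch of the ranking decision stage under expected DCG with binary relevance. The toy probabilities and function names are my own illustration, not the paper’s notation: once a relevance prediction stage outputs per-document probabilities, sorting by those probabilities maximizes expected DCG, since DCG is a sum of fixed position discounts times per-document gains.

```python
import math

def expected_dcg(rel_probs, ranking):
    """Expected DCG of a ranking under independent binary relevance:
    E[DCG] = sum over ranks of P(rel) / log2(rank + 2), with gain 1/0."""
    return sum(rel_probs[doc] / math.log2(rank + 2)
               for rank, doc in enumerate(ranking))

def rank_for_expected_dcg(rel_probs):
    """Ranking decision stage: because expected DCG is a sum of fixed
    position discounts times per-document expected gains, sorting by
    predicted relevance probability is the optimal decision."""
    return sorted(range(len(rel_probs)), key=lambda d: -rel_probs[d])

# Toy output of a relevance prediction stage for five documents.
probs = [0.2, 0.9, 0.5, 0.1, 0.7]
best = rank_for_expected_dcg(probs)
print(best)                       # [1, 4, 2, 0, 3]
print(expected_dcg(probs, best))  # expected DCG of the optimal ranking
```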
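For the segmentation paper, the question detection step could be sketched roughly as below. The features, parameters, and example sentences are my placeholders rather than the paper’s setup; the point is only that a one-class SVM can be trained on positive examples alone (e.g., sentences ending with a question mark) and then used to flag question-like sentences that lack explicit punctuation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import OneClassSVM

# Positive-only training data: sentences known to be questions.
known_questions = [
    "how do I install python on windows",
    "what is the best way to learn java",
    "can anyone recommend a good ir textbook",
    "why does my query return no results",
]
# Sentences without question marks that we want to classify.
candidates = [
    "anyone knows a good ir textbook",
    "I installed python yesterday",
]

vectorizer = CountVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(known_questions)

# A one-class SVM learns the support of the "question" class only.
detector = OneClassSVM(kernel="linear", nu=0.2).fit(X_train)

X_test = vectorizer.transform(candidates)
print(detector.predict(X_test))  # +1 = question-like, -1 = not
```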
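The open-vocabulary issue in the multi-style language model paper can be illustrated with plain absolute discounting, which is my stand-in for illustration and not necessarily the closed-form estimator in the paper: every seen term gives up a fixed discount, and the freed mass is reserved for unseen terms via a background distribution.

```python
from collections import Counter

def discounted_lm(tokens, delta=0.5):
    """Absolute discounting: each of the V seen terms gives up delta
    counts, so mass delta * V / N is reserved for unseen terms."""
    counts = Counter(tokens)
    n = sum(counts.values())
    seen = {w: (c - delta) / n for w, c in counts.items()}
    unseen_mass = delta * len(counts) / n
    return seen, unseen_mass

def prob(word, seen, unseen_mass, background):
    # Seen terms keep their discounted estimate; unseen terms share the
    # reserved mass according to a background model (which should sum
    # to one over the unseen vocabulary for a proper distribution).
    return seen.get(word, unseen_mass * background(word))

seen, mass = discounted_lm("to be or not to be".split())
print(prob("to", seen, mass, lambda w: 1e-6))      # seen term
print(prob("hamlet", seen, mass, lambda w: 1e-6))  # unseen term
```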
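Finally, the scoring step of the blogosphere paper, comparing a query-date language model against a Dirichlet-smoothed headline model via KL-divergence, might look like the sketch below. The value of mu, the flat background model, and the toy data are my assumptions.

```python
import math
from collections import Counter

def dirichlet_lm(doc_tokens, collection_lm, mu=1000):
    """Dirichlet-smoothed document model:
    p(w|d) = (c(w, d) + mu * p(w|C)) / (|d| + mu)."""
    counts = Counter(doc_tokens)
    dlen = sum(counts.values())
    return lambda w: (counts[w] + mu * collection_lm(w)) / (dlen + mu)

def kl_divergence(query_lm, doc_lm):
    """KL(query || doc) over the query's vocabulary; a smaller score
    means the headline model is closer to the query-date model."""
    return sum(p * math.log(p / doc_lm(w)) for w, p in query_lm.items())

# Toy query-date model (in the paper, estimated from blog-post clusters).
query_lm = {"election": 0.5, "vote": 0.3, "poll": 0.2}
background = lambda w: 1e-4  # flat collection model, for illustration only

headline_lm = dirichlet_lm("election vote results announced".split(), background)
print(kl_divergence(query_lm, headline_lm))  # rank headlines by this score
```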
