SIGIR 2012 was held in Portland, Oregon last week. I attended the conference and found it both interesting and enjoyable. In this short post, let me recap the conference and some papers that caught my attention.
This year, one of the main themes at SIGIR, mentioned many times by different people, was how to evaluate IR models effectively. Two tutorials were dedicated to evaluation methods, and many papers touched on the topic as well. The morning tutorial, presented by Don Metzler and Oren Kurland, focused on basic research methodology and how to design a sound experimental setup to validate ideas. The slides are here. Although the material is basic, it remains valuable: experimental design is the most important part of any research paper, and it is easy to get wrong. The afternoon tutorial, presented by Emine Yilmaz, Evangelos Kanoulas and Ben Carterette, was more advanced, dealing with user models and evaluation metrics. More specifically, it gave a clear explanation of nearly all evaluation metrics and their corresponding user models, and went into considerable depth on the topic. One thing I found disappointing is that both tutorials focused on static analysis of evaluation methods and off-line evaluation. It would have been nice to hear more about on-line evaluation methods and about the bridge between the two paradigms.
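To make the metric/user-model pairing concrete, here is a minimal sketch (my own, not taken from the tutorial) of rank-biased precision, a metric whose user model is simply that the user inspects results top-down and moves from one result to the next with a fixed persistence probability p:

```python
# Minimal sketch: rank-biased precision (RBP).
# User model: after inspecting a result, the user continues to the next one
# with persistence probability p, and stops otherwise.

def rbp(relevances, p=0.8):
    """relevances: 0/1 (or graded in [0, 1]) judgments in rank order."""
    return (1.0 - p) * sum(rel * p ** k for k, rel in enumerate(relevances))

# Example: a ranked list where results 1 and 3 are relevant.
print(rbp([1, 0, 1, 0, 0], p=0.8))  # ~0.328
```

Changing p changes the assumed user: a small p models an impatient user who rarely looks past the first few results, a large p a persistent one.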
I really enjoyed the first keynote from this year’s Salton Award winner, Norbert Fuhr, entitled “Information Retrieval as Engineering Science”. The basic argument Norbert wanted to make is that IR should be based on provable and constructive theories, whereas current IR research seems to ignore them almost entirely. He gave a provocative example comparing building bridges with building search engines. For bridges, given knowledge of the span, the location and perhaps other factors (e.g., budget), engineers can build a single bridge with nice engineering properties such as beauty and durability. For search engines, the story is quite different. You probably need to build several search engines or prototypes to get a good sense of which architecture should be used and which features might be relevant. You may even run several systems simultaneously and do A/B testing to decide which one should remain. There is no theory behind any of it. However, Norbert did not offer much of a clue on how to obtain such theories.
In the regular paper sessions, here is an incomplete list of what I found interesting:
- Adaptive Diversification of Recommendation Results via Latent Factor Portfolio. This paper applies portfolio theory to recommender systems. The basic idea is to exploit the mean-variance balance (a small sketch of this idea follows the list).
- Personalized Click Shaping through Lagrangian Duality for Online Recommendation. This work balances several aspects of optimizing a recommendation system. The idea is to formalize the problem as a multi-objective optimization and to express the different criteria as constraints (see the second sketch after the list).
- Friend or Frenemy? Predicting Signed Ties in Social Networks. This work uncovers signed social networks through a fully unsupervised procedure.
- Cognos: Crowdsourcing Search for Topic Experts in Microblogs. This paper shows how to exploit Twitter “lists” to improve the quality of expertise finding.
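For the portfolio paper, here is a hypothetical, heavily simplified sketch of the mean-variance idea: each candidate item has a predicted mean rating and a variance (risk), and a risk-aversion weight b penalizes the variance. The paper works with latent factor models and also accounts for covariance between recommended items, which this sketch omits.

```python
import numpy as np

def mean_variance_rank(means, variances, b=0.5, k=10):
    """Return indices of the top-k items ranked by mean - b * variance."""
    scores = np.asarray(means) - b * np.asarray(variances)
    return np.argsort(-scores)[:k]

means = np.array([4.5, 4.4, 3.9])       # predicted ratings
variances = np.array([1.2, 0.3, 0.1])   # uncertainty of the predictions
print(mean_variance_rank(means, variances, b=0.5, k=3))  # item 1 overtakes item 0
```

With b = 0 this reduces to plain rating-based ranking; larger b trades expected rating for safer, more predictable recommendations.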
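For the click-shaping paper, a hypothetical sketch (not the paper’s actual model) of the “criteria as constraints” formulation: choose serving probabilities over content types to maximize expected clicks, subject to a floor on another engagement metric such as time spent.

```python
from scipy.optimize import linprog

clicks = [0.10, 0.06, 0.03]       # expected clicks per impression, per content type
time_spent = [5.0, 20.0, 40.0]    # expected seconds per impression, per content type

res = linprog(
    c=[-c for c in clicks],              # linprog minimizes, so negate clicks
    A_ub=[[-t for t in time_spent]],     # -sum(t_i * x_i) <= -20, i.e. >= 20s on average
    b_ub=[-20.0],
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],  # x must be a probability distribution
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x)  # the optimal mix trades clicks against the time-spent constraint
```

The Lagrangian dual of this kind of constrained problem is what lets the paper personalize the trade-off per user instead of solving one global program.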
The best paper award went to Time-Based Calibration of Effectiveness Measures by Mark D. Smucker (University of Waterloo) and Charles L. A. Clarke (University of Waterloo), and the best student paper award went to Top-k Learning to Rank: Labeling, Ranking and Evaluation by Shuzi Niu, Jiafeng Guo, Yanyan Lan and Xueqi Cheng (Institute of Computing Technology, Chinese Academy of Sciences).
Cognos is a nice piece of work – they managed to get into SIGIR without any complicated formulas and with a great practical problem 🙂 As for diversity in recommender systems, our WSDM paper Auralist might be of interest: http://www.cs.ucl.ac.uk/fileadmin/UCL-CS/research/Research_Notes/RN_11_21.pdf
– dan
http://www.cl.cam.ac.uk/~dq209/index.html
Thanks for the pointer to the paper.
Nice summary Liangjie! 😛
Thanks.