Wan et al. [1] discussed how to leverage complementarity and loyalty to learn better item representations from shopping baskets. The main idea is to learn two sets of representations from items that co-occur in the same baskets, such that the two representations together capture item complementarity. Building on top of the learned representations, the authors proposed an algorithm called adaLoyal to determine whether a purchase is driven by frequency-based behaviors (a.k.a. loyalty) or by the item representations.
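To make the mixture concrete, here is a minimal sketch in Python; the function name, the fixed loyalty weight, and the sigmoid scoring are my own simplification, not the paper's exact adaLoyal procedure (which adapts loyalty over time):

```python
import numpy as np

def purchase_score(user_freq, item_emb, basket_emb, loyalty):
    """Mix a frequency-based (loyalty) score with a representation-based score.

    user_freq:  empirical purchase frequency of the item for this user (scalar)
    item_emb:   embedding of the candidate item
    basket_emb: aggregate embedding of the current basket
    loyalty:    mixture weight in [0, 1]; 1.0 = fully frequency-driven
    """
    rep_score = 1.0 / (1.0 + np.exp(-item_emb @ basket_emb))  # sigmoid of dot product
    return loyalty * user_freq + (1.0 - loyalty) * rep_score
```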
Zamani et al. [2] discussed how to learn a single neural ranking function to replace two-phase approaches, where the first phase is usually a cheap retrieval model and the second phase a more complex model such as a neural network. The main idea of the paper is to utilize L1 regularization to learn sparse representations for both queries and documents with certain desirable properties. These representations can then serve as an inverted index to retrieve documents efficiently at inference time.
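A toy sketch of how a learned sparse representation can double as an inverted index (the neural encoder and the L1-regularized training loop are omitted; the threshold value is illustrative, not from the paper):

```python
import numpy as np
from collections import defaultdict

def sparsify(dense_vec, threshold=1e-6):
    """L1 regularization drives most dimensions to ~0; keep the survivors."""
    return {i: v for i, v in enumerate(dense_vec) if abs(v) > threshold}

def build_inverted_index(doc_vecs):
    """Map each active latent dimension to the documents that activate it."""
    index = defaultdict(list)
    for doc_id, vec in doc_vecs.items():
        for dim, weight in sparsify(vec).items():
            index[dim].append((doc_id, weight))
    return index

def retrieve(query_vec, index):
    """Score documents by the dot product restricted to the query's active dims."""
    scores = defaultdict(float)
    for dim, q_w in sparsify(query_vec).items():
        for doc_id, d_w in index.get(dim, []):
            scores[doc_id] += q_w * d_w
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Because the query only touches its own nonzero dimensions, retrieval cost scales with the sparsity of the learned representations rather than with the collection size per dimension, which is exactly what makes the single-phase setup feasible.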
Ren et al. [3] discussed a long-standing problem in online advertising: how to allocate credit to user behaviors along the journey toward an ad conversion. Traditionally, credit allocation is done either by heuristics or by simple models like logistic regression. In this work, the authors utilized sequential pattern modeling, notably RNNs, to model user behaviors while accounting for different pre-conversion behavior types. In addition, the paper introduced a way to conduct offline evaluation of credit allocation.
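A hedged illustration of the idea: a tiny RNN that scores each pre-conversion touchpoint and normalizes the scores into credits. This is a simplification of my own, not the paper's exact architecture:

```python
import numpy as np

def attribute_credits(touchpoints, W_h, W_x, w_out):
    """Toy RNN-based credit allocation over a user's pre-conversion touchpoints.

    touchpoints: (T, d) array, one feature vector per behavior (impression, click, ...)
    Returns a length-T credit vector summing to 1 (softmax over per-step scores).
    """
    h = np.zeros(W_h.shape[0])
    scores = []
    for x in touchpoints:
        h = np.tanh(W_h @ h + W_x @ x)   # recurrent state update
        scores.append(w_out @ h)         # per-touchpoint contribution score
    scores = np.array(scores)
    credits = np.exp(scores - scores.max())
    return credits / credits.sum()

# Usage with random parameters, just to show the shapes involved.
rng = np.random.default_rng(0)
T, d, h_dim = 5, 8, 4
credits = attribute_credits(rng.normal(size=(T, d)),
                            rng.normal(size=(h_dim, h_dim)) * 0.1,
                            rng.normal(size=(h_dim, d)) * 0.1,
                            rng.normal(size=h_dim))
print(credits)  # e.g. [0.21, 0.18, 0.20, 0.19, 0.22]
```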
Gysel et al. [4] discussed how to utilize similar items to improve product search. Similarity is primarily determined by text similarity and by semantic similarity, obtained by pushing similar items closer together in the latent semantic space.
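One plausible way to encode "push similar items closer" is a regularization term over known similar pairs; the sketch below is my own illustration rather than the paper's exact objective:

```python
import numpy as np

def similarity_regularizer(emb, similar_pairs, margin=0.0):
    """Penalize distance between embeddings of items judged similar.

    emb:           (n_items, d) item embedding matrix
    similar_pairs: iterable of (i, j) item-id pairs judged similar
    Illustrative squared-distance penalty with an optional slack margin.
    """
    loss = 0.0
    for i, j in similar_pairs:
        loss += max(0.0, np.sum((emb[i] - emb[j]) ** 2) - margin)
    return loss
```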
Wang et al. [5] discussed a unified framework for learning-to-rank problems in which the traditional LambdaRank can be explained as an optimization procedure for a concrete objective function that was previously unknown. Under this framework, the authors proposed a generic EM algorithm for learning-to-rank and demonstrated several instantiations of the framework, with different setups yielding different loss functions that optimize different metrics.
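The flavor of the framework shows up in a pairwise logistic loss weighted by |ΔNDCG|, which is roughly the LambdaRank-style instantiation; the sketch below simplifies the paper's formulation (in particular, it takes ranks from the current scores rather than treating them as latent variables in the EM procedure):

```python
import numpy as np

def ndcg_delta(rels, ranks, i, j):
    """|ΔNDCG| (unnormalized) from swapping items i and j."""
    gain = lambda r: 2.0 ** r - 1.0
    disc = lambda k: 1.0 / np.log2(k + 1.0)
    return abs((gain(rels[i]) - gain(rels[j])) * (disc(ranks[i]) - disc(ranks[j])))

def lambda_style_loss(scores, rels):
    """Pairwise logistic loss weighted by |delta NDCG|, LambdaRank-style."""
    order = np.argsort(-scores)                    # indices sorted by score, descending
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = highest score
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if rels[i] > rels[j]:                  # only mis-orderable pairs
                w = ndcg_delta(rels, ranks, i, j)
                loss += w * np.log1p(np.exp(-(scores[i] - scores[j])))
    return loss
```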
Zhang et al. [6] discussed how to combine multiple information sources on search engine result pages to infer better overall relevance. In particular, they exploited visual patterns from search result screenshots, title semantics, snippet semantics, and HTML DOM structure semantics. All these modules are combined through an attention layer whose weights are learned jointly. Another contribution of the paper is the release of a dataset with graded relevance judgments. This paper won the Best Paper award.
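A minimal sketch of the fusion step, assuming each source (screenshot, title, snippet, DOM) has already been encoded into a fixed-size vector; the scoring vector `w_att` stands in for the jointly learned attention parameters:

```python
import numpy as np

def attention_fuse(source_vecs, w_att):
    """Fuse per-source relevance representations with attention weights.

    source_vecs: (n_sources, d) array, one row per information source
    w_att:       (d,) attention scoring vector (learned jointly in the paper)
    Returns the attention-weighted combination and the weights themselves.
    """
    logits = source_vecs @ w_att
    alphas = np.exp(logits - logits.max())        # numerically stable softmax
    alphas /= alphas.sum()
    return alphas @ source_vecs, alphas
```

Inspecting the returned weights also gives a rough sense of which source dominates the final relevance estimate for a given result.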
Jun Hu and Ping Li [7] argued that traditional collaborative ranking does not easily learn model parameters in an optimal setting. In particular, one inherent issue is that vanilla learning does not necessarily preserve ordering information, and, because of the logistic loss, the model can learn arbitrary parameters that may not improve the objective function (loss). In this paper, the authors proposed a method to jointly learn row-wise and column-wise orderings as well as a pointwise loss, and demonstrated the effectiveness of the proposed method.
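A toy version of the joint objective, combining a pointwise squared error with pairwise ordering losses over one user's items; the paper's full method also orders users for each item, which this sketch omits:

```python
import numpy as np

def joint_loss(scores, labels, alpha=0.5):
    """Combine a pointwise loss with pairwise (ordering) losses.

    scores: predicted scores for one user's items
    labels: observed ratings for the same items
    alpha:  trade-off between pointwise and pairwise terms (illustrative)
    """
    pointwise = np.mean((scores - labels) ** 2)
    pairwise, n_pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:              # item i should rank above item j
                pairwise += np.log1p(np.exp(-(scores[i] - scores[j])))
                n_pairs += 1
    if n_pairs:
        pairwise /= n_pairs
    return alpha * pointwise + (1 - alpha) * pairwise
```

The pairwise term keeps the ordering information that a purely pointwise fit can lose, which is exactly the failure mode the authors identify.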
[5] Xuanhui Wang, Cheng Li, Nadav Golbandi, Michael Bendersky, and Marc Najork. 2018. The LambdaLoss Framework for Ranking Metric Optimization. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18). ACM, New York, NY, USA, 1313-1322. DOI: https://doi.org/10.1145/3269206.3271784
[7] Jun Hu and Ping Li. 2018. Collaborative Multi-objective Ranking. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18). ACM, New York, NY, USA, 1363-1372. DOI: https://doi.org/10.1145/3269206.3271785