Liangjie Hong


Install Numpy and Scipy on CentOS without Root Privilege

Sometimes you want to install Numpy and Scipy on a remote CentOS machine without root privilege, which is often the case when you are using a university server. Before you proceed with the following instructions, make sure that a copy of Python is installed. This is also done without root privilege, meaning that you may install it in an alternative directory rather than the system directory.

Prerequisite:

  1. Download the latest version of LAPACK and extract it into path [LAPACK].
  2. Download the latest version of BLAS and extract it into path [BLAS].

Note that almost all tutorials on installing Numpy and Scipy on Linux machines discuss how to install them with ATLAS. This is possible only if you have root privilege, which you need in order to turn off CPU throttling. Since we do not have root privilege, we can only install them with LAPACK and BLAS.

Step 1: Install Numpy

  1. Edit “site.cfg”
    a) Enable the “[DEFAULT]” section and add

    src_dirs = [BLAS]:[LAPACK]

    b) Add

    [blas_opt]
    libraries = f77blas, cblas
     
    [lapack_opt]
    libraries = lapack, f77blas, cblas
  2. Type the following command in the shell:
    python setup.py build --fcompiler=gnu95

    which will compile the package with “gfortran”.

  3. Type the following command in the shell:
    python setup.py install

Step 2: Install Scipy

Once Numpy is installed, Scipy can be easily built and installed through the normal “python setup.py build” and “python setup.py install” process. Remember that these commands should be accompanied by “--fcompiler=gnu95”.


Sorting Tuples in C++

In this post, I would like to show how to create a tuple object in C++11 and how to sort tuples.

Here is the code for creating tuples and doing the sort. It is pretty straightforward.

#include <iostream>
#include <string>
#include <vector>
#include <tuple>
#include <algorithm>
using namespace std;
 
typedef tuple<string,double,int> mytuple;
 
bool mycompare (const mytuple &lhs, const mytuple &rhs){
  return get<1>(lhs) < get<1>(rhs);
}
 
int main(void){
  vector<mytuple> data;
  data.push_back(make_tuple("abc",4.5,1));
  data.push_back(make_tuple("def",5.5,-1));
  data.push_back(make_tuple("wolf",-3.47,1));
  sort(data.begin(),data.end(),mycompare);
  for(vector<mytuple>::iterator iter = data.begin(); iter != data.end(); iter++){
    cout << get<0>(*iter) << "\t" << get<1>(*iter) << "\t" << get<2>(*iter) << endl;
  }
}

The code compiles successfully with G++ 4.6.1 using the option “-std=gnu++0x”.


Two Forms of Logistic Regression

There are two forms of Logistic Regression used in the literature. In this post, I will build a bridge between these two forms and show that they are equivalent.

Logistic Function & Logistic Regression

The common definition of Logistic Function is as follows:

    \[P(x) = \frac{1}{1+\exp(-x)} \;\; \qquad (1)\]

where x \in \mathbb{R} is the variable of the function and P(x) \in [0,1]. One important property of Equation (1) is that:

    \[\begin{aligned}P(-x) &= \frac{1}{1+\exp(x)} \\ &= \frac{1}{1+\frac{1}{\exp(-x)}} \\ &= \frac{\exp(-x)}{1+\exp(-x)} \\ &= 1 - \frac{1}{1+\exp(-x)} \\ &= 1 - P(x) \qquad (2)\end{aligned}\]
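The symmetry property above is easy to check numerically. Below is a minimal Python sketch (my own, purely illustrative) verifying that P(-x) = 1 - P(x):

```python
import math

def logistic(x):
    # Equation (1): P(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

# Equation (2): P(-x) = 1 - P(x), checked at a few sample points
for x in [-3.0, -0.5, 0.0, 1.0, 2.5]:
    assert abs(logistic(-x) - (1.0 - logistic(x))) < 1e-12
```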

The form of Equation (2) is widely used as the form of Logistic Regression (e.g., [1,2,3]):

    \[\begin{aligned}P(y = 1 \, | \, \boldsymbol{\beta}, \mathbf{x}) &= \frac{\exp(\boldsymbol{\beta}^{T} \mathbf{x})}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})} \\ P(y = 0 \, | \, \boldsymbol{\beta}, \mathbf{x}) &= \frac{1}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})} \qquad (3)\end{aligned}\]

where \mathbf{x} is a feature vector and \boldsymbol{\beta} is a coefficient vector. By using Equation (2), we also have:

    \[P(y=1 \, | \, \boldsymbol{\beta}, \mathbf{x}) = 1 - P(y=0 \, | \, \boldsymbol{\beta}, \mathbf{x})\]

This formalism of Logistic Regression is used in [1,2], where the labels y \in \{0,1\} and the functional forms of the probabilities of generating the two labels differ. Another formalism, introduced in [3], unifies the two cases into a single equation by integrating the label and the prediction together:

    \[P(g = \pm 1 \, | \, \boldsymbol{\beta}, \mathbf{x}) = \frac{1}{1 + \exp(-g\boldsymbol{\beta}^{T} \mathbf{x})} \qquad (4)\]

where g \in \{\pm 1\} is the label for data item \mathbf{x}. It is also easy to verify that P(g=1 \, | \, \boldsymbol{\beta}, \mathbf{x}) = 1 - P(g=-1 \, | \, \boldsymbol{\beta}, \mathbf{x}).
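To make the correspondence between the two label conventions concrete, here is a small Python check (mine, not from the original references) that forms (3) and (4) assign identical probabilities once we identify y = 1 with g = 1 and y = 0 with g = -1:

```python
import math

def p_y1(beta_x):
    # Form (3): P(y = 1) = exp(beta' x) / (1 + exp(beta' x))
    return math.exp(beta_x) / (1.0 + math.exp(beta_x))

def p_g(g, beta_x):
    # Form (4): P(g = +/-1) = 1 / (1 + exp(-g * beta' x))
    return 1.0 / (1.0 + math.exp(-g * beta_x))

for bx in [-2.0, 0.0, 0.7, 3.5]:
    assert abs(p_y1(bx) - p_g(1, bx)) < 1e-12
    assert abs((1.0 - p_y1(bx)) - p_g(-1, bx)) < 1e-12
```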

The Equivalence of Two Forms of Logistic Regression

At first glance, forms (3) and (4) look very different. However, the equivalence between these two forms can be easily established. Starting from form (3), we have:

    \[\begin{aligned}P(y = 1 \, | \, \boldsymbol{\beta}, \mathbf{x}) &= \frac{\exp(\boldsymbol{\beta}^{T} \mathbf{x})}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})} \\ &= \frac{1}{\frac{1}{\exp(\boldsymbol{\beta}^{T} \mathbf{x})} + 1} \\ &= \frac{1}{\exp(-\boldsymbol{\beta}^{T} \mathbf{x}) + 1} \\ &= P(g = 1 \, | \, \boldsymbol{\beta}, \mathbf{x})\end{aligned}\]

We can also easily establish the equivalence between P(y=0 \, | \, \boldsymbol{\beta}, \mathbf{x}) and P(g=-1 \, | \, \boldsymbol{\beta}, \mathbf{x}) by using property (2). Another way to establish the equivalence is through the classification rule. For form (3), we have the following classification rule:

    \[\begin{aligned}\frac{\frac{\exp(\boldsymbol{\beta}^{T} \mathbf{x})}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})}}{\frac{1}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})}} &> 1 \;\; \rightarrow \;\; y = 1 \\ \exp(\boldsymbol{\beta}^{T} \mathbf{x}) &> 1 \\ \boldsymbol{\beta}^{T} \mathbf{x} &> 0\end{aligned}\]

Exactly the same classification rule can be obtained for form (4):

    \[\begin{aligned}\frac{\frac{1}{1 + \exp(-\boldsymbol{\beta}^{T} \mathbf{x})}}{\frac{1}{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})}} &> 1 \;\; \rightarrow \;\; g = 1 \\ \frac{1 + \exp(\boldsymbol{\beta}^{T} \mathbf{x})}{1 + \exp(-\boldsymbol{\beta}^{T} \mathbf{x})} &> 1 \\ \exp(\boldsymbol{\beta}^{T} \mathbf{x}) &> 1 \\ \boldsymbol{\beta}^{T} \mathbf{x} &> 0\end{aligned}\]

Therefore, the two forms essentially learn the same classification boundary.

Logistic Loss

Having established the equivalence of the two forms of Logistic Regression, it is convenient to use the second form, as it can be explained by a general classification framework. Here, we assume y is the label of a data item and \mathbf{x} is its feature vector. The classification framework can be formalized as follows:

    \[\arg\min \sum_{i} L\Bigl(y_{i}, f(\mathbf{x}_{i})\Bigr)\]

where f is a hypothesis function and L is a loss function. For Logistic Regression, we have the following instantiation:

    \[\begin{aligned}f(\mathbf{x}) &= \boldsymbol{\beta}^{T} \mathbf{x} \\ L\Bigl(y, f(\mathbf{x})\Bigr) &= \log\Bigl(1 + \exp\bigl(-y f(\mathbf{x})\bigr)\Bigr)\end{aligned}\]

where y \in \{ \pm 1 \}.
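A quick sanity check: the logistic loss is exactly the negative log-likelihood of form (4). The following Python sketch (my own illustration) confirms this numerically:

```python
import math

def logistic_loss(y, fx):
    # L(y, f(x)) = log(1 + exp(-y * f(x))), with y in {-1, +1}
    return math.log(1.0 + math.exp(-y * fx))

def p_form4(g, fx):
    # Form (4): P(g | beta, x), writing f(x) = beta' x
    return 1.0 / (1.0 + math.exp(-g * fx))

# The loss of a label equals its negative log-probability
for y, fx in [(1, 2.0), (-1, 2.0), (1, -0.3)]:
    assert abs(logistic_loss(y, fx) + math.log(p_form4(y, fx))) < 1e-12
```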

References

[1] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., New York, NY, USA, 2001.
[2] Tom M. Mitchell. Machine learning. McGraw Hill series in computer science. McGraw-Hill, 1997.
[3] Jason D. M. Rennie. Logistic Regression. http://people.csail.mit.edu/jrennie/writing, April 2003.


Random Number Generation with C++0x in GCC

In this post, I would like to explore how random numbers can be generated using the newly added C++0x features in GCC.

Rather than explaining the details of C++0x, I just post code here:

#include <random>
#include <iostream>
#include <string>
#include <stdio.h>
#include <time.h>
 
using namespace std;
 
int main()
{
  int n;
  double p, lambda, shape, mu, sigma;
  mt19937 eng(time(NULL));
 
  uniform_int_distribution<int> uniform_int(3,7);
  for(int i=0; i < 10; i++)
    cout << "[Uniform INT distribution]:" << uniform_int(eng) << endl;
  cout << endl;
 
  uniform_real_distribution<double> uniform_real(0.0,1.0);
  for(int i=0; i < 10; i++)
    cout << "[Uniform REAL distribution]:" << uniform_real(eng) << endl;
  cout << endl;
 
  n = 5;
  p = 0.3;
  binomial_distribution<int> binomial(n, p);
  for(int i=0; i < 10; i++)
    cout << "[Binomial distribution]:" << binomial(eng) << endl;
  cout << endl;
 
  lambda = 4.0;
  exponential_distribution<double> exponential(lambda);
  for(int i=0; i < 10; i++)
    cout << "[Exponential distribution]:" << exponential(eng) << endl;
  cout << endl;
 
  shape = 3.0;
  gamma_distribution<double> gamma(shape);
  for(int i=0; i < 10; i++)
    cout << "[Gamma distribution]:" << gamma(eng) << endl;
  cout << endl;
 
  p = 0.5;
  geometric_distribution<int> geometric(p);
  for(int i=0; i < 10; i++)
    cout << "[Geometric distribution]:" << geometric(eng) << endl;
  cout << endl;
 
  mu = 3.0; sigma = 4.0;
  normal_distribution<double> normal(mu, sigma);
  for(int i=0; i < 10; i++)
    cout << "[Gaussian distribution]:" << normal(eng) << endl;
  cout << endl;
 
  lambda = 7.0;
  poisson_distribution<int> poisson(lambda);
  for(int i=0; i < 10; i++)
    cout << "[Poisson distribution]:" << poisson(eng) << endl;
  cout << endl;
 
  p = 0.6;
  bernoulli_distribution bernoulli(p);
  for(int i=0; i < 10; i++)
    cout << "[Bernoulli distribution]:" << bernoulli(eng) << endl;
  cout << endl;
 
  return (0);
}

Note that the code is very clean in the sense that you don’t need any extra libraries at all.

Please compile with:

g++ -std=gnu++0x

The GCC version I used is 4.6.1.

Several references:
1. http://www.johndcook.com/test_TR1_random.html (somewhat outdated)
2. http://www.johndcook.com/cpp_TR1_random.html (somewhat outdated)
3. http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.200x (current status of C++0x in GCC)
4. http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt01ch03s02.html (a list of header files of C++ in GCC)


Axiomatic Analysis and Optimization of Information Retrieval Models

This is an “unusual” research aspect of Information Retrieval (IR). By comparing and analyzing different IR models in a formal way, the Axiomatic Framework can show some interesting and even astonishing results about IR models. For instance, it can show that IR models should satisfy a certain number of constraints; if a model fails to satisfy some of them, we can expect its performance to be worse. This kind of comparison requires no experiments at all, though the claims are indeed justified by empirical studies.

Materials:

  • Axiomatic Analysis and Optimization of Information Retrieval Models by ChengXiang Zhai at ICTIR11 [Slides]
  • Yuanhua Lv, ChengXiang Zhai. Lower-Bounding Term Frequency Normalization. Proceedings of the 20th ACM International Conference on Information and Knowledge Management  (CIKM’11), 2011. [PDF]
  • Hui Fang, Tao Tao, and ChengXiang Zhai. Diagnostic Evaluation of Information Retrieval Models. ACM Transactions on Information Systems (TOIS) 29, 2, Article 7 (April 2011), 42 pages. [PDF]
  • Hui Fang, ChengXiang Zhai. Semantic Term Matching in Axiomatic Approaches to Information Retrieval. Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’06), pages 115-122. [PDF]
  • Hui Fang, ChengXiang Zhai. An Exploration of Axiomatic Approaches to Information Retrieval. Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’05), pages 480-487, 2005. [PDF]
  • Hui Fang, Tao Tao, ChengXiang Zhai. A Formal Study of Information Retrieval Heuristics. Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’04), pages 49-56, 2004. [PDF]
  • Hui Fang‘s PhD dissertation. [PDF]

Topic Models meet Latent Factor Models

There is a trend in research communities to bring two well-established classes of models together, topic models and latent factor models. By doing so, we may enjoy the ability to analyze text information with topic models and incorporate collaborative filtering analysis with latent factor models. In this post, I wish to discuss some of these efforts.

The three papers covered in this post are listed at the end. Before that, let’s first review what latent factor models are. Latent factor models (LFM) are usually used in a collaborative filtering context. Say we have a user-item rating matrix \mathbf{R} where r_{ij} represents the rating user i gives to item j. Now, we assume that for each user i there is a vector \mathbf{u}_{i} of dimensionality k, representing the user in a latent space. Similarly, we assume that for each item j there is a vector \mathbf{v}_{j} of the same dimensionality, representing the item in the same latent space. The rating r_{ij} is then represented as:

    \[ r_{ij} = \mathbf{u}_{i}^{T} \mathbf{v}_{j} \]

This is the basic setting for LFM. In addition to this basic setting, additional biases can be incorporated; see here. For topic models (TM), the simplest case is Latent Dirichlet Allocation (LDA). The story of LDA goes like this. For a document d, we first sample a multinomial distribution \boldsymbol{\theta}_{d}, which is a distribution over all possible topics. For each term position w in the document, we sample a discrete topic assignment z from \boldsymbol{\theta}_{d}, indicating which topic we use for this term. Then, we sample a term v from \boldsymbol{\beta}_{z}, the multinomial distribution over the vocabulary associated with topic z.
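The generative story of LDA can be sketched in a few lines of Python. The vocabulary, the topics \boldsymbol{\beta}, and the proportions \boldsymbol{\theta}_{d} below are made up purely for illustration:

```python
import random

random.seed(0)

# Hypothetical vocabulary and K = 2 topics; each row of beta is a
# multinomial distribution over the vocabulary (made-up numbers).
vocab = ["game", "score", "election", "vote"]
beta = [[0.45, 0.45, 0.05, 0.05],   # topic 0
        [0.05, 0.05, 0.45, 0.45]]   # topic 1
theta_d = [0.7, 0.3]                # per-document topic proportions

doc = []
for _ in range(8):                  # 8 term positions in document d
    z = random.choices([0, 1], weights=theta_d)[0]  # topic assignment
    w = random.choices(vocab, weights=beta[z])[0]   # term from topic z
    doc.append(w)
print(doc)
```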

Both LFM and TM are methods that reduce the original data into latent spaces, so it might be possible to link them together, especially since items in LFM are often associated with rich text information. One natural idea is that, for an item j, the latent factor \mathbf{v}_{j} and its topic proportion parameter \boldsymbol{\theta}_{j} are somehow connected. One way is to directly equate these two variables. Since \mathbf{v}_{j} is a real-valued variable and \boldsymbol{\theta}_{j} lies on a simplex, we need certain ways to preserve these properties. Two possible methods can be used:

  1. Keep \boldsymbol{\theta}_{j} and make sure it stays in the range [0, 1] during the optimization process; essentially, put constraints on the parameter.
  2. Keep \mathbf{v}_{j} and use a logistic transformation to map the real-valued vector onto the simplex.
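The logistic transformation in option 2 can be sketched as a softmax that maps any real-valued vector onto the simplex; a minimal Python illustration (my own):

```python
import math

def softmax(v):
    # Map a real-valued vector v onto the probability simplex
    m = max(v)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

theta = softmax([1.2, -0.3, 0.5])
assert abs(sum(theta) - 1.0) < 1e-12 and all(t > 0 for t in theta)
```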

Hanhuai and Banerjee showed the second technique in their paper by combining the Correlated Topic Model with LFM. Wang and Blei argued that this setting suffers from the limitation that it cannot distinguish topics for explaining recommendations from topics important for explaining content, since the two latent spaces are forced to be strictly equal. Thus, they proposed a slightly different approach: each \mathbf{v}_{j} derives from \boldsymbol{\theta}_{j} with item-dependent noise:

    \[ \mathbf{v}_{j} = \boldsymbol{\theta}_{j} + \epsilon_{j} \]

where \epsilon_{j} is a Gaussian noise.

A different approach is not to equate these two quantities directly but to let them influence each other. One such way, explored by Hanhuai and Banerjee, is to let \boldsymbol{\theta}_{j} influence how \mathbf{v}_{j} is generated. More specifically, in the Probabilistic Matrix Factorization (PMF) setting, all \mathbf{v}s are generated by a Gaussian distribution with a fixed mean and variance. By combining with LDA, the authors allow each topic to have its own Gaussian prior mean and variance. A value similar to z is first generated from \boldsymbol{\theta}_{j} to decide which mean to use, and then \mathbf{v}_{j} is generated from that particular mean and variance.

A totally different direction was taken by Agarwal and Chen. In their fLDA paper, there is no direct relationship between the item latent factor and the content latent factor. Instead, their relationship is realized by the predictive equation:

    \[ r_{ij} = \mathbf{a}^{T} \mathbf{u}_{i} + \mathbf{b}^{T} \mathbf{v}_{j} + \mathbf{s}_{i}^{T} \bar{\mathbf{z}}_{j}\]

where \mathbf{a}, \mathbf{b} and \mathbf{s}_{i} are regression weights and \bar{\mathbf{z}}_{j} is the average topic assignment for item j. Note that \mathbf{s}_{i} is a user-dependent vector of regression weights. This formalism encodes the notion that all latent factors (including content) contribute to the rating, not only the item and user factors.
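The predictive equation is straightforward to write out in code; the toy 2-dimensional vectors below are hypothetical, just to show how the three terms combine:

```python
def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

def flda_rating(a, b, s_i, u_i, v_j, zbar_j):
    # r_ij = a' u_i + b' v_j + s_i' zbar_j
    return dot(a, u_i) + dot(b, v_j) + dot(s_i, zbar_j)

# Hypothetical factors: global weights a, b; user-dependent weights s_i
r = flda_rating(a=[0.5, 0.5], b=[1.0, 0.0],
                s_i=[0.2, 0.8], u_i=[1.0, 2.0],
                v_j=[0.4, 0.6], zbar_j=[0.3, 0.7])
print(r)
```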

In summary, three directions have been taken for integrating TM and LFM:

  1. Equate the item latent factor and the topic proportion vector, possibly up to some Gaussian noise.
  2. Let the topic proportion vector control the prior distribution of the item latent factor.
  3. Let the item latent factor and the topic assignments, as well as the user latent factor, jointly contribute to the rating.

Reference:

  • Deepak Agarwal and Bee-Chung Chen. 2010. fLDA: matrix factorization through latent dirichlet allocation. In Proceedings of the third ACM international conference on Web search and data mining (WSDM ’10). ACM, New York, NY, USA, 91-100. [PDF]
  • Hanhuai Shan and Arindam Banerjee. 2010. Generalized Probabilistic Matrix Factorizations for Collaborative Filtering. In Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM ’10). IEEE Computer Society, Washington, DC, USA, 1025-1030. [PDF]
  • Chong Wang and David M. Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’11). ACM, New York, NY, USA, 448-456.[PDF]

An Easy Reading Tutorial for Bayesian Non-parametric Models

The “god-father” of LDA, David Blei, recently published a tutorial on Bayesian Non-parametric Models with one of his students. The whole tutorial is easy reading and provides a very clear overview of Bayesian Non-parametric Models. In particular, the Chinese Restaurant Process (CRP) and the Indian Buffet Process are discussed in a very intuitive way. For those who are interested in the technical details of these models, this tutorial may be just a starting point, and the Appendix points out several ways to treat the models more formally, including inference algorithms.

One particularly interesting property shown in this tutorial is the “exchangeability” property of the CRP, which I wish to restate below.

Let c_{n} be the table assignment of the nth customer. A draw from the CRP can be generated by sequentially assigning observations to classes with probability:

    \[P(c_{n} = k | \mathbf{c}_{1:n-1}) = \begin{cases}\frac{m_{k}}{n-1+\alpha}, & \mbox{if } k \leq \mathbf{K}_{+} \mbox{ (i.e., $k$ is a previously occupied table)} \\\frac{\alpha}{n-1+\alpha}, & \mbox{otherwise (i.e., $k$ is the next unoccupied table)}\end{cases}\]

where m_{k} is the number of customers sitting at table k, and \mathbf{K}_{+} is the number of tables for which m_{k} > 0. The parameter \alpha is called the concentration parameter. The CRP exhibits an important invariance property: The cluster assignments under this distribution are exchangeable. This means p(\mathbf{c}) is unchanged if the order of customers is shuffled.
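The sequential seating scheme can be simulated directly; here is a minimal Python sketch (the function name and seed are my own choices):

```python
import random

def crp_draw(n, alpha, seed=None):
    # Seat n customers one by one: an occupied table k is chosen with
    # probability m_k / (i - 1 + alpha), a new table with
    # probability alpha / (i - 1 + alpha).
    rng = random.Random(seed)
    counts = []                       # m_k for each occupied table
    assignments = []
    for i in range(1, n + 1):
        weights = counts + [alpha]    # last slot = open a new table
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_draw(10, alpha=1.0, seed=42))
```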

Consider the joint distribution of a set of customer assignments c_{1:N}. It decomposes according to the chain rule:

    \[p(c_{1}, c_{2}, \cdots , c_{N}) = p(c_{1}) p(c_{2} | c_{1}) \cdots p(c_{N} | c_{1}, c_{2}, \cdots , c_{N-1}) \]

where each term comes from the equation above. To show that this distribution is exchangeable, we will introduce some new notation. Let \mathbf{K}(c_{1:N}) be the number of groups in which these assignments place the customers, which is a number between 1 and N. Let I_{k} be the set of indices of customers assigned to the kth group, and let N_{k} be the number of customers assigned to that group. Now, for a particular group k, the joint probability of all assignments in this group is:

    \[ \frac{\alpha}{I_{k,1}-1+\alpha} \frac{1}{I_{k,2}-1+\alpha} \frac{2}{I_{k,3}-1+\alpha} \cdots \frac{N_{k}-1}{I_{k,N_{k}}-1+\alpha} \]

where each term in the equation represents a customer. The numerator can be re-written as \alpha (N_{k}-1)!. Therefore, we have:

    \[ p(c_{1}, c_{2}, \cdots , c_{N}) = \prod_{k=1}^{K} \frac{\alpha (N_{k}-1)!}{(I_{k,1}-1+\alpha)(I_{k,2}-1+\alpha)\cdots (I_{k,N_{k}}-1+\alpha)} \]

Finally, notice that the union of \mathbf{I}_{k} across all groups k identifies each index exactly once, because each customer is assigned to exactly one group. This simplifies the denominator and lets us write the joint as:

    \[ p(c_{1}, c_{2}, \cdots , c_{N}) = \frac{\alpha^{K} \prod_{k=1}^{K} (N_{k}-1)! }{\prod_{i=1}^{N} (i-1+\alpha)} \]

This equation depends only on the number of groups \mathbf{K} and the size of each group \mathbf{N}_{k}.
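The exchangeability argument can also be verified numerically: the chain-rule product gives the same joint probability for every permutation of a toy assignment vector. A small Python sketch (my own check):

```python
from itertools import permutations

def crp_joint(assignments, alpha):
    # Chain-rule probability p(c_1) p(c_2 | c_1) ... under the CRP;
    # any previously unseen label counts as opening a new table.
    prob = 1.0
    counts = {}
    for n, c in enumerate(assignments, start=1):
        if c in counts:
            prob *= counts[c] / (n - 1 + alpha)
        else:
            prob *= alpha / (n - 1 + alpha)
        counts[c] = counts.get(c, 0) + 1
    return prob

# Shuffling the customers leaves p(c) unchanged
base = [0, 0, 1, 0, 2]
p0 = crp_joint(base, 1.5)
for perm in permutations(base):
    assert abs(crp_joint(list(perm), 1.5) - p0) < 1e-12
```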


Simple Geographical Calculations

In this post, I would like to share some simple code that calculates geographical distances from latitude and longitude points obtained from third-party services. This is particularly useful when we wish to compute the average distance users travel, based on check-in or geo-tagging information from Twitter, for instance. The code is straightforward and simple.

import math

## Convert a location into 3D coordinates
## location is a list of [latitude, longitude]
## return: a list of [x, y, z]
def convert_location_cor(location):
    x_n = math.cos(math.radians(location[0])) * math.cos(math.radians(location[1]))
    y_n = math.cos(math.radians(location[0])) * math.sin(math.radians(location[1]))
    z_n = math.sin(math.radians(location[0]))
    return [x_n, y_n, z_n]

## Convert 3D coordinates back into a location
## cor is a list of [x, y, z]
## return: a list of [latitude, longitude]
def convert_cor_location(cor):
    r = math.sqrt(cor[0] * cor[0] + cor[1] * cor[1] + cor[2] * cor[2])
    lat = math.asin(cor[2] / r)
    lon = math.atan2(cor[1], cor[0])
    return [math.degrees(lat), math.degrees(lon)]

## Compute the geographical midpoint of a set of locations
## location_list is a list of locations [location 0, location 1, ...]
## return: the location of the midpoint
def geo_midpoint(location_list):
    x_list, y_list, z_list = [], [], []
    for location in location_list:
        m = convert_location_cor(location)
        x_list.append(m[0])
        y_list.append(m[1])
        z_list.append(m[2])
    x_mean = sum(x_list) / float(len(location_list))
    y_mean = sum(y_list) / float(len(location_list))
    z_mean = sum(z_list) / float(len(location_list))
    return convert_cor_location([x_mean, y_mean, z_mean])

## Compute the great-circle distance between two locations
## a and b are two locations: [lat 1, lon 1] [lat 2, lon 2]
## return: the distance in km
def geo_distance(a, b):
    theta = a[1] - b[1]
    dist = math.sin(math.radians(a[0])) * math.sin(math.radians(b[0])) \
        + math.cos(math.radians(a[0])) * math.cos(math.radians(b[0])) * math.cos(math.radians(theta))
    dist = math.acos(dist)
    dist = math.degrees(dist)
    # arc degrees -> nautical miles (x60) -> statute miles (x1.1515) -> km
    return dist * 60 * 1.1515 * 1.609344

## main program
if __name__ == '__main__':
    l_list = [[-8.70934, 115.173695],
              [-8.70934, 115.235514],
              [-8.591728, 115.235514],
              [-8.591728, 115.173695]]
    print(geo_midpoint(l_list))
    print(geo_distance([-8.70934, 115.173695], [-8.70934, 115.235514]))

A Must Read for Logistic Regression

I came across an old technical report written by Michael Jordan (no, not the basketball guy):

“Why the logistic function? A tutorial discussion on probabilities and neural networks”. M. I. Jordan. MIT Computational Cognitive Science Report 9503, August 1995.

The material is amazingly straightforward and easy to understand. It answers (at least partially) a long-standing question for me: why is the logistic function used in regression? Regardless of how it was used in the first place, the report shows that it can actually be derived from a simple binary classification setting where we wish to estimate the posterior probability:

    \[ P(w_{0}|\mathbf{x}) = \frac{P(\mathbf{x}|w_{0})P(w_{0})}{P(\mathbf{x})} \]


where w_{0} can be thought of as a class label and \mathbf{x} can be treated as a feature vector. We can expand the denominator and introduce an exponential:

    \[ P(w_{0}|\mathbf{x}) = \frac{P(\mathbf{x}|w_{0})P(w_{0})}{P(\mathbf{x}|w_{0})P(w_{0})+P(\mathbf{x}|w_{1})P(w_{1})}=\frac{1}{1+\exp\{-\log a - \log b\}} \]


where a=\frac{P(\mathbf{x}|w_{0})}{P(\mathbf{x}|w_{1})} and b= \frac{P(w_{0})}{P(w_{1})}. Through nothing more than mathematical maneuvering, we already get a flavor of how the logistic function arises from simple classification problems. Now, if we specify a particular distributional form for P(\mathbf{x}|w) (the class-conditional densities), for instance a Gaussian distribution, we can recover logistic regression easily.
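For a concrete instance, take equal-variance Gaussian class-conditionals with equal priors: the posterior P(w_0 | x) then reduces exactly to a logistic function of a linear score in x. A small Python sketch of this check (my own illustration, not from the report):

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior_w0(x, mu0, mu1, sigma):
    # Bayes rule with equal priors P(w0) = P(w1) = 1/2
    p0 = gaussian_pdf(x, mu0, sigma)
    p1 = gaussian_pdf(x, mu1, sigma)
    return p0 / (p0 + p1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The posterior equals sigmoid(z) with the linear score
# z = (mu0 - mu1)/sigma^2 * x + (mu1^2 - mu0^2)/(2 sigma^2)
mu0, mu1, sigma = 1.0, -1.0, 1.0
for x in [-2.0, 0.0, 1.5]:
    z = (mu0 - mu1) / sigma ** 2 * x + (mu1 ** 2 - mu0 ** 2) / (2 * sigma ** 2)
    assert abs(posterior_w0(x, mu0, mu1, sigma) - sigmoid(z)) < 1e-12
```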

However, the whole point of the report is not just to show where the logistic function comes into play, but to show how discriminative models and generative models in this particular setting are two sides of the same coin. In addition, Jordan demonstrated that these two sides are NOT simply equivalent and should be treated carefully when different learning criteria are considered. In general, a simple take-away is that the discriminative model (logistic regression) is more “robust”, while the generative model might be more accurate if its assumptions are correct.

For more details, please refer to the report.