One critical component of Gibbs sampling for complex graphical models is the ability to draw samples from discrete distributions. Take latent Dirichlet allocation (LDA) as an example: the main computational effort is to draw samples from the following distribution:

$$P(z_i = k \mid \mathbf{z}^{-i}, \mathbf{w}) \propto \left(n_{d,k}^{-i} + \alpha\right) \frac{n_{k,w_i}^{-i} + \beta}{n_k^{-i} + V\beta} \tag{1}\label{eq:lda}$$

where $n_{d,k}^{-i}$ is the number of tokens in the document $d$ assigned to the topic $k$, excluding the token $i$; $n_{k,w_i}^{-i}$ is the number of times the word $w_i$ is assigned to the topic $k$, excluding $i$; $n_k^{-i}$ is the total number of tokens assigned to the topic $k$, excluding $i$; $\alpha$ and $\beta$ are the Dirichlet hyper-parameters; and $V$ is the size of the vocabulary.
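As a concrete illustration, here is a minimal Python sketch of computing these un-normalized weights; the function name `topic_weights` and the count arrays `n_dk`, `n_kw`, and `n_k` are hypothetical names chosen for this example, not from any particular implementation:

```python
def topic_weights(d, w, n_dk, n_kw, n_k, alpha, beta, V):
    """Un-normalized weights p_k from Equation (1) for word w in document d.

    Assumes the counts already exclude the current token i.
    """
    K = len(n_k)  # number of topics
    return [(n_dk[d][k] + alpha) * (n_kw[k][w] + beta) / (n_k[k] + V * beta)
            for k in range(K)]
```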
So, a straightforward sampling algorithm works as follows:
- Let $p_k$ be the right-hand side of Equation \eqref{eq:lda} for topic $k$, which is an un-normalized probability.
- We compute the accumulated weights as: $P_k = P_{k-1} + p_k$ for $k > 1$ and $P_1 = p_1$.
- Draw $u \sim \mathrm{Uniform}(0, P_K)$ and find $k$ where $P_{k-1} < u$ and $u \le P_k$.
The last step is essentially to find the minimum index $k$ such that the array value $P_k$ is greater than the random number $u$.
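A minimal Python sketch of this procedure might look as follows; the function name `sample_discrete` is my own choice for illustration:

```python
import random
from itertools import accumulate


def sample_discrete(weights):
    """Draw an index k with probability proportional to weights[k].

    `weights` holds the un-normalized probabilities p_k from Equation (1).
    """
    # Accumulated weights: P_k = p_1 + ... + p_k.
    cumulative = list(accumulate(weights))
    # Draw u ~ Uniform(0, P_K).
    u = random.uniform(0.0, cumulative[-1])
    # Find the minimum index k such that P_k > u.
    for k, P_k in enumerate(cumulative):
        if P_k > u:
            return k
    return len(weights) - 1  # guard against floating-point edge cases
```

A binary search over the accumulated weights (e.g. with the `bisect` module) would reduce the lookup from $O(K)$ to $O(\log K)$, though building the accumulated array is still $O(K)$.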
One difficulty in dealing with \eqref{eq:lda} is that the right-hand side might be too small and therefore underflow (think of many near-zero numbers being multiplied together). Thus, we want to work with probabilities in log-space. We start by working with:
$$\log p_k = \log\left(n_{d,k}^{-i} + \alpha\right) + \log\left(n_{k,w_i}^{-i} + \beta\right) - \log\left(n_k^{-i} + V\beta\right) \tag{2}$$
and store $\log p_k$ in the array, but remember that each value now represents an un-normalized log probability. The next step is to compute the accumulated weights, this time as accumulated probabilities but in log-space! Thanks to the trick mentioned in [Notes on Calculating Log Sum of Exponentials], we are able to compute the log-sum efficiently. Please use the last equation there, $\log(e^a + e^b) = \max(a, b) + \log\left(1 + e^{-|a - b|}\right)$, to compute the accumulated weights $\log P_k$ from $\log P_{k-1}$ and $\log p_k$. The last step is to draw the random number: draw $u \sim \mathrm{Uniform}(0, 1)$, compute $\log u + \log P_K$, and again find the minimum index $k$ such that the array value $\log P_k$ is greater than $\log u + \log P_K$.
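Putting the pieces together, here is a minimal sketch of the log-space variant, assuming `log_weights` already holds the values from Equation (2); the helper name `log_add` is an assumption of this example:

```python
import math
import random


def log_add(a, b):
    """Compute log(exp(a) + exp(b)) stably: max(a, b) + log1p(exp(-|a - b|))."""
    if a < b:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))


def sample_discrete_log(log_weights):
    """Draw an index k with probability proportional to exp(log_weights[k])."""
    # Accumulated weights in log-space; `total` starts at log(0) = -inf.
    log_cumulative = []
    total = float("-inf")
    for lw in log_weights:
        total = log_add(total, lw)
        log_cumulative.append(total)
    # Draw u ~ Uniform(0, 1); 1 - random() lies in (0, 1], avoiding log(0).
    log_u = math.log(1.0 - random.random()) + log_cumulative[-1]
    # Find the minimum index k such that log P_k > log u + log P_K.
    for k, log_P_k in enumerate(log_cumulative):
        if log_P_k > log_u:
            return k
    return len(log_weights) - 1  # guard against floating-point edge cases
```

Note that the accumulated array $\log P_k$ is monotonically increasing, so the same minimum-index search from the original-space algorithm applies unchanged.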
Notes:

- The log-space sampling algorithm for LDA is implemented in the Fugue Topic Modeling Package.
- Unless you really face the issue of underflow, sampling in log-space is usually much slower than sampling in the original space, as log and exp are expensive functions to compute.