In *Infinite Latent Feature Models and the Indian Buffet Process*, the authors point out that Gibbs sampling inference in a latent class model depends on the following full conditional distribution of the latent class assignment (their Equation 17):

$$P(c_i = k \mid \mathbf{c}_{-i}, \mathbf{X}) \propto p(\mathbf{X} \mid \mathbf{c})\, P(c_i = k \mid \mathbf{c}_{-i})$$

The left-hand side of this equation is a general form. For LDA, it is $P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w})$, the probability of assigning the $i$-th word to topic $k$ given all the other assignments.

I am interested in the right-hand side, because the likelihood term $p(\mathbf{X} \mid \mathbf{c})$ looks nontrivial to compute. Do we really need to compute it in full? I do not think so, because $\mathbf{x}_{-i}$ is independent of $c_i$ given $\mathbf{c}_{-i}$.
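
As a sanity check on this claim, here is a toy numeric sketch (not from the paper: the two-class independent prior, emission matrix, and data below are all made up). It resamples one assignment $c_i$ twice, once using the full likelihood $p(\mathbf{X} \mid \mathbf{c})$ and once using only the factor $p(x_i \mid c_i)$, and confirms the normalized full conditionals agree:

```python
import numpy as np

emit = np.array([[0.7, 0.2, 0.1],   # p(x | c = 0), made-up emission rows
                 [0.1, 0.3, 0.6]])  # p(x | c = 1)
prior = np.array([0.4, 0.6])        # independent prior P(c_i = k), for simplicity
x = np.array([0, 2, 1, 2])          # observed symbols
c = np.array([0, 1, 0, 1])          # current assignments
i = 2                               # position being resampled

# Version 1: use the full likelihood p(X | c), as Eq. 17 literally says.
def full_conditional(k):
    c2 = c.copy()
    c2[i] = k
    return np.prod(emit[c2, x]) * prior[k]

p_full = np.array([full_conditional(k) for k in range(2)])
p_full /= p_full.sum()

# Version 2: use only p(x_i | c_i = k); the factors for x_{-i} do not
# depend on c_i, so they cancel when we normalize.
p_local = emit[:, x[i]] * prior
p_local /= p_local.sum()

print(np.allclose(p_full, p_local))  # True
```

The factors for $\mathbf{x}_{-i}$ are the same for every candidate value of $c_i$, so they drop out of the normalization, which is exactly why the full likelihood never needs to be evaluated.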

An example of the above statement is the derivation of LDA's full conditional distribution, where we did the following expansion:

$$p(\mathbf{w} \mid \mathbf{z}) = p(w_i \mid \mathbf{z}, \mathbf{w}_{-i})\, p(\mathbf{w}_{-i} \mid \mathbf{z}) = p(w_i \mid \mathbf{z}, \mathbf{w}_{-i})\, p(\mathbf{w}_{-i} \mid \mathbf{z}_{-i})$$

In the right-most factor, $z_i$ is omitted from the conditioning because given $z_i$ it is only possible to generate $w_i$, but not $\mathbf{w}_{-i}$. This factor therefore does not depend on $z_i$ and cancels after normalization.

Continuing the derivation, we have

$$P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto p(w_i \mid z_i = k, \mathbf{z}_{-i}, \mathbf{w}_{-i})\, P(z_i = k \mid \mathbf{z}_{-i})$$

This leads to the LDA Gibbs sampling rule appearing in the literature. So Eqn. 17 can be rewritten as

$$P(c_i = k \mid \mathbf{c}_{-i}, \mathbf{X}) \propto p(x_i \mid c_i = k, \mathbf{c}_{-i}, \mathbf{X}_{-i})\, P(c_i = k \mid \mathbf{c}_{-i})$$

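In code, this rule gives the familiar collapsed Gibbs sweep for LDA. Below is a minimal sketch, assuming symmetric Dirichlet priors $\alpha$, $\beta$ and the usual count matrices; the function name, toy corpus, and hyperparameter values are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_pass(docs, z, n_dk, n_kw, n_k, alpha, beta, V, rng):
    """One sweep of collapsed Gibbs sampling over every word position."""
    K = len(n_k)
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            k_old = z[d][i]
            # Decrement counts: this realizes the "-i" in z_{-i}.
            n_dk[d, k_old] -= 1
            n_kw[k_old, w] -= 1
            n_k[k_old] -= 1
            # p(w_i | z_i = k, z_{-i}, w_{-i}) * P(z_i = k | z_{-i}),
            # up to a normalizing constant.
            p = (n_kw[:, w] + beta) / (n_k + V * beta) * (n_dk[d] + alpha)
            p /= p.sum()
            k_new = rng.choice(K, p=p)
            # Put the word back under its newly sampled topic.
            z[d][i] = k_new
            n_dk[d, k_new] += 1
            n_kw[k_new, w] += 1
            n_k[k_new] += 1

# Toy corpus of word ids in [0, V); initialize topics at random.
V, K = 5, 2
docs = [[0, 1, 2, 0], [3, 4, 3], [1, 2, 2]]
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
n_dk = np.zeros((len(docs), K))
n_kw = np.zeros((K, V))
n_k = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        n_dk[d, z[d][i]] += 1
        n_kw[z[d][i], w] += 1
        n_k[z[d][i]] += 1

for _ in range(50):
    gibbs_pass(docs, z, n_dk, n_kw, n_k, alpha=0.1, beta=0.01, V=V, rng=rng)
```

Note that the sampler never evaluates the full likelihood: the decrement/sample/increment pattern computes only $p(w_i \mid z_i = k, \mathbf{z}_{-i}, \mathbf{w}_{-i})\,P(z_i = k \mid \mathbf{z}_{-i})$ for each candidate topic $k$.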