Improving performance of topic models by variable grouping
Topic models have a wide range of applications, including modeling of text documents, images, user preferences, product rankings, and many others. However, learning optimal models may be difficult, especially for large problems. The reason is that inference techniques such as Gibbs sampling often converge to suboptimal models due to the abundance of local minima in large datasets.
In this paper, we propose a general method of improving the performance of topic models. The method, called the "grouping transform", works by introducing auxiliary variables which represent assignments of the original model tokens to groups. Using these auxiliary variables, it becomes possible to resample an entire group of tokens at a time. This allows the sampler to make larger moves in the state space. As a result, better models are learned and performance is improved. The proposed ideas are illustrated on several topic models and several text and image datasets. We show that the grouping transform significantly improves performance over standard models.
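The abstract describes auxiliary group-assignment variables that let the sampler move an entire group of tokens to a new topic in one step, rather than resampling tokens one at a time. The following toy sketch illustrates that idea in Python; the variable names, group structure, and topic weights are illustrative assumptions, not the paper's actual model or algorithm.

```python
import random

random.seed(0)

# Toy setup (hypothetical, for illustration only): N tokens, K topics.
# Standard Gibbs sampling resamples each token's topic z[i] individually,
# which makes small state-space moves. The grouping transform, as described
# in the abstract, introduces auxiliary group assignments g[i]; all tokens
# in a group share one topic, so a single sampling step moves the whole
# group at once.

K = 3                      # number of topics (assumed)
tokens = list(range(12))   # 12 token indices (assumed)
g = [i % 4 for i in tokens]                              # group of each token
group_topic = {j: random.randrange(K) for j in set(g)}   # one topic per group

def resample_group(j, weights):
    """Assign every token in group j a topic drawn from `weights`,
    a stand-in for the model's conditional posterior over topics."""
    group_topic[j] = random.choices(range(K), weights=weights)[0]

# One move updates all three tokens of group 0 simultaneously.
resample_group(0, weights=[0.1, 0.8, 0.1])

# Induced per-token topic assignments.
z = [group_topic[g[i]] for i in tokens]
```

The key property is that tokens sharing a group always share a topic, so a single `resample_group` call changes several token assignments at once, which is what allows the sampler to escape configurations that per-token updates would leave stuck.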
Bart, E. Improving performance of topic models by variable grouping. 22nd International Joint Conference on Artificial Intelligence (IJCAI); 2011 July 16-22; Barcelona, Spain. Menlo Park, CA: AAAI Press; 2011: 1178-1185.
©2011, International Joint Conferences on Artificial Intelligence. All rights reserved. Not to be reproduced in any form without permission in writing from the publisher.