Topic model evaluation in Gensim - LDA

I've been experimenting with LDA topic modelling using Gensim. I couldn't seem to find any topic model evaluation facility in Gensim that could report the perplexity of a topic model on held-out evaluation texts, which would facilitate subsequent fine-tuning of LDA parameters (e.g. the number of topics). It would be greatly appreciated if anyone could shed some light on how I can perform topic model evaluation in Gensim. This question has also been posted on metaoptimize.

Found the answer on the gensim mailing list.
In short, the bound() method of LdaModel computes a variational lower bound on the log likelihood of a held-out corpus, from which a perplexity estimate can be derived.
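A minimal sketch of how this can be used in practice (my own illustration, not from the mailing list), assuming train_texts and heldout_texts are lists of tokenized documents:

    # Estimate held-out perplexity with gensim's LdaModel.
    from gensim import corpora, models

    dictionary = corpora.Dictionary(train_texts)
    train_corpus = [dictionary.doc2bow(doc) for doc in train_texts]
    heldout_corpus = [dictionary.doc2bow(doc) for doc in heldout_texts]

    lda = models.LdaModel(train_corpus, id2word=dictionary, num_topics=20)

    # log_perplexity() wraps bound(): it returns the per-word likelihood bound
    # on the held-out documents; gensim itself reports perplexity as 2**(-bound).
    per_word_bound = lda.log_perplexity(heldout_corpus)
    perplexity = 2 ** (-per_word_bound)
    print(per_word_bound, perplexity)

Repeating this for different numbers of topics (with the same held-out set) gives a simple way to compare parameter settings.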

Related

Multi-attention based supervised Feature Selection in Multivariate time series

I have been working on a multivariate time series problem. The dataset has at least 40 different factors. I tried to select only the appropriate features before training the model. I came across a paper called "A Multiattention-Based Supervised Feature Selection Method for Multivariate Time Series". The link to the paper: https://www.hindawi.com/journals/cin/2021/6911192/
The paper looks promising, but I could not find an implementation of it. I would like to know if anyone has come across a similar paper and knows how to implement the architecture described in it.
If not, I would like to know alternative methods for selecting only the appropriate features for my multivariate time series before training the model.

How do shared parameters in actor-critic models work?

Hello StackOverflow Community!
I have a question about Actor-Critic Models in Reinforcement Learning.
While listening to the policy gradient methods lectures from UC Berkeley, it is said that in actor-critic algorithms, where we optimize both our policy with some policy parameters and our value function with some value function parameters, some algorithms (e.g. A2C/A3C) use the same parameters in both optimization problems (i.e. policy parameters = value function parameters).
I could not understand how this works. I was thinking that we should optimize them separately. How does this shared parameter solution help us?
Thanks in advance :)
You can do it by sharing some (or all) layers of their networks. If you do, however, you are assuming that there is a common state representation (the intermediate layer output) that is optimal w.r.t. both. This is a very strong assumption and it usually doesn't hold. It has been shown to work for learning from images, where you put (for instance) an autoencoder on top of both the actor and the critic networks and train it using the sum of their loss functions.
This is mentioned in the PPO paper (just before Eq. (9)). However, they just say that they share layers only for learning Atari games, not for continuous control problems. They don't say why, but this can be explained as I said above: Atari games admit a compact learned state representation that is optimal for both the actor and the critic (e.g., the encoded image learned by an autoencoder), while for continuous control you usually pass a low-dimensional state (coordinates, velocities, ...) directly.
A3C, which you mentioned, was also used mostly for games (Doom, I think).
In my experience, sharing layers has never worked in control tasks where the state is already compact.
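In case it helps, here is a minimal PyTorch sketch (my own illustration, not from the lecture or the PPO paper) of what "sharing parameters" means architecturally: one trunk feeding both a policy head and a value head, trained on a weighted sum of the two losses.

    import torch
    import torch.nn as nn

    class SharedActorCritic(nn.Module):
        def __init__(self, obs_dim, n_actions, hidden=128):
            super().__init__()
            # Shared layers: gradients from both heads flow through this trunk,
            # so both objectives shape the same state representation.
            self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy_head = nn.Linear(hidden, n_actions)  # actor: action logits
            self.value_head = nn.Linear(hidden, 1)           # critic: state value

        def forward(self, obs):
            features = self.trunk(obs)
            return self.policy_head(features), self.value_head(features)

    # Training typically minimizes something like
    #   loss = policy_loss + value_coef * value_loss
    # so the shared trunk is pulled towards a representation useful for both.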

How to choose action in TD(0) learning

I am currently reading Sutton's book Reinforcement Learning: An Introduction. After reading chapter 6.1, I wanted to implement a TD(0) RL algorithm for this setting:
To do this, I tried to implement the pseudo-code presented here:
Doing this, I wondered how to perform the step "A <- action given by π for S": how can I choose the optimal action A for my current state S? As the value function V(S) depends only on the state and not on the action, I do not really know how this can be done.
I found this question (where I got the images from), which deals with the same exercise - but there the action is just picked randomly and not chosen by an action policy π.
Edit: Or is this pseudo-code not complete, so that I have to approximate the action-value function Q(s, a) in another way, too?
You are right, you cannot choose an action (nor derive a policy π) from a value function V(s) alone because, as you notice, it depends only on the state s.
The key concept that you are probably missing here is that TD(0) learning is an algorithm to compute the value function of a given policy. Thus, you are assuming that your agent is following a known policy. In the case of the Random Walk problem, the policy consists of choosing actions randomly.
If you want to be able to learn a policy, you need to estimate the action-value function Q(s, a). There exist several methods to learn Q(s, a) based on temporal-difference learning, such as SARSA and Q-learning.
In Sutton's RL book, the authors distinguish between two kinds of problems: prediction problems and control problems. The former refers to the process of estimating the value function of a given policy, and the latter to estimating policies (often by means of action-value functions). You can find a reference to these concepts at the beginning of Chapter 6:
As usual, we start by focusing on the policy evaluation or prediction problem, that of estimating the value function for a given policy π. For the control problem (finding an optimal policy), DP, TD, and Monte Carlo methods all use some variation of generalized policy iteration (GPI). The differences in the methods are primarily differences in their approaches to the prediction problem.
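To make the prediction/control distinction concrete, here is a small sketch of tabular TD(0) policy evaluation (my own illustration; env and its reset()/step() interface are assumed, not taken from the book's pseudo-code):

    import random
    from collections import defaultdict

    def td0_evaluate(env, policy, episodes=1000, alpha=0.1, gamma=1.0):
        """Estimate V(s) for a *given* policy; no action selection is learned."""
        V = defaultdict(float)
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                action = policy(state)                       # A <- action given by pi for S
                next_state, reward, done = env.step(action)  # assumed env interface
                # TD(0) update: V(S) <- V(S) + alpha * [R + gamma * V(S') - V(S)]
                V[state] += alpha * (reward + gamma * V[next_state] - V[state])
                state = next_state
        return V

    # For the Random Walk example the "given policy" just picks a direction at random:
    random_policy = lambda state: random.choice(["left", "right"])

Note that the policy is an input here; to learn a policy you would switch to SARSA or Q-learning, which estimate Q(s, a) instead.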

How to use previously generated topic-word distribution matrix for the new LDA topic generation process?

Let's say that we have executed the LDA topic generation process (with Gibbs sampling) once. Now for the next round of LDA topic generation, how to make use of the already existing topic matrix? Does any library support this kind of feature?
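One option I am aware of (my own suggestion, not from the thread, and it uses online variational Bayes rather than Gibbs sampling) is gensim's LdaModel, which can be saved after one round and later updated with new documents, so the previously learned topic-word matrix is the starting point for the next round:

    from gensim.models import LdaModel

    # First round: train and persist the model (dictionary and corpora are
    # assumed to be prepared elsewhere with the same vocabulary).
    lda = LdaModel(old_corpus, id2word=dictionary, num_topics=50)
    lda.save("lda_round1.model")

    # Next round: reload and continue training from the existing topic matrix.
    lda = LdaModel.load("lda_round1.model")
    lda.update(new_corpus)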

LDA topic distribution at training process and inference process

I have a question about LDA, a popular topic modeling technique.
An LDA model is created from a certain set of training documents.
Then, the topic distribution over the documents of a data set identical to the one used for training is inferred based on the LDA model.
In this case, is the topic distribution created during the training process the same as the one created during the inference process?
I'm asking because I have tried plda. It does not output the topic distribution during the training process, only during the inference process. So I think that if the resulting topic distributions are almost identical, I can use plda even though it has no output during training.