I am relatively new to mallet and need to know:
- are the words in each topic that mallet produces rank ordered in some way?
- if so, what is the ordering (i.e., is the 1st word in a topic list the one with the highest distribution across the corpus)?
Thanks!
They are ranked by the probabilities estimated during training, i.e. the first word is the most probable to appear in this topic, the second is less probable, the third even less, and so on. These are not directly related to term frequencies, although the words with the highest TF-IDF weights do tend to be among the most probable. Gibbs sampling also has a lot to do with how words are ranked in topics: due to randomness in sampling, you can get quite different probabilities for words within topics. Try, for example, saving the model and then retraining with the --input-model option; the topics will look very much alike, but not the same.
That said, if you need to see the actual weights of terms in the corpus, unrelated to LDA, you can use something like NLTK in Python to check frequency distributions, and something like scikit-learn's TF-IDF to get more meaningful weight distributions.
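For illustration, a minimal sketch of that idea, assuming a small list of raw strings called `docs` (the names and toy data are mine, not from the question):

```python
from nltk import FreqDist
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; in practice these would be your own documents.
docs = ["the cat sat on the mat", "the dog chased the cat"]

# Corpus-wide raw term frequencies via NLTK
fd = FreqDist(w for d in docs for w in d.lower().split())
print(fd.most_common(5))

# TF-IDF weights per term and document via scikit-learn
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
for term, idx in sorted(vec.vocabulary_.items()):
    print(term, X[:, idx].toarray().ravel())
```

Neither of these weightings is what MALLET uses to order topic words; they are just corpus-level term statistics.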
Transformers can handle variable-length input, but what if the number of words correlates with the target? Say we want to perform sentiment analysis on reviews where longer reviews are more likely to be bad. How can the model harness this knowledge? Of course, a simple solution could be to add this count as a feature after the self-attention layer. However, this hand-crafted approach wouldn't capture more complex relations, for example: a high count of word X correlates with target 1, unless there is also a high count of word Y, in which case the target tends to be 0.
How could this information be included using deep learning? Paper recommendations on the topic are also appreciated.
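Not an authoritative answer, but here is a minimal PyTorch sketch of the "add the count as a feature" idea from the question, with the length concatenated to a pooled encoder output so the classification head can learn interactions between length and content (the class and parameter names here are hypothetical):

```python
import torch
import torch.nn as nn

class ReviewClassifier(nn.Module):
    """Sketch: concatenate a (normalized) token count with a mean-pooled
    transformer-encoder representation before the classification head."""
    def __init__(self, vocab_size, d_model=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model + 1, num_classes)  # +1 for the length feature

    def forward(self, token_ids, pad_mask):
        # pad_mask: True where the position is padding
        x = self.encoder(self.embed(token_ids), src_key_padding_mask=pad_mask)
        keep = (~pad_mask).unsqueeze(-1).float()
        lengths = keep.sum(dim=1)                      # shape (batch, 1)
        pooled = (x * keep).sum(dim=1) / lengths       # masked mean pooling
        # Concatenating the length lets the head learn interactions between
        # review length and content, rather than treating length in isolation.
        return self.head(torch.cat([pooled, lengths / 100.0], dim=1))

# Tiny usage example with made-up token ids (0 = padding)
ids = torch.tensor([[5, 7, 9, 0, 0], [3, 4, 6, 8, 2]])
logits = ReviewClassifier(vocab_size=20)(ids, ids.eq(0))
print(logits.shape)  # torch.Size([2, 2])
```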
I am trying to build a deep neural network that takes in a set of documents and predicts the category they belong to.
Since the number of documents in each collection is not fixed, my first attempt was to get a vector for each document from Doc2Vec and use the average.
The training accuracy is as high as 90%, but the testing accuracy is as low as 60%.
Is there a better way of representing a collection of documents as a fixed-length vector so that the words they have in common are captured?
The description of your process so far is a bit vague and unclear – you may want to add more detail to your question.
Typically, Doc2Vec would convert each doc to a vector, not "a collection of documents".
If you did try to collapse a collection into a single vector – for example, by averaging many doc-vecs, or calculating a vector for a synthetic document with all the sub-documents' words – you might be losing valuable higher-dimensional structure.
To "predict the category" would be a typical "classification" problem, and with a bunch of documents (represented by their per-doc vectors) and known-labels, you could try various kinds of classifiers.
I suspect from your description, that you may just be collapsing a category to a single vector, then classifying new documents by checking which existing category-vector they're closest-to. That can work – it's vaguely a K-Nearest-Neighbors approach, but with every category reduced to one summary vector rather than the full set of known examples, and each classification being made by looking at just one single nearest-neighbor. That forces a simplicity on the process that may not match the "shapes" of the real categories as well as a true KNN classifier, or other classifiers, could achieve.
If accuracy on test data falls far below that observed during training, that can indicate that significant "overfitting" is occurring: the model(s) are essentially memorizing idiosyncrasies of the training data to "cheat" at answers based on arbitrary correlations, rather than learning generalizable rules. Making your model(s) smaller – such as by decreasing the dimensionality of your doc-vectors – may help in such situations, by giving the model less extra state in which to remember peculiarities of the training data. More data can also help, as the "noise" in more numerous varied examples tends to cancel itself out, rather than achieve the sort of misguided importance that can be learned in smaller datasets.
There are other ways to convert a variable-length text into a fixed-length vector, including many based on deeper learning algorithms. But those can be even more training-data-hungry, and it seems like you may have other factors to improve before trying those in lieu of Doc2Vec.
We're running LDA using gensim and we're getting some strange results for perplexity. We're finding that both perplexity and topic diff increase as the number of topics increases; we were expecting them to decline. We've tried many different numbers of topics (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 50, 100). We've also played around with alpha (symmetric and auto) and keep getting the same results.
Our documents have 20+ words, but most of them are in the 20-30 word range. Are these documents too small for LDA to work?
Should we try to increase the amount of training data (we are currently running on 100k documents)? Or increase the number of passes (though it looks like it has converged)?
Thanks
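For reference, a minimal gensim sketch of the kind of sweep described above (the toy corpus and parameter values are mine); note that `log_perplexity` returns a per-word likelihood bound rather than the perplexity itself:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy corpus; in practice `texts` would be the 100k tokenized documents.
texts = [["topic", "model", "word"], ["word", "count", "bag"],
         ["bag", "of", "words"], ["topic", "bag", "count"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

for k in (2, 5, 10):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   passes=10, random_state=0)
    bound = lda.log_perplexity(corpus)   # per-word likelihood bound
    print(k, bound, 2 ** (-bound))       # perplexity = 2 ** (-bound)
```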
Assume I have N text documents and I run LDA in the following two ways:
run LDA over the N documents at once
run on each document separately, so for N documents you run the algorithm N times
I'm also wondering about the number of topics to choose: in the first case I can select N to be the number of topics (assuming each document is about a single topic), but if I run it on each document separately, I'm not sure how to select the number of topics.
What's going on in these two cases?
Latent Dirichlet Allocation is intended to model the topic and word distributions for each document in a corpus of documents.
Running LDA over all of the documents in the corpus at once is the normal approach; running it on a per-document basis is not something I've heard of, and I wouldn't recommend doing it. It's difficult to say what would happen, but I wouldn't expect the results to be nearly as useful, because you couldn't meaningfully compare one document/topic or topic/word distribution with another.
I'm thinking that your choice of N for the number of topics might be too high (what if you had thousands of documents in your corpus?), but it really depends on the nature of the corpus you are modelling. Remember that LDA assumes a document will be a distribution over topics, so it might be worth rethinking the assumption that each document is about one topic.
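As an illustration of the usual approach (one model over the whole corpus, with each document getting a distribution over the shared topics), here is a hedged gensim sketch with a toy corpus:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy corpus standing in for the N documents.
texts = [["apple", "banana", "fruit"], ["goal", "match", "score"],
         ["fruit", "juice", "apple"], ["league", "match", "win"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# One model fitted on all documents at once
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=20, random_state=0)

# Each document gets a distribution over the same shared topics,
# so the documents are directly comparable with one another.
for i, bow in enumerate(corpus):
    print(i, lda.get_document_topics(bow))
```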
LDA is a statistical model that assigns topics to documents. It works by distributing the words of each document over topics (randomly the first time), then repeating this step for a number of iterations (500 iterations, for example) until the assignment of words to topics is almost stable. It can then assign topics to a document according to the most frequent words in the document that have a high probability under each topic.
So it does not make sense to run it over one document: the words assigned to the topics in the first iteration will hardly change over the iterations, because you are using only one document, and the topics assigned to the document will be meaningless.
Let's say I have harvested the posts from a forum. Then I removed all the usernames and signatures, so that now I only know what post was in which thread but not who posted what, or even how many authors there are (though clearly the number of authors cannot be greater than the number of texts).
I want to use a Markov model (looking at which words/letters follow which ones) to figure out how many people used this forum, and which posts were written by the same person. To vastly simplify, perhaps one person tends to say "he were" while another tends to say "he was"; I'm talking about a model that works with this sort of basic logic.
Note that there are some obvious issues with the data: some posts may be very short (one-word answers), they may be repetitive (quoting each other or using popular forum catchphrases), and the individual texts are not very long.
One could suspect that it would be rare for a person to make consecutive posts, or that people are more likely to post in threads they have already posted in. Exploiting this is optional.
Let's assume the posts are plaintexts and have no markup, and that everyone on the forum uses English.
I would like to obtain a distance matrix for all texts T_i such that D_ij is the probability that text T_i and text T_j are written by the same author, based on word/character pattern. I am planning to use this distance matrix to cluster the texts, and ask questions such as "What other texts were authored by the person who authored this text?"
How would I actually go about implementing this? Do I need a hidden Markov model? If so, what is the hidden state? I understand how to train a Markov model on a text and then generate a similar text (e.g., generating text in the style of Alice in Wonderland), but after I train a frequency tree, how do I check a text against it to get the probability that it was generated by that tree? Should I look at letters or words when building the tree?
My advice is to put aside the business about the distance matrix and think first about a probabilistic model P(text | author). Constructing that model is the hard part of your work; once you have it, you can compute P(author | text) via Bayes' rule. Don't put the cart before the horse: the model might or might not involve distance metrics or matrices of various kinds, but don't worry about that; just let it fall out of the model.
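As a rough illustration of P(text | author) and the Bayes step, here is a sketch using a character-bigram Markov model with add-one smoothing; the helper names and toy texts are hypothetical:

```python
import math
from collections import defaultdict

def train_bigrams(texts):
    """Count character-bigram transitions over an author's texts."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in texts:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def log_prob(text, counts, alphabet_size=256):
    """log P(text | author) under the bigram model, with add-one smoothing."""
    lp = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + 1) / (total + alphabet_size))
    return lp

# Toy "authors"; P(author | text) is proportional to P(text | author) * P(author)
model_a = train_bigrams(["he were going home", "they were late again"])
model_b = train_bigrams(["he was going home", "she was late again"])
text = "he were tired"
print("author A:", log_prob(text, model_a))
print("author B:", log_prob(text, model_b))
```

With per-author priors P(author), the posterior is proportional to exp(log_prob) times the prior; comparing log-likelihoods is usually enough to rank candidate authors.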
You might want to take a look at Hierarchical Clustering. With this algorithm you can define your own distance function and it will give you clusters based on it. If you define a good distance function, the resulting clusters will correspond to one author each.
This is probably quite hard to do, though, and you might need a lot of posts to really get an interesting result. Nevertheless, I wish you good luck!
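A minimal sketch of that idea with SciPy, assuming you already have a symmetric dissimilarity matrix (e.g. 1 - P(same author)); the toy values are illustrative only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric dissimilarity matrix, e.g. dist[i, j] = 1 - P(same author)
dist = np.array([[0.0, 0.2, 0.9],
                 [0.2, 0.0, 0.8],
                 [0.9, 0.8, 0.0]])

condensed = squareform(dist)              # SciPy expects a condensed matrix
Z = linkage(condensed, method="average")  # agglomerative clustering

# Cut the dendrogram at a dissimilarity threshold; ideally each resulting
# cluster corresponds to one author.
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)
```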
You mention a Markov model in your question. Markov models are about sequences of tokens and how one token depends on previous tokens and possibly internal state.
If you want to use probabilistic methods you might want to use a different kind of statistical model that is not so much based on sequences but on bags or sets of words or features.
For example, you could use the K most frequent words of the text and create all M-grams of tokens in each post, where the non-frequent words are replaced by empty placeholders. This could allow you to learn phrases commonly used by different authors.
In addition, you could use single words as features, so that a post gets as features all the words in the post (here you can ignore frequent words and use only rare words; the same authors might be interested in the same topics, use the same words, or make the same spelling mistakes).
Additionally, you can try to capture the style of authors in features: how many paragraphs there are, how long the sentences are, how many commas per sentence, whether the author uses capitalization, whether numbers are spelled out, and so on. These are all features that are not sequences as you would use in an HMM, but features assigned to each post.
In summary: even though sequences are certainly important for catching phrases, you definitely want more than just a sequence model.
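A small sketch of the kind of per-post style features described above (the exact feature set and normalizations are illustrative, not a standard recipe):

```python
import re

def style_features(post):
    """Per-post stylometric features (illustrative choices only)."""
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    words = post.split()
    return {
        "n_paragraphs": post.count("\n\n") + 1,
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "commas_per_sentence": post.count(",") / max(len(sentences), 1),
        "capitalized_ratio": sum(w[0].isupper() for w in words) / max(len(words), 1),
        "digit_ratio": sum(c.isdigit() for c in post) / max(len(post), 1),
    }

print(style_features("He was late, again. Typical!"))
```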