Input documents to LDA

Assume I have N text documents and I run LDA in the following 2 ways,
run LDA over the N documents at once
run on each document separately, so for N documents you run the algorithm N times
I'm also unsure about the number of topics to choose. In the first case I can select N to be the number of topics (assuming each document is about a single topic), but if I run it on each document separately, I'm not sure how to select the number of topics.
What's going on in these two cases?

Latent Dirichlet Allocation is intended to model the topic and word distributions for each document in a corpus of documents.
Running LDA over all of the documents in the corpus at once is the normal approach; running it on a per-document basis is not something I've heard of, and I wouldn't recommend it. It's difficult to say what would happen, but I wouldn't expect the results to be nearly as useful, because you couldn't meaningfully compare one document/topic or topic/word distribution with another.
I'm thinking that your choice of N for the number of topics might be too high (what if you had thousands of documents in your corpus?), but it really depends on the nature of the corpus you are modelling. Remember that LDA assumes a document will be a distribution over topics, so it might be worth rethinking the assumption that each document is about one topic.

LDA is a statistical model that assigns topics to documents. It works by distributing the words of each document over topics (randomly on the first pass), then repeating this step for a number of iterations (it could be 500) until the word-to-topic assignments are almost stable. At that point it can assign topics to a document according to the most frequent words in the document that have a high probability under a topic.
So it does not make sense to run it over one document: the words assigned to topics in the first iteration will not change over the iterations, because you are using only one document, and the topics assigned to the document will be meaningless.
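To make that concrete, here is a minimal sketch of the usual setup using gensim (the library choice and the toy three-document corpus are mine, purely for illustration): one model is fitted over the whole corpus at once, and each document then gets a distribution over topics rather than a single topic.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    "cats and dogs are common pets",
    "stock markets fell sharply today",
    "dogs chase cats in the park",
]  # toy corpus; replace with your N documents

tokenized = [doc.lower().split() for doc in docs]
dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]

# num_topics is a modelling choice; it does not have to equal the number of documents
lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)

# each document gets a distribution over topics, not a single topic
for i, bow in enumerate(bow_corpus):
    print(i, lda.get_document_topics(bow))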

Related

How to reveal relations between number of words and target with self-attention based models?

Transformers can handle variable-length input, but what if the number of words might correlate with the target? Let's say we want to perform sentiment analysis for some reviews, where longer reviews are more likely to be bad. How can the model harness this knowledge? Of course, a simple solution could be to add this count as a feature after the self-attention layer. However, this hand-crafted approach wouldn't reveal more complex relations: for example, a high count of word X might correlate with target 1, except when there is also a high count of word Y, in which case the target tends to be 0.
How could this information be included using deep learning? Paper recommendations on the topic are also much appreciated.
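To illustrate the "simple solution" mentioned above, here is a rough sketch (in PyTorch, with made-up layer sizes and names) of concatenating a length feature to the pooled self-attention output before the classification head; it only shows the mechanics, not the more complex interactions asked about.
import torch
import torch.nn as nn

class LengthAwareClassifier(nn.Module):
    """Pooled self-attention output concatenated with an explicit length feature."""
    def __init__(self, vocab_size=10000, d_model=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model + 1, num_classes)  # +1 for the length feature

    def forward(self, token_ids):                      # token_ids: (batch, seq_len), 0 = padding
        pad = token_ids == 0
        x = self.encoder(self.embed(token_ids), src_key_padding_mask=pad)
        lengths = (~pad).sum(1, keepdim=True).clamp(min=1)
        pooled = x.masked_fill(pad.unsqueeze(-1), 0).sum(1) / lengths
        length_feat = lengths.float().log()            # review length, fed in after self-attention
        return self.head(torch.cat([pooled, length_feat], dim=-1))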

Removing outlier documents from corpus

How can I remove outlier documents from corpus before passing it to LDA ?
I am doing topic modeling using LDA. I have a large amount of data from different websites. I want to classify the documents into 5 categories, but the presence of outlier documents gives inaccurate results.
Can anyone please help with this issue? I want only those articles related to one of the 5 categories to be present after classification.
You need to take a subset of your current dataset as input to your model. Are there particular characteristics of the articles that are outliers? If, for example, some articles are too long, you could subset by:
corpus = corpus[corpus['text'].str.len() < 1000]
Or, if you find some outliers manually, you could delete them manually by:
corpus = corpus[corpus['title'] != 'Stackoverflow saved my life']
Easy way: throw out words that are so frequent that they tell us little about the topics, as well as words that are too infrequent (appearing in fewer than 15 documents), then keep the 100,000 most frequent of the remaining words:
dictionary_15.filter_extremes(no_below=15, no_above=0.5, keep_n=100000)
Hard way: if you only want documents within a certain topic, you can build a two-layer LDA: first allocate topics generally, then build a second LDA by filtering the documents classified in the first layer to your target topic and allocating again. I would build an LDA with, say, five topics, output the assignments to a CSV, create a new document set by sorting and filtering in Alteryx (or even Excel, which might be easier than Python), and use that document set for the second step.
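Here is a rough sketch of that two-layer idea in gensim; the tokenized corpus, the number of topics, and the target topic index are all placeholders you would adjust.
from gensim import corpora
from gensim.models import LdaModel

# tokenized_docs: list of token lists for your corpus (assumed to exist already)
dictionary = corpora.Dictionary(tokenized_docs)
dictionary.filter_extremes(no_below=15, no_above=0.5, keep_n=100000)
bow = [dictionary.doc2bow(toks) for toks in tokenized_docs]

# first layer: a coarse 5-topic model
lda1 = LdaModel(corpus=bow, id2word=dictionary, num_topics=5, passes=10, random_state=0)

def dominant_topic(doc_bow):
    topics = lda1.get_document_topics(doc_bow)
    return max(topics, key=lambda t: t[1])[0]

# keep only documents whose dominant topic is the target (index 2 chosen arbitrarily)
target = 2
subset = [doc_bow for doc_bow in bow if dominant_topic(doc_bow) == target]

# second layer: refit on the filtered subset
lda2 = LdaModel(corpus=subset, id2word=dictionary, num_topics=5, passes=10, random_state=0)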

MALLET Ranking of Words in a topic

I am relatively new to mallet and need to know:
- are the words in each topic that mallet produces rank ordered in some way?
- if so, what is the ordering (i.e., is the 1st word in a topic list the one with the highest distribution across the corpus)?
Thanks!
They are ranked based on probabilities from the training, i.e. the first word is the most probable to appear in this topic, the 2nd is less probable, the 3rd even less, and so on. These are not directly related to term frequencies, although the words with the highest TF-IDF weights are surely more likely to be among the most probable. Also, Gibbs sampling has a lot to do with how words are ranked within topics: due to randomness in sampling you can get quite different probabilities for words within topics. Try, for example, saving the model and then retraining using the --input-model option; the topics will look very much alike, but not the same.
That said, if you need to see the actual weights of terms in the corpus, unrelated to LDA, you can use something like NLTK in Python to check frequency distributions, and something like scikit-learn for TF-IDF to get more meaningful weight distributions.
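For example, a quick sketch of both suggestions (the two-document corpus is just a placeholder):
from nltk import FreqDist
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]  # placeholder corpus

# raw term frequencies across the corpus (NLTK)
fdist = FreqDist(token for doc in docs for token in doc.lower().split())
print(fdist.most_common(5))

# TF-IDF weights (scikit-learn), which down-weight terms common to many documents
vec = TfidfVectorizer()
weights = vec.fit_transform(docs)
print(dict(zip(vec.get_feature_names_out(), weights.toarray()[0])))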

How to implement a simple Markov model to assign authors to anonymous texts?

Let's say I have harvested the posts from a forum. Then I removed all the usernames and signatures, so that now I only know what post was in which thread but not who posted what, or even how many authors there are (though clearly the number of authors cannot be greater than the number of texts).
I want to use a Markov model (look at which words/letters follow which ones) to figure out how many people used this forum, and which posts were written by the same person. To vastly simplify, perhaps one person tends to say "he were" while another tends to say "he was"; I'm talking about a model that works with this sort of basic logic.
Note how there are some obvious issues with the data: Some posts may be very short (one word answers). They may be repetitive (quoting each other or using popular forum catchphrases). The individual texts are not very long.
One could suspect that it would be rare for a person to make consecutive posts, or that people are more likely to post in threads they have already posted in. Exploiting this is optional.
Let's assume the posts are plaintexts and have no markup, and that everyone on the forum uses English.
I would like to obtain a distance matrix for all texts T_i such that D_ij is the probability that text T_i and text T_j are written by the same author, based on word/character pattern. I am planning to use this distance matrix to cluster the texts, and ask questions such as "What other texts were authored by the person who authored this text?"
How would I actually go about implementing this? Do I need a hidden MM? If so, what is the hidden state? I understand how to train an MM on a text and then generate a similar text (e.g. generating text in the style of Alice in Wonderland), but after I train a frequency tree, how do I check a text with it to get the probability that it was generated by that tree? Should I look at letters or words when building the tree?
My advice is to put aside the business about the distance matrix and think first about a probabilistic model P(text | author). Constructing that model is the hard part of your work; once you have it, you can compute P(author | text) via Bayes' rule. Don't put the cart before the horse: the model might or might not involve distance metrics or matrices of various kinds, but don't worry about that; just let it fall out of the model.
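As a toy illustration of that direction (character bigrams standing in for P(text | author), a uniform prior, and invented example posts; none of this comes from the question):
from collections import Counter
import math

def char_bigram_model(texts, alpha=1.0):
    # per-author character-bigram counts with add-alpha smoothing
    counts = Counter()
    for t in texts:
        counts.update(zip(t, t[1:]))
    total = sum(counts.values())
    vocab = 128 * 128          # crude smoothing denominator: assume an ASCII-ish alphabet
    return lambda a, b: (counts[(a, b)] + alpha) / (total + alpha * vocab)

def log_p_text_given_author(text, model):
    return sum(math.log(model(a, b)) for a, b in zip(text, text[1:]))

# toy "known" posts per candidate author; in practice these groupings are what you are trying to find
posts_by_author = {
    "A": ["he were going there", "he were not happy"],
    "B": ["he was going there", "he was very happy"],
}
models = {name: char_bigram_model(posts) for name, posts in posts_by_author.items()}

def p_author_given_text(text):
    # Bayes' rule with a uniform prior over the candidate authors
    log_post = {name: log_p_text_given_author(text, m) for name, m in models.items()}
    shift = max(log_post.values())
    unnorm = {name: math.exp(lp - shift) for name, lp in log_post.items()}
    total = sum(unnorm.values())
    return {name: v / total for name, v in unnorm.items()}

print(p_author_given_text("he were late"))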
You might want to take a look at Hierarchical Clustering. With this algorithm you can define your own distance function and it will give you clusters based on it. If you define a good distance function, the resulting clusters will correspond to one author each.
This is probably quite hard to do though and you might need a lot of posts to really get an interesting result. Nevertheless, I wish you good luck!
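If you go that route, a minimal sketch with SciPy might look like this; texts and text_distance are assumed to already exist, and the cut threshold is something you would have to tune:
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# texts is your list of posts; text_distance(a, b) is whatever stylometric distance you define
indices = np.arange(len(texts)).reshape(-1, 1)
condensed = pdist(indices, metric=lambda u, v: text_distance(texts[int(u[0])], texts[int(v[0])]))
tree = linkage(condensed, method="average")
author_labels = fcluster(tree, t=0.5, criterion="distance")  # the threshold is something to tune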
You mention a Markov model in your question. Markov models are about sequences of tokens and how one token depends on previous tokens and possibly internal state.
If you want to use probabilistic methods you might want to use a different kind of statistical model that is not so much based on sequences but on bags or sets of words or features.
For example, you could take the K most frequent words of the text and create all M-grams of tokens in each post, where the non-frequent words are replaced by empty placeholders. This could allow you to learn phrases commonly used by different authors.
In addition, you could use single words as features, so that a post gets as features all the words it contains (here you can ignore frequent words and use only rare words: the same authors might be interested in the same topics, use the same words, or make the same spelling mistakes).
Additionally, you can try to capture the style of authors in features: how many paragraphs, how long the sentences are, how many commas per sentence, whether the author uses capitalization, whether numbers are spelled out, etc. These are all features that are not sequences, as you would use in an HMM, but features assigned to each post.
In summary: even though sequences are certainly important to catch phrases you definitely want more than just a sequence model.
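A rough sketch of what such a per-post feature vector might look like (the feature choices here are arbitrary examples, not recommendations):
import re

def post_features(post):
    # toy per-post feature dict mixing style signals with bag-of-words indicators
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    words = post.split()
    features = {
        "n_paragraphs": post.count("\n\n") + 1,
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "commas_per_sentence": post.count(",") / max(len(sentences), 1),
        "uses_capitalisation": any(w[:1].isupper() for w in words),
    }
    # word indicators; in practice you would drop very frequent words and keep rarer ones
    for w in set(w.lower().strip(",.!?") for w in words):
        features["word:" + w] = 1
    return features

print(post_features("He were going, I think. He said so twice."))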

What is the time complexity of lookups in directed acyclic word graphs?

A directed acyclic word graph is a great data structure for certain tasks. I can't find any information on the time complexity of performing a lookup though.
I would guess it depends linearly on the average word length, and logarithmically on the number of words in the graph.
So is it O(L * log W), where W is the number of words and L is the average word length?
I think the complexity is just O(L). The number of operations is proportional to the length of the word, and it does not matter how many entries the structure has. (There might be differences based on the implementation of node searching, but even in the worst case and with the worst implementation that is just a constant, with an upper limit equal to the size of the alphabet.)
I’d say it’s just O(L). For each lookup of a word of n characters, you always follow at most n edges, irrespective of how many other edges there are.
(That’s assuming a standard DAWG in which each node has outgoing edges for every letter of the alphabet, i.e. 26 for English. Even if you have fewer outgoing edges per node and therefore more levels, the number of edges to follow is still at most a constant multiple of n, so we still get O(L).)
How many words you already have in your structure seems to be irrelevant.
Even if, at each step, you perform a linear search for the correct edge to follow from the current node, this is still constant-time because the alphabet is bounded, and therefore so is the number of outgoing edges from each node.
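A tiny sketch of why that works out to O(L) (the node layout is invented for the example): a lookup follows at most one edge per character, regardless of how many words the structure holds.
class Node:
    def __init__(self):
        self.edges = {}       # char -> Node; at most |alphabet| outgoing edges per node
        self.is_word = False

def lookup(root, word):
    # follows at most len(word) edges, so the work is O(L) no matter how many words are stored
    node = root
    for ch in word:
        node = node.edges.get(ch)
        if node is None:
            return False
    return node.is_word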