Deep Learning methods for Text Generation (PyTorch) - deep-learning

Greetings to everyone,
I want to design a system that can generate stories or poetry from a large dataset of text, without needing to feed a text description/start/summary as input at inference time.
So far I have done this using RNNs, but as you know they have a lot of flaws. My question is: what are the best methods for this task at the moment?
I looked into attention mechanisms, but it seems they are mostly suited to translation tasks.
I know about GPT-2, BERT, the Transformer, etc., but all of them need a text description as input before generation, and that is not what I'm looking for. I want a system that can generate stories from scratch after training.
Thanks a lot!

Edit
The comment was: I want to generate text from scratch, not starting from a given sentence at inference time. I hope that makes sense.
Yes, you can do that; it is just a small amount of code on top of the ready-made models, be it BERT, GPT-2, or an LSTM-based RNN.
How? You have to provide a random input to the model. Such a random input can be a randomly chosen word or phrase, or just a vector of zeros.
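For instance, here is a minimal sketch (assuming the HuggingFace transformers package) that seeds GPT-2 with only its special beginning-of-text token, so that no user prompt is needed at inference time:

```python
# Minimal sketch: generate "from scratch" by seeding GPT-2 with only its
# special token instead of a user-supplied prompt.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# GPT-2's "<|endoftext|>" token doubles as a document separator, so
# feeding it alone behaves like an empty prompt.
input_ids = tokenizer("<|endoftext|>", return_tensors="pt").input_ids

output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,   # sample rather than decode greedily
    top_k=50,
    temperature=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```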
Hope it helps.
You have mixed up several things here.
You can achieve what you want using either an LSTM-based or a transformer-based architecture.
When you say you did it with an RNN, you probably mean that you tried an LSTM-based sequence-to-sequence model.
Now, there is attention in your question. You can use attention to improve your RNN, but it is not required. If you use a transformer architecture, however, attention is built into the transformer blocks.
GPT-2 is nothing but a transformer-based model; its building block is the transformer architecture.
BERT is another transformer-based architecture.
So, to answer your question: you can (and should) try an LSTM-based or a transformer-based architecture to achieve what you want. Depending on how it is realized, such an architecture is sometimes called GPT-2 and sometimes BERT.
I encourage you to read this classic from Karpathy; if you understand it, most of your questions will be cleared up:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
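In the spirit of that post, here is a minimal character-level LSTM sketch in PyTorch (all names are illustrative). After training on your corpus, sampling is seeded with a random character, so no prompt is needed at inference time:

```python
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

@torch.no_grad()
def sample(model, vocab_size, length=200):
    # Seed generation with a random character index -- the "from scratch" part.
    idx = torch.randint(vocab_size, (1, 1))
    state, generated = None, []
    for _ in range(length):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        idx = torch.multinomial(probs, 1)   # sample the next character
        generated.append(idx.item())
    return generated  # map indices back to characters with your vocabulary
```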

Related

Stacking/chaining CNNs for different use cases

So I'm getting more and more into deep learning using CNNs.
I was wondering if there are examples of "chained" CNNs (I don't know what the correct term would be) - what I mean is using, e.g., a first CNN to perform a semantic segmentation task and feeding its output as input to a second CNN which, for example, performs a classification task.
My questions would be:
What is the correct term for this sequential use of neural networks?
Is there a way to pack multiple networks into one "big" network which can be trained in a single step, instead of training two models and combining them?
Also if anyone could maybe provide a link so I could read about that kind of stuff, I'd really appreciate it.
Thanks a lot in advance!
Sequential use of independent neural networks can have different interpretations:
The first model may be viewed as a feature extractor and the second one is a classifier.
It may be viewed as a special case of stacking (stacked generalization) with a single model on the first level.
It is common practice in deep learning to chain multiple models together and train them jointly. This is usually called end-to-end learning; see this answer about it, and the sketch below: https://ai.stackexchange.com/questions/16575/what-does-end-to-end-training-mean
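As an illustration, here is a minimal PyTorch sketch of packing two networks into one jointly trainable model; both sub-networks are toy stand-ins for real architectures:

```python
import torch
import torch.nn as nn

segmenter = nn.Sequential(                  # toy "segmentation" stage
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1),                     # 1-channel mask logits
)
classifier = nn.Sequential(                 # toy "classification" stage
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# Wrapping both stages in one module lets a single optimizer update all
# parameters end-to-end in one training step.
model = nn.Sequential(segmenter, classifier)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 3, 64, 64)               # dummy batch of images
labels = torch.randint(0, 10, (4,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()
optimizer.step()
```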

Information about Embeddings in the Allen Coreference Model

I'm an Italian student approaching the NLP world.
First of all I'd like to thank you for the amazing work you've done with the paper "Higher-order Coreference Resolution with Coarse-to-fine Inference".
I am using the model provided by the allennlp library, and I have two questions for you:
1. In https://demo.allennlp.org/coreference-resolution it is written that the embedding used is SpanBERT. Is this a BERT embedding trained independently of the coreference task? I mean, could I use this embedding simply as a model pretrained on English to embed sentences (e.g., like https://huggingface.co/facebook/bart-base)?
2. Is it possible to modify the code so that it returns, along with the coreference prediction, the aforementioned embeddings of each sentence?
I really hope you can help me.
Thank you in advance for your time and help.
Sincerely,
Emanuele Gusso
SpanBERT is a version of BERT pre-trained to produce useful embeddings on text spans. SpanBERT itself has nothing to do with coreference resolution. The original paper is https://arxiv.org/abs/1907.10529, and the original source code is https://github.com/facebookresearch/SpanBERT, though you might have an easier time using the huggingface version at https://huggingface.co/SpanBERT.
It is definitely possible to get the embeddings as output, along with the coreference predictions. I recommend cloning https://github.com/allenai/allennlp-models, getting it to run in your environment, and then changing the code until it gives you the output you want.
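For the first question, a hedged sketch of using SpanBERT as a generic pretrained encoder might look like this; the checkpoint name and the choice of BERT's cased tokenizer (SpanBERT reuses BERT's vocabulary) are assumptions to verify against the hub:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name from https://huggingface.co/SpanBERT;
# SpanBERT shares BERT's cased vocabulary, hence the tokenizer choice.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("SpanBERT/spanbert-base-cased")

inputs = tokenizer("Barack Obama was born in Hawaii.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embedding for every token: shape (batch, seq_len, hidden_size)
token_embeddings = outputs.last_hidden_state
```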

Creating a dataset of images for object detection for extremely specific task

Even though I am quite familiar with the concepts of machine learning and deep learning, I have never needed to create my own dataset before.
Now, for my thesis, I have to create my own dataset with images of an object for which no datasets are available on the internet (just assume this is the case).
I have limited computational power, so I want to use YOLO, SSD, or EfficientDet.
Do I need to go over every single image in my dataset by eye and create bounding-box center coordinates and dimensions, logging them with their labels?
Thanks
Yes, you will need to do that.
At the same time, though the task is niche, you can benefit from transfer learning: use a pre-trained backbone to help your model learn faster, achieve better results, and need fewer annotated examples. You will still need to annotate the new dataset on your own; see the sketch below.
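For illustration, here is the usual torchvision pattern for detection transfer learning; it uses Faster R-CNN rather than the lighter detectors you mention, purely because torchvision ships it, and the class count is illustrative:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Detector pre-trained on COCO; the backbone's features transfer.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

num_classes = 2  # your object + background (hypothetical)
in_features = model.roi_heads.box_predictor.cls_score.in_features
# Replace the pre-trained head with one sized for your classes,
# then fine-tune on your (smaller) annotated dataset.
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```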
As a starting point, you can use software such as LabelBox. It is very good since it can output annotations in Pascal VOC, YOLO, and COCO formats, so it is a matter of choosing what is most suitable for you.
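For reference, YOLO-style annotations store one object per line as <class_id> <x_center> <y_center> <width> <height>, normalized to the image size. A small illustrative helper:

```python
def yolo_line(class_id, x_center, y_center, width, height, img_w, img_h):
    """Convert a pixel-space box (center + size) into a YOLO label line,
    with all coordinates normalized to [0, 1]."""
    return (f"{class_id} {x_center / img_w:.6f} {y_center / img_h:.6f} "
            f"{width / img_w:.6f} {height / img_h:.6f}")

# e.g. a 100x80 px box centered at (320, 240) in a 640x480 image, class 0:
print(yolo_line(0, 320, 240, 100, 80, 640, 480))
# -> "0 0.500000 0.500000 0.156250 0.166667"
```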

Does BERT and other language attention model only share cross-word information in the initial embedding stage?

I study visual attention models but have recently been reading up on BERT and other language attention models to fill a serious gap in my knowledge.
I am a bit confused by what I seem to be seeing in these model architectures. Given a sentence like "the cat chased the dog", I would have expected cross-information streams between the embeddings of each word. For example, I would have expected to see a point in the model where the embedding for "cat" is combined with the embedding for "dog" in order to create the attention mask.
Instead, what I seem to be seeing (correct me if I am wrong) is that the embedding of a word like "cat" is initially set up to include information about the words around it, so that the embedding of each word includes all of the other words around it. Each of these embeddings is then passed through the model in parallel. This seems weird and redundant to me. Why would they set up the model this way?
If we were to block out "cat", as in "the ... chased the dog", would we then, during inference, only need to send the "..." embedding through the model?
The embeddings don't contain any information about the other embeddings around them. BERT and other models like OpenGPT/GPT-2 don't have context-dependent inputs.
The context-related part comes later. Attention-based models use these input embeddings to create other vectors, which then interact with each other through various matrix multiplications, summations, and normalizations; this helps the model understand context, which in turn helps it do interesting things, including language generation.
When you say 'I would have expected to see a point in the model where the embedding for "cat" is combined with the embedding for "dog", in order to create the attention mask', you are right: that does happen, just not at the embedding level. We create further vectors by multiplying the embeddings with learned matrices, and those vectors then interact with each other, as the sketch below illustrates.
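Here is a minimal single-head self-attention sketch in PyTorch that makes this concrete: the "cat" and "dog" positions interact through the learned query/key/value projections, not in the input embeddings themselves:

```python
import math
import torch
import torch.nn as nn

d_model = 64
# "the cat chased the dog" -> five (here random) input embeddings
seq = torch.randn(1, 5, d_model)

W_q = nn.Linear(d_model, d_model, bias=False)  # learned projection matrices
W_k = nn.Linear(d_model, d_model, bias=False)
W_v = nn.Linear(d_model, d_model, bias=False)

q, k, v = W_q(seq), W_k(seq), W_v(seq)

# Every token's query is compared against every other token's key, so
# "cat" (position 1) attends to "dog" (position 4) right here.
scores = q @ k.transpose(-2, -1) / math.sqrt(d_model)  # shape (1, 5, 5)
weights = torch.softmax(scores, dim=-1)
contextual = weights @ v    # context-mixed vectors, one per token
```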

Where to find deep learning based prediction model

I need to find a deep-learning-based prediction model. Where can I find one?
You can use PyTorch and TensorFlow pretrained models.
https://pytorch.org/docs/stable/torchvision/models.html
They can be downloaded automatically. Here is some sample code you can try:
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
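As a minimal illustration (the weights download automatically on first use):

```python
import torch
from torchvision import models

model = models.resnet18(pretrained=True)  # downloads ImageNet weights
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)   # one fake RGB image
    logits = model(dummy)                 # (1, 1000) ImageNet class scores
print(logits.argmax(dim=1))
```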
If you are interested in deep learning, I suggest you review the basics in Stanford's CS231n. Your question is a bit odd, because you first need to define your task specifically; "prediction" is not a good description. You could look for models for classification, segmentation, object detection, sequence-to-sequence tasks (like translation), and so on.
Then you need to know how to search through projects on GitHub, you usually need to know Python, and then you can use a pretrained model directly or train/fine-tune it on your own dataset. After that, you can hope you have found a good model for your task and validate the results on a test set. However, deploying a model in real-life scenarios is another matter where you need to consider many other things, and you often need some online-learning strategy, such as federated learning. I hope I could help.