List of fastai models

fastai uses AWD-LSTM for text processing and provides pretrained models via get_language_model(), but I can't find proper documentation on what's available.
Their GitHub example page is really a moving target: model names such as lstm_wt103 and WT103_1 are used, and in the forums I found wt103RNN.
Where can I find an up-to-date list of pretrained models and their download URLs?

The URLs class is defined in fastai.datasets; it has constants for two models: WT103 and WT103_1.
The AWS bucket hosts just those two models.
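A quick way to check what your installed version actually ships with (a minimal sketch, assuming fastai v1, where fastai.datasets defines the URLs class as described above):

# a sketch, assuming fastai v1, where fastai.datasets defines the URLs class
from fastai.datasets import URLs

# the two pretrained language-model constants mentioned above
print(URLs.WT103, URLs.WT103_1)

# list every URL constant your installed version ships with
print([name for name in dir(URLs) if not name.startswith('_')])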

Is it possible to pull custom fields from SolidWorks files on Forge?

I have translated SolidWorks files on Autodesk Forge; however, the Forge metadata / objects / properties call for these files only provides the objectid and name. I know I've got several custom fields in the files; I'm just wondering if I have to wire up some strange way to pull them out before sending them, figuring it may not be supported through the Forge API. Thanks!
The Model Derivative service usually does a pretty good job of extracting the metadata of your designs. Note, however, that the metadata might be available on a different level in the logical hierarchy.
Here's the metadata I see in one of my sample SolidWorks files when I simply click on one of the parts, and here's what I see when I select its parent element instead: [screenshots omitted]
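To inspect those properties programmatically, you can query the Model Derivative metadata endpoints directly; a minimal sketch (the URN, token, and property handling are placeholders, not working values):

import requests

URN = "<base64-encoded-design-urn>"   # placeholder
TOKEN = "<oauth-access-token>"        # placeholder
BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"
headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. List the viewable metadata GUIDs of the translated design
meta = requests.get(f"{BASE}/{URN}/metadata", headers=headers).json()
guid = meta["data"]["metadata"][0]["guid"]

# 2. Fetch the full property set; custom fields often sit on a parent
#    node in the logical hierarchy rather than on the leaf part itself
props = requests.get(f"{BASE}/{URN}/metadata/{guid}/properties", headers=headers).json()
print(props)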

Building a Pipeline Model using allennlp

I am pretty new to allennlp and I am struggling to build a model that does not seem to fit perfectly into the standard way of building models in allennlp.
I want to build an NLP pipeline model. The pipeline basically consists of two models, let's call them A and B. First A is trained, and then B is trained on A's predictions over the full training set.
What I have seen is that people define two separate models and train both using the command-line interface (allennlp train ...) in a shell script that looks like this:
# set a bunch of environment variables
...
allennlp train -s $OUTPUT_BASE_PATH_A --include-package MyModel --force $CONFIG_MODEL_A
# prepare environment variables for model b
...
allennlp train -s $OUTPUT_BASE_PATH_B --include-package MyModel --force $CONFIG_MODEL_B
I have two concerns about that:
This code is hard to debug
It's not very flexible. When I want to do a forward pass of the fully trained model, I have to write yet another bash script that does that.
Any ideas on how to do that in a better way?
I thought about using a Python script instead of a shell script and invoking allennlp.commands.main(..) directly. That way you at least have a single Python module that you can run under a debugger.
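A minimal sketch of that idea (the config paths and output directories are placeholders):

# run both training steps from one debuggable Python entry point;
# allennlp.commands.main() parses sys.argv just like the CLI does
import sys
from allennlp.commands import main

for config, out_dir in [("configs/model_a.jsonnet", "output/model_a"),
                        ("configs/model_b.jsonnet", "output/model_b")]:
    sys.argv = ["allennlp", "train", config,
                "-s", out_dir, "--include-package", "MyModel", "--force"]
    main()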
There are two possibilities.
If you're really just plugging the output of one model into the input of another, you could merge them together into one model and run it that way. You can do this with two already-trained models if you initialize the combined model with the two trained models using a from_file model. To do it at training time is a little harder, but not impossible. You would train the first model like you do now. For the second step, you train the combined model directly, with the inner first model's weights frozen.
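A rough sketch of such a combined model (the class, its registration name, and how B consumes A's output are all illustrative, not AllenNLP built-ins):

import torch
from allennlp.models import Model
from allennlp.models.archival import load_archive

@Model.register("pipeline_a_then_b")        # illustrative name
class PipelineModel(Model):
    def __init__(self, vocab, model_a_archive: str, model_b: Model):
        super().__init__(vocab)
        # load the already-trained model A and freeze its weights
        self.model_a = load_archive(model_a_archive).model
        for p in self.model_a.parameters():
            p.requires_grad = False
        self.model_b = model_b

    def forward(self, **inputs):
        with torch.no_grad():               # A is frozen, no gradients needed
            a_output = self.model_a(**inputs)
        # how B consumes A's predictions depends on your task
        return self.model_b(a_output=a_output, **inputs)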
The other thing you can do is use AllenNLP as a library, without the config files. We have a template up on GitHub that shows you how to do this. The basic insight is that everything you configure in one of the Jsonnet configuration files corresponds 1:1 to a Python class that you can use directly from Python. There is no requirement to use the configuration files. If you use AllenNLP this way, you have much more flexibility, including chaining things together.
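For instance, a dataset reader you would normally declare in Jsonnet as {"type": "text_classification_json"} is just a class you can instantiate yourself (a sketch, assuming a recent allennlp version; the reader choice and file name are only an illustration):

from allennlp.data.dataset_readers import TextClassificationJsonReader

reader = TextClassificationJsonReader()       # same defaults as the config version
instances = list(reader.read("train.jsonl"))  # placeholder path
print(len(instances))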

Ray RLlib: Export policy for external use

I have a PPO-based policy model that I train with RLlib using the Ray Tune API on some standard gym environments (with no fancy preprocessing). I have model checkpoints saved which I can load from and restore for further training.
Now, I want to export my model for production onto a system that should ideally have no dependencies on Ray or RLLib. Is there a simple way to do this?
I know that there is an export_model interface in the rllib.policy.tf_policy class, but it doesn't seem particularly easy to use. For instance, after calling export_model('savedir') in my training script and, in another context, loading via model = tf.saved_model.load('savedir'), the resulting model object is troublesome to feed the correct inputs into for evaluation (something like model.signatures['serving_default'](gym_observation) doesn't work). I'm ideally looking for a method that would allow easy out-of-the-box model loading and evaluation on observation objects.
Once you have restored from a checkpoint with agent.restore(checkpoint_path), you can use agent.export_policy_model(output_dir) to export the model as a .pb file and a variables folder.
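Put together, a minimal sketch (assuming a TF-backed PPO agent; the config, env, and paths are placeholders):

from ray.rllib.agents.ppo import PPOTrainer

config = {"framework": "tf"}                  # minimal config; match your training setup
agent = PPOTrainer(config=config, env="CartPole-v0")
agent.restore("path/to/checkpoint")           # restore the trained weights
agent.export_policy_model("exported_model")   # writes saved_model.pb + variables/

# the export is a plain TF SavedModel, loadable with no Ray/RLlib dependency
import tensorflow as tf
loaded = tf.saved_model.load("exported_model")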

How to create a keypoint detection model for human with custom dataset

I am trying to build a keypoint detection model for humans. There are many pretrained networks available that generate keypoints, but I want to practice creating a keypoint detection model with a custom dataset myself. I can't find anything on the web; if someone has some info, please share.
I want more points, specific to the human body, so I need to create a custom model that generates those keypoints. I checked some annotation tools, but they only help adjust the points already specified in datasets like COCO; I don't think we can add more points to the image. I just want to build a new model with custom keypoints.
Please share your views on the problem, and please suggest some links if you have any ideas about it.
I have created a detailed GitHub repo, Custom Keypoint Detection, covering dataset preparation, model training, and inference with the CenterNet-hourglass104 keypoint detection model based on the TensorFlow Object Detection API, with examples.
This could help you train your keypoint detection model on a custom dataset.
Any issues related to the project can be raised on GitHub itself, and doubts can be cleared here.
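As for adding your own points: in COCO-style annotations the keypoint list is just part of the category you define yourself, so nothing limits you to the 17 standard person keypoints. A minimal sketch of such a category entry (the names and skeleton edges are illustrative):

# COCO-style category with custom keypoints; names/edges are illustrative
custom_person_category = {
    "id": 1,
    "name": "person",
    "keypoints": ["head", "neck", "chest", "left_hip", "right_hip"],
    "skeleton": [[1, 2], [2, 3], [3, 4], [3, 5]],   # 1-based keypoint indices
}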

Can I export models from RapidMiner (similar to DataRobot Prime)?

I have seen some AutoML tools that can export models (including the features) as an approximate model in Python. For example, DataRobot has Prime, which is pretty cool.
Is this something we can do in RapidMiner as well?
You have several options here, depending on your actual use case:
In RapidMiner you can store any model in your repository and run it on any other RapidMiner instance with the generic Apply Model Operator.
For most of the models you can use the PMML extension to export them in a common format (a scoring sketch follows below).
If you are interested in the parameters and the description of the models, the Converters extension has operators to transform a model into an example set.
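If you go the PMML route, the exported file can be scored outside RapidMiner entirely; a minimal sketch using the third-party pypmml package (the package choice, file name, and feature names are assumptions, not part of RapidMiner):

# score a PMML model exported from RapidMiner, with no RapidMiner dependency
from pypmml import Model

model = Model.load("exported_model.pmml")                   # placeholder path
print(model.predict({"feature_1": 1.0, "feature_2": "a"}))  # illustrative features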