Arbitrary deep network structures in Keras

I'm new to Keras. Is it possible to write arbitrary deep learning graph structures using Keras (say, something like the Faster R-CNN model)?

Yes. Keras 1.0 has the functional API, and older versions of Keras have a Graph API; implementing arbitrary graph structures such as Faster R-CNN with them is quite feasible.
More information is in this guide. You should prefer the functional API, since it is easier to use and is the preferred way; the Graph API was removed in Keras 1.0, I believe.
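For example, here is a minimal sketch (Keras 2 syntax; layer sizes are arbitrary, chosen for illustration) of a graph a Sequential model cannot express: two convolutional branches sharing one input and merging before the output.

```python
# A minimal sketch of a non-sequential graph with the Keras functional API.
# Two branches read the same input and are merged before the classifier.
from keras.layers import Input, Dense, Conv2D, Flatten, concatenate
from keras.models import Model

inputs = Input(shape=(64, 64, 3))

branch_a = Conv2D(16, (3, 3), activation='relu')(inputs)
branch_a = Flatten()(branch_a)

branch_b = Conv2D(16, (5, 5), activation='relu')(inputs)
branch_b = Flatten()(branch_b)

merged = concatenate([branch_a, branch_b])  # join the two branches
outputs = Dense(10, activation='softmax')(merged)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```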

Related

Is there a way to visualize the embeddings obtained from Wav2Vec 2.0?

I'm looking to train a wav2vec 2.0 model from scratch, but I am a bit new to the field. Crucially, I would like to train it using a large dataset of non-human speech (i.e. cetacean sounds) in order to capture the underlying structure.
Once the pre-training is performed, is it possible to visualize the embeddings the model creates, in a similar way to how latent features are visualized in image processing when using e.g. CNNs? Or are the representations too abstract to be mapped to a spectrogram?
What I would like to do is to see what features the network is learning as the units of speech.
Thanks in advance for the help!
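One common approach, sketched below, is to extract the frame-level embeddings and project them to 2D with t-SNE. The HuggingFace transformers implementation and the facebook/wav2vec2-base checkpoint are assumptions here, not something from the question.

```python
# A minimal sketch of extracting wav2vec 2.0 embeddings and projecting
# them to 2D for visualization. Assumes the HuggingFace transformers
# implementation; the checkpoint name is illustrative.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16000)  # stand-in for one second of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # (frames, hidden_dim)

points = TSNE(n_components=2).fit_transform(hidden.numpy())
plt.scatter(points[:, 0], points[:, 1], s=4)
plt.title("t-SNE of wav2vec 2.0 frame embeddings")
plt.show()
```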

Backbone network in Object detection

I am trying to understand the training process of an object detection deep learning algorithm, and I am having some problems understanding how the backbone network (the network that performs feature extraction) is trained.
I understand that it is common to use CNNs like AlexNet, VGGNet, and ResNet, but I don't understand whether these networks come pre-trained or not. If they are not pre-trained, what does the training consist of?
We directly use a pre-trained VGGNet or ResNet backbone. Although the backbone is pre-trained for a classification task, its hidden layers learn features that are useful for object detection as well. Initial layers learn low-level features such as lines, dots, and curves; later layers learn high-level features built on top of those low-level features to detect objects and larger shapes in the image.
Then the last layers are modified to output the object-detection coordinates rather than a class, as sketched in the example below.
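Here is a minimal sketch of that idea in Keras: a frozen pre-trained VGG16 backbone with a new, purely illustrative head that regresses one box and predicts a class. Real detectors (Faster R-CNN, SSD, ...) are considerably more involved.

```python
# A minimal sketch of reusing a pre-trained classification backbone for
# detection-style outputs. The single-box regression head is illustrative.
from keras.applications import VGG16
from keras.layers import Flatten, Dense
from keras.models import Model

backbone = VGG16(weights='imagenet', include_top=False,
                 input_shape=(224, 224, 3))
backbone.trainable = False  # keep the pre-trained features frozen at first

x = Flatten()(backbone.output)
x = Dense(256, activation='relu')(x)
boxes = Dense(4, activation='sigmoid', name='box')(x)      # (x, y, w, h)
scores = Dense(20, activation='softmax', name='class')(x)  # say, 20 classes

model = Model(inputs=backbone.input, outputs=[boxes, scores])
model.compile(optimizer='adam',
              loss={'box': 'mse', 'class': 'categorical_crossentropy'})
```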
There are object detection specific backbones too. Check these papers:
DetNet: A Backbone network for Object Detection
CBNet: A Novel Composite Backbone Network Architecture for Object Detection
DetNAS: Backbone Search for Object Detection
High-Resolution Network: A universal neural architecture for visual recognition
Lastly, the pre-trained weights will be useful only if you are using them on similar images. For example, weights trained on ImageNet will be useless on ultrasound medical image data; in that case we would rather train from scratch.

How to merge two ONNX deep learning models

I have two models in ONNX format. Both are similar (both are pre-trained deep learning models, e.g. ResNet50 models); the only difference between them is that the last layers are optimized/retrained for different data sets.
I want to merge the first k layers of these two models, as shown below. This should improve inference performance.
To make my case clearer, here are examples of other machine learning tools that implement this feature, e.g. PyTorch and Keras.
You can use the ONNX package and the APIs it exposes (https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md) to mutate models/graphs.
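For instance, here is a minimal sketch of loading two models and walking their graphs with the onnx package, as a first step toward identifying a shared k-layer prefix (file names are illustrative).

```python
# A minimal sketch of graph inspection with the onnx Python API. It loads
# two models and compares their node sequences, a first step toward
# identifying a shared prefix of k layers. File names are illustrative.
import onnx

model_a = onnx.load("model_a.onnx")
model_b = onnx.load("model_b.onnx")

# Each graph is a protobuf; nodes are ordered and freely editable.
for k, (node_a, node_b) in enumerate(zip(model_a.graph.node,
                                         model_b.graph.node)):
    same = node_a.op_type == node_b.op_type
    print(f"layer {k}: {node_a.op_type} vs {node_b.op_type} "
          f"({'shared' if same else 'diverged'})")

# onnx.compose.merge_models joins two graphs end-to-end (output -> input);
# that is not the k-prefix sharing asked about, but it shows the API:
# combined = onnx.compose.merge_models(model_a, model_b,
#                                      io_map=[("out_name", "in_name")])

onnx.checker.check_model(model_a)  # validate after any edits
```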
The sclblonnx package provides a number of higher-level functions to edit ONNX graphs, including the ability to merge two subgraphs.
Note, however, that sclblonnx does not support dynamic-size inputs.
For dynamic-size inputs, one solution is to write your own code using the ONNX API, as stated earlier. Another is to convert the two ONNX models to a framework (TensorFlow or PyTorch) using tools like onnx-tensorflow or onnx2pytorch, manipulate the networks there, and export the whole network back to ONNX format.

Reinforcement Learning tools

What is the difference between Tensorforce, keras-rl, and ChainerRL for reinforcement learning?
As far as I've found, all three work with OpenAI Gym environments and implement the same reinforcement learning algorithms. Is there a difference in performance?
They are built on different deep learning back-ends.
TensorFlow, Keras, and Chainer are different libraries used to build and run neural-network-based AI algorithms.
OpenAI Gym, by contrast, is a reinforcement learning toolkit.
These are two different kinds of technology.
If you want reinforcement learning on a TensorFlow back-end, check out the RL library Dopamine:
https://github.com/google/dopamine
It has no connection with OpenAI; it is pure Google tech.
Short answer: Keras is more "high level" than TensorFlow, in the sense that you can write code quicker with Keras but it's less flexible. Check out this post, for instance.
TensorFlow, Keras, and Chainer are all frameworks, and these frameworks can be used to implement deep reinforcement learning models. As Jaggernaut said, Keras is more high-level (meaning: pretty easy to learn); Keras uses a TensorFlow back-end to function. Whichever wrapper library you pick, the training loop looks much the same, as the sketch below shows.
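Here is a minimal keras-rl sketch (assuming a compatible Keras/TensorFlow version; Gym's CartPole-v1 and the tiny MLP are illustrative) of the build-agent-and-fit pattern that all three libraries share in some form.

```python
# A minimal keras-rl DQN sketch: build a network, wrap it in an agent,
# and train against a Gym environment. Requires keras-rl with a
# compatible (older) Keras/TensorFlow version.
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

env = gym.make("CartPole-v1")
nb_actions = env.action_space.n

model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),  # one Q-value per action
])

agent = DQNAgent(model=model, nb_actions=nb_actions,
                 memory=SequentialMemory(limit=50000, window_length=1),
                 policy=BoltzmannQPolicy(), nb_steps_warmup=100)
agent.compile(Adam(lr=1e-3), metrics=["mae"])
agent.fit(env, nb_steps=10000, verbose=1)
```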

A simple Convolutional neural network code

I am interested in convolutional neural networks (CNNs) as an example of a computationally intensive application that is suitable for acceleration using reconfigurable hardware (e.g. an FPGA).
To do that, I need to examine a simple CNN implementation that I can use to understand how CNNs are implemented, how the computations in each layer take place, and how the output of each layer is fed to the input of the next one. I am familiar with the theoretical part (http://cs231n.github.io/convolutional-networks/).
However, I am not interested in training the CNN; I want a complete, self-contained CNN code that is pre-trained, with all weight and bias values known.
I know there are plenty of CNN libraries, e.g. Caffe, but the problem is that there is no trivial example code that is self-contained. Even for the simplest Caffe example, "cpp_classification", many libraries are invoked, the architecture of the CNN is expressed as a .prototxt file, other kinds of input such as .caffemodel and .binaryproto are involved, and the OpenCV library is invoked too. There are layers and layers of abstraction and different libraries working together to produce the classification outcome.
I know those abstractions are needed to produce a "usable" CNN implementation, but for a hardware person who needs bare-bones code to study, this is too much unrelated work.
My question is: can anyone point me to a simple, self-contained CNN implementation that I can start with?
I can recommend tiny-cnn. It is simple, lightweight (header-only), and CPU-only, while providing several layers frequently used in the literature (for example pooling layers, dropout layers, and local response normalization layers). This means you can easily explore an efficient implementation of these layers in C++ without requiring knowledge of CUDA and without digging through the I/O and framework code required by frameworks such as Caffe. The implementation lacks some comments, but the code is still easy to read and understand.
The provided MNIST example is quite easy to use (I tried it myself some time ago) and trains efficiently. After training and testing, the weights are written to file, giving you a simple pre-trained model to start from; see the provided examples/mnist/test.cpp and examples/mnist/train.cpp. The model can easily be loaded for testing (or recognizing digits), so you can step through the code while executing a learned model.
If you want to inspect a more complicated network, have a look at the CIFAR-10 example.
This is the simplest implementation I have seen: DNN McCaffrey
Also, the source code for this by Karpathy looks pretty straightforward.
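For a feel of how little code the core computation actually needs, here is a bare NumPy sketch (not taken from any of the projects above) of a single convolution layer's forward pass with naive loops, stride 1, and no padding; this is exactly the arithmetic a hardware implementation has to reproduce.

```python
# A bare NumPy sketch of one convolution layer's forward pass:
# naive loops, stride 1, no padding, ReLU fused at the end.
import numpy as np

def conv2d_forward(x, w, b):
    """x: (C_in, H, W) input, w: (C_out, C_in, kH, kW) weights, b: (C_out,)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wdt = x.shape
    out = np.zeros((c_out, h - kh + 1, wdt - kw + 1))
    for o in range(c_out):                 # each output channel
        for i in range(out.shape[1]):      # each output row
            for j in range(out.shape[2]):  # each output column
                patch = x[:, i:i + kh, j:j + kw]
                out[o, i, j] = np.sum(patch * w[o]) + b[o]
    return np.maximum(out, 0)  # ReLU, as usually applied after conv

# Tiny smoke test with arbitrary weights standing in for pre-trained ones.
x = np.random.randn(3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
b = np.zeros(4)
print(conv2d_forward(x, w, b).shape)  # (4, 6, 6)
```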