I am trying to build a PPO model using the stable-baselines3 library, and I want to use a policy network with an LSTM layer in it. However, I can't find such an option in the library's documentation, although it exists in the previous version, stable-baselines: https://stable-baselines.readthedocs.io/en/master/modules/policies.html#stable_baselines.common.policies.MlpLstmPolicy.
Does this feature exist in stable-baselines3 (not stable-baselines)? If not, is there any other way I can do this? Thanks.
From the migration doc.
https://stable-baselines3.readthedocs.io/en/master/guide/migration.html
Breaking Changes
LSTM policies (MlpLstmPolicy, CnnLstmPolicy) are not supported for
the time being (see PR #53 for a recurrent PPO implementation)
Currently this functionality does not exist in stable-baselines3.
However, in their contrib repository (stable-baselines3-contrib) there is an experimental version of PPO with an LSTM policy. I have not tried it myself, but according to this pull request it works.
You can find it on the feat/ppo-lstm branch, which may get merged into master soon.
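For illustration, here is a minimal sketch of how the contrib version is meant to be used, assuming sb3-contrib with that branch is installed (the RecurrentPPO and MlpLstmPolicy names follow the pull request and could still change before the merge):

```python
import numpy as np
from sb3_contrib import RecurrentPPO

# Train a recurrent PPO agent with an LSTM policy on a toy environment.
model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(total_timesteps=10_000)

# At prediction time, the LSTM hidden states have to be threaded through
# manually, and episode starts reset them.
env = model.get_env()
obs = env.reset()
lstm_states = None
episode_starts = np.ones((env.num_envs,), dtype=bool)
for _ in range(100):
    action, lstm_states = model.predict(
        obs, state=lstm_states, episode_start=episode_starts, deterministic=True
    )
    obs, rewards, dones, infos = env.step(action)
    episode_starts = dones
```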
Related
I recently trained a stable-baselines PPO model for a couple of days, and it is performing well on test environments. Essentially, I am trying to iterate on this model. I was wondering whether it is possible to use this model as a new baseline for future model training. So, instead of starting with some naive policy for my environment, it could use this model as a starting point and potentially learn a better approach to solving the environment.
The answer is yes. You basically need to do the following things to achieve this:
Save your PPO model after training in whichever environment you used from the stable-baselines repository.
Load the saved PPO model for your new environment's training rather than creating a new PPO model. That way, the starting point will be the trained policy, and the policy will evolve from there for the better (see the sketch below).
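A minimal sketch of that workflow with stable-baselines3 (the environment IDs here are placeholders for your own environments):

```python
import gym
from stable_baselines3 import PPO

# First run: train and save the baseline model.
model = PPO("MlpPolicy", gym.make("CartPole-v1"), verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_baseline")

# Later: load the trained policy into a (similar) environment and keep
# training from there instead of starting from a naive policy.
new_env = gym.make("CartPole-v1")  # swap in your own environment
model = PPO.load("ppo_baseline", env=new_env)
model.learn(total_timesteps=100_000, reset_num_timesteps=False)
```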
One thing you might be interested in is transfer learning, which systematically does what you want to do: train a model in one environment and use it as a baseline for a new environment, to save training time in the second environment. The key to making it work is ensuring there is high similarity between the two environments. If they are totally different, the pre-trained policy from the first environment may not help much.
In addition, the network architecture of the PPO model stays the same when you do this, whereas in reality the optimal network architecture differs between environments.
Does tidymodels now provide a means to tune classification model thresholds? I believe this was slated as an upcoming feature in the spring of 2020. I looked around the tidymodels website but have not seen any mention of the feature.
As Julia says, there is an indirect method to do this.
We plan on making it a fully tunable parameter (like other parameters), but a few things have pushed this back; it is near the top of our development list.
I have been trying to tackle a problem where I need to track multiple people across multiple camera viewpoints in real time.
I found a solution, DeepCC (https://github.com/daiwc/DeepCC), on the DukeMTMC dataset, but unfortunately this solution has been taken down because of data confidentiality issues. It used Fast R-CNN for object detection, triplet loss for re-identification, and DeepSort for real-time multiple object tracking.
Questions:
1. Can someone share some other resources regarding the same problem?
2. Is there a way to download and still use the DukeMTMC dataset for the multiple-object tracking problem?
3. Is anyone aware when the official website (http://vision.cs.duke.edu/DukeMTMC/) will be available again?
Please feel free to provide different variations of the question :)
The Intel OpenVINO framework has all the parts of this task:
Object detection with a pretrained Faster R-CNN, SSD, or YOLO model.
Re-identification models.
A complete demo application.
You can also use other models. Or, if you want to run detection on the GPU, use opencv_dnn_cuda for detection and OpenVINO for re-identification.
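For example, switching an OpenCV DNN detector onto the GPU looks roughly like this (the model files are placeholders for whatever detector you export, and this requires an OpenCV build with CUDA support):

```python
import cv2

# Load any exported detector through OpenCV's DNN module.
net = cv2.dnn.readNet("detector.weights", "detector.cfg")
# Route inference through the CUDA backend so detection runs on the GPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

frame = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)
outputs = net.forward()  # feed detections to a re-identification model next
```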
A good deep learning library that I have used in the past for my work is called Mask R-CNN, or Mask Region-based Convolutional Neural Network. Although I have only used this algorithm on images and not on videos, the same principles apply, and it's very easy to make the transition to detecting objects in a video. The implementation uses TensorFlow and Keras, and you can split your input data, i.e. images of people, into two sets: training and validation.
For training, use a third-party tool like VIA (VGG Image Annotator) to annotate the people in the images. After the annotations have been drawn, you will export a JSON file with all of them, which will be used for the training process. Do the same thing for the validation phase, BUT make sure the images in the validation set have not been seen by the algorithm before.
Once you have annotated both groups and generated the JSON files, you can start training the algorithm. Mask R-CNN makes training very easy: all you need to do is run a single command to start it. If you want to train on your GPU instead of your CPU, install Nvidia's CUDA, which works very well with supported GPUs and requires no coding after the installation.
During the training stage, you will be generating weights files, stored in the .h5 format; depending on the number of epochs you choose, one weights file is generated per epoch. Once training has finished, you just reference that weights file whenever you want to detect relevant objects, e.g. in your video feed.
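With the matterport Mask_RCNN repo linked below, referencing that weights file for detection looks roughly like this (the config values and file names are placeholders for your own):

```python
import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "people"           # matches the config used during training
    NUM_CLASSES = 1 + 1       # background + person
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

# Build the model in inference mode and load the .h5 weights produced
# during training.
model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir="logs")
model.load_weights("mask_rcnn_people_0030.h5", by_name=True)

# Detect people in a single RGB frame, e.g. one frame of a video feed.
image = skimage.io.imread("frame.jpg")
r = model.detect([image], verbose=0)[0]  # "rois", "masks", "class_ids", "scores"
```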
Some important info:
Mask R-CNN is somewhat of an older algorithm, but it still works flawlessly today. Although some people have updated the algorithm to TensorFlow 2.0+, to get the best use out of it, use the following versions:
Tensorflow-gpu 1.13.2+
Keras 2.0.0+
CUDA 9.0 to 10.0
Honestly, the hardest part for me in the past was not using the algorithm but finding versions of TensorFlow, Keras, and CUDA that all play well with each other and don't error out. Although the above-mentioned versions will work, try upgrading or downgrading certain libraries to see if you can get better results.
Here is an article about Mask R-CNN with video; I find it very useful and resourceful:
https://www.pyimagesearch.com/2018/11/19/mask-r-cnn-with-opencv/
The GitHub repo can be found below.
https://github.com/matterport/Mask_RCNN
EDIT
You can use this method across multiple cameras; just set up multiple video captures within a computer vision library like OpenCV, as sketched below. I assume this would be done with Python, which both Mask R-CNN and OpenCV are primarily based in.
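A minimal sketch of that multi-capture setup in OpenCV (the camera indices are placeholders; plug your detector into the marked spot):

```python
import cv2

# One capture per camera; indices or stream URLs depend on your setup.
captures = [cv2.VideoCapture(i) for i in (0, 1)]

while True:
    for idx, cap in enumerate(captures):
        ok, frame = cap.read()
        if not ok:
            continue
        # Run your detector/tracker on `frame` here, e.g. model.detect(...)
        cv2.imshow(f"camera {idx}", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

for cap in captures:
    cap.release()
cv2.destroyAllWindows()
```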
I've started using Raven for my last project. When my boss learned about it, he mentioned that it's based on Access, and he has had very bad experiences with multiple users and Access. Now I have to either switch or prove to him that he is wrong.
No, it isn't. The confusion is because RavenDB can use ESENT for data storage and ESENT used to be called Jet Blue. It was called Jet Blue because it was originally developed to replace the Jet Red engine which was/is used in Access. The Wikipedia entry is quite accurate about the history and differences.
Laurion's answer is correct, but I also wanted to point out that in Raven you can swap out the ESENT storage engine for another one that Oren developed, called Munin.
From Ayende's blog post about Munin:
Raven.Munin is the actual implementation of a low level managed storage for RavenDB. I split it out of the RavenDB project because I intend to make use of it in additional projects.
At its core, Munin provides high performance transactional, non relational, data store written completely in managed code. The main point in writing it was to support the managed storage in RavenDB, but it is going to be used for Raven MQ as well, and probably a bunch of other stuff as well. I’ll post about Raven MQ in the future, so don’t bother asking about it.
Munin is a low level api, not something that you are likely to use directly. And it was explicitly modeled to give me an interface similar in capability to what Esent gives me, but in purely managed code.
Google has described a novel framework, Pregel, for distributed processing on massive graphs.
http://portal.acm.org/citation.cfm?id=1582716.1582723
I wanted to know whether, similar to Hadoop (MapReduce), there are any open-source implementations of this framework.
I am actually in the process of writing a pseudo-distributed one using Python and the multiprocessing module, and thus wanted to know if someone else has also tried implementing it (a rough sketch of what I mean is below).
Public information about this framework is extremely scarce (just the link above and a blog post at Google Research).
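For reference, the core of what I mean is a superstep loop like the following, a single-process sketch (the max-value compute function is just the classic example from the paper):

```python
def pregel(state, edges, compute, max_supersteps=30):
    """Minimal single-process Pregel loop: active vertices run `compute`
    each superstep, exchange messages, and vote to halt."""
    inbox = {v: [] for v in state}
    active = set(state)
    superstep = 0
    while active and superstep < max_supersteps:
        outbox = {v: [] for v in state}
        for v in sorted(active):
            if compute(v, state, inbox[v], edges.get(v, []), outbox, superstep):
                active.discard(v)
        inbox = outbox
        # A halted vertex is reactivated when it receives a message.
        active |= {v for v, msgs in inbox.items() if msgs}
        superstep += 1
    return state

def max_value(v, state, messages, out_edges, outbox, superstep):
    """Classic example: every vertex learns the maximum value in the graph."""
    new_val = max([state[v]] + messages)
    if superstep == 0 or new_val != state[v]:
        state[v] = new_val
        for target in out_edges:
            outbox[target].append(new_val)
    return True  # vote to halt; woken up again if messages arrive

values = {"a": 3, "b": 6, "c": 2}
edges = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(pregel(values, edges, max_value))  # {'a': 6, 'b': 6, 'c': 6}
```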
Apache Giraph http://giraph.apache.org
Phoebus https://github.com/xslogic/phoebus
Bagel https://github.com/mesos/spark/pull/48
Hama http://hama.apache.org/
Signal-Collect http://code.google.com/p/signal-collect/
HipG http://www.cs.vu.nl/~ekr/hipg/
The main Hadoop-related project for distributed graph processing is the Hama project. It's still in incubation, though.
The project has broken its work into two areas: a matrix package and a graph package.
Update:
A better option would be the Apache Giraph project which is based on Google Pregel.
Yes. A new project called Golden Orb is an open-source Pregel implementation written in Java that runs on both HBase and Cassandra.
It has been submitted to the Apache Incubator for approval, and Ravel, the company behind Golden Orb, said they are releasing it this month (http://www.raveldata.com/goldenorb/).
See http://www.quora.com/Graph-Databases/What-open-source-graph-databases-support-horizontal-scaling
UPDATE: GraphX is GraphLab2 on Spark implemented by Joey Gonzalez, the creator of GraphLab2.
Spark's unique primitives make GraphX-Pregel the fastest JVM-based Pregel implementation. Spark is written in Scala, but it has Java and Python APIs as well.
See...
GraphX: A Resilient Distributed Graph System on Spark (PDF)
Introduction to GraphX, by Joseph Gonzalez, Reynold Xin - UC Berkeley AmpLab 2013 (YouTube)
My Hacker News comment/overview on Spark.
P.S. There is also Bagel, which was the first cut at Pregel on Spark. It works; however, GraphX will be the way forward.
Two projects from Carnegie Mellon University provide Pregel-style computation on graphs:
GraphLab http://graphlab.org
GraphChi http://graphchi.org
The programming model is not exactly the same as Pregel's, as they are not based on message passing but on modifying the graph data (edges, vertices) directly. Still, it is easy to emulate Pregel in these frameworks.
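A toy Python illustration of the difference (the names are mine, not the GraphLab/GraphChi API): instead of sending messages, an update function reads and writes shared vertex data directly, here with a PageRank-style update:

```python
# Vertex values and in-neighbors for a tiny directed graph.
value = {"a": 1.0, "b": 1.0, "c": 1.0}
in_nbrs = {"a": ["c"], "b": ["a"], "c": ["a", "b"]}
out_deg = {"a": 2, "b": 1, "c": 1}

def update(v):
    # GraphLab-style: pull neighbor state directly from the shared data
    # instead of collecting messages.
    value[v] = 0.15 + 0.85 * sum(value[u] / out_deg[u] for u in in_nbrs[v])

for _ in range(20):        # sweep all vertices until roughly converged
    for v in value:
        update(v)
print(value)
```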
There is also Signal/Collect, a framework written in Scala that now uses Akka:
http://code.google.com/p/signal-collect/
https://github.com/uzh/signal-collect
From their website:
In Signal/Collect an algorithm is written from the perspective of vertices and edges. Once a graph has been specified the edges will signal and the vertices will collect. When an edge signals it computes a message based on the state of its source vertex. This message is then sent along the edge to the target vertex of the edge. When a vertex collects it uses the received messages to update its state. These operations happen in parallel all over the graph until all messages have been collected and all vertex states have converged.
Many algorithms have very simple and elegant implementations in Signal/Collect. You find more information about the programming model and features in the project wiki. Please take the time to explore some of the example algorithms below.
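To make the model concrete, here is a tiny Python emulation of the signal/collect cycle (just the idea, not the Scala API), using single-source shortest paths as the example algorithm:

```python
import math

# Weighted directed edges: (source, target, weight).
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)]
# Vertex state: best-known distance from the source vertex "a".
state = {"a": 0.0, "b": math.inf, "c": math.inf}

for _ in range(10):  # iterate until vertex states converge
    # Signal: each edge computes a message from its source vertex's state.
    messages = {}
    for src, tgt, w in edges:
        messages.setdefault(tgt, []).append(state[src] + w)
    # Collect: each vertex updates its state from the received messages.
    new_state = {v: min([d] + messages.get(v, [])) for v, d in state.items()}
    if new_state == state:  # converged
        break
    state = new_state

print(state)  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```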
I created a framework called Phoebus. It is an implementation of Pregel written in Erlang. Check out my blog entry on applying the Pregel model to path finding as well.
Apache Giraph is currently in Incubator and under very active development, with committers from LinkedIn, Twitter, Facebook and academia looking to bring it up to production scale very quickly. It is pretty directly modeled on Pregel and was originally developed at Yahoo! Research. We're looking for new contributors and have several introductory JIRA issues to help people get started with the project. We'd love to have you get involved.
Stanford students have developed an open-source implementation of Pregel:
http://infolab.stanford.edu/gps/