I have an idea in reinforcement learning and I am stuck implementing it, so please help me.
My idea is this: I want to make three agents, each using DQN, where every agent must learn to catch the other agents in the same environment. Is there a way to do this? Thank you.
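The usual starting point for this is "independent learners": give each of the three agents its own DQN and its own replay buffer, and step them together in the shared environment, with rewards that encode the pursuit (e.g. a chaser is rewarded for tagging, the runner for escaping). Be aware that each learning agent makes the environment non-stationary from the others' point of view, so plain DQN can be unstable here. A minimal PyTorch sketch, where the environment interface, the agent ids, and all sizes are assumptions for illustration:

```python
import random
from collections import deque
import torch
import torch.nn as nn

class Agent:
    """One independent DQN learner: its own network, optimizer, and replay buffer."""
    def __init__(self, obs_dim, n_actions, lr=1e-3, gamma=0.99):
        self.q = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=50_000)
        self.gamma, self.n_actions = gamma, n_actions

    def act(self, obs, eps):
        if random.random() < eps:                 # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.as_tensor(obs, dtype=torch.float32)).argmax())

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        obs, act, rew, nxt, done = zip(*random.sample(self.buffer, batch_size))
        obs = torch.as_tensor(obs, dtype=torch.float32)
        nxt = torch.as_tensor(nxt, dtype=torch.float32)
        act = torch.as_tensor(act)
        rew = torch.as_tensor(rew, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)
        q = self.q(obs).gather(1, act.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                     # target network omitted for brevity
            target = rew + self.gamma * (1 - done) * self.q(nxt).max(1).values
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()

# Training loop. `env` is a HYPOTHETICAL multi-agent environment (not a real
# library): reset() returns {agent_id: obs}, and step({agent_id: action})
# returns (next_obs, rewards, done) with per-agent dicts. The reward scheme
# encodes the chase, e.g. +1 to a chaser that tags the runner, -1 to the runner.
agents = {aid: Agent(obs_dim=8, n_actions=5) for aid in ("chaser1", "chaser2", "runner")}
for episode in range(1000):
    obs = env.reset()
    done = False
    while not done:
        actions = {aid: ag.act(obs[aid], eps=0.1) for aid, ag in agents.items()}
        next_obs, rewards, done = env.step(actions)
        for aid, ag in agents.items():
            ag.buffer.append((obs[aid], actions[aid], rewards[aid], next_obs[aid], float(done)))
            ag.train_step()
        obs = next_obs
```

If independent DQN proves too unstable, multi-agent extensions such as centralized training with decentralized execution are the usual next step.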
I am trying to train and test the VQA model from https://github.com/akirafukui/vqa-mcb with my own dataset.
I took a deep learning unit at university, but I still don't know how to make use of the model in my code. Also, I don't understand the Prerequisites section in the README.
Could you please give me some directions or useful resources on the web? Thank you very much!
I'm trying to use reinforcement learning to solve a problem that involves a ton of simultaneous actions. For example, the agent will be able to take actions that result in a single action, like shooting, or in multiple actions, like shooting while jumping while turning right while doing a karate chop, etc. When all the possible action combinations are enumerated, I end up with a huge action array, say 1 x 2000, so my LSTM network's output array will have that size. Of course I'll use a dictionary to decode the action array and apply the action(s). So my questions are: is that action array too large? Is this the way to handle simultaneous actions? Is there any other way to do this? Feel free to link any concrete examples you have seen around. Thanks.
I have also been trying to do something similar for my problem. You can check out the following papers (and see the sketch just after the list):
Exploring Multi-Action Relationship in Reinforcement Learning
Imitation Learning with Concurrent Actions in 3D Games
Action Branching Architectures for Deep Reinforcement Learning
StarCraft II: A New Challenge for Reinforcement Learning
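For the action-branching idea in particular (the third paper), the core trick is a shared torso with one small output head per action dimension, so the output size grows as the sum of the per-dimension choices rather than their product, avoiding a flat 1 x 2000 head. A rough PyTorch sketch, with all layer and branch sizes made up for illustration:

```python
import torch
import torch.nn as nn

class BranchingQNet(nn.Module):
    """Shared torso + one Q-value head per action dimension (branch).

    With branches = [2, 2, 3, 5] (shoot?, jump?, turn, melee move), the
    output is 2 + 2 + 3 + 5 = 12 values instead of 2 * 2 * 3 * 5 = 60
    for a single flat head over all action combinations.
    """
    def __init__(self, obs_dim, branches):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(256, n) for n in branches)

    def forward(self, obs):
        h = self.torso(obs)
        return [head(h) for head in self.heads]  # one Q-vector per branch

net = BranchingQNet(obs_dim=64, branches=[2, 2, 3, 5])
qs = net(torch.randn(1, 64))
action = [q.argmax(dim=1).item() for q in qs]    # one sub-action per branch
print(action)  # e.g. [1, 0, 2, 4]: shoot, don't jump, turn right, karate chop
```

The same idea carries over to an LSTM torso: keep the recurrent core and attach one head per action dimension instead of one giant output layer.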
I'm new to the field of reinforcement learning, so I'm quite confused by the terms "model-based" and "model-free".
For example, suppose that in a video game I want to train an agent (a car) to drive on a racetrack.
If my input is a 256x256x3 first-person image of the game, should I use a model-free RL algorithm?
And if I want to do the same, but with a third-person view above the racetrack, knowing the coordinates and speed of the car, all the obstacles, etc., should I use model-based RL?
Thank you for your time.
In model-based RL you learn a model of the dynamics of your system and use it for planning or for generating "fake" samples. If you can learn the dynamics well, it can be extremely helpful, but if your model is wrong then it can be disastrous.
That said, there is no general rule for when to use model-free versus model-based. Usually it depends on how much prior knowledge you have that can help you learn a good dynamics model.
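To make the "fake samples" point concrete, here is a minimal tabular Dyna-Q-style sketch: the agent updates its value estimates from each real transition, stores that transition in a learned (here, deterministic) model, and then performs extra planning updates on transitions replayed from the model. All names and constants are illustrative:

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # action-value table: Q[(state, action)]
model = {}               # learned dynamics: model[(state, action)] = (reward, next_state)
alpha, gamma, n_planning = 0.1, 0.95, 20

def q_update(s, a, r, s2, actions):
    """One Q-learning backup from a (real or model-generated) transition."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def dyna_step(s, a, r, s2, actions):
    q_update(s, a, r, s2, actions)   # learn from the real transition
    model[(s, a)] = (r, s2)          # update the learned dynamics model
    for _ in range(n_planning):      # planning: extra updates from "fake" samples
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        q_update(ps, pa, pr, ps2, actions)

# Usage: after observing state 0, action 1, reward 0.5, next state 1
# in a toy problem with actions [0, 1]:
dyna_step(0, 1, 0.5, 1, actions=[0, 1])
```

Note that the choice of input (pixels versus hand-crafted state) does not by itself dictate model-free or model-based; both families have been applied to both kinds of observation.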
I've been exploring Dynamo for a while now and am quite enjoying its power. I've started work on a project, and I'm wondering if someone would like to share their expert views on how to create a series of families from one starting point to another. See the following image to understand it visually. I'm sure we can achieve such functionality via Dynamo. I appreciate any help. Thank you.
Here is a discussion of using a dynamic model updater (DMU) in conjunction with the Idling event to achieve a couple of complex synchronisation tasks, including a video of almost exactly what you are asking for: Updater Queues Multi-Transaction Operation for Idling.
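For the simpler part of the question, placing a series of family instances between two points inside a single transaction, here is a rough sketch for a Dynamo Python Script node. The input layout, the unit handling, and the structural type are assumptions to adapt to your families:

```python
# Dynamo "Python Script" node. Assumed inputs (adapt to your setup):
#   IN[0]: a FamilySymbol (family type), IN[1]/IN[2]: start/end Dynamo Points,
#   IN[3]: number of instances to place.
import clr
clr.AddReference('RevitAPI')
clr.AddReference('RevitServices')
from Autodesk.Revit.DB import XYZ
from Autodesk.Revit.DB.Structure import StructuralType
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
symbol = UnwrapElement(IN[0])
start, end, count = IN[1], IN[2], IN[3]

TransactionManager.Instance.EnsureInTransaction(doc)
if not symbol.IsActive:
    symbol.Activate()            # a family symbol must be active before placement
placed = []
for i in range(count):
    t = float(i) / (count - 1) if count > 1 else 0.0
    # Interpolate between the two points. Note the Revit API works in feet,
    # so convert coordinates if your Dynamo units differ.
    p = XYZ(start.X + t * (end.X - start.X),
            start.Y + t * (end.Y - start.Y),
            start.Z + t * (end.Z - start.Z))
    placed.append(doc.Create.NewFamilyInstance(p, symbol, StructuralType.NonStructural))
TransactionManager.Instance.TransactionTaskDone()
OUT = placed
```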
I wanted to do something like this: http://scribbler.eye.gatech.edu/paper.pdf. Could someone point me to a working model for this? They mention building on https://github.com/TengdaHan/Convolutional_Sketch_Inversion. However, I do not have much experience with GANs and do not know how to adapt a normal CNN to a GAN.
Also, they mention using skip layers (residual layers), but I can find no indication of their connections (architecture).
From the figure given in the paper, I cannot make out where the generator is or where the discriminator is.
In short, I do not know how to hook a GAN up to a CNN. Any help with this would be greatly appreciated.
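On the "hook a GAN up to a CNN" part: the usual recipe is to keep the image-to-image CNN as the generator and bolt on a second, smaller CNN as the discriminator; training then alternates between updating the discriminator to tell real images from generated ones and updating the generator to fool it while still matching the target image (Scribbler combines such an adversarial term with reconstruction losses). A minimal PyTorch sketch in which both networks are toy stand-ins, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Stand-ins: in practice `generator` would be the sketch-inversion CNN and
# `discriminator` a deeper conv classifier; both are toy-sized here.
generator = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),   # patch-wise real/fake logits
)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_batch(sketch, photo, adv_weight=0.01):
    # 1) Discriminator step: push real photos toward 1, generated ones toward 0.
    fake = generator(sketch)
    d_real = discriminator(photo)
    d_fake = discriminator(fake.detach())       # detach: don't backprop into G here
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: fool the discriminator + stay close to the target photo.
    d_fake = discriminator(fake)
    g_loss = l1(fake, photo) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Usage with dummy tensors (1-channel sketch -> 3-channel image):
d_l, g_l = train_batch(torch.randn(4, 1, 64, 64), torch.randn(4, 3, 64, 64))
```

The skip/residual layers you mention live inside the generator itself (shortcut connections between its conv blocks); they do not change how the generator and discriminator are wired together.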