I want to know about the convergence time of deep Q-learning (DQN) versus Q-learning when run on the same problem. Can anyone give me an idea of the relationship between them? It would be better if it were explained with a graph.
In short, the more complicated the state space is, the bigger DQN's advantage over Q-learning (by complicated, I mean the number of possible states). When the state space is too large, it becomes nearly impossible for Q-learning to converge due to time and hardware limitations.
Note that DQN is in fact a kind of Q-learning; it uses a neural network to act like a Q-table, and both the Q-network and the Q-table output a Q-value given the state as input. I will continue using "Q-learning" to refer to the simple version with a Q-table, and "DQN" for the neural-network version.
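To make the analogy concrete, here is a tiny sketch (my own illustration, not from the original post) showing that a Q-table and a Q-network expose the same interface: a state goes in, one Q-value per action comes out. The linear "network" is just a placeholder for a real neural network.

```python
# Sketch: a Q-table and a Q-network both map state -> one Q-value per action.
import numpy as np

n_states, n_actions = 16, 4

# Tabular Q-learning: an explicit entry for every (state, action) pair.
q_table = np.zeros((n_states, n_actions))
q_values_from_table = q_table[3]            # Q-values for state 3

# DQN idea: a parameterized function of the state (toy linear "network" here).
weights = np.random.randn(n_states, n_actions) * 0.01
def q_network(state_one_hot):
    return state_one_hot @ weights          # Q-values for all actions

q_values_from_network = q_network(np.eye(n_states)[3])
```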
You can't tell the convergence time without specifying a concrete problem, because it really depends on what you are doing:
For example, if you are working with a simple environment like FrozenLake: https://gym.openai.com/envs/FrozenLake-v0/
Q-learning will converge faster than DQN as long as you have a reasonable reward function.
This is because FrozenLake has only 16 states. Q-learning's update is very simple and efficient, so it runs a lot faster than training a neural network.
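For reference, here is a minimal tabular Q-learning loop for FrozenLake (my own sketch, not from the answer; it assumes the classic gym API where reset() returns the state and step() returns a 4-tuple, which newer gymnasium versions have changed):

```python
# Minimal tabular Q-learning sketch for FrozenLake (classic gym API assumed).
import numpy as np
import gym

env = gym.make("FrozenLake-v0")
q_table = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done, _ = env.step(action)
        # one-step Q-learning update toward the TD target
        td_target = reward + gamma * np.max(q_table[next_state]) * (not done)
        q_table[state, action] += alpha * (td_target - q_table[state, action])
        state = next_state
```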
However, if you are doing something like an Atari game: https://gym.openai.com/envs/Assault-v0/
there are millions of states (note that even a single pixel of difference counts as a totally new state). Q-learning would need to enumerate all states in its Q-table to actually converge, so it would require a very large amount of memory plus a very long training time to enumerate and explore every possible state. In fact, I am not sure it would ever converge on the more complicated games, simply because there are so many states.
This is where DQN becomes useful. Neural networks can generalize across states and learn a function from state to action (or, more precisely, from state to Q-value). It no longer needs to enumerate; instead it learns the information implied by the states. Even if you have never explored a certain state during training, as long as the neural network has been trained on other, similar states, it can still generalize and output a Q-value for it. It is therefore a lot easier for it to converge.
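For contrast, here is a minimal sketch of the DQN side (again my own illustration, using PyTorch, which the answer itself doesn't mention): a small network maps a state vector to Q-values, and one temporal-difference update step is applied. A real DQN would add experience replay and a target network, which are what make training stable.

```python
# Minimal sketch of the DQN idea: a network as the Q-function plus one TD step.
import torch
import torch.nn as nn

n_state_dims, n_actions = 4, 2
gamma = 0.99

q_net = nn.Sequential(
    nn.Linear(n_state_dims, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state, done):
    """One gradient step toward the one-step TD target."""
    q_value = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# usage with a made-up transition
s, s2 = torch.randn(n_state_dims), torch.randn(n_state_dims)
td_update(s, action=1, reward=torch.tensor(1.0), next_state=s2, done=0.0)
```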
If a Q-Learning agent actually performs noticeably better against opponents in a specific card game when intermediate rewards are included, would this show a flaw in the algorithm or a flaw in its implementation?
It's difficult to answer this question without more specific information about the Q-learning agent. You might describe the seeking of immediate rewards as the exploitation rate, which is generally inversely proportional to the exploration rate. It should be possible to configure this, along with the learning rate, in your implementation. The other important factor is the choice of exploration strategy, and you should not have any difficulty finding resources that will assist in making this choice. For example:
http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/Exploration_QLearning.pdf
https://www.cs.mcgill.ca/~vkules/bandits.pdf
To answer the question directly: it may be a matter of implementation, configuration, agent architecture, or learning strategy that leads to immediate exploitation and a fixation on local optima.
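As a concrete illustration of the exploration/exploitation trade-off those resources discuss, here is a minimal sketch (my own, not from the linked papers) of two common strategies, epsilon-greedy and softmax (Boltzmann) action selection:

```python
# Sketch of two common exploration strategies over a row of Q-values.
import numpy as np

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

def softmax_action(q_values, temperature=1.0):
    """Boltzmann exploration: higher Q-values get exponentially higher probability."""
    prefs = np.asarray(q_values, dtype=float) / temperature
    prefs -= prefs.max()                       # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(np.random.choice(len(q_values), p=probs))
```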
I am busy coding reinforcement learning agents for the game Pac-Man and came across Berkeley's CS course's Pac-Man Projects, specifically the reinforcement learning section.
For the approximate Q-learning agent, feature approximation is used. A simple extractor is implemented in this code. What I am curious about is why, before the features are returned, they are scaled down by a factor of 10. If you run the solution without the factor of 10, you can see that Pac-Man does significantly worse, but why?
After running multiple tests, it turns out that the Q-values can diverge wildly. In fact, the feature weights can all become negative, even the one which would usually incline Pac-Man to eat pills. So he just stands there, eventually tries to run from ghosts, but never tries to finish a level.
I speculate that this happens when he loses during training: the negative reward is propagated through the system, and since the potential number of ghosts can be greater than one, this has a heavy bearing on the weights, causing everything to become very negative, and the system can't "recover" from this.
I confirmed this by adjusting the feature extractor to scale only the #-of-ghosts-one-step-away feature, after which Pac-Man manages to get a much better result.
In retrospect this question is now more mathsy and might fit better on another stackexchange.
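To make the scaling effect concrete, here is a sketch of the standard approximate Q-learning weight update (my own paraphrase of what I believe the project uses, with made-up feature values): the feature magnitude enters both Q(s, a) and the multiplier on the difference, so dividing the features by 10 damps the step size considerably.

```python
# Approximate Q-learning update, with Q(s, a) = sum_i w_i * f_i(s, a).
# Made-up feature values, just to show how magnitude affects the update.
import numpy as np

alpha, gamma = 0.2, 0.8

def update(weights, features, reward, max_q_next):
    """w_i <- w_i + alpha * difference * f_i(s, a)."""
    q_sa = float(np.dot(weights, features))
    difference = (reward + gamma * max_q_next) - q_sa
    return weights + alpha * difference * features

w = np.zeros(2)
raw = np.array([1.0, 2.0])        # e.g. bias, #-of-ghosts-one-step-away
scaled = raw / 10.0

# The same losing transition produces a much smaller, better-behaved step
# when the features are divided by 10.
print(update(w, raw, reward=-500.0, max_q_next=0.0))     # [-100. -200.]
print(update(w, scaled, reward=-500.0, max_q_next=0.0))  # [-10. -20.]
```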
In physics, it's the ability of particles to exist in multiple/parallel dynamic states at a particular point in time. In computing, would it be the ability of a data bit to equal 1 or 0 at the same time, a third value like NULL [unknown], or multiple values? How can this technology be applied to computer processors, programming, security, etc.? Has anyone built a practical quantum computer or developed a quantum programming language where, for example, the program code dynamically changes or is autonomous?
I have done research in quantum computing, and here is what I hope is an informed answer.
It is often said that qubits as you see them in a quantum computer can exist in a "superposition" of 0 and 1. This is true, but in a more subtle way than you might first guess. Even with a classical computer with randomness, a bit can exist in a superposition of 0 and 1, in the sense that it is 0 with some probability and 1 with some probability. Just as when you roll a die and don't look at the outcome, or receive e-mail that you haven't yet read, you can view its state as a superposition of the possibilities. Now, this may sound like just flim-flam, but the fact is that this type of superposition is a kind of parallelism and that algorithms that make use of it can be faster than other algorithms. It is called randomized computation, and instead of superposition you can say that the bit is in a probabilistic state.
The difference between that and a qubit is that a qubit can have a fat set of possible superpositions with more properties. The set of probabilistic states of an ordinary bit is a line segment, because all there is to specify is the probability of being 0 or 1. The set of states of a qubit is a round 3-dimensional ball. Now, probabilistic bit strings are more complicated and more interesting than just individual probabilistic bits, and the same is true of strings of qubits. If you can make qubits like this, then some computational tasks actually wouldn't be any easier than before, just as randomized algorithms don't help with all problems. But some computational problems, for example factoring numbers, have new quantum algorithms that are much faster than any known classical algorithm. It is not a matter of clock speed or Moore's law, because the first useful qubits could be fairly slow and expensive. It is only sort-of parallel computation, just as an algorithm that makes random choices is only in a weak sense making all choices in parallel. But it is "randomized algorithms on steroids"; that's my favorite summary for outsiders.
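In symbols (my own addition, not from the original answer), the "line segment versus ball" picture comes from how many parameters each state needs:

```latex
% Probabilistic bit: one real parameter, a point on a line segment.
\Pr[\text{bit} = 1] = p, \qquad 0 \le p \le 1

% Qubit: two complex amplitudes; pure states live on the Bloch sphere,
% and with mixed states included the state space is a solid 3-D ball.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```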
Now the bad news. In order for a classical bit to be in a superposition, it has to be a random choice that is secret from you. Once you look at a flipped coin, the coin "collapses" to either heads for sure or tails for sure. The difference between that and a qubit is that in order for a qubit to work as one, its state has to be secret from the rest of the physical universe, not just from you. It has to be secret from wisps of air, from nearby atoms, etc. On the other hand, for qubits to be useful for a quantum computer, there has to be a way to manipulate them while keeping their state a secret. Otherwise their quantum randomness or quantum coherence is wrecked. Making qubits at all isn't easy, but it is done routinely. Making qubits that you can manipulate with quantum gates, without revealing what is in them to the physical environment, is incredibly difficult.
People don't know how to do that except in very limited toy demonstrations. But if they could do it well enough to make quantum computers, then some hard computational problems would be much easier for these computers. Others wouldn't be easier at all, and a great deal is unknown about which ones can be accelerated and by how much. It would definitely have various effects on cryptography; it would break the widely used forms of public-key cryptography. But other kinds of public-key cryptography have been proposed that could be okay. Moreover, quantum computing is related to the quantum key distribution technique, which looks very safe, and secret-key cryptography would almost certainly still be fairly safe.
The other context in which the word "quantum" comes up in computing is the "entangled pair". Essentially, if you can create an entangled pair of particles that have a physical "spin", quantum physics dictates that when the two particles are measured in the same way, their spins will always come out opposite.
If you could create an entangled pair and then separate the particles, you could use the correlations between measurements on them to establish a shared secret key, and then transmit data encrypted with that key. The scheme is considered theoretically unbreakable because an eavesdropper cannot learn what spin the particles had at any given time by intercepting the channel between the two signal points; any attempt to measure the particles in transit disturbs the correlations and can be detected.
A whole lot of very interested organisations are researching this technique for secure communications.
Yes, there is quantum encryption: if someone tries to spy on your communication, their eavesdropping disturbs the data stream so that neither they nor you can use it (the intrusion is detected and the affected data is discarded).
However, the real power of quantum computing lies in the fact that a qubit can be in a superposition of 0 and 1. Big deal. However, if you have, say, eight qubits, you can now represent a superposition of all integers from 0 to 255. This lets you do some rather interesting things in polynomial instead of exponential time. Factoring large numbers (i.e., breaking RSA, etc.) is one of them.
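In symbols (my addition): eight qubits carry 2^8 = 256 complex amplitudes at once, though a measurement still returns only one outcome, which is why algorithms like Shor's rely on interference rather than plain parallel brute force.

```latex
|\psi\rangle \;=\; \sum_{x=0}^{255} \alpha_x \, |x\rangle,
\qquad \sum_{x=0}^{255} |\alpha_x|^2 = 1
```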
There are a number of applications of quantum computing.
~~One huge one is the ability to solve NP-hard problems in P-time, by using the indeterminacy of qubits to essentially brute-force the problem in parallel.~~
(The struck-out sentence is false. Quantum computers do not work by brute-forcing all solutions in parallel, and they are not believed to be able to solve NP-complete problems in polynomial time. See e.g. here.)
Just an update on the quantum computing industry, based on Greg Kuperberg's answer:
The D-Wave 2 system uses quantum annealing.
A superposition of quantum states collapses to a single state when an observation happens. The current quantum annealing technology applies a physical force to pairs of qubits; the force adds constraints to the qubits, so that when an observation happens, they have a higher probability of collapsing to a result we want to see.
Reference:
How does a quantum machine work
I monitor recent non-peer-reviewed articles on the subject; this is what I extrapolate from what I have read. In addition to what has been said above, namely that qubits can hold values in superposition, they can also encode multiple physical degrees of freedom, for example spin up/down or horizontal/vertical/left polarization (abbreviated +H, -H, +V, -V, L+, LH, LV). Not all of the combinations are valid, and the additional values available depend on the type of qubit.
Different physical implementations are used somewhat like RAM versus ROM: a photon encoded by its wavelength or polarization, an electron by its charge or spin, and so on. Some combinations are not valid, and some require additional algorithms in order to pass the value on to the next variable (the location where data is stored) or qubit (the location of the superposition of values to be returned), simply because the use of wires is by necessity limited due to size and space. One of the greatest challenges is controlling or removing quantum decoherence; this usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. In November 2011, researchers factorised 143 using 4 qubits. That same year, D-Wave Systems announced the first commercial quantum annealer on the market, under the name D-Wave One; the company claims this system uses a 128-qubit processor chipset. In May 2013, Google announced that it was launching the Quantum AI Lab, hopefully to boost AI. I hope I didn't waste anyone's time with things you already knew. If you learned something, please upvote.
As I cannot yet comment: knowing the number of states really depends on what type of qubit you are working with, for example the UNSW silicon qubit versus a diamond nitrogen-vacancy centre, or solid-state NMR on phosphorus in silicon versus liquid NMR of the same.
I started working on GPGPU a few days ago and successfully implemented Cholesky factorization with good performance. Then I attended a conference on high-performance computing where some people said that "GPGPU is a hack".
I am still confused about what this means and why they were saying it. One person said it is a hack because you are converting your problem into a matrix and doing operations on it. But I am still confused: do people really think it is a hack, and if so, why?
Can anyone help me understand why they called it a hack, when I found nothing wrong with it?
One possible reason for that opinion is that the GPU was not originally intended for general-purpose computation. Also, programming a GPU is less traditional and more hardcore, and therefore more likely to be perceived as a hack.
The point that "you convert the problem into a matrix" is not reasonable at all. Whatever task you solve by writing code, you choose suitable data structures. In the case of the GPU, matrices are likely the most suitable data structures, and using them is not a hack but a natural choice.
However, I suppose it's only a matter of time before GPGPU becomes widespread. People just have to get used to the idea. After all, who cares which unit of the computer runs the program?
On the GPU, having efficient memory access is paramount to achieving optimal performance. This often involves restructuring or even choosing entirely new algorithms and data structures. This is one reason why GPU programming can be perceived as a hack.
Secondly, adapting an existing algorithm to run on the GPU is not in and of itself science. The relatively low scientific contribution of some GPU algorithm-related papers has led to a negative perception of GPU programming as strictly "engineering".
Obviously, only the person who said that can say for certain why he said it, but here's my take:
A "Hack" is not a bad thing.
It forces people to learn new programming languages and concepts. For people who are just trying to model the weather or protein folding or drug reactions, this is an unwelcome annoyance. They didn't really want to learn FORTRAN (or whatever) in the first place, and now they have to learn another programming system.
The programming tools are NOT very mature yet.
The hardware isn't as reliable as CPUs (yet), so all of the calculations have to be done twice to make sure you've got the right answer. One reason for this is that GPUs don't come with error-correcting memory yet, so if you're trying to build a supercomputer with thousands of processors, the probability of a cosmic ray flipping a bit in your numbers approaches certainty.
As for the comment "you are converting your problem into a matrix and doing operations on it", I think that shows a lot of ignorance. Virtually ALL of high-performance computing fits that description!
One of the major problems with GPGPU for the past few years, and probably for the next few, has been that programming GPUs for arbitrary tasks is not very easy. Up until DX10 there was no integer support on GPUs, and branching is still very poor. This is very much a situation where, in order to get maximum benefit, you have to write your code in a very awkward manner to extract all sorts of efficiency gains from the GPU. This is because you're running on hardware that is still dedicated to processing polygons and textures, rather than abstract parallel tasks.
Obviously, that's my take on it, and YMMV.
GPGPU harks back to the days of the math co-processor. A hack is a shortcut to solving a long-winded problem. GPGPU is a hack just like NAT on top of IPv4 is a hack. Computational problems, just like networks, are getting bigger as we try to do more, and GPGPU is a useful interim solution; whether it stays outside the core CPU chip with a separate, cranky API or gets absorbed into the CPU via the API or the manufacturing process is up to the pathfinders.
I suppose he meant that using GPGPU forced you to restructure your implementation so that it fits the hardware, not the problem domain. An elegant implementation should fit the latter.
Note that the word "hack" may have several different meanings:
http://www.urbandictionary.com/define.php?term=hack