What are examples of NP problems that are reducible to an NP-complete problem but not the other way round?

What are examples of NP problems that are reducible to an NP-complete problem but not the other way round? When I read about NP and NP-complete, I thought the mapping would be one-to-one, which would make it pointless to categorize them separately. However, surely there are problems that are reducible in only one direction. I am interested to know about them.

All NP problems can be reduced to NP-complete problems; that is precisely what makes a problem NP-complete. NP-complete problems are a special type of NP problem, so they do not need to be reduced to anything to be considered in NP; they are already in NP. The reverse direction is exactly what fails in general: for example, every problem in P, such as 2-SAT, reduces to SAT, but SAT does not reduce to 2-SAT unless P = NP.


NaN during numerical solution of the Poisson equation

I am solving the Poisson equation for a heterostructure (an AlGaN-GaN system, specifically) using the SOR method, in Fortran. For one specific initialization, the solver outputs NaN and stops (since I have set the ffpe-trap flag), while for a different initialization the solver runs fine.
Should the solution of the Poisson equation depend on the choice of the initial potential?
How do I understand the cause of NaN in a general iterative method?
More aggressive solvers converge faster, but they also tend to have a smaller basin of attraction. If you start too far away from it, you may enter regions where the method becomes ill-conditioned. This can be caused by too much over-relaxation in the SOR method.
It may also happen that the solver diverges; a good solver should check for such rapidly growing iteration sequences.
Of course, it could also be a "stupid" error in encoding the problem; automatic translators from mathematical formulas to mesh-grid equations were invented for exactly that reason.
Without more details on the problem and the origin of the error, there is nothing more specific to be said. If you suspect a problem in the encoding of the discretized PDE, then add the code here. If you think the problem is more related to the solver setup, or the solver itself, ask a question with more details on Scientific Computing (scicomp.SE).
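To make the divergence failure mode concrete, here is a minimal sketch (in Python rather than the asker's Fortran) of SOR for the 2D Poisson equation with a residual-based divergence guard. The grid setup, the relaxation factor omega, and the blow-up threshold are illustrative assumptions, not the asker's code:

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=10000):
    """Solve -laplace(u) = f on a uniform 2D grid with zero Dirichlet
    boundaries using SOR, aborting if the iteration blows up."""
    u = np.zeros_like(f)              # initial guess
    n, m = f.shape
    best = np.inf                     # smallest residual seen so far
    for it in range(max_iter):
        res = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Gauss-Seidel value from the 5-point Laplacian stencil
                gs = 0.25 * (u[i-1, j] + u[i+1, j] +
                             u[i, j-1] + u[i, j+1] + h * h * f[i, j])
                diff = gs - u[i, j]
                u[i, j] += omega * diff     # over-relaxation step
                res = max(res, abs(diff))
        if res < tol:
            return u, it
        # Divergence guard: NaN/inf, or a residual exploding far past the
        # best one seen, means the iteration left its basin of attraction.
        if not np.isfinite(res) or res > 1e6 * best:
            raise RuntimeError(f"SOR diverging at iteration {it}, residual {res}")
        best = min(best, res)
    raise RuntimeError("SOR failed to converge within max_iter")
```

With omega closer to 1 the method is slower but has a larger basin of attraction, which is one way to test whether over-relaxation is the culprit.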

Why does a neural network tend to output the 'mean value'?

I am using Keras to build a simple neural network for a regression task, but the output always tends to the 'mean value' of the ground-truth y data.
See the first figure: blue is the ground truth, red is the predicted value (very close to the constant mean of the ground truth).
The model also stops learning very early, even though I set the number of epochs to 100.
Does anyone have ideas about the conditions under which a neural network stops learning early, and why the regression output tends to 'the mean' of the ground truth?
Thanks!
Possibly because the data are unpredictable? Do you know for certain that the data set has some order of predictability?
Just eyeballing your data set: it lacks periodicity, lacks homoscedasticity, and lacks any slope, skew, trend, or pattern. I can't really tell whether there is anything wrong with your net. In the absence of any pattern, the mean is always the best prediction, so it is entirely possible (although not certain) that the neural net is doing its job.
I suggest you find an easier data set and see if you can tackle that first.
The model is not learning from the data. Think of basic linear regression: the 'null' prediction, i.e. the prediction you would make without any predictors at all, is just the expected value, the mean. This could be caused by many different issues, but initialization comes to mind: bad initialization leads to no learning. This blog post has good practical advice that may help.
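As a sanity check along these lines, here is a minimal Keras sketch that trains on toy data with a real pattern and compares the result against the predict-the-mean baseline. This is not the asker's model; the data, architecture, and he_normal initializer are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras

# Toy data with an actual pattern, so a working net must beat the mean baseline.
x = np.linspace(-1, 1, 512).reshape(-1, 1)
y = np.sin(3 * x) + 0.05 * np.random.randn(*x.shape)

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu",
                       kernel_initializer="he_normal", input_shape=(1,)),
    keras.layers.Dense(64, activation="relu",
                       kernel_initializer="he_normal"),
    keras.layers.Dense(1),            # linear output for regression
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
model.fit(x, y, epochs=100, batch_size=32, verbose=0)

# Sanity check: compare the model's MSE against the 'predict the mean' baseline.
mse_model = model.evaluate(x, y, verbose=0)
mse_mean = float(np.mean((y - y.mean()) ** 2))
print(f"model MSE: {mse_model:.4f}  vs  mean-baseline MSE: {mse_mean:.4f}")
```

If the model cannot beat the mean baseline even on data like this, the problem is in the setup (initialization, learning rate, loss); if it can, the original data may simply have no learnable pattern.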

What is the best way to read large datasets with low latency for a Torch model?

I am used to the data-specific layers and tools of the libraries I normally work with, such as MXNet (RecordIO) and Caffe (LMDB), which also handle data-augmentation tricks. I decided to give Torch a shot to see what is good or bad about it. At first glance, there is no such data-loading standard for Torch, or I am missing it. Could you point out some good ways to load large-scale datasets for model training?
Some pointers:
twitter/torch-dataset
fb.resnet.torch/dataloader
Element-Research/dataload
More resources can also be found in the torch7/wiki.

Can every iterative algorithm be turned into dynamic programming?

It has been much discussed that every recursive algorithm can be transformed into an iterative algorithm.
But can every iterative algorithm be transformed into dynamic programming?
I'm starting to learn about dynamic programming, and I'm having a lot of trouble. Even though I can find recursive solutions, and I'm getting good at turning them into iterative algorithms, I still can't turn these iterative algorithms into dynamic programming. It would be very helpful to know for certain whether every iterative algorithm can be transformed into a dynamic-programming one.
I assume that by dynamic programming you mean the same thing as Wikipedia does: algorithms that break the problem into smaller subproblems and use memoization to avoid solving the same subproblem twice.
Dynamic programming cannot be usefully applied to all iterative algorithms. For dynamic programming to be useful, the problem needs two properties (a short sketch follows the list):
Overlapping subproblems: when solving the problem recursively, you need to encounter the same subproblem, with the same parameters, more than once; otherwise memoization is a waste of time and memory.
Optimal substructure: given the solutions to the subproblems, the solution to the whole problem is easy to compute.
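A standard textbook illustration of both properties, not from the original answer, is Fibonacci, sketched here in Python: the naive recursion exhibits overlapping subproblems, memoization (top-down DP) removes the recomputation, and the bottom-up table relies on optimal substructure:

```python
from functools import lru_cache

# Naive recursion recomputes fib(k) exponentially many times:
# the subproblems overlap.
def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Memoized (top-down DP): each subproblem is solved exactly once and cached.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up DP (the 'iterative' form): optimal substructure means fib(n)
# is assembled directly from fib(n-1) and fib(n-2).
def fib_table(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_table(30) == 832040
```

An iterative algorithm whose steps never revisit the same subproblem (say, a single linear scan) gains nothing from this transformation, which is the point of the two requirements above.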

Cloth sim with implicit integration: instability problem

I'm implementing a GPU-based cloth simulation using a mass-spring model with backward Euler integration.
The linear system is solved using a conjugate gradient solver with a filter. Everything is done on the GPU.
I think that, as an implicit integration scheme, it should be stable, as many papers have pointed out, but it is unstable just like the explicit method. Most of the time, when the time-step size reaches a certain value (depending on the stiffness), the CG solver refuses to converge and goes into an infinite loop.
Although I've checked the code over and over again and read many papers, I still can't find the reason.
The cloth moves correctly and the animation is much more convincing than the one using explicit integration, so I guess the forces are computed correctly. Is there anything I missed that could cause the instability?
I've been tackling this strange problem for days.
Can anyone help? I would really appreciate any suggestions.
Thanks a lot in advance!
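A CG loop should never be able to run forever. Below is a minimal CPU-side NumPy sketch, not the asker's GPU code, of a conjugate gradient solver with the two safeguards this symptom suggests are missing: a hard iteration cap, and a curvature check that detects when the backward-Euler system matrix stops being positive definite at large time steps. The function name cg_guarded and the A_mul interface are hypothetical:

```python
import numpy as np

def cg_guarded(A_mul, b, x0=None, tol=1e-6, max_iter=200):
    """Conjugate gradient with safeguards. A_mul(v) applies the
    (supposedly SPD) system matrix; for backward Euler this is
    schematically (M - h^2 * dF/dx) applied to a velocity update."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A_mul(x)
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b) + 1e-30
    for it in range(max_iter):            # hard cap: never loop forever
        Ap = A_mul(p)
        pAp = p @ Ap
        if pAp <= 0.0:
            # Negative curvature: the matrix is not positive definite,
            # e.g. the stiffness/time-step combination has made the
            # backward Euler system indefinite. CG does not apply; bail out.
            raise RuntimeError(f"lost positive-definiteness at iteration {it}")
        alpha = rs / pAp
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * b_norm:
            return x, it
        p = r + (rs_new / rs) * p
        rs = rs_new
    raise RuntimeError("CG hit the iteration cap without converging")
```

If the pAp check fires as the time step grows, the issue is in how the system matrix is assembled (for instance, missing or mis-signed stiffness Jacobian terms) rather than in the CG solver itself.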