Real roots of a system of 8 polynomial equations of degree 3 - numerical-methods

I would like to obtain ALL the real roots of a system of 8 polynomial equations in 8 variables, where the maximum degree of each equation is 3. Is this doable? What is the best software to do this?

You will need to use a multi-dimensional root finding method.
Newton's method should be fairly easy to implement (it's a simple algorithm), and you may find some details about this and other methods here.
As far as I understand, it's not likely to be straightforward to solve 8 simultaneous cubic equations with a very general direct method.
Depending on your problem, you may also want to check if it's analogous to the problem of fitting a cubic spline to a set of points. If it is, then a simple direct algorithm is available here.
For the initial conditions, you might need to use well-dispersed random starting points in your domain of interest. You could use Sobol sequences or some other low-discrepancy sequence generator to efficiently generate points that fill out the space under consideration.
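For what it's worth, here is a minimal sketch of that multi-start idea in Python/SciPy (the 8-equation system shown is only a placeholder; substitute your own residuals). Keep in mind that random multistart can only find the roots whose basins you happen to sample; it gives no guarantee that every real root has been found.

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import qmc

def equations(x):
    # Placeholder system of 8 cubics in 8 unknowns; replace with your own residuals.
    return np.array([x[i]**3 + x[(i + 1) % 8] - 1.0 for i in range(8)])

lo, hi = -2.0, 2.0                               # assumed domain of interest
sampler = qmc.Sobol(d=8, scramble=True)          # low-discrepancy starting points
starts = qmc.scale(sampler.random(256), [lo] * 8, [hi] * 8)

roots = []
for x0 in starts:
    x, info, ier, msg = fsolve(equations, x0, full_output=True)
    if ier == 1 and np.max(np.abs(equations(x))) < 1e-9:
        # keep only roots we have not already found (within a tolerance)
        if not any(np.allclose(x, r, atol=1e-6) for r in roots):
            roots.append(x)

print(len(roots), "distinct real roots found in the sampled box")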
You may also consider moving this question to Mathematics Stack Exchange, where you might have a better chance of having it answered correctly.

Related

Goal Seek in Octave to replicate Excel's 'Solver' Macro

This is essentially a question on fundamentals, and whether or not there is a more efficient way to achieve what I am looking for. I have built a working fluid dynamics calculator in Excel to find the flow rates required for a target pressure loss; the optimisation is handled using Solver, but it's very clunky and not user-friendly.
I'm trying to replicate the function in Octave since it's widely used here, but I am a complete beginner; I'm probably missing something obvious. I can easily enter all of the math for a single iteration via a series of functions, but my Excel file required using the 'Solver' macro, and I'm unsure how to efficiently replicate this in Octave.
I am aware that linprog (in MATLAB) and glpk (in Octave) can be used to solve linear programming problems.
I have a series of nested equations which are all dependent on a single matrix, Q (flow rates at various locations). Many other inputs are required, but they either remain constant throughout the calculation (e.g. system geometry) or are dictated by Q (e.g. Reynolds number and loss coefficients). In trying to simplify my problem I have settled on two steps:
Write code to solve my problem, input: Q matrix, output: pressure loss matrix
Create a loop that iterates different Q matrices until some conditions for the pressure loss matrix are met.
I don't think it will be practical to get my expressions into the form A*x = B (in order to use glpk) given the complexity. In Excel, I can point Solver at a Q value that drives a multitude of equations that impact pressure loss, and it will find the value I need to achieve a target. How can I most efficiently replicate this functionality in Octave?
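For concreteness, here is a rough sketch of how I imagine step 2 could be framed as root finding. It is written in Python/SciPy only because that is compact to show; Octave's fsolve provides equivalent functionality. The pressure_loss function and target values below are toy placeholders, not my real calculator.

import numpy as np
from scipy.optimize import fsolve

def pressure_loss(Q):
    # Toy stand-in for step 1: the real version would compute Reynolds numbers,
    # loss coefficients, etc. from Q and the fixed geometry.
    return 0.5 * Q**2

target_loss = np.full(8, 2.0)    # hypothetical target pressure losses

def residual(Q):
    # Step 2 then reduces to driving this residual vector to zero.
    return pressure_loss(Q) - target_loss

Q0 = np.ones(8)                  # starting guess for the flow rates
Q_solution = fsolve(residual, Q0)
print(Q_solution)                # -> 2.0 for every element of this toy system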
First of all, Solver is not a macro. Pretty far from it.
So, you're going to replicate a comprehensive "What-If" Analysis plug-in -- so complex, in fact, that Microsoft chose to contract a third-party company of experts to develop the tool and provide support for it (successfully, based on the 1.2 billion copies they've distributed).
And you're going to do this in an inferior coding language that you're a complete beginner with? Cool. I'd like to see this!
Here's a checklist of Solver's features, so you don't miss anything:
Good Luck!
More Information:
Wikipedia : Solver
Office.com : Define and Solve a Problem by using Solver
Frontline: Official Solver Page: http://solver.com
AppSource.Microsoft.com : Solver (with Video)
Frontline: Solver International Magazine

Using HMM for offline character recognition

I have extracted features from many images of isolated characters (such as gradient, neighbouring pixel weight, and geometric properties). How can I use HMMs as a classifier trained on this data? All literature I read about HMMs refers to states and state transitions, but I can't connect it to features and class labelling. The example on JAHMM's home page doesn't relate to my problem.
I need to use HMM not because it will work better than other approaches for this problem but because of constraints on project topic.
There was an answer to this question for online recognition, but I want the same for offline recognition, and in a little more detail.
EDIT: I partitioned each character into a grid with a fixed number of squares. Now I am planning to perform feature extraction on each grid block and thus obtain a sequence of features for each sample by moving from left to right and top to bottom.
Would this represent an adequate "sequence" for an HMM, i.e. would an HMM be able to capture the temporal variation of the data, even though the character is not drawn from left to right and top to bottom? If not, please suggest an alternative.
Should I feed in a lot of features or start with a few? How do I know if the HMM is underperforming or if the features are bad? I am using JAHMM.
Is it true that extracting stroke features is difficult and that they can't be logically combined with grid features (since an HMM expects a sequence generated by some random process)?
I've usually seen neural networks used for this sort of recognition task, e.g. here, here, here, and here. Since a simple Google search turns up so many hits for neural networks in OCR, I'll assume you are set on using HMMs (a project constraint, correct?). Regardless, these links can offer some insight into gridding the image and obtaining image features.
Your approach for turning a grid into a sequence of observations is reasonable. In this case, be sure you do not confuse observations and states. The features you extract from one block should be collected into one observation, i.e. a feature vector. (In comparison to speech recognition, your block's feature vector is analogous to the feature vector associated with a speech phoneme.) You don't really have much information regarding the underlying states. This is the hidden aspect of HMMs, and the training process should inform the model how likely one feature vector is to follow another for a character (i.e. transition probabilities).
Since this is an off-line process, don't be concerned with the temporal aspects of how characters are actually drawn. For the purposes of your task, you've imposed a temporal order on the sequence of observations with your left-to-right, top-to-bottom block sequence. This should work fine.
As for HMM performance: choose a reasonable vector of salient features. In speech recognition, the dimensionality of a feature vector can be high (>10). (This is also where the cited literature can assist.) Set aside a percentage of the training data so that you can properly test the model. First, train the model, then evaluate it on the training dataset. How well does it classify your characters? If it does poorly, re-evaluate the feature vector. If it does well on the training data, test the generality of the classifier by running it on the reserved test data.
As for the number of states, I would start with a heuristically derived number. Assuming your character images are scaled and normalized, perhaps something like 40%(?) of the blocks are occupied? This is a crude guess on my part since a source image was not provided. For an 8x8 grid, this would imply that about 25 blocks are occupied. We could then start with 25 states - but that's probably naive: empty blocks can convey information (meaning the number of states might increase), but some feature sets may be observed in similar states (meaning the number of states might decrease). If it were me, I would probably pick something like 20 states. Having said that: be careful not to confuse features and states. Your feature vector is a representation of things observed in a particular state. If the tests described above show your model is performing poorly, tweak the number of states up or down and try again.
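To make the observation/state distinction concrete, here is a rough sketch of the usual one-HMM-per-character-class setup. It is shown with Python's hmmlearn purely for brevity (you're on JAHMM, but the structure carries over), and the 20 states and diagonal-Gaussian observations are only the guesses discussed above, not recommendations.

import numpy as np
from hmmlearn import hmm

def train_models(samples_by_class, n_states=20):
    # samples_by_class: {label: list of (n_blocks, n_features) arrays},
    # each array being one character's block feature vectors in
    # left-to-right, top-to-bottom order.
    models = {}
    for label, samples in samples_by_class.items():
        X = np.vstack(samples)               # all observations, concatenated
        lengths = [len(s) for s in samples]  # per-sample sequence lengths
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)                # Baum-Welch training
        models[label] = model
    return models

def classify(models, sample):
    # sample: one (n_blocks, n_features) array for an unknown character.
    # Pick the class whose HMM assigns the sequence the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(sample))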
Good luck.

Cosine in floating point

I am trying to implement the cosine and sine functions in floating point (but I have no floating point hardware).
Since my processor has no floating-point hardware, nor instructions, I have already implemented algorithms for floating point multiplication, division, addition, subtraction, and square root. So those are the tools I have available to me to implement cosine and sine.
I was considering using the CORDIC method, at this site
However, I implemented division and square root using Newton's method, so I was hoping to use the most efficient method here as well.
Please don't tell me to just go look in a book or that "papers exist" - no kidding they exist. I am looking for names of well-known algorithms that are known to be fast and efficient.
First off, depending on your accuracy requirements, this can be considerably fussier than your earlier questions.
Now that you've been warned: you'll first want to reduce the argument modulo pi/2 (or 2pi, or pi, or pi/4) to get the input into a manageable range. This is the subtle part. For a nice discussion of the issues involved, download a copy of K.C. Ng's "Argument Reduction for Huge Arguments: Good to the Last Bit" (a simple Google search on the title will get you a PDF). It's very readable, and does a great job of describing why this is tricky.
After doing that, you only need to approximate the functions on a small range around zero, which is easily done via a polynomial approximation. A Taylor series will work, though it is inefficient. A truncated Chebyshev series is easy to compute and reasonably efficient; computing the minimax approximation is better still. This is the easy part.
I have implemented sine and cosine exactly as described, entirely in integer arithmetic, in the past (sorry, no public sources). With hand-tuned assembly, results in the neighborhood of 100 cycles are entirely reasonable on "typical" processors. I don't know what hardware you're dealing with (the performance will mostly be gated on how quickly your hardware can produce the high part of an integer multiply).
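To make the reduce-then-approximate structure concrete, here is a rough sketch in Python. It uses plain floats and simple Taylor kernels for readability; a real soft-float or fixed-point version would substitute your own multiply/add routines and a minimax polynomial, and the naive reduction shown is only trustworthy for moderate arguments (doing it properly is what Ng's paper is about).

import math

def _cos_kernel(x):
    # Degree-6 Taylor polynomial for cos on roughly [-pi/4, pi/4];
    # a minimax (Remez) fit of the same degree would be more accurate.
    x2 = x * x
    return 1.0 - x2 / 2.0 + x2 * x2 / 24.0 - x2 * x2 * x2 / 720.0

def _sin_kernel(x):
    x2 = x * x
    return x * (1.0 - x2 / 6.0 + x2 * x2 / 120.0 - x2 * x2 * x2 / 5040.0)

def my_cos(x):
    # Naive reduction: fine for moderate |x|, but the subtraction loses
    # precision for huge arguments.
    k = round(x / (math.pi / 2.0))   # nearest multiple of pi/2
    r = x - k * (math.pi / 2.0)      # remainder, roughly in [-pi/4, pi/4]
    quadrant = k % 4
    if quadrant == 0:
        return _cos_kernel(r)
    if quadrant == 1:
        return -_sin_kernel(r)
    if quadrant == 2:
        return -_cos_kernel(r)
    return _sin_kernel(r)            # quadrant == 3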
For various levels of precision, you can find some good approximations here:
http://www.ganssle.com/approx.htm
With the added advantage that they are deterministic in runtime, unlike the various "converging series" options, which can vary wildly depending on the input value. This matters if you are doing anything real-time (games, motion control, etc.).
Since you have the basic arithmetic operations implemented, you may as well implement sine and cosine using their Taylor series expansions.

What kind of learning algorithm would you use to build a model of how long it takes a human to solve a given Sudoku situation?

I don't have much experience in machine learning, pattern recognition, data mining, etc. and in their underlying theory and systems.
I would like to develop an artificial model of the time it takes a human to make a move in a given Sudoku puzzle.
So what I'm looking for as an output from the machine learning process is a model that can give predictions of how long it takes a target human to make a move in a given Sudoku situation.
The same input doesn't always map to the same outcome. It takes the human different amounts of time to make a move in the same situation, but my hypothesis is that there's a tendency in the resulting probability distribution. (My educated guess is that it is ~normal.)
I have ideas about the factors that influence the distribution (like #empty slots) but would preferably leave it to the system to figure these patterns out. Please notice, that I'm not interested in the patterns, just the model.
I can generate sample and test data easily by running sudoku puzzles and measuring the times it takes to make the moves.
What kind of learning algorithm would you suggest to use for this?
I was thinking NNs, but I'm not sure if they can have the desired property of giving weighted random outcomes for the same input.
If I understand this correctly you have an input vector of length 81, which contains 1 if the square is filled in and 0 otherwise. You want to learn a function which returns a probability distribution which models the response time of a human to that board position.
My first response would be that this is a regression problem and you should try straightforward linear regression. This will not provide you with a distribution of response times, but a single 'best-guess' response time.
I'm not clear on why you want to model a distribution of response times. However, if you really do want to output a distribution, then it sounds like you want to look at Bayesian methods. I'm not really an expert on Bayesian inference, so I can't help you much further here.
However, I don't really think your approach is going to work because I agree with your intuition about features such as the number of empty slots being important. There are also other obvious features, such as the number of empty slots per row/column that are likely to be important. Explicitly putting these features in your representation will probably be much more successful than expecting that the learning algorithm will infer something similar on its own.
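To make that concrete, here is a minimal sketch of the explicit-features-plus-linear-regression idea in Python/scikit-learn. The particular features are only examples of the kind of thing you'd hand-craft, not a recommended set.

import numpy as np
from sklearn.linear_model import LinearRegression

def features(board):
    # board: 9x9 array, 0 = empty, 1-9 = filled.
    b = np.asarray(board)
    empty = (b == 0)
    return [
        empty.sum(),              # total number of empty cells
        empty.sum(axis=1).max(),  # emptiest row
        empty.sum(axis=0).max(),  # emptiest column
        empty.sum(axis=1).min(),  # fullest row (often where easy moves live)
        empty.sum(axis=0).min(),  # fullest column
    ]

def fit_model(boards, times):
    # boards: list of 9x9 arrays; times: measured seconds for the next move.
    X = np.array([features(b) for b in boards])
    y = np.array(times)
    model = LinearRegression().fit(X, y)
    return model   # model.predict([features(new_board)]) gives a best-guess time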
The Monte Carlo method seems like it would work well here, but it would require a stack of solutions the size of the moon to really do it. And it wouldn't give you the time per person, just the time on average.
My understanding of it, tenuous as it is, is that you have a database with a board position and the time it took a human to make the next move. At the very least you have a starting point for most moves. Even if a position is not in the database, you could start to calculate how long it would take to make a move based on some algorithm. Though I know you specified you wanted machine learning to do this, it might be worth segmenting the problem into something a little smaller and then building on it.
If you have some guesstimate as to what influences the function (# of empty cells, etc.), try to train a classifier on a vector of features, and not on the 81-cell vector (0/1 or 0..9, it doesn't really matter for my argument).
I think that your claim:
we wouldn't necessarily have to know the underlying patterns; the "trained patterns" in a learning system automatically encode these sometimes quite delicate and subtle patterns -- that's one of their great powers
is wrong. You do have to give the network the right domain. For example, when trying to detect objects in an image, working in the pixel domain is pointless. You'll only get results if you first run some feature detection to detect edges, corners, etc.
Theoretically, with enough non-linearity (in an NN, enough layers in the network) it can detect such things, but in practice I have never seen that work without giving the classifier the right features to work with.
I was thinking NNs, but I'm not sure if they can have the desired property of giving weighted random outcomes for the same input.
You're just trying to learn a function from 2^81 or 10^81 (or a much smaller feature space as I suggest) to R (response time between 0 and Inf) or some discretization of that. So NN and other classifiers can do that.

What is Cyclomatic Complexity?

A term that I see every now and then is "Cyclomatic Complexity". Here on SO I saw some questions about "how to calculate the CC of language X" or "how do I do Y with the minimum amount of CC", but I'm not sure I really understand what it is.
On the NDepend website, I saw an explanation that basically says "the number of decisions in a method: each if, for, &&, etc. adds +1 to the CC 'score'". Is that really it? If yes, why is this bad? I can see that one might want to keep the number of if-statements fairly low to keep the code easy to understand, but is that really all there is to it?
Or is there some deeper concept to it?
I'm not aware of a deeper concept. I believe it's generally considered in the context of a maintainability index. The more branches there are within a particular method, the more difficult it is to maintain a mental model of that method's operation (generally).
Methods with higher cyclomatic complexity are also more difficult to obtain full code coverage on in unit tests. (Thanks Mark W!)
That brings all the other aspects of maintainability in, of course. Likelihood of errors/regressions/so forth. The core concept is pretty straight-forward, though.
Cyclomatic complexity measures the number of times you must execute a block of code with varying parameters in order to execute every path through that block. A higher count is bad because it increases the chances for logical errors escaping your testing strategy.
Cyclomatic complexity = number of decision points + 1
The decision points may be conditional statements like if, if…else, switch, for loops, while loops, etc.
The following chart gives a rough classification of the application:
Cyclomatic complexity 1 – 10: considered a normal application
Cyclomatic complexity 11 – 20: a moderate application
Cyclomatic complexity 21 – 50: a risky application
Cyclomatic complexity more than 50: an unstable application
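For example, counting decision points by hand in a made-up function (my own illustration, not from the chart's source):

def classify_temperature(t, humid):
    if t < 0:                    # +1
        label = "freezing"
    elif t < 15:                 # +1 (an elif is another decision)
        label = "cold"
    else:                        # else adds nothing by itself
        label = "mild"
    if humid and t > 25:         # +1 for the if, +1 for the short-circuit `and`
        label = "muggy"
    return label

# 4 decision points, so cyclomatic complexity = 4 + 1 = 5 -> "normal" by the chart above.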
Wikipedia may be your friend on this one: Definition of cyclomatic complexity
Basically, you have to imagine your program as a control flow graph and then
The complexity is (...) defined as:
M = E − N + 2P
where
M = cyclomatic complexity,
E = the number of edges of the graph,
N = the number of nodes of the graph,
P = the number of connected components.
CC is a concept that attempts to capture how complex your program is and how hard it is to test it in a single integer number.
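As a quick worked example (my own, not from the linked definition), take a function with a single if/else:

def sign(x):
    if x > 0:       # one decision node
        result = 1
    else:
        result = -1
    return result

# Control flow graph: nodes = {if, result=1, result=-1, return}            -> N = 4
# Edges: if->result=1, if->result=-1, result=1->return, result=-1->return  -> E = 4
# One connected component                                                  -> P = 1
# M = E - N + 2P = 4 - 4 + 2 = 2, matching "decision points + 1" = 1 + 1 = 2.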
Yep, that's really it. The more execution paths your code can take, the more things that must be tested, and the higher probability of error.
Another interesting point I've heard:
The places in your code with the biggest indents should have the highest CC. These are generally the most important areas to ensure test coverage, both because they're expected to be harder to read and maintain and because, as other answers note, they're also the more difficult regions of code to cover.
Cyclomatic Complexity really is just a scary buzzword. In fact it's a measure of code complexity used in software development to point out more complex parts of code (which are more likely to be buggy, and therefore have to be very carefully and thoroughly tested). You can calculate it using the E − N + 2P formula, but I would suggest you have this calculated automatically by a plugin. I have heard of a rule of thumb that you should strive to keep the CC below 5 to maintain good readability and maintainability of your code.
I have just recently experimented with the Eclipse Metrics Plugin on my Java projects, and it has a really nice and concise Help file which will of course integrate with your regular Eclipse help and you can read some more definitions of various complexity measures and tips and tricks on improving your code.
That's it; the idea is that a method with a low CC has fewer forks, loops, etc., which all make a method more complex. Imagine reviewing 500,000 lines of code with an analyzer and seeing a couple of methods with an order of magnitude higher CC. This lets you focus on refactoring those methods for better understanding (it's also common for high-CC methods to have a high bug rate).
Each decision point in a routine (loop, switch, if, etc.) essentially boils down to an if-statement equivalent. For each if there are 2 code paths that can be taken, so with the 1st branch there are 2 code paths, with the 2nd there are 4 possible paths, with the 3rd there are 8, and so on - on the order of 2^N code paths for N branches.
This makes it difficult to understand the behavior of code and to test it when N grows beyond some small number.
The answers provided so far do not mention the correlation of software quality to cyclomatic complexity. Research has shown that having a lower cyclomatic complexity metric should help develop software that is of higher quality. It can help with software quality attributes of readability, maintainability, and portability. In general one should attempt to obtain a cyclomatic complexity metric of between 5 and 10.
One of the reasons for using metrics like cyclomatic complexity is that in general a human being can only keep track of about 7 (plus or minus 2) pieces of information simultaneously in their brain. Therefore, if your software is overly complex, with multiple decision paths (i.e. a high cyclomatic complexity metric), it is unlikely that you will be able to visualize how it will behave. This would most likely lead to developing erroneous or bug-ridden software. More information about this can be found here and also on Wikipedia.
Cyclomatic complexity is computed from the control flow graph: it is the number of linearly independent paths through a program's source code (each if, if/else, for, while, etc. adds a path).
Cyclomatic complexity is basically a metric for identifying areas of code that need more attention for maintainability; it is essentially an input to refactoring.
It definitely gives an indication of where code can be improved, e.g. by avoiding deeply nested loops and conditions.
That's sort of it. However, each branch of a "case" or "switch" statement tends to count as 1. In effect, this means CC hates case statements, and any code that requires them (command processors, state machines, etc).
Consider the control flow graph of your function, with an additional edge running from the exit to the entrance. The cyclomatic complexity is the maximum number of cuts we can make without separating the graph into two pieces.
For example:
function F:
    if condition1:
        ...
    else:
        ...
    if condition2:
        ...
    else:
        ...
Control Flow Graph
You can probably intuitively see why the linked graph has a cyclomatic complexity of 3.
Cyclomatic complexity is a measure of how complex a unit of software is. It measures the number of different paths a program might follow through conditional logic constructs (if, while, for, switch/case, etc.). If you would like to learn more about calculating it, here is a helpful YouTube video: https://www.youtube.com/watch?v=PlCGomvu-NM
It is important in designing test cases because it reveals the different paths or scenarios a program can take.
"To have good testability and maintainability, McCabe recommends
that no program module should exceed a cyclomatic complexity of 10"(Marsic,2012, p. 232).
Reference:
Marsic, I. (2012, September). Software Engineering. Rutgers University. Retrieved from www.ece.rutgers.edu/~marsic/books/SE/book-SE_marsic.pdf