Solving a system of equations for a 10th-degree polynomial fit (least squares method)

I have a numerical problem solving a system of equations for a 10th-degree polynomial using the ordinary Least Squares Method (LSM). The fitted parameters take on both huge and very small values, so I cannot invert the matrix constructed by this method: the precision is too low even with extended-precision variables. I have tried this in C++, Matlab, and Delphi.
Does anybody know of software tools that can do this with sufficient accuracy, or numerical tips for getting good results? Standard matrix computation is unfortunately not up to the task.

I think that your problem comes from the fact that you are using 10th-order polynomials, which quite often lead to numerical problems:
First of all, they can be unsuitable because of large oscillations. Even when interpolating a simple function, these oscillations can be present; see the famous example of Runge's phenomenon.
Secondly, fitting high-order polynomials can lead to ill-conditioned linear systems, which is why you could not invert the matrix (which you should not do anyway). I made a simple experiment: I took 11 equidistant points on the interval [0,1] and assembled the matrix of the linear system to solve. Matlab gives me a condition number of about 1e8, and since forming the normal equations squares the condition number, the least-squares matrix has a condition number of about 1e16. So your matrix is 'close to singular', which means that all the numerical precision is lost.
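For reference, here is a minimal C++ sketch of that experiment using Eigen (an assumed library choice; the original experiment was done in Matlab). It builds the 11-point Vandermonde matrix and estimates its condition number from the singular values:

```cpp
#include <cmath>
#include <iostream>
#include <Eigen/Dense>

int main() {
    const int n = 11;  // 11 equidistant points on [0,1], degree-10 basis
    Eigen::MatrixXd V(n, n);
    for (int i = 0; i < n; ++i) {
        const double x = double(i) / (n - 1);
        for (int j = 0; j < n; ++j)
            V(i, j) = std::pow(x, j);  // Vandermonde entry x^j
    }
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(V);  // singular values only
    const double cond = svd.singularValues()(0) / svd.singularValues()(n - 1);
    // Forming the normal equations V^T V squares the condition number.
    std::cout << "cond(V) ~ " << cond
              << ", cond(V^T V) ~ " << cond * cond << std::endl;
}
```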
So, the best way to get rid of your problem is to get rid of the 10th-order polynomial. You could consider lower-order polynomials, splines, or piecewise polynomial approximations.
If you really need a 10th-order polynomial (e.g. if you know that your data were generated by such a polynomial), then do not invert the matrix: use a good preconditioner and an iterative method to solve the system instead, or a direct orthogonal factorization as in the sketch below.
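As a concrete alternative to inversion, here is a hedged sketch (again assuming Eigen) that solves the least-squares problem via a column-pivoted QR decomposition of the rectangular matrix, so the badly conditioned normal equations are never formed:

```cpp
#include <Eigen/Dense>

// Fit degree-`deg` polynomial coefficients to points (x_i, y_i) by least
// squares, without forming or inverting A^T A.
Eigen::VectorXd polyfit(const Eigen::VectorXd& x, const Eigen::VectorXd& y,
                        int deg) {
    const int rows = static_cast<int>(x.size());
    Eigen::MatrixXd A(rows, deg + 1);
    for (int i = 0; i < rows; ++i) {
        double p = 1.0;
        for (int j = 0; j <= deg; ++j) {
            A(i, j) = p;   // column j holds x_i^j
            p *= x(i);
        }
    }
    // Column-pivoted Householder QR: numerically much safer than
    // inverting the normal-equations matrix.
    return A.colPivHouseholderQr().solve(y);
}
```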

Related

Can I find price floors and ceilings with CUDA?

Background
I'm trying to convert an algorithm from sequential to parallel, but I am stuck.
Point and Figure Charts
I am creating point and figure charts.
Decreasing
While the stock is going down, add an O every time it breaks through the floor.
Increasing
While the stock is going up, add an X every time it breaks through the ceiling.
Reversal
If the stock reverses direction but the change is less than a reversal threshold (3 units), do nothing. If the change is greater than the reversal threshold, start a new column (X or O).
Sequential vs Parallel
Sequentially, this is pretty straightforward. I keep a variable for the floor and ceiling. If the current price breaks through the floor or ceiling, or changes more than the reversal threshold, I can take the appropriate action.
My question is: is there a way to find these reversal points in parallel? I'm fairly new to thinking in parallel, so I'm sorry if this is trivial. I am trying to do this in CUDA, but I have been stuck for weeks. I have tried using the finite-difference algorithms from NVIDIA. These produce local max/min but not the reversal points. Small fluctuations produce numerous relative max/min, but most of them are trivial because the change is not greater than the reversal size.
My question is: is there a way to find these reversal points in parallel?
One possible approach (sketched in code after this list):
use thrust::unique to remove periods where the price is numerically constant
use thrust::adjacent_difference to produce 1st difference data
use thrust::adjacent_difference on the 1st difference data to get the 2nd difference data, i.e. the points where there is a change in the sign of the slope.
use these points of change in sign of slope to identify separate regions of data - build a key vector from these (e.g. with a prefix sum). This key vector segments the price data into "runs" where the price change is in a particular direction.
use thrust::exclusive_scan_by_key on the 1st difference data, to produce the net change of the run
Wherever the net change of the run exceeds a threshold, flag as a "reversal"
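Below is a hedged sketch of that pipeline in Thrust. The sample data and the `threshold` value are made up for illustration, and I use an inclusive scan so that each element carries its run's net change up to and including itself. The device lambdas need nvcc with --extended-lambda:

```cpp
// Sketch only; compile with: nvcc -std=c++14 --extended-lambda reversals.cu
#include <thrust/device_vector.h>
#include <thrust/unique.h>
#include <thrust/adjacent_difference.h>
#include <thrust/scan.h>
#include <thrust/transform.h>
#include <vector>

int main() {
    std::vector<float> h = {10, 10, 11, 12, 11, 9, 8, 9, 12, 12};
    thrust::device_vector<float> price(h.begin(), h.end());
    const float threshold = 3.0f;  // reversal threshold (3 units)

    // 1. Remove periods where the price is numerically constant.
    price.erase(thrust::unique(price.begin(), price.end()), price.end());
    const int n = price.size();

    // 2. First difference: d1[i] = price[i] - price[i-1] (d1[0] = price[0]).
    thrust::device_vector<float> d1(n);
    thrust::adjacent_difference(price.begin(), price.end(), d1.begin());

    // 3. Mark slope sign changes; each mark starts a new "run".
    thrust::device_vector<int> flag(n, 0);
    thrust::transform(d1.begin() + 2, d1.end(), d1.begin() + 1,
                      flag.begin() + 2,
                      [] __device__ (float a, float b) {
                          return a * b < 0.0f ? 1 : 0;
                      });

    // 4. Prefix-sum the marks into a key vector segmenting the runs.
    thrust::device_vector<int> key(n);
    thrust::inclusive_scan(flag.begin(), flag.end(), key.begin());

    // 5. Running net change within each run (the last element of each run
    //    holds the run's total move).
    thrust::device_vector<float> net(n, 0.0f);
    thrust::inclusive_scan_by_key(key.begin() + 1, key.end(),
                                  d1.begin() + 1, net.begin() + 1);

    // 6. Flag a reversal wherever a run's net change exceeds the threshold.
    thrust::device_vector<int> reversal(n, 0);
    thrust::transform(net.begin(), net.end(), reversal.begin(),
                      [=] __device__ (float x) {
                          return (x > threshold || x < -threshold) ? 1 : 0;
                      });
    return 0;
}
```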
Your description of what constitutes a reversal may also be slightly unclear. The above method would not flag a reversal on certain data patterns that you might classify as a reversal. I suspect you are looking beyond a single run as I have defined it here. If that is the case, there may be a method to address that as well - with more steps.

Find the max. data length for a CRC polynomial and a given Hamming Distance

I am looking for a numerical algorithm to calculate the maximum data length for a given CRC polynomial and a given Hamming Distance.
E.g. let's say I have an 8-bit CRC with full polynomial 0x19b. I want to achieve a Hamming Distance of 4. Now how many bits of data can be guarded under these conditions?
Is there some numerical algorithm (ideally C or C++ code) that can be used to solve this problem?
Not a complete answer, but my spoof code can be adapted to this problem.
To determine that you have not met the requirement of a Hamming distance of 4 for a given message length, you need only find a single codeword with a Hamming distance of 3. If you give spoof a set of bit locations in a message, it will determine which of those bits to invert in order to leave the CRC unchanged. Spoof simply solves a set of linear equations over GF(2) to find the bit locations to invert.
That will quickly narrow down the message lengths that will work. Once you have a candidate length, n, for which you have not been able to find a codeword of distance 3, proving that there are no such codewords will be a little more work. You would need to generate all possible 3-bit patterns, of which there are n(n-1)(n-2)/6, and look to see if any of them have a CRC of zero. Depending on n, that might not be too daunting. A fast way to do this is to generate the CRCs of all messages with a single bit set, and exclusive-oring all choices of three CRCs from that set to see if any of those are zero.
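To make that last step concrete, here is a small sketch of my own (not code from spoof, and only covering the 3-bits-in-message patterns described above). It builds the CRC of each single-set-bit message incrementally as x^(i+8) mod g(x), then tests all triples; the candidate length `n` is an assumption:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const uint32_t poly = 0x19b;  // full 9-bit polynomial from the question
    const int n = 100;            // candidate message length in bits (assumed)

    // crc[i] = CRC of a message whose only set bit is i places before the
    // CRC field, i.e. x^(i+8) mod g(x), built incrementally.
    std::vector<uint32_t> crc(n);
    uint32_t rem = poly & 0xffu;  // x^8 mod g(x)
    for (int i = 0; i < n; ++i) {
        crc[i] = rem;
        rem <<= 1;                      // multiply by x
        if (rem & 0x100u) rem ^= poly;  // reduce mod g(x)
    }

    // A triple of distinct bit positions whose CRCs XOR to zero is a
    // weight-3 codeword, i.e. the Hamming distance at this length is < 4.
    for (int a = 0; a < n; ++a)
        for (int b = a + 1; b < n; ++b)
            for (int c = b + 1; c < n; ++c)
                if ((crc[a] ^ crc[b] ^ crc[c]) == 0) {
                    std::printf("weight-3 codeword: bits %d %d %d\n", a, b, c);
                    return 1;
                }
    std::printf("no 3-bit message error is a codeword at length %d\n", n);
    return 0;
}
```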
I conjecture that there is a faster way to do that last step by intelligently culling the rows used in the linear equation solver, allowing for all bit positions. However the margin here is not sufficient for me to express the proof.

gnuRadio Dual Tone detection

I am trying to come up with an efficient way to characterize two narrowband tones separated by about 900 kHz (one at around 100 kHz and one at around 1 MHz once translated to baseband). They don't move much in frequency over time, but may have amplitude variations we want to monitor.
Each tone is roughly 100 Hz wide, and we are required to characterize these two beasts over long periods of time down to a resolution of about 0.1 Hz. The samples are coming in at over 2 MSamples/sec (TBD) to adequately acquire the higher tone.
I'm trying to avoid (if possible) doing brute-force >2M-sample FFTs on the data once a second to extract frequency-domain data. Is there an efficient approach? Something akin to performing two (much) smaller FFTs around the bands of interest? I've looked at Goertzel and chirp-z methods, but I am not certain they help save processing.
Something akin to performing two (much) smaller FFTs around the bands of interest
There is: it's called Goertzel, and it is essentially the FFT for a single bin; you have already looked at it. It will save you CPU time.
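For reference, a minimal standalone sketch of the Goertzel recurrence itself (my own illustration, assuming real-valued input samples):

```cpp
#include <cmath>
#include <vector>

// Squared magnitude of the DFT bin nearest `freq`, computed with the
// Goertzel recurrence over one block of real samples at rate `fs`.
double goertzel_power(const std::vector<float>& x, double freq, double fs) {
    const double w = 2.0 * std::acos(-1.0) * freq / fs;
    const double coeff = 2.0 * std::cos(w);
    double s1 = 0.0, s2 = 0.0;
    for (float sample : x) {
        const double s0 = sample + coeff * s1 - s2;  // second-order IIR step
        s2 = s1;
        s1 = s0;
    }
    // |X(k)|^2 for the selected bin
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}
```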
Anyway, there's no reason to do a 2M-point FFT: first of all, you only want a resolution of about 1/20 of the sampling rate, hence a 20-point FFT would totally do, and should be pretty doable for your CPU at these low rates. Since you don't seem to care about the phase of your tones, use FFT -> complex_to_mag.
However, there's one thing that you should always do: look at your signal of interest, and decimate down to the rate that fits exactly that. Since GNU Radio's filters are implemented cleverly, the filter itself will only run at the decimated rate, and you can spend the saved CPU cycles on a better filter.
Because a direct decimation from 2 MHz to 100 Hz (decimation: 20000) would require a really ugly filter length, you should do this in multiple rate stages:
I'd try decimating by 100 first, and then by 100 again in a second step, leaving you with 200 Hz of observable spectrum. The xlating FIR filter blocks will let you use a simple low-pass filter (use the "Low-Pass Filter Taps" block to define a variable that contains such taps) as a band selector.
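A hedged sketch of that two-stage structure against GNU Radio's C++ API follows. Header paths, defaults, and tap parameters shift between GNU Radio versions (this follows the 3.7-era naming), and most people would wire this up in GRC or Python instead, so treat it as illustrative only; the tone frequency and filter widths are assumptions:

```cpp
#include <gnuradio/top_block.h>
#include <gnuradio/filter/firdes.h>
#include <gnuradio/filter/freq_xlating_fir_filter_ccf.h>

int main() {
    const double samp_rate = 2e6;   // input rate (question says >2 MS/s, TBD)
    const double tone_freq = 1e6;   // assumed center of the ~1 MHz tone

    auto tb = gr::make_top_block("dual_tone");

    // Stage 1: translate the tone to 0 Hz and decimate by 100 -> 20 kHz.
    auto taps1 = gr::filter::firdes::low_pass(1.0, samp_rate, 5e3, 2e3);
    auto stage1 = gr::filter::freq_xlating_fir_filter_ccf::make(
        100, taps1, tone_freq, samp_rate);

    // Stage 2: decimate by 100 again -> 200 Hz observable spectrum.
    auto taps2 = gr::filter::firdes::low_pass(1.0, samp_rate / 100, 80.0, 20.0);
    auto stage2 = gr::filter::freq_xlating_fir_filter_ccf::make(
        100, taps2, 0.0, samp_rate / 100);

    // tb->connect(src, 0, stage1, 0);
    // tb->connect(stage1, 0, stage2, 0);
    // ... attach a sink per tone, then tb->run();
    return 0;
}
```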

Writing a Discrete Fourier Transform program

I would like to write a DFT program using FFT.
This is actually used for very large matrix-vector multiplication (10^8 * 10^8), which is simplified to a vector-to-vector convolution, and further reduced to a Discrete Fourier Transform.
May I ask whether the DFT is accurate? The matrix has all discrete binary elements, and the multiplication process would not tolerate any non-zero error probability in the result. However, from what I have learnt about the DFT so far, it seems to be an approximation algorithm?
Also, may I ask roughly how long the code would be? I.e., would this be something I could write from scratch in C++ in perhaps one or two hundred lines? This is actually for a paper, and all I need is that the complexity is O(n log n); the constant in front of it doesn't really matter :) So the simplest implementation would be best. (I did see packages like kissfft and FFTW, but they are very lengthy and probably overkill for my purpose...)
A canonical radix-2 FFT can be written in fewer than 200 lines of C++. The average numerical error grows roughly as O(log N), so you will need to use a large enough numeric type and data scale factor to account for this.
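To give a sense of the size, here is a minimal recursive radix-2 Cooley-Tukey sketch, well under 200 lines (unoptimized; the length must be a power of two). Convolution then follows by transforming both zero-padded inputs, multiplying pointwise, and inverse-transforming:

```cpp
#include <cmath>
#include <complex>
#include <vector>

// In-place recursive radix-2 Cooley-Tukey FFT; a.size() must be a power of 2.
void fft(std::vector<std::complex<double>>& a) {
    const std::size_t n = a.size();
    if (n <= 1) return;
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even);   // transform the two half-size subproblems
    fft(odd);
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < n / 2; ++k) {
        const auto t = std::polar(1.0, -2.0 * pi * double(k) / double(n)) * odd[k];
        a[k]         = even[k] + t;  // butterfly combine
        a[k + n / 2] = even[k] - t;
    }
}
```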
You can compute numerically exact convolutions using the Number Theoretic Transform (NTT). It uses special integer sequences to compute the discrete Fourier transform over integer fields/rings. The only caveat is that the signal needs to be integer-valued.
Its implementation is roughly the same size as the FFT, but a little faster. You can find my implementation of it at finitetransform.sourceforge.net as the NTTW sub-library. The APFloat library might be more relevant to your needs, as they do multiplication of large numbers using convolutions.
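For flavor, here is a hedged standalone NTT sketch over the common prime p = 998244353 with primitive root 3 (my own parameter choice for illustration; the NTTW library makes its own choices). Convolution is then: forward NTT of both zero-padded inputs, pointwise multiply mod p, inverse NTT; the result is exact as long as the true convolution values stay below p.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

using u64 = std::uint64_t;
const u64 MOD = 998244353;  // = 119 * 2^23 + 1, supports lengths up to 2^23
const u64 G = 3;            // primitive root mod MOD

u64 pow_mod(u64 b, u64 e, u64 m) {
    u64 r = 1; b %= m;
    for (; e; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

// In-place NTT; `invert` selects the inverse transform.
// a.size() must be a power of two dividing 2^23.
void ntt(std::vector<u64>& a, bool invert) {
    const std::size_t n = a.size();
    // bit-reversal permutation
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }
    for (std::size_t len = 2; len <= n; len <<= 1) {
        u64 w = pow_mod(G, (MOD - 1) / len, MOD);
        if (invert) w = pow_mod(w, MOD - 2, MOD);  // modular inverse root
        for (std::size_t i = 0; i < n; i += len) {
            u64 wn = 1;
            for (std::size_t k = 0; k < len / 2; ++k) {
                const u64 u = a[i + k];
                const u64 v = a[i + k + len / 2] * wn % MOD;
                a[i + k] = (u + v) % MOD;
                a[i + k + len / 2] = (u + MOD - v) % MOD;
                wn = wn * w % MOD;
            }
        }
    }
    if (invert) {
        const u64 n_inv = pow_mod(n, MOD - 2, MOD);
        for (auto& x : a) x = x * n_inv % MOD;
    }
}
```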

Algorithm for online approximation of a slowly-changing, real valued function

I'm tackling an interesting machine learning problem and would love to hear if anyone knows a good algorithm to deal with the following:
The algorithm must learn to approximate a function of N inputs and M outputs
N is quite large, e.g. 1,000-10,000
M is quite small, e.g. 5-10
All inputs and outputs are floating-point values, which could be positive or negative, and are likely to be relatively small in absolute value, but with no absolute guarantees on bounds
Each time period I get N inputs and need to predict the M outputs; at the end of the time period, the actual values for the M outputs are provided (i.e. this is a supervised learning situation where learning needs to take place online)
The underlying function is non-linear, but not too nasty (e.g. I expect it will be smooth and continuous over most of the input space)
There will be a small amount of noise in the function, but the signal-to-noise ratio is likely to be good - I expect the N inputs will explain 95%+ of the output values
The underlying function changes slowly over time - it is unlikely to change drastically in a single time period, but is likely to shift slightly over thousands of time periods
There is no hidden state to worry about (other than the changing function), i.e. all the information required is in the N inputs
I'm currently thinking some kind of back-propagation neural network with lots of hidden nodes might work - but is that really the best approach for this situation, and will it handle the changing function?
With your number of inputs and outputs, I'd also go for a neural network; it should give a good approximation. The slow change is good for a back-propagation technique, since it should not have to 'de-learn' much.
I think stochastic gradient descent (http://en.wikipedia.org/wiki/Stochastic_gradient_descent) would be a straightforward first step; it will probably work nicely given the operating conditions you have.
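To make the online setting concrete, here is a minimal hedged sketch of stochastic gradient descent on a single linear layer (N inputs, M outputs, squared-error loss). The class name and the constant learning rate are my own choices for illustration; a real solution would likely add hidden units as discussed:

```cpp
#include <vector>

struct LinearModel {
    int n, m;
    std::vector<double> W;  // m x n weights, row-major
    std::vector<double> b;  // m biases

    LinearModel(int n_, int m_) : n(n_), m(m_), W(m_ * n_, 0.0), b(m_, 0.0) {}

    std::vector<double> predict(const std::vector<double>& x) const {
        std::vector<double> y(m, 0.0);
        for (int i = 0; i < m; ++i) {
            y[i] = b[i];
            for (int j = 0; j < n; ++j) y[i] += W[i * n + j] * x[j];
        }
        return y;
    }

    // One online SGD step on (x, target); a small constant learning rate
    // lets the model keep tracking a slowly drifting function.
    void update(const std::vector<double>& x,
                const std::vector<double>& target, double lr) {
        const std::vector<double> y = predict(x);
        for (int i = 0; i < m; ++i) {
            const double err = y[i] - target[i];  // d(loss)/d(y_i)
            for (int j = 0; j < n; ++j) W[i * n + j] -= lr * err * x[j];
            b[i] -= lr * err;
        }
    }
};
```

Each time period you would call predict() when the N inputs arrive and update() once the M true outputs are revealed.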
I'd also go for an ANN. A single layer might do fine since your input space is large; you might want to give that a shot before adding a lot of hidden layers.
@mikera What is it going to be used for? Is it an assignment in an ML course?