Determining the convex hull in the presence of outliers - octave

I wrote a piece of software that creates and optimizes a racing line for a racetrack.
Now I want to extend it using real data recorded from GPS, so I need the g-g diagram, where g is the acceleration. The measured g-g diagram is a set of points in a scatter plot, and I need the contour of that scatter to use as the boundary of the acceleration limits.
To obtain data to work with, I recorded myself on two different racetracks.
The code I wrote converts the x-y coordinates to polar R-theta.
Then I divide the circle into a fixed number of sectors (say, 20).
In each sector I compute a histogram of the R values, and from that histogram I take the last bin that still has an acceptable number of samples.
Then I draw these lines, and this is the result:
It's not bad, but this boundary sits a little inside the real data; the real accelerations are slightly larger. I cannot simply take the maximum value, because then I would include absurd outliers (like 3 g in a right-hand corner, which is surely an error). Moreover, the boundary changes if I change the number of histogram bins, and I cannot find a good way to choose the number of bins.
How can I determine the "true" convex hull, ignoring the outliers?
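For reference, here is a minimal NumPy sketch of the sector-based procedure described above, with the histogram cutoff replaced by a per-sector high quantile, which is one common way to ignore outliers. The 0.98 quantile is an assumption, not a value from the question, and the same steps translate directly to Octave.

import numpy as np

def gg_boundary(ax, ay, n_sectors=20, q=0.98):
    # ax, ay: arrays of longitudinal/lateral acceleration samples (in g)
    # returns one boundary radius per angular sector
    theta = np.arctan2(ay, ax)                       # angle of each sample
    r = np.hypot(ax, ay)                             # radius of each sample
    edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    sector = np.clip(np.digitize(theta, edges) - 1, 0, n_sectors - 1)

    boundary_theta = 0.5 * (edges[:-1] + edges[1:])  # sector mid-angles
    boundary_r = np.full(n_sectors, np.nan)
    for s in range(n_sectors):
        rs = r[sector == s]
        if rs.size:                                  # skip empty sectors
            boundary_r[s] = np.quantile(rs, q)       # robust "edge" of the point cloud
    return boundary_theta, boundary_r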

Can I find price floors and ceilings with CUDA?

Background
I'm trying to convert an algorithm from sequential to parallel, but I am stuck.
Point and Figure Charts
I am creating point and figure charts.
Decreasing
While the stock is going down, add an O every time it breaks through the floor.
Increasing
While the stock is going up, add an X every time it breaks through the ceiling.
Reversal
If the stock reverses direction but the change is less than a reversal threshold (3 units), do nothing. If the change is greater than the reversal threshold, start a new column (X or O).
Sequential vs Parallel
Sequentially, this is pretty straightforward. I keep a variable for the floor and ceiling. If the current price breaks through the floor or ceiling, or changes by more than the reversal threshold, I can take the appropriate action.
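For concreteness, a rough sequential sketch of those rules (the box size and the 3-box reversal threshold are parameters; the column extreme is tracked directly rather than snapped to box boundaries, which is a simplification):

def point_and_figure(prices, box=1.0, reversal=3):
    # returns a list of columns, each a pair [direction, number_of_boxes]
    columns = []
    direction = None              # 'X' (rising column) or 'O' (falling column)
    extreme = prices[0]           # current column's high (X) or low (O)

    for p in prices[1:]:
        if direction is None:
            if abs(p - extreme) >= box:             # first real move
                direction = 'X' if p > extreme else 'O'
                columns.append([direction, int(abs(p - extreme) // box)])
                extreme = p
        elif direction == 'X':
            if p >= extreme + box:                  # broke through the ceiling
                columns[-1][1] += int((p - extreme) // box)
                extreme = p
            elif extreme - p >= reversal * box:     # reversal: start a new O column
                direction = 'O'
                columns.append(['O', int((extreme - p) // box)])
                extreme = p
        else:
            if p <= extreme - box:                  # broke through the floor
                columns[-1][1] += int((extreme - p) // box)
                extreme = p
            elif p - extreme >= reversal * box:     # reversal: start a new X column
                direction = 'X'
                columns.append(['X', int((p - extreme) // box)])
                extreme = p
    return columns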
My question is, is there a way to find these reversal points in parallel? I'm fairly new to thinking in parallel, so I'm sorry if this is trivial. I am trying to do this in CUDA, but I have been stuck for weeks. I have tried using the finite difference algorithms from NVIDIA. These produce local max/min but not the reversal points. Small fluctuations produce numerous relative max/min, but most of them are trivial because the change is not greater than the reversal size.
My question is, is there a way to find these reversal points in parallel?
One possible approach (a rough NumPy sketch of the data flow follows below):
use thrust::unique to remove periods where the price is numerically constant
use thrust::adjacent_difference to produce 1st difference data
use thrust::adjacent_difference on the 1st difference data to get the 2nd difference data, i.e. the points where there is a change in the sign of the slope.
use these points of change in sign of slope to identify separate regions of data - build a key vector from these (e.g. with a prefix sum). This key vector segments the price data into "runs" where the price change is in a particular direction.
use thrust::exclusive_scan_by_key on the 1st difference data, to produce the net change of the run
Wherever the net change of the run exceeds a threshold, flag it as a "reversal".
Your description of what constitutes a reversal may also be slightly unclear. The above method would not flag a reversal on certain data patterns that you might classify as a reversal. I suspect you are looking beyond a single run as I have defined it here. If that is the case, there may be a method to address that as well - with more steps.
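To make the data flow concrete, here is a sequential NumPy analog of those steps (this is not Thrust code; in the parallel version the key vector would come from an inclusive_scan over the sign-change flags, and the per-run sums from reduce_by_key or a scan_by_key):

import numpy as np

def find_reversals(prices, threshold=3.0):
    # returns the ids of the monotone runs whose net price change exceeds the
    # reversal threshold, plus the per-step run ids and per-run net changes
    p = np.asarray(prices, dtype=float)

    # step 1: drop repeated prices (thrust::unique)
    p = p[np.concatenate(([True], np.diff(p) != 0))]

    # step 2: 1st differences (thrust::adjacent_difference)
    d1 = np.diff(p)
    if d1.size == 0:
        return np.array([], dtype=int), np.array([], dtype=int), np.array([])

    # steps 3-4: mark slope sign changes and turn them into a key vector that
    # is constant within each monotone run (prefix sum of the flags)
    sign_change = np.concatenate(([False], np.sign(d1[1:]) != np.sign(d1[:-1])))
    keys = np.cumsum(sign_change)

    # steps 5-6: net change of each run (the segmented-scan step), then flag
    # the runs whose net change exceeds the reversal threshold
    net_change = np.zeros(keys[-1] + 1)
    np.add.at(net_change, keys, d1)
    reversal_runs = np.flatnonzero(np.abs(net_change) > threshold)
    return reversal_runs, keys, net_change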

Best way to store a 3D Vector in MySQL to grab entries within a distance?

I am storing entries inside of a database, and I would like to store a 3D vector in a field and then later select all rows within X distance of a given 3D vector. I am thinking of storing X, Y, and Z in their own fields and then doing basic greater-than and less-than comparisons, but is there a better way I am overlooking?
You'll have to do the math every time if the start point is totally random. If you have a large dataset you could optimize by doing a pre-analysis for clusters of points that fall inside some minimum distance. Then you could avoid computations on all the points in a cluster.
I think that if you store your values in polar coordinates there might be an optimization of the distance computation that reduces the amount of calculation.
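A minimal sketch of the separate-columns idea with a range prefilter: the greater-than/less-than comparisons cut the candidate set down to a bounding box, and the exact Euclidean check runs on the survivors. The table points(id, x, y, z) and the DB-API cursor with MySQL-style placeholders are assumptions for the example.

import math

# Hypothetical schema: a table points(id, x, y, z) with one numeric column
# per component, as suggested in the question.
QUERY = """
    SELECT id, x, y, z
    FROM points
    WHERE x BETWEEN %(xmin)s AND %(xmax)s
      AND y BETWEEN %(ymin)s AND %(ymax)s
      AND z BETWEEN %(zmin)s AND %(zmax)s
"""

def points_within(cursor, cx, cy, cz, radius):
    # cursor: any DB-API cursor (the %(name)s placeholders assume a
    # MySQL-style driver such as mysql-connector)
    params = {
        "xmin": cx - radius, "xmax": cx + radius,
        "ymin": cy - radius, "ymax": cy + radius,
        "zmin": cz - radius, "zmax": cz + radius,
    }
    cursor.execute(QUERY, params)
    hits = []
    for pid, x, y, z in cursor.fetchall():
        # the box contains some points outside the sphere; check exactly
        if math.dist((float(x), float(y), float(z)), (cx, cy, cz)) <= radius:
            hits.append(pid)
    return hits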

Storing vector coordinates in MySQL

I am creating a database to track the (normalized) coordinates of events within a coordinate system. Think: a basketball shot chart, where coordinates of shot attempts are stored relative to where they were taken on the basketball court, in both positive and negative directions from center court.
I'm not exactly sure of the best way to store this information in a database in order to give myself the most flexibility in using the data. My options are:
Store a JSON object in a TEXT/CHAR column with X and Y properties
Store each X and Y coordinate in two DECIMAL columns
Use MySQL's spatial POINT object to store the coordinate
My goal is to store a normalized vector2 (as a percentage of the bounding box), so I can map the positions back out onto a rectangle of any size.
It would be nice to be able to do calculations, like distance from another point, but my understanding of spatial objects is that they are geared more toward geographical coordinates than toward a normalized vector. The other options, however, make calculations a bit more difficult, though for my current project calculations aren't a hard requirement.
Is it possible to use spatial POINT for this and would calculations be similar to that of measuring geographical points?
It is possible to use POINT, but it may be more of a hassle retrieving or modifying the values as it is stored in binary form. You won't be able to view or modify the field directly; you would use an SQL statement to get the components or create a new POINT to replace the old one.
They are stored as numbers and you can do normal mathematical operations on them. Geospatial-type calculations on distance would use other geospatial data types such as LINESTRING.
To insert a point you would have to create a point from two numbers (I think for your case there would be no issues with the size of the numbers):
INSERT INTO coordinatetable(testpoint) VALUES (GeomFromText('POINT(-100473882.33 2133151132.13)'));
INSERT INTO coordinatetable(testpoint) VALUES (GeomFromText('POINT(0.3 -0.213318973)'));
To retrieve it you would have to select the X and Y values separately:
SELECT X(testpoint), Y(testpoint) from coordinatetable;
For your case, I would go with storing the X and Y coordinates in two DECIMAL columns. They are easier to retrieve and modify, and keeping X and Y separate gives you direct access to each coordinate rather than having to extract the values from data stored in a single field. For larger data sets, it may also speed up your queries.
For example:
Whether the player is past half court only requires the Y-coordinate
How much help the player could possibly get from the backboard would rely more on the X-coordinate than the Y-coordinate (X closer to zero => Straighter shot)
Whether the player usually scores from locations close to the long edges of the court would rely more on the X-coordinate than the Y-coordinate (X approaches 1 or -1)
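As a small illustration of what the normalized vector2 buys you once the two DECIMAL values are read back out (assuming coordinates stored in [-1, 1] relative to center court; the court/image size below is made up for the example):

import math

def to_pixels(nx, ny, width, height):
    # map a normalized coordinate in [-1, 1] (center-court origin)
    # onto a rectangle of the given size, centered in the middle
    return (width / 2 + nx * width / 2,
            height / 2 + ny * height / 2)

def shot_distance(a, b):
    # plain Euclidean distance between two normalized (x, y) locations
    return math.dist(a, b)

# e.g. the second example point from the INSERT above, on a 940x500 image
print(to_pixels(0.3, -0.213318973, 940, 500))
print(shot_distance((0.3, -0.213318973), (0.0, 0.0)))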

How to find all frequencies in audio with discrete fourier transform?

I want to analyze some audio and decompose it as well as I can into sine waves. I have never used the FFT before and am just doing some initial reading about the concepts and available libraries, like FFTW and KissFFT.
I'm confused on this point... it sounds like the DFT/FFT will give you the sine amplitudes only at certain frequencies, multiples of a base frequency. For example, if I have audio sampled at the usual 44100 Hz, and I pick a chunk of, say, 256 samples, then that chunk could fit one cycle of 44100/256 ≈ 172 Hz, and the DFT will give me the sine amplitudes at 172, 172*2, 172*3, etc. Is that correct? How do you then find the strength at other frequencies? I'd like to see a spectrum all the way from 20 Hz to about 15 kHz, at about 1 Hz increments.
Fourier decomposition allows you to take any function of time and describe it as a sum of sine waves, each with a different amplitude and frequency. If you want to approach this problem using the DFT, however, you need to make sure you have sufficient resolution in the frequency domain to distinguish between different frequencies. Once you have that, you can determine which frequencies are dominant in the signal and create a signal consisting of multiple sine waves at those frequencies. You are correct in saying that with a sampling frequency of 44.1 kHz and only 256 samples, the lowest frequency you will be able to detect in those 256 samples is about 172 Hz.
OBTAIN SUFFICIENT RESOLUTION IN THE FREQUENCY DOMAIN:
Amplitude values "only at certain frequencies, multiples of a base frequency" is true of a Fourier series decomposition, NOT of the DFT, which has a frequency resolution of a certain increment. The frequency resolution of the DFT is determined by the sampling rate and the number of time-domain samples used to calculate the DFT. Reducing the frequency spacing gives you a better ability to distinguish between two frequencies close together, and this can be done in two ways:
Decrease the sampling rate, but this would move the periodic repetitions in frequency closer together (remember the Nyquist theorem here).
Increase the number of samples used to calculate the DFT. If only the 256 samples are available, you can perform "zero padding", where 0-valued samples are appended to the end of the data, but this has some effects that need to be considered. (A short NumPy illustration of both options follows below.)
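A short NumPy illustration, using the 44.1 kHz / 256-sample numbers from the question (padding to 4096 points is an arbitrary choice for the example):

import numpy as np

fs = 44100                      # sampling rate from the question
x = np.random.randn(256)        # stand-in for a 256-sample audio chunk

# Plain 256-point DFT: bins are spaced fs/N = 44100/256 ~ 172 Hz apart
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
print(freqs[1])                 # ~172.3 Hz bin spacing

# Zero padding to 4096 points: bins ~10.8 Hz apart.  Note this only
# interpolates the same underlying spectrum; it does not add any new
# information about the signal.
padded = np.fft.rfft(x, n=4096)
padded_freqs = np.fft.rfftfreq(4096, d=1 / fs)
print(padded_freqs[1])          # ~10.8 Hz bin spacing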
HOW TO COME TO A CONCLUSION:
If you plot the frequency content of different audio signals in individual graphs, you will find that the amplitudes differ a bit. This is because the individual signals will not be identical in sound, and there is always noise inherent in any signal (from the surroundings and from the hardware itself). Therefore, what you want to do is take the average of two or more DFTs to remove noise and get a more accurate representation of the frequency content. Depending on your application, this may not be possible if the sound you are capturing changes noticeably over time (for example speech or music). Averaging is thus only useful if all the signals to be averaged are pretty much equal in sound (individual separate recordings of "the same thing"). Just to clarify: from, for example, four time-domain signals you compute four frequency-domain signals (using a DFT method), and then average the four frequency-domain signals into a single averaged frequency-domain signal. This removes noise and gives you a better representation of which frequencies are present in your audio.
AN ALTERNATIVE SOLUTION:
If you know that your signal is supposed to contain a certain number of dominant frequencies (not too many) and these are the only ones you are interested in, then I would recommend that you use Pisarenko's harmonic decomposition (PHD) or Multiple Signal Classification (MUSIC, nice abbreviation!) to find these frequencies (and their corresponding amplitudes). This is computationally less intensive than the DFT. For example, if you KNOW the signal contains 3 dominant frequencies, Pisarenko will return the frequency values for those three, but keep in mind that the DFT reveals much more information, allowing you to come to more conclusions.
Your initial assumption is incorrect. An FFT/DFT will not give you amplitudes only at certain discrete frequencies. Those discrete frequencies are only the centers of bins, each bin constituting a narrow-band filter whose main lobe has non-zero bandwidth, roughly one or two FFT bin spacings wide depending on the window (rectangular, von Hann, etc.) applied before the FFT. Thus the amplitude of spectral content between bin centers will still show up, but spread across multiple FFT result bins.
If the separation of key signals is large enough and the noise level is low enough, then you can interpolate the FFT results to examine frequencies between bin centers. You may need to use a high quality interpolator, such as a Sinc kernel.
If your signal separation is smaller or the noise level is higher, then you may need a longer window of data to feed a longer FFT in order to gather sufficient resolution information. An FFT window of length 256 at a 44.1 kHz sample rate is almost certainly too short to gather sufficient information about spectral content below a few hundred Hz, if those are among the frequencies you would like to examine, as they can't be separated cleanly from a DC bias (bin 0).
Unfortunately, there's a degree of uncertainty in identifying the frequencies in a fixed sample of a signal. If you use a short FFT, then there's no way to tell the difference between frequencies over a fairly wide range. If you use a long FFT to get higher resolution in the frequency domain, then you can't detect frequency changes as quickly. This is inherent in the math.
Off the top of my head: if you want roughly 1 Hz increments at a 44.1 kHz sample rate, you need about a 44100-point FFT (one second of data), which means you'll get a full-resolution frequency plot about once per second. The 20 Hz to 15 kHz range you care about sits comfortably below the 22.05 kHz Nyquist limit, so the sample rate itself is not the constraint.
You may also be interested in the short-time Fourier transform. It doesn't solve the fundamental trade-off problem, but in practice it may get you what you want.
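For example, with SciPy's STFT the window length nperseg sets this trade-off directly: a longer window gives finer frequency spacing but a coarser time step. The 440 Hz tone below is just a stand-in signal, and the window lengths are illustrative:

import numpy as np
from scipy import signal

fs = 44100
t = np.arange(fs) / fs                        # 1 second of audio
x = np.sin(2 * np.pi * 440 * t)               # stand-in signal: a 440 Hz tone

for nperseg in (256, 4096, 44100):
    f, times, Zxx = signal.stft(x, fs=fs, nperseg=nperseg)
    print(f"window={nperseg:6d}: "
          f"frequency spacing ~{f[1]:8.2f} Hz, "
          f"time step ~{times[1]:.4f} s")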

Efficient representation for growing circles in 2D space?

Imagine there's a 2D space, and in this space there are circles that grow at different constant rates. What's an efficient data structure for storing these circles, such that I can query "Which circles intersect point p at time t?"
EDIT: I do realize that I could store the initial state of the circles in a spatial data structure and do a query where I intersect a circle at point p with a radius of fastest_growth * t, but this isn't efficient when there are a few circles that grow extremely quickly whereas most grow slowly.
Additional Edit: I could further augment the above approach by splitting up the circles and grouping them by their growth rate, then applying the above approach to each group, but this requires a bounded query time t to be efficient.
Represent the circles as cones in 3d, where the third dimension is time. Then use a BSP tree to partition them the best you can.
In general, I think the worst case for testing for intersection is always O(n), where n is the number of circles. Most spatial data structures work by partitioning the space cleverly so that a fraction of the objects (hopefully close to half) are in each half. However, if the objects overlap then the partitioning cannot be perfect; there will always be cases where more than one object is in a partition. If you just think about the case of two circles overlapping, there is no way to draw a line such that one circle is entirely on one side and the other circle is entirely on the other side. Taken to the logical extreme, assuming arbitrary positioning of the circles and arbitrary radii, there is no way to partition them such that testing for intersection takes O(log(n)).
This doesn't mean that, in practice, you won't get a big advantage from using a tree, but the advantage you get will depend on the configuration of the circles and the distribution of the queries.
This is a simplified version of another problem I have posted about a week ago:
How to find first intersection of a ray with moving circles
I still haven't had the time to describe the solution that was expected there, but I will try to outline it here (for this simpler case).
The approach to solving this problem is to use a kinetic KD-tree. If you are not familiar with KD-trees, it is better to first read about them. You also need to add time as an additional coordinate (you make the space 3D instead of 2D). I have not implemented this idea yet, but I believe it is the correct approach.
I'm sorry this is not completely thought through, but it seems like you might look into multiplicatively-weighted Voronoi Diagrams (MWVDs). It seems like an adversary could force you into computing one with a series of well-placed queries, so I have a feeling they provide a lower-bound to your problem.
Suppose you compute the MWVD on your input data. Then for a query, you would be returned the circle that is "closest" to your query point. You can then determine whether this circle actually contains the query point at the query time. If it doesn't, then you are done: no circle contains your point. If it does, then you should compute the MWVD without that generator and run the same query. You might be able to compute the new MWVD from the old one: the cell containing the generator that was removed must be filled in, and it seems (though I have not proved it) that the only generators that can fill it in are its neighbors.
Some sort of spatial index, such as a quadtree or BSP tree, will give you O(log(n)) access time.
For example, each node in the quadtree could contain a linked list of pointers to all those circles which intersect it.
How many circles, by the way? For small n, you may as well just iterate over them. If you constantly have to update your spatial index and jump all over cache lines, it may end up being faster to brute-force it.
How are the centres of your circles distributed? If they cover the plane fairly evenly you can discretise space and time, then do the following as a preprocessing step:
for (t = 0; t < max_t; t++)
  foreach circle c, with centre and radius (x, y, r) at time t
    for (int X = x - r; X < x + r; X++)
      for (int Y = y - r; Y < y + r; Y++)
        circles_at[X][Y][t].push_back(&c)
(assuming you discretise space and time along integer boundaries, scale and offset however you like of course, and you can add circles later on or amortise the cost by deferring calculation for distant values of t)
Then your query for point (x,y) at time (t) could do a brute-force linear check over circles_at[x][y][ceil(t)]
The trade-off is obvious, increasing the resolution of any of the three dimensions will increase preprocessing time but give you a smaller bucket in circles_at[x][y][t] to test.
People are going to make a lot of recommendations about types of spatial indices to use, but I would like to offer a bit of orthogonal advice.
I think you are best off building a few indices based on time, i.e. t_0 < t_1 < t_2 ...
If a point intersects a circle at t_i, it will also intersect it at t_{i+1}. If you know the point in advance, you can eliminate all circles that intersect the point at t_i for all computation at t_{i+1} and later.
If you don't know the point in advance, then you can keep these time-point trees (built based on how big each circle would be at a given time). At query time (e.g. t_query), find i such that t_{i-1} < t_query <= t_i. If you check all the possible circles at t_i, you will not have any false negatives.
This is sort of a hack for a data structure that is "time-dynamics aware", but I don't know of one. If you have a threaded environment, then you only need to maintain one spatial index and can be building the next one in the background. It will cost you a lot of computation for the benefit of being able to respond to queries with low latency. This solution should be compared at the very least to the O(n) solution (go through each circle and check whether dist(point, circle.center) < circle.radius at the query time).
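For reference, that O(n) baseline is tiny. A sketch, assuming each circle is stored as a (cx, cy, r0, growth_rate) tuple with linearly growing radius r0 + growth_rate * t:

import math

def circles_containing(circles, px, py, t):
    # circles: list of (cx, cy, r0, growth_rate) tuples
    hits = []
    for cx, cy, r0, rate in circles:
        radius = r0 + rate * t                      # radius at time t
        if math.hypot(px - cx, py - cy) <= radius:  # point inside circle?
            hits.append((cx, cy, r0, rate))
    return hits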
Instead of considering the circles, you can test on their bounding boxes to filter out the ones which do not contain the point. If your bounding box sides are all sorted, this is essentially four binary searches.
The tricky part is reconstructing the sorted sides for any given time t. To do that, you can start off with the original points: two lists for the left and right sides with the x coordinates, and two lists for the top and bottom with the y coordinates. For any time greater than 0, all the left-side points will move left, etc. You only need to compare each entry to the one next to it to obtain the time points at which an element and its neighbour swap order. This gives you a list of time points at which to modify your ordered lists. If you now sort these modification records by time, then for any given starting time and ending time you can extract all the modification records between the two and apply them to your four lists in order. I haven't completely worked out the algorithm, but I think there will be edge cases where three or more successive elements cross over at exactly the same time point, so you may need to modify the algorithm to handle those cases as well. Perhaps a modification record that contains the position in the list and the number of entries to reorder would suffice.
I think it's possible to create a binary tree that solves this problem.
Each node should contain a growing circle, a static circle used for partitioning, and the latest time at which the partitioning circle still cleanly partitions. Furthermore, the growing circle contained in a node should always have a faster growth rate than either of its child nodes' growing circles.
To do a query, take the root node. First check its growing circle; if it contains the query point at the query time, add it to the answer set. Then, if the time you are querying is greater than the time at which the partition is broken, query both children; otherwise, if the point falls within the partitioning circle, query the left child, else query the right child.
I haven't quite completed the details of performing insertions, (the difficult part is updating the partition circle so that the number of nodes on the inside and outside is approximately equal and the time when the partition is broken is maximized).
To combat the case of a few circles that grow quickly, you could sort the circles in descending order by growth rate and check each of the k fastest growers. To find the proper k for a given t, I think you can perform a binary search for the index k such that k*m = (t * growth rate of the k-th circle)^2, where m is a constant factor you'll need to find by experimentation. This balances the part that grows linearly with k against the part that falls off quadratically with the growth rate.
If you, as already suggested, represent the growing circles by vertical cones in 3D, then you can partition the space into a regular (maybe hexagonal) grid of packed vertical cylinders. For each cylinder, calculate the minimal and maximal heights (times) of its intersections with all cones. If a circle center (the vertex of a cone) lies inside the cylinder, then its minimal time is zero. Then sort the cones by minimal intersection time. As a result of this indexing, each cylinder has an ordered sequence of records with three values: minimal time, maximal time, and circle number.
When you check some point in 3D space, you take the cylinder it belongs to and iterate over its sequence until the stored minimal time exceeds the time of the given point. Every cone obtained this way whose maximal time is also less than the given time is guaranteed to contain the point; only the cones for which the given time lies between the minimal and maximal intersection times need to be rechecked exactly.
There is a classical trade-off between indexing and runtime costs: the smaller the cylinder diameter, the smaller the range of intersection times, so fewer cones need to be rechecked at each query, but more cylinders have to be indexed. If the circle centers are distributed unevenly, it may be worth searching for a better cylinder placement than a regular grid.
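A rough Python sketch of the query side of this scheme, assuming the per-cylinder lists of (min_time, max_time, circle_id) records have already been built and sorted by min_time (a square grid is used here instead of a hexagonal one, purely to keep the example short):

import math

def query(grid, cell_size, circles, px, py, t):
    # grid: dict mapping (i, j) cell index -> list of (t_min, t_max, circle_id)
    #       records sorted by t_min
    # circles: dict mapping circle_id -> (cx, cy, r0, growth_rate)
    cell = (int(px // cell_size), int(py // cell_size))
    hits = []
    for t_min, t_max, cid in grid.get(cell, ()):
        if t_min > t:
            break                                   # later records cannot intersect yet
        if t_max < t:
            hits.append(cid)                        # cone already covers the whole cell
        else:
            cx, cy, r0, rate = circles[cid]         # exact recheck needed
            if math.hypot(px - cx, py - cy) <= r0 + rate * t:
                hits.append(cid)
    return hits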
P.S. My first answer here - just registered to post it. Hope it isn’t late.