SciChart - 3D series data point peaks change when rotating

When rotating various renderable series such as MountainRenderableSeries3D and WaterfallRenderableSeries3D, the peaks of the data points change significantly, as can be seen in the GIFs below. What causes this, and can anything be done to fix it? The Y values have a large range, from 0.000024 to 20.0, and the same set of data is inserted into the data series multiple times along the Z axis, so the X and Y values are all the same. Changing the camera mode from Orthogonal to Perspective helps somewhat, but not completely.
This is with SciChart 5.1.0.11405 and SharpDX 4.0.1.
(GIFs: Orthogonal camera mode, Perspective camera mode)

Related

Trying to find an easy way to recognize when a function is logarithmic based off of input values using math

I recently found a video that explains how you can recognize linear, quadratic, and exponential functions from looking at their output represented in a table.
For example, in linear functions the differences between consecutive y coordinates are equal as x increases consistently, in exponential functions the ratios between consecutive y coordinates are equal, and in quadratic functions the second differences between y coordinates are equal.
I've been trying to find out through ingenuity and through Google, but I can't tell how this pattern extends to logarithmic functions. What's the relationship between y coordinates as x scales consistently?
I'm trying to figure this out for a program I'm trying to write.
Also, in case I haven't done a good enough job, here is what I am referring to in the previous examples.
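Purely as an illustration of the checks described above, here is a minimal Python sketch; the sample values and tolerance are made-up placeholders, and it assumes the y values come from equally spaced x values:

def classify(ys, tol=1e-9):
    # ys must be sampled at equally spaced x values
    d1 = [b - a for a, b in zip(ys, ys[1:])]            # first differences
    d2 = [b - a for a, b in zip(d1, d1[1:])]            # second differences
    ratios = [b / a for a, b in zip(ys, ys[1:]) if a != 0]

    def is_constant(seq):
        return len(seq) > 0 and max(seq) - min(seq) <= tol

    if is_constant(d1):
        return "linear"
    if is_constant(ratios):
        return "exponential"
    if is_constant(d2):
        return "quadratic"
    # For a logarithmic function the analogous check is: equal y differences
    # when the x values form a geometric progression (x multiplied by a
    # constant factor each step) rather than an arithmetic one.
    return "something else"

print(classify([1, 4, 7, 10, 13]))   # y = 3x + 1 at x = 0..4  -> linear
print(classify([2, 4, 8, 16, 32]))   # y = 2^(x+1)             -> exponential
print(classify([0, 1, 4, 9, 16]))    # y = x^2                 -> quadratic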

Kalman filter used in an IMU: what signals does the fusion process combine?

From what I read, the Kalman filter basically tries to "reconcile" the predictions for one variable, based on the history of that variable, with actual observations of that variable. In the case of finding the position of an IMU, I would imagine that we need the speed readout, so that we can predict x_(k+1) = v * dt + x_(k), and we would also need the direct readout for z_(k+1).
But in fact on an IMU we don't have this z readout, so what exactly does Kalman filtering on an IMU do?
thanks
Yang
OK, after reading more on related resources, I figured out:
The "baseline" reference is the vertical vector for earth gravity, measured by the 3-axis accelerometer, which really measures force, either force induced by gravity or force induced by acceleration; the noise (including the "noise" introduced by actual horizontal acceleration) is corrected using the gyro's rotation-rate readout in a Kalman filter.
Heading is obtained from the earth's magnetic field through the compass (only in the horizontal plane; the compass doesn't tell you the tilt away from the earth's z-axis). Again, the Kalman filter uses the gyro rotation rate to correct the compass readout.
Once you have the earth's vertical axis and north axis, you have your full orientation.
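As a rough illustration of that accelerometer/gyro fusion, here is a minimal 1-state Python sketch (one tilt angle only) where the gyro rate drives the prediction and the accelerometer-derived angle is the measurement that corrects the drift; the noise values are made-up placeholders, not tuned numbers.

import numpy as np

def fuse_tilt(gyro_rate, accel_angle, dt,
              q=0.01,   # assumed process noise (gyro integration drift)
              r=0.5):   # assumed measurement noise (accel is noisy under motion)
    theta, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for w, z in zip(gyro_rate, accel_angle):
        # Predict: integrate the gyro rate, grow the uncertainty
        theta = theta + w * dt
        p = p + q
        # Update: correct with the accelerometer-derived angle
        k = p / (p + r)          # Kalman gain
        theta = theta + k * (z - theta)
        p = (1.0 - k) * p
        estimates.append(theta)
    return np.array(estimates)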

Applying a Kalman filter on a leg follower robot

I was asked to create a leg follower robot (I already did it), and in the second part of this assignment I have to develop a Kalman filter in order to improve the robot's following process. The robot measures the distance from the person to itself and also the angle (it is a relative angle, because the reference is the robot itself, not absolute x-y coordinates).
About this assignment I have a serious doubt. Everything I have read and every sample I have seen about the Kalman filter has been in one dimension (a car covering a distance or a rock falling from a building), and according to the task I would have to apply it in two dimensions. Is it possible to apply a Kalman filter like this?
If it is possible to calculate a Kalman filter in two dimensions, then I understand that what I am asked to do is to follow the legs in a linearized way, even though a person walks erratically (with random movements). About this I have a doubt about how to establish the state matrix; could anyone please tell me how to do it, or where I can find more information about this?
thanks.
Well you should read up on Kalman Filter. Basically what it does is estimate a state through its mean and variance separately. The state can be whatever you want. You can have local coordinates in your state but also global coordinates.
Note that the latter will certainly result in nonlinear system dynamics, in which case you could use the Extended Kalman Filter, or to be more correct the continuous-discrete Kalman Filter, where you treat the system dynamics in a continuous manner and the measurements in discrete time.
Example with global coordinates:
Assume you have a small cubic mass which can drive forward with velocity v. You could simply model the dynamics in local coordinates only, where your state s would be s = [v], which is a linear model.
But you could also incorporate the global coordinates x and y, assuming we are moving on a plane only. Then you would have s = [x, y, phi, v]'. We need phi to keep track of the current orientation, since the cube can only move forward with respect to its orientation, of course. Let's define phi as the angle between the cube's forward direction and the x-axis. Or in other words: with phi = 0 the cube would move along the x-axis, with phi = 90° it would move along the y-axis.
The nonlinear system dynamics with global coordinates can then be written as
s_dot = [x_dot, y_dot, phi_dot, v_dot]'
with
x_dot = cos(phi) * v
y_dot = sin(phi) * v
phi_dot = ...
v_dot = ... (Newton's Law)
In the EKF (Extended Kalman Filter) prediction step you would use the (discretized) equations above to predict the mean of the state, and the linearized (and discretized) equations to predict the variance.
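A Python sketch of what that prediction step could look like; since phi_dot and v_dot are left open above, the turn rate u_phi and acceleration u_v are treated as known inputs, and the process-noise matrix Q is a placeholder assumption:

import numpy as np

def ekf_predict(s, P, u_phi, u_v, dt, Q):
    x, y, phi, v = s
    # Euler-discretized nonlinear dynamics: s[k+1] = f(s[k], u[k])
    s_pred = np.array([
        x + np.cos(phi) * v * dt,
        y + np.sin(phi) * v * dt,
        phi + u_phi * dt,
        v + u_v * dt,
    ])
    # Jacobian of f with respect to s, used to propagate the covariance
    F = np.array([
        [1.0, 0.0, -np.sin(phi) * v * dt, np.cos(phi) * dt],
        [0.0, 1.0,  np.cos(phi) * v * dt, np.sin(phi) * dt],
        [0.0, 0.0,  1.0,                  0.0],
        [0.0, 0.0,  0.0,                  1.0],
    ])
    P_pred = F @ P @ F.T + Q
    return s_pred, P_pred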
There are two things to keep in mind when you decide what your state vector s should look like:
You might be tempted to use my linear example s = [v] and then integrate the velocity outside of the Kalman Filter in order to obtain the global coordinate estimates. This would work, but you would lose the awesomeness of the Kalman Filter since you would only integrate the mean of the state, not its variance. In other words, you would have no idea what the current uncertainties for your global coordinates are.
The second step of the Kalman Filter, the measurement or correction update, requires that you can describe your sensor output as a function of your states. So you may have to add states to your representation just so that you can express your measurements correctly as z[k] = h(s[k], w[k]) where z are measurements and w is a noise vector with Gaussian distribution.
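For the leg follower, an assumed measurement function h could map the state to a range and a relative bearing, matching the distance/angle readings described in the question. This is only a sketch; whether the person's position (px, py) lives in the state vector or comes from elsewhere is a modeling choice.

import numpy as np

def measurement(x, y, phi, px, py):
    dx, dy = px - x, py - y
    rng = np.hypot(dx, dy)                 # measured distance to the person
    bearing = np.arctan2(dy, dx) - phi     # angle relative to the robot heading
    return np.array([rng, bearing])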

Motion Vectors and DCT residuals, are they related or independent?

I am working on a novel technique that uses already encoded H264 motion vectors from a pre-encoded video.
I need to know how the motion vectors and residuals are related. I need some very specific answers that I can't find answered anywhere else:
Are the motion vectors forward, or backward? I mean, does the vector indicate where the current 4x4, 8x8, 8x4, ... block will be in the next frame (forward)? Or is it the opposite, i.e. the vector indicates where that block comes from (backward)?
In the case where a block has multiple references (I don't know if that is even possible), how are those references added together? Mean? Weighted?
How is the residual error being compensated: per block (4x8, 8x4, etc.)? Or ignoring the sub-blocks and just partitioning the image into 8x8 chunks?
My ultimate goal is to know from the video feed the "accuracy" of each motion vector. I can only do that with backwards prediction, and if the DCT residuals are per block. In that case I can measure the accuracy of the motion vector estimation by measuring the amount of residual error in that block.
Thanks in advance!!
PS: Reading through the 800 pages of the H264 standard is no easy task...
The H264 standard is your friend. Also get the books by Iain Richardson; they are a bit more readable than the standard (but only a bit :)
"Are the motion vectors forward, or backward?" - they are backward. The MV for a block points to where that block came from.
"In the case a block has multiple references (I don't know if that is even possible). How are those references added together?" - it is possible, check out weightb and weightp options for x264. Can have up to two references, the explicit weights are encoded in the stream (I think as deltas from the neighbor weights, so usually zeros - but don't quote me on that; also I think whether weights are used is a flag somewhere, if not used the weights are equal by default)
"How is the residual error being compensated" - depends on the macroblock partitioning mode and transform size. The MVs are for each partition, the residuals are for the transform size tiled into the partition (so if a 16x16 is partitioned into two 16x8 and the transform is 8x8, each partition gets two transforms; if the transform is 4x4 each partition gets (16/4)x(8/4)=8 transforms).
For experiments, you can change the encoder settings to turn off B-frames and weighted P-frames, and also restrict the partitioning mode so that macroblocks are not partitioned (i.e. 16x16 only). This makes it much easier to try different motion vectors :)
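A hedged Python illustration of the tiling arithmetic above and of the questioner's "MV accuracy" idea: how many transforms tile a partition, and the residual energy summed over a partition as a proxy for how well its motion vector predicted the block. Extracting the actual MVs and residuals from the bitstream is assumed to be done elsewhere (e.g. with a modified decoder).

import numpy as np

def transforms_per_partition(part_w, part_h, tx_size):
    # e.g. a 16x8 partition with an 8x8 transform -> (16/8) * (8/8) = 2
    return (part_w // tx_size) * (part_h // tx_size)

def partition_residual_energy(residual, x, y, part_w, part_h):
    # Sum of squared residual samples inside one partition; a larger value
    # suggests the motion vector compensated that partition less accurately.
    block = residual[y:y + part_h, x:x + part_w]
    return float(np.sum(block.astype(np.float64) ** 2))

# Numbers from the answer above:
print(transforms_per_partition(16, 8, 8))   # -> 2
print(transforms_per_partition(16, 8, 4))   # -> 8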

How to detect local maxima and curve windows correctly in semi complex scenarios?

I have a series of data and need to detect peak values in the series within a certain number of readings (window size) and excluding a certain level of background "noise." I also need to capture the starting and stopping points of the appreciable curves (ie, when it starts ticking up and then when it stops ticking down).
The data are high precision floats.
Here's a quick sketch that captures the most common scenarios that I'm up against visually:
One method I attempted was to pass a window of size X along the curve, going backwards, to detect the peaks. It started off working well, but I missed a lot of conditions that I had not initially anticipated. Another method I started to work out was a growing window that would discover the longer-duration curves. Yet another was a more calculus-based approach that watches for certain velocity / gradient aspects. None seemed to hit the sweet spot, probably due to my lack of experience in statistical analysis.
Perhaps I need to use some kind of statistical analysis package to cover my bases versus writing my own algorithm? Or would there be an efficient method for tackling this directly with SQL, with some kind of local-max technique? I'm simply not sure how to approach this efficiently. With each method I try, it seems that I keep missing various thresholds, detecting too many peak values, or not capturing entire events (reporting a peak data point too early in the reading process).
Ultimately this is implemented in Ruby, so if you could advise on the most efficient and correct way to approach this problem in Ruby, that would be appreciated; however, I'm open to a language-agnostic algorithmic approach as well. Or is there a certain library that would address the various issues I'm up against in this scenario of detecting the maximum peaks?
My idea is simple: after you get your window of interest, you need to find all the peaks in this window. You can just compare each value with the previous and next ones; after this you will know where the peaks occur and you can decide which is the best peak.
I wrote a simple example in MATLAB to show my idea!
My example works on a wave from an audio file :-)
waveFile='Chick_eco.wav';
[y, fs, nbits]=wavread(waveFile);
subplot(2,2,1); plot(y); legend('Original signal');
startIndex=15000;
WindowSize=100;
endIndex=startIndex+WindowSize-1;
frame = y(startIndex:endIndex);
nframe=length(frame)
%find the peaks
peaks = zeros(nframe,1);
k=3;
while (k <= nframe - 1)
    y1 = frame(k - 1);
    y2 = frame(k);
    y3 = frame(k + 1);
    % keep only positive samples that are local maxima
    if (y2 > 0)
        if (y2 > y1 && y2 >= y3)
            peaks(k) = frame(k);
        end
    end
    k = k + 1;
end
peaks2=peaks;
peaks2(peaks2<=0)=nan;
subplot(2,2,2); plot(frame); legend('Get Window Length = 100');
subplot(2,2,3); plot(peaks); legend('Where are the PEAKS');
subplot(2,2,4); plot(frame); legend('Peaks in the Window');
hold on; plot(peaks2, '*');
for j = 1 : nframe
    if (peaks(j) > 0)
        fprintf('Local=%i\n', j);
        fprintf('Value=%i\n', peaks(j));
    end
end
%Where the Local Maxima occur
[maxivalue, maxi]=max(peaks)
You can see all the peaks and where they occur:
Local=37
Value=3.266296e-001
Local=51
Value=4.333496e-002
Local=65
Value=5.049438e-001
Local=80
Value=4.286804e-001
Local=84
Value=3.110046e-001
I'll propose a couple of different ideas. One is to use discrete wavelets, the other is to use the geographer's concept of prominence.
Wavelets: Apply some sort of wavelet decomposition to your data. There are multiple choices, with Daubechies wavelets being the most widely used. You want the low frequency peaks. Zero out the high frequency wavelet elements, reconstruct your data, and look for local extrema.
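Purely as a sketch of the wavelet route in Python (assuming PyWavelets is available; the wavelet name, decomposition level, and how many detail levels to keep are tuning assumptions):

import numpy as np
import pywt  # PyWavelets; assumed available

def smooth_and_find_peaks(signal, wavelet="db4", level=4, keep_levels=1):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs[0] is the approximation; coeffs[1:] are details, coarsest first.
    # Zero the fine-scale (high-frequency) details before reconstructing.
    for i in range(1 + keep_levels, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])
    smooth = pywt.waverec(coeffs, wavelet)[:len(signal)]
    # Local maxima of the smoothed signal
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    return smooth, peaks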
Prominence: Those noisy peaks and valleys are of key interest to geographers. They want to know exactly which of a mountain's multiple little peaks is tallest, and the exact location of the lowest point in the valley. Find the local minima and maxima in your data set. You should have a sequence of min/max/min/max/.../min. (You might want to add arbitrary end points that are lower than your global minimum.) Consider a min/max/min sequence. Classify each of these triples per the difference between the max and the larger of the two minima. Make a reduced sequence that replaces the smallest of these triples with the smaller of the two minima. Iterate until you get down to a single min/max/min triple. In your example, you want the next layer down, the min/max/min/max/min sequence.
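If you would rather not hand-roll that reduction, SciPy's find_peaks can report each peak's prominence directly; a hedged shortcut (the prominence threshold is a made-up value to tune):

import numpy as np
from scipy.signal import find_peaks  # assumed available

def prominent_peaks(signal, min_prominence=1.0):
    # Keeps only peaks whose prominence exceeds the threshold, which drops
    # the noisy little bumps and retains the "mountains".
    peaks, props = find_peaks(np.asarray(signal, dtype=float),
                              prominence=min_prominence)
    return peaks, props["prominences"]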
Note: I'm going to describe the algorithmic steps as if each pass were distinct. Obviously, in a specific implementation, you can combine steps where it makes sense for your application. For the purposes of my explanation, it makes the text a little more clear.
I'm going to make some assumptions about your problem:
The windows of interest (the signals that you are looking for) cover a fraction of the entire data space (i.e., it's not one long signal).
The windows have significant scope (i.e., they aren't one pixel wide on your picture).
The windows have a minimum peak of interest (i.e., even if the signal exceeds the background noise, the peak must have an additional signal excess of the background).
The windows will never overlap (i.e., each can be examined as a distinct sub-problem out of context of the rest of the signal).
Given those, you can first look through your data stream for a set of windows of interest. You can do this by making a first pass through the data: moving from left to right, look for noise threshold crossing points. If the signal was below the noise floor and exceeds it on the next sample, that's a candidate starting point for a window (vice versa for the candidate end point).
Now make a pass through your candidate windows: compare the scope and contents of each window with the values defined above. To use your picture as an example, the small peaks on the left of the image barely exceed the noise floor and do so for too short a time. However, the window in the center of the screen clearly has a wide time extent and a significant max value. Keep the windows that meet your minimum criteria, discard those that are trivial.
Now examine your remaining windows in detail (remember, they can be treated individually). The peak is easy to find: pass through the window and keep the local max. With respect to the leading and trailing edges of the signal, you can see in the picture that you have a window that's slightly larger than the actual point at which the signal exceeds the noise floor. In this case, you can use a finite difference approximation to calculate the first derivative of the signal. You know that the leading edge will be somewhat to the left of the window on the chart: look for a point at which the first derivative exceeds a positive noise floor of its own (the slope turns upwards sharply). Do the same for the trailing edge (which will always be to the right of the window).
Result: a set of time windows, the leading and trailing edges of the signals, and the peak that occurred in each window.
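A compact Python sketch of those passes (the thresholds noise_floor, min_width, min_excess, and slope_floor are assumptions to tune against real data; the language is just for illustration, since the questioner plans to implement this in Ruby):

import numpy as np

def detect_events(y, noise_floor, min_width, min_excess, slope_floor):
    y = np.asarray(y, dtype=float)
    dy = np.gradient(y)                    # finite-difference slope
    # Pass 1: candidate windows from noise-floor crossings
    events, start = [], None
    for i, v in enumerate(y):
        if start is None and v > noise_floor:
            start = i                      # candidate leading crossing
        elif start is not None and v <= noise_floor:
            events.append((start, i))      # candidate window [start, i)
            start = None
    if start is not None:
        events.append((start, len(y)))
    results = []
    for s, e in events:
        peak = s + int(np.argmax(y[s:e]))
        # Pass 2: keep only windows with enough width and peak excess
        if e - s < min_width or y[peak] - noise_floor < min_excess:
            continue
        # Pass 3: refine edges where the slope exceeds its own floor
        lead = s
        while lead > 0 and dy[lead - 1] > slope_floor:
            lead -= 1
        trail = e - 1
        while trail < len(y) - 1 and dy[trail + 1] < -slope_floor:
            trail += 1
        results.append({"lead": lead, "peak": peak, "trail": trail})
    return results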
It looks like the definition of a window is the range of x over which y is above the threshold. So use that to determine the size of the window. Within that, locate the largest value, thus finding the peak.
If that fails, then what additional criteria do you have for defining a region of interest? You may need to nail down your implicit assumptions to more than 'that looks like a peak to me'.