Interpreting jTransforms FFT results

Merged with Power Spectral Density from jTransforms DoubleFFT_1D.
I'm using the JTransforms Java library to perform analysis on a given dataset.
An example of the data is as follows:
980,988,1160,1080,928,1068,1156,1152,1176,1264
I'm using the DoubleFFT_1D function in JTransforms. The data output is as follows:
10952, -152, 80.052, 379.936, -307.691, 12.734, -224.052, 427.607, -48.308, 81.472
I'm having trouble interpreting the output. I understand that the first element in the output array is the total of the 10 inputs (10952). It's the other elements of the output array that I don't understand. Ultimately, I want to plot the Power Spectral Density of the input data on a graph and find amounts between 0 and 0.5 Hz.
The documentation for the jTransform functions states:
(where a is the data set)
.....................
realForward
public void realForward(double[] a)
Computes 1D forward DFT of real data leaving the result in a. The physical layout of the output data is as follows:
if n is even then
a[2*k] = Re[k], 0 <= k < n / 2
a[2*k+1] = Im[k], 0 < k < n / 2
a[1] = Re[n/2]
if n is odd then
a[2*k] = Re[k], 0 <= k < (n+1)/2
a[2*k+1] = Im[k], 0 < k < (n-1)/2
a[1] = Im[(n-1)/2]
This method computes only half of the elements of the real transform. The other half satisfies the symmetry condition. If you want the full real forward transform, use realForwardFull. To get back the original data, use realInverse on the output of this method.
Parameters:
a - data to transform
..................................
So what are the output numbers? What do the values mean?
Any help is appreciated.
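A quick way to see what those numbers are is to reproduce the packed layout with NumPy (a sketch in Python rather than Java, purely for illustration; it assumes the usual exp(-2*pi*i*j*k/n) forward sign convention, so flip the imaginary parts if your library uses the conjugate convention):

import numpy as np

x = np.array([980, 988, 1160, 1080, 928, 1068, 1156, 1152, 1176, 1264],
             dtype=float)
n = len(x)                     # n = 10, even

X = np.fft.fft(x)              # full complex DFT for reference

# Rebuild the packed realForward layout for even n:
#   a[0]    = Re[0]   (the sum of the inputs, 10952)
#   a[1]    = Re[n/2] (the Nyquist bin, purely real)
#   a[2k]   = Re[k],  0 < k < n/2
#   a[2k+1] = Im[k],  0 < k < n/2
a = np.empty(n)
a[0] = X[0].real
a[1] = X[n // 2].real
for k in range(1, n // 2):
    a[2 * k] = X[k].real
    a[2 * k + 1] = X[k].imag

print(np.round(a, 3))

# One common PSD convention: |X[k]|^2 / n for the one-sided bins 0..n/2.
# With sample rate fs, bin k lies at k * fs / n, so the bins span 0..fs/2
# (0 to 0.5 Hz if the data was sampled at 1 Hz).
psd = np.abs(X[: n // 2 + 1]) ** 2 / n
print(np.round(psd, 1))

A sanity check that needs no code: a[1] should be Re[n/2], the alternating sum 980 - 988 + 1160 - 1080 + ..., which works out to -152, exactly the second value in the output above.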

Related

How to interpret cuFFT R2C result

I'm in the process of accelerating some data analysis code with a GPU and am currently doing some profiling and comparisons between the numpy.fft library and cuFFT (using the skcuda.fft wrapper).
I'm certain I'm just missing something obvious about the FFT implementation in cuFFT, but I'm struggling to find what it is in the cuFFT documentation.
To set up the problem I create 500 ms of data sampled at 100 MS/s with a few spectral components. Then, I declare the GPU arrays, the cuFFT plan (R2C) and run the FFT with a subset of the data. Finally, I have a comparison with numpy.fft.rfft:
import numpy as np
import matplotlib.pyplot as plt
import pycuda.autoinit          # initialise the CUDA context
import pycuda.gpuarray as gp
import skcuda.fft as cufft

time = np.linspace(0, 500E-3, int(50E6))
freq = 100E6
data = np.sin(2*np.pi*time*1E6)
data += np.sin(2*np.pi*time*2E6 + 0.5)
data += np.sin(2*np.pi*time*3E6 - 0.1)
data += np.sin(2*np.pi*time*4E6 - 0.9)
data += np.sin(2*np.pi*time*1E6 - 1.9)
data += np.sin(2*np.pi*time*15E6 - 2.1)
data += np.sin(2*np.pi*time*20E6 - 0.3)
data += np.sin(2*np.pi*time*25E6 - 0.3)
nPtsFFT = int(2**13)
dDev = gp.GPUArray(nPtsFFT, np.float64)
dDev.set(data[:nPtsFFT])
rDev = gp.GPUArray(int(nPtsFFT/2+1), np.float64)
plan = cufft.Plan(nPtsFFT, np.float64, np.complex128)
cufft.fft(dDev, rDev, plan)
rHost = rDev.get()
freqs = np.fft.rfftfreq(nPtsFFT, 1/freq)
hfftRes = np.fft.rfft(data[:nPtsFFT])
plt.loglog(freqs, np.abs(hfftRes), label='npfft')
plt.loglog(freqs, np.abs(rHost), label='cufft')
plt.legend()
plt.show()
I naively assumed that these would be approximately equal, but what I find is that the cufft peaks are all shifted and every other point is lower than expected.
This reminded me of the output of scipy.fftpack.rfft, so checking the documentation there I found the interleaving of Re and Im parts. So, if I modify the plotting to:
plt.loglog(freqs, np.abs(hfftRes), label='npfft')
plt.loglog(freqs[:-1:2]/2, np.abs(rHost[:-1:2] + 1j*rHost[1::2]), label='cufft')
plt.legend()
plt.show()
I now get what I expect, but only up to 25 MHz, when I should be able to get data up to 50 MHz given the sampling rate.
Is there a way to extract the data up to the Nyquist frequency from this transform?
Since the R2C interface produces complex output, you have to provide an array of np.complex128 values to hold all int(nPtsFFT/2+1) complex values, not int(nPtsFFT/2+1) floating-point values (which would correspond to only half the quantity of data).
This can be done by changing the rDev definition as follows (and keeping everything else the same):
rDev = gp.GPUArray(int(nPtsFFT/2+1), np.complex128)
plan = cufft.Plan(nPtsFFT, np.float64, np.complex128)
cufft.fft(dDev, rDev, plan)
rHost = rDev.get()
freqs = np.fft.rfftfreq(nPtsFFT, 1/freq)
hfftRes = np.fft.rfft(data[:nPtsFFT])
plt.loglog(freqs, np.abs(hfftRes), label='npfft')
plt.loglog(freqs, np.abs(rHost), label='cufft')
plt.legend()
plt.show()
The result should then go all the way up to the 50 MHz Nyquist frequency as expected, with the spikes lining up nicely with the reference np.fft.rfft implementation.
I haven't yet found an explanation of the cufft output, but I can get the behaviour I want with cupyx.scipy.fft.rfft, which might be useful if anyone else finds the same problem.
import numpy as np
import numpy.fft as npfft
import matplotlib.pyplot as plt
import cupy as cp
import cupyx.scipy.fft as cpfft
import cupyx.scipy.fftpack as cpfftPack

deviceData = cp.array(data, dtype=np.float64)
fftPlan = cpfftPack.get_fft_plan(deviceData[:nPtsFFT], (nPtsFFT,),
                                 value_type='R2C')
dfft = cpfft.rfft(deviceData[:nPtsFFT], plan=fftPlan)
h_dfft = dfft.get()
hfft = npfft.rfft(data[:nPtsFFT])
plt.loglog(freqs, np.abs(hfft), label='npfft')
plt.loglog(freqs, np.abs(h_dfft), label='cupyx fft')
plt.legend()
plt.show()

How to interpret FFT data for making a spectrum visualizer

I am trying to visualize a spectrum where the frequency range is divided into N bars, either linearly or logarithmic. The FFT seems to work fine, but I am not sure how to interpret the values in order to decide the max height for the visualization.
I am using FMODAudio, a wrapper for C#. It's set up correctly.
In the case of a linear spectrum, the bars are defined as follows:
public int InitializeSpectrum(int windowSize = 1024, int maxBars = 16)
{
    numSamplesPerBar_Linear.Clear();
    int barSamples = (windowSize / 2) / maxBars;
    for (int i = 0; i < maxBars; ++i)
    {
        numSamplesPerBar_Linear.Add(barSamples);
    }
    IsInitialized = true;
    Data = new float[numSamplesPerBar_Linear.Count];
    return numSamplesPerBar_Linear.Count;
}
Data is the array which holds the spectrum values received from the update loop.
The update looks like this:
public unsafe void UpdateSpectrum(ref ParameterFFT* fftData)
{
    int length = fftData->Length / 2;
    if (length > 0)
    {
        int indexFFT = 0;
        for (int index = 0; index < numSamplesPerBar_Linear.Count; ++index)
        {
            for (int frec = 0; frec < numSamplesPerBar_Linear[index]; ++frec)
            {
                for (int channel = 0; channel < fftData->ChannelCount; ++channel)
                {
                    var floatspectrum = fftData->GetSpectrum(channel); // a ReadOnlySpan<float> by default
                    Data[index] += floatspectrum[indexFFT];
                }
                ++indexFFT;
            }
            Data[index] /= (float)(numSamplesPerBar_Linear[index] * fftData->ChannelCount); // average across channels for more meaningful values
        }
    }
}
The values I get when testing a song are very low across the bands.
A randomly chosen moment when playing a song gives these values:
16 bars = 0,0326 0,0031 0,001 0,0003 0,0004 0,0003 0,0001 0,0002 0,0001 0,0001 0,0001 0 0 0 0 0
I realize it's more useful to use a logarithmic spectrum in many cases, and I intend to, but I still need to figure out how to find the max values for each bar so that I can set up the visualization on a proper scale.
Q: How can I know the potential max values for each bar based on this setup (it's not 1.0)?
The output from the FFT call is an array where each element is a complex number (A + Bi), where A is the real component and B the imaginary component. Element zero of this array represents frequency zero, i.e. DC, which is the offset bias and can typically be ignored. As you iterate across the elements of this array the frequency increments; this frequency increment is calculated using:
Audio_samples <-- array of raw audio samples in PCM format which gets fed into the FFT call

num_fft_bins      := float64(len(Audio_samples)) / 2.0   // using the Nyquist theorem
freq_incr_per_bin := (input_audio_sample_rate / 2.0) / num_fft_bins
So, to answer your question: the output array from the FFT call is a linear progression of bins, evenly spaced by the frequency increment above.
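In NumPy terms (a sketch; the 48 kHz sample rate and 1024-point window are assumed purely for illustration), np.fft.rfftfreq does this bookkeeping directly:

import numpy as np

fs = 48000.0        # sample rate, assumed for illustration
window = 1024       # FFT window length

freqs = np.fft.rfftfreq(window, d=1.0 / fs)
print(freqs[1] - freqs[0])    # bin spacing: fs / window = 46.875 Hz
print(freqs[0], freqs[-1])    # bins run from 0 Hz up to fs / 2 = 24000 Hz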
It depends on your input data to the FFT, and on the scaling that your particular FFT implementation uses (not all FFTs use the same scale factor).
With an energy-preserving forward FFT, Parseval's theorem applies, so the energy (sum of squares) of the input vector equals the energy of the FFT result vector. Note that for a sinusoidal input that is integer-periodic in the FFT aperture (a pure tone centered exactly on a bin), all of that energy can appear in a single FFT result element. So if you know the maximum possible input energy, you can use that to compute the maximum possible result-element magnitude for scaling purposes.
The range is often large enough that visualizers commonly need to use log scaling, or else typical input can get pixel quantized to a graph of all zeros.
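To make the Parseval argument concrete, here is a hedged NumPy sketch (NumPy's unscaled forward FFT; FMOD's own scaling may well differ, which is exactly the caveat above). A tone of amplitude A that is periodic in the window lands in a single bin of magnitude A*N/2, so dividing magnitudes by N/2 gives a 0..1 scale, and a dB mapping keeps quiet bars visible:

import numpy as np

n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 64 * t / n)   # amplitude-1 tone, exactly bin 64

X = np.fft.rfft(x)

# Parseval with NumPy's unscaled convention:
# sum(x^2) == (|X[0]|^2 + 2*sum(|X[1:-1]|^2) + |X[-1]|^2) / n
lhs = np.sum(x ** 2)
rhs = (np.abs(X[0])**2 + 2 * np.sum(np.abs(X[1:-1])**2) + np.abs(X[-1])**2) / n
print(lhs, rhs)                      # both ~512

print(np.abs(X[64]) / (n / 2))       # ~1.0: the tone's whole energy in one bin

# dB scale for display, floored so silent bins don't go to -inf
bars_db = 20 * np.log10(np.maximum(np.abs(X) / (n / 2), 1e-6))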

How to plot a 2d Function in MATLAB

I am trying to plot a simple equation in MATLAB.
The equation is
z = x^2 - y^2, for -3 <= x <= 3, -3 <= y <= 3.
The current code that I have is
x = -3:3;
y = -3:3;
z = (x.^2) - (y.^2);
plot(z)
The resulting plot is just a single straight line at zero.
Please help me in this case because I am not sure if the code and graph is correct. Thank you very much.
This is not a piecewise function. A piecewise function is a function defined by multiple sub-functions, where each sub-function applies to a different interval of the domain. There is only one function here, which takes two arrays of the same length. The calculations yield a vector of zeros because of the input arrays: if you change either vector, "x" or "y", you will see a nonzero plot. Your code works as expected.
There is a lot going wrong here, so let's start at the beginning:
x = -3:3;
y = -3:3;
If we evaluate these, they both return a vector of integers:
x =
-3 -2 -1 0 1 2 3
This means that the grid on which the function is evaluated is going to be very coarse. To alleviate this you can define a step size, e.g. x = -3:0.1:3, or use linspace, in which case you set the number of samples, e.g. x = linspace(-3, 3, 500). Now consider the next line:
z = (x.^2) - (y.^2);
If we evaluate this we get
z =
0 0 0 0 0 0 0
and you plot this vector with the 2d-plotting function
plot(z)
which perfectly explains why you get a straight line. The arithmetic operators like minus (-) work element-wise here, so x.^2 - y.^2 just subtracts the two vectors entry by entry. You, however, want to evaluate z for each possible pair of values of x and y. To do this, and to get a nice plot later, you should use meshgrid to create the grid and a plotting function like mesh to plot it. So I'd recommend using
[X,Y] = meshgrid(x,y);
to create the grid and then evaluate the function on the grid as follows
Z = X.^2 - Y.^2;
and finally plot your function with
mesh(X,Y,Z);

MATLAB Dftmtx for Huge "N"

I have a vector of size M (say 500), which I up-sample by a factor of MM = 500, so that my new vector is of size N = 500 x 500 = 250000. I am using an optimisation algorithm and need to carry out the FFT/DFT of the up-sampled vector of size N using the DFT matrix, not the inbuilt function.
However, this becomes prohibitive due to memory constraints. Is there any way to go about it? I have seen a similar question here, Huge Fourier matrix - MATLAB, but that one is about a huge matrix, where the solution is to break the matrix into columns and do the operation column by column. In my case, the vector has 250000 rows.
Would it be wise to split the rows into pieces, say 500 each and iterate the same thing 500 times, and concatenate the results in end ?
If using the FFT is an option, the matrix of twiddle factors does not appear explicitly, so the actual memory requirements are on the order of O(N).
If you must use the explicit DFT matrix, then it is possible to break down the computation using submatrices of the larger DFT matrix. Given an input x of length N, and assuming we wish to divide the large DFT matrix into BlockSize x BlockSize submatrices, this can be done with the following MATLAB code:
y = zeros(size(x));
Imax = ceil(N / BlockSize); % divide the rows into Imax chunks
Jmax = ceil(N / BlockSize); % divide the columns into Jmax chunks
% iterate over the blocks
for i = 0:Imax-1
    imin = i*BlockSize;
    imax = min(i*BlockSize+BlockSize-1, N-1);
    for j = 0:Jmax-1
        jmin = j*BlockSize;
        jmax = min(j*BlockSize+BlockSize-1, N-1);
        [XX, YY] = meshgrid(jmin:jmax, imin:imax);
        % compute the DFT submatrix
        W = exp(-2 * pi * 1i * XX .* YY / N);
        % apply the DFT submatrix on a chunk of the input and add to the output
        y([imin:imax] + 1) = y([imin:imax] + 1) + W * x([jmin:jmax] + 1);
    end
end
If needed it would be fairly easy to adapt the above code to use different block size along the rows than along the columns.
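For what it's worth, here is the same block-wise idea as a NumPy sketch (my translation, not part of the original answer), checked against the built-in FFT on a toy size; only one BlockSize-by-BlockSize submatrix is in memory at a time:

import numpy as np

N = 1000                 # toy size; the same loop works for N = 250000
BlockSize = 100
x = np.random.randn(N)

y = np.zeros(N, dtype=complex)
for i0 in range(0, N, BlockSize):         # row chunks of the DFT matrix
    k = np.arange(i0, min(i0 + BlockSize, N))
    for j0 in range(0, N, BlockSize):     # column chunks
        m = np.arange(j0, min(j0 + BlockSize, N))
        W = np.exp(-2j * np.pi * np.outer(k, m) / N)   # DFT submatrix
        y[k] += W @ x[m]

print(np.allclose(y, np.fft.fft(x)))      # True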

How to reduce calculation of average to sub-sets in a general way?

Edit: Since it appears nobody is reading the original question this links to, let me bring in a synopsis of it here.
The original problem, as asked by someone else, was: given a large number of values, where the sum would exceed the range of a double, how can one calculate the average of those values?
There were several answers saying to calculate in sets, e.g. taking 50 numbers at a time, calculating the average inside those sets, and then finally taking the average of all those set averages to get the final average value.
My position was that unless you can guarantee that all those values can be split into a number of equally sized sets, you cannot use this approach. Someone dared me to ask the question here, in order to provide the answer, so here it is.
Basically, given an arbitrary number of values, where:
I know the number of values beforehand (but again, how would your answer change if you didn't?)
I cannot gather up all the numbers, nor can I sum them (the sum will be too big for a normal data type in your programming language)
how can I calculate the average?
The rest of the question here outlines how, and the problems with, the approach to split into equally sized sets, but I'd really just like to know how you can do it.
Note that I know enough math to know that, in theory, SUM(A[1..N]) / N gives the average; let's assume there are reasons it isn't just that simple, that I need to split up the workload, and that the number of values isn't necessarily divisible by 3, 7, 50, 1000 or whatever.
In other words, the solution I'm after will have to be general.
From this question:
What is a good solution for calculating an average where the sum of all values exceeds a double’s limits?
my position was that splitting the workload up into sets is no good, unless you can ensure that the size of those sets are equal.
Edit: The original question was about the upper limit that a particular data type could hold, and since a lot of numbers were being summed (the count given as an example was 10^9), the data type could not hold the sum. Since this was a problem in the original solution, I'm assuming (and this is a prerequisite for my question, sorry for missing that) that the numbers are too big for a simple sum to give any meaningful answer.
So, dividing by the total number of values directly is out. The original reason a normal SUM/COUNT solution was out was that the SUM would overflow, but let's assume, for this question, that SET-SUM/SET-SIZE will underflow, or whatever.
The important part is that I cannot simply sum, and I cannot simply divide by the total number of values. If I cannot do that, will my approach work or not, and what can I do to fix it?
Let me outline the problem.
Let's assume you're going to calculate the average of the numbers 1 through 6, but you cannot (for whatever reason) do so by summing the numbers, counting the numbers, and then dividing the sum by the count. In other words, you cannot simply do (1+2+3+4+5+6)/6.
In other words, SUM(1..6)/COUNT(1..6) is out. We're not considering NULL's (as in database NULL's) here.
Several of the answers to that question alluded to being able to split the numbers being averaged into sets, say 3 or 50 or 1000 numbers, then calculating some number for that, and then finally combining those values to get the final average.
My position is that this is not possible in the general case, since this will make some numbers, the ones appearing in the final set, more or less valuable than all the ones in the previous sets, unless you can split all the numbers into equally sized sets.
For instance, to calculate the average of 1-6, you can split it up into sets of 3 numbers like this:
/ 1 2 3 \ / 4 5 6 \
| - + - + - | + | - + - + - |
\ 3 3 3 / \ 3 3 3 / <-- 3 because 3 numbers in the set
---------- -----------
2 2 <-- 2 because 2 equally sized groups
Which gives you this:
2 5
- + - = 3.5
2 2
(note: (1+2+3+4+5+6)/6 = 3.5, so this is correct here)
However, my point is that once the number of values cannot be split into a number of equally sized sets, this method falls apart. For instance, what about the sequence 1-7, which contains a prime number of values?
Can a similar approach, that won't sum all the values, and count all the values, in one go, work?
So, is there such an approach? How do I calculate the average of an arbitrary number of values in which the following holds true:
I cannot do a normal sum/count approach, for whatever reason
I know the number of values beforehand (what if I don't, will that change the answer?)
Well, suppose you added three numbers and divided by three, and then added two numbers and divided by two. Can you get the average from these?
x = (a + b + c) / 3
y = (d + e) / 2
z = (f + g) / 2
And you want
r = (a + b + c + d + e + f + g) / 7
That is equal to
r = (3 * (a + b + c) / 3 + 2 * (d + e) / 2 + 2 * (f + g) / 2) / 7
r = (3 * x + 2 * y + 2 * z) / 7
Both lines above overflow, of course, but since division is distributive, we do
r = (3.0 / 7.0) * x + (2.0 / 7.0) * y + (2.0 / 7.0) * z
Which guarantees that you won't overflow, as I'm multiplying x, y and z by fractions less than one.
This is the fundamental point here: I am neither dividing every number by the total count beforehand, nor ever exceeding the data type's range.
So... if you keep adding to an accumulator, keep track of how many numbers you have added, and always test whether the next number will cause an overflow, you can then get partial averages, and compute the final average.
And no, if you don't know the values beforehand, it doesn't change anything (provided that you can count them as you sum them).
Here is a Scala function that does it. It's not idiomatic Scala, so that it can be more easily understood:
def avg(input: List[Double]): Double = {
  var partialAverages: List[(Double, Int)] = Nil
  var inputLength = 0
  var currentSum = 0.0
  var currentCount = 0
  var numbers = input
  while (numbers.nonEmpty) {
    val number = numbers.head
    val rest = numbers.tail
    if (number > 0 && currentSum > 0 && Double.MaxValue - currentSum < number) {
      partialAverages = (currentSum / currentCount, currentCount) :: partialAverages
      currentSum = 0
      currentCount = 0
    } else if (number < 0 && currentSum < 0 && Double.MinValue - currentSum > number) {
      partialAverages = (currentSum / currentCount, currentCount) :: partialAverages
      currentSum = 0
      currentCount = 0
    }
    currentSum += number
    currentCount += 1
    inputLength += 1
    numbers = rest
  }
  partialAverages = (currentSum / currentCount, currentCount) :: partialAverages
  var result = 0.0
  while (partialAverages.nonEmpty) {
    val ((partialSum, partialCount) :: rest) = partialAverages
    result += partialSum * (partialCount.toDouble / inputLength)
    partialAverages = rest
  }
  result
}
EDIT:
Won't multiplying by 2 and 3 get me back into the range of "not supported by the data type"?
No. If you were dividing by 7 at the end, absolutely. But here you are dividing at each step of the sum. Even in your real case the weights (2/7 and 3/7) would be in the range of manageable numbers (e.g. 1/10 to 1/10000), which wouldn't make a big difference compared to your weight (i.e. 1).
PS: I wonder why I'm working on this answer instead of writing mine where I can earn my rep :-)
If you know the number of values beforehand (say it's N), you just add 1/N + 2/N + 3/N etc, supposing that you had values 1, 2, 3. You can split this into as many calculations as you like, and just add up your results. It may lead to a slight loss of precision, but this shouldn't be an issue unless you also need a super-accurate result.
If you don't know the number of items ahead of time, you might have to be more creative. But you can, again, do it progressively. Say the list is 1, 2, 3, 4. Start with mean = 1. Then mean = mean*(1/2) + 2*(1/2). Then mean = mean*(2/3) + 3*(1/3). Then mean = mean*(3/4) + 4*(1/4) etc. It's easy to generalize, and you just have to make sure the bracketed quantities are calculated in advance, to prevent overflow.
Of course, if you want extreme accuracy (say, more than 0.001% accuracy), you may need to be a bit more careful than this, but otherwise you should be fine.
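A minimal Python sketch of both variants (illustrative only; the names are mine):

def mean_known_n(values, n):
    # Each term is divided by N before summing, so the accumulator
    # stays near the magnitude of the mean rather than of the sum.
    acc = 0.0
    for v in values:
        acc += v / n
    return acc

def mean_progressive(values):
    # mean <- mean * (k/(k+1)) + x * (1/(k+1)); no N needed up front.
    mean = 0.0
    for k, v in enumerate(values):
        mean = mean * (k / (k + 1)) + v / (k + 1)
    return mean

data = [1, 2, 3, 4]
print(mean_known_n(data, len(data)), mean_progressive(data))  # 2.5 2.5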
Let X be your sample set. Partition it into two sets A and B in any way that you like. Define delta = m_B - m_A where m_S denotes the mean of a set S. Then
m_X = m_A + delta * |B| / |X|
where |S| denotes the cardinality of a set S. Now you can repeatedly apply this to partition and calculate the mean.
Why is this true? Let s = 1 / |A| and t = 1 / |B| and u = 1 / |X| (for convenience of notation) and let aSigma and bSigma denote the sum of the elements in A and B respectively so that:
m_A + delta * |B| / |X|
= s * aSigma + u * |B| * (t * bSigma - s * aSigma)
= s * aSigma + u * (bSigma - |B| * s * aSigma)
= s * aSigma + u * bSigma - u * |B| * s * aSigma
= s * aSigma * (1 - u * |B|) + u * bSigma
= s * aSigma * (u * |X| - u * |B|) + u * bSigma
= s * u * aSigma * (|X| - |B|) + u * bSigma
= s * u * aSigma * |A| + u * bSigma
= u * aSigma + u * bSigma
= u * (aSigma + bSigma)
= u * (xSigma)
= xSigma / |X|
= m_X
The proof is complete.
From here it is obvious how to use this to either recursively compute a mean (say by repeatedly splitting a set in half) or how to use this to parallelize the computation of the mean of a set.
The well-known on-line algorithm for calculating the mean is just a special case of this. It is the algorithm that says: if m is the mean of {x_1, x_2, ..., x_n}, then the mean of {x_1, x_2, ..., x_n, x_(n+1)} is m + (x_(n+1) - m) / (n + 1). So with X = {x_1, x_2, ..., x_(n+1)}, A = {x_(n+1)}, and B = {x_1, x_2, ..., x_n} we recover the on-line algorithm.
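Here is a minimal Python sketch of this identity, applied recursively by halving (illustrative only); note that it handles the prime-length case from the question without needing equal-sized sets:

# m_X = m_A + (m_B - m_A) * |B| / |X|, applied recursively
def mean_split(xs):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    a, b = xs[:mid], xs[mid:]
    m_a, m_b = mean_split(a), mean_split(b)
    return m_a + (m_b - m_a) * len(b) / len(xs)

data = [1, 2, 3, 4, 5, 6, 7]    # 7 values: no equal-sized split exists
print(mean_split(data))         # 4.0, the mean of 1..7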
Thinking outside the box: Use the median instead. It's much easier to calculate - there are tons of algorithms out there (e.g. using queues), you can often construct good arguments as to why it's more meaningful for data sets (less swayed by extreme values; etc) and you will have zero problems with numerical accuracy. It will be fast and efficient. Plus, for large data sets (which it sounds like you have), unless the distributions are truly weird, the values for the mean and median will be similar.
When you split the numbers into sets, you're still just dividing by the total number, or am I missing something?
You have written it as
/ 1 2 3 \ / 4 5 6 \
| - + - + - | + | - + - + - |
\ 3 3 3 / \ 3 3 3 /
---------- -----------
2 2
but that's just
/ 1 2 3 \ / 4 5 6 \
| - + - + - | + | - + - + - |
\ 6 6 6 / \ 6 6 6 /
so for the numbers from 1 to 7 one possible grouping is just
/ 1 2 3 \ / 4 5 6 \ / 7 \
| - + - + - | + | - + - + - | + | - |
\ 7 7 7 / \ 7 7 7 / \ 7 /
Average of x_1 .. x_N
= (Sum(i=1,N,x_i)) / N
= (Sum(i=1,M,x_i) + Sum(i=M+1,N,x_i)) / N
= (Sum(i=1,M,x_i)) / N + (Sum(i=M+1,N,x_i)) / N
This can be repeatedly applied, and is true regardless of whether the summations are of equal size. So:
1. Keep adding terms until both of the following hold: adding another one would overflow (or otherwise lose precision), and dividing the sum by N will not underflow.
2. Divide the sum by N.
3. Add the result to the average-so-far.
There's one obvious awkward case, which is that there are some very small terms at the end of the sequence, such that you run out of values before you satisfy the condition "dividing by N will not underflow". In which case just discard those values - if their contribution to the average cannot be represented in your floating type, then it is in particular smaller than the precision of your average. So it doesn't make any difference to the result whether you include those terms or not.
There are also some less obvious awkward cases to do with loss of precision on individual summations. For example, what's the average of the values:
10^100, 1, -10^100
Mathematics says it's 1, but floating-point arithmetic says it depends on the order in which you add up the terms: in 4 of the 6 possible orders it's 0, because (10^100) + 1 = 10^100. But I think the non-associativity of floating-point addition is a different and more general problem than this question. If sorting the input is out of the question, I think there are things you can do where you maintain lots of accumulators of different magnitudes, and add each new value to whichever of them will give the best precision. But I don't really know.
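A minimal Python sketch of the chunked procedure above (illustrative only; the guard handles the positive case, and N is assumed known):

import sys

def mean_chunked(values, n):
    avg = 0.0       # average accumulated so far
    chunk = 0.0     # current partial sum
    for v in values:
        # if adding v would overflow the chunk, fold chunk/N into the average
        if chunk > 0 and v > sys.float_info.max - chunk:
            avg += chunk / n
            chunk = 0.0
        chunk += v
    return avg + chunk / n

data = [sys.float_info.max * 0.6] * 4     # a naive sum would overflow
print(mean_chunked(data, len(data)))      # 0.6 * float max, the true mean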
Here's another approach. You're 'receiving' numbers one-by-one from some source, but you can keep track of the mean at each step.
First, I will write out the formula for mean at step n+1:
mean[n+1] = mean[n] - (mean[n] - x[n+1]) / (n+1)
with the initial condition:
mean[0] = x[0]
(the index starts at zero).
The first equation can be simplified to:
mean[n+1] = n * mean[n] / (n+1) + x[n+1]/(n+1)
The idea is that you keep track of the mean, and when you 'receive' the next value in your sequence, you figure out its offset from the current mean, and divide it equally between the n+1 samples seen so far, and adjust your mean accordingly. If your numbers don't have a lot of variance, your running mean will need to be adjusted very slightly with the new numbers as n becomes large.
Obviously, this method works even if you don't know the total number of values when you start. It has an additional advantage that you know the value of the current mean at all times. One disadvantage that I can think of is that it probably gives more 'weight' to the numbers seen in the beginning (not in a strict mathematical sense, but because of floating point representations).
Finally, all such calculations are bound to run into floating-point 'errors' if one is not careful enough. See my answer to another question for some of the problems with floating point calculations and how to test for potential problems.
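As a quick Python sketch of the update formula (mirroring the test described below):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)

# mean[n+1] = mean[n] - (mean[n] - x[n+1]) / (n + 1)
mean = x[0]
for n in range(1, len(x)):
    mean -= (mean - x[n]) / (n + 1)

print(mean - np.mean(x))   # tiny; comparable to the m1 - m2 figure below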
As a test, I generated N=100000 normally distributed random numbers with mean zero and variance 1. Then I calculated their mean by three methods.
1. sum(numbers) / N, call it m1;
2. my method above, call it m2;
3. sort the numbers, and then use my method above, call it m3.
Here's what I found: m1 − m2 ∼ −4.6×10^−17, m1 − m3 ∼ −3×10^−15, m2 − m3 ∼ −3×10^−15. So, if your numbers are sorted, the error might not be small enough for you. (Note however that even the worst error is 10^−15 parts in 1 for 100000 numbers, so it might be good enough anyway.)
Some of the mathematical solutions here are very good. Here's a simple technical solution.
Use a larger data type. This breaks down into two possibilities:
Use a high-precision floating point library. One who encounters a need to average a billion numbers probably has the resources to purchase, or the brain power to write, a 128-bit (or longer) floating point library.
I understand the drawbacks here. It would certainly be slower than using intrinsic types. You still might over/underflow if the number of values grows too high. Yada yada.
If your values are integers or can be easily scaled to integers, keep your sum in a list of integers. When you overflow, simply add another integer. This is essentially a simplified implementation of the first option. A simple (untested) example in C# follows
class BigMeanSet{
    List<uint> list = new List<uint>();

    public double GetAverage(IEnumerable<uint> values){
        list.Clear();
        list.Add(0);

        uint count = 0;
        foreach(uint value in values){
            Add(0, value);
            count++;
        }

        return DivideBy(count);
    }

    void Add(int listIndex, uint value){
        if((list[listIndex] += value) < value){ // then overflow has occurred
            if(list.Count == listIndex + 1)
                list.Add(0);
            Add(listIndex + 1, 1);
        }
    }

    double DivideBy(uint count){
        const double shift = 4.0 * 1024 * 1024 * 1024;

        double rtn = 0;
        long remainder = 0;

        for(int i = list.Count - 1; i >= 0; i--){
            rtn *= shift;
            remainder <<= 32;
            rtn += Math.DivRem(remainder + list[i], count, out remainder);
        }

        rtn += remainder / (double)count;
        return rtn;
    }
}
Like I said, this is untested—I don't have a billion values I really want to average—so I've probably made a mistake or two, especially in the DivideBy function, but it should demonstrate the general idea.
This should provide as much accuracy as a double can represent, and should work for any number of 32-bit elements, up to 2^32 − 1. If more elements are needed, then the count variable will need to be expanded and the DivideBy function will increase in complexity, but I'll leave that as an exercise for the reader.
In terms of efficiency, it should be as fast or faster than any other technique here, as it only requires iterating through the list once, only performs one division operation (well, one set of them), and does most of its work with integers. I didn't optimize it, though, and I'm pretty certain it could be made slightly faster still if necessary. Ditching the recursive function call and list indexing would be a good start. Again, an exercise for the reader. The code is intended to be easy to understand.
If anybody more motivated than I am at the moment feels like verifying the correctness of the code, and fixing whatever problems there might be, please be my guest.
I've now tested this code, and made a couple of small corrections (a missing pair of parentheses in the List<uint> constructor call, and an incorrect divisor in the final division of the DivideBy function).
I tested it by first running it through 1000 sets of random length (ranging between 1 and 1000) filled with random integers (ranging between 0 and 2^32 − 1). These were sets for which I could easily and quickly verify accuracy by also running a canonical mean on them.
I then tested with 100* large series, with random lengths between 10^5 and 10^9. The lower and upper bounds of these series were also chosen at random, constrained so that the series would fit within the range of a 32-bit integer. For any series, the results are easily verifiable as (lowerbound + upperbound) / 2.
*Okay, that's a little white lie. I aborted the large-series test after about 20 or 30 successful runs. A series of length 10^9 takes just under a minute and a half to run on my machine, so half an hour or so of testing this routine was enough for my tastes.
For those interested, my test code is below:
static IEnumerable<uint> GetSeries(uint lowerbound, uint upperbound){
    for(uint i = lowerbound; i <= upperbound; i++)
        yield return i;
}

static void Test(){
    Console.BufferHeight = 1200;
    Random rnd = new Random();

    for(int i = 0; i < 1000; i++){
        uint[] numbers = new uint[rnd.Next(1, 1000)];
        for(int j = 0; j < numbers.Length; j++)
            numbers[j] = (uint)rnd.Next();

        double sum = 0;
        foreach(uint n in numbers)
            sum += n;
        double avg = sum / numbers.Length;

        double ans = new BigMeanSet().GetAverage(numbers);

        Console.WriteLine("{0}: {1} - {2} = {3}", numbers.Length, avg, ans, avg - ans);

        if(avg != ans)
            Debugger.Break();
    }

    for(int i = 0; i < 100; i++){
        uint length = (uint)rnd.Next(100000, 1000000001);
        uint lowerbound = (uint)rnd.Next(int.MaxValue - (int)length);
        uint upperbound = lowerbound + length;

        double avg = ((double)lowerbound + upperbound) / 2;

        double ans = new BigMeanSet().GetAverage(GetSeries(lowerbound, upperbound));

        Console.WriteLine("{0}: {1} - {2} = {3}", length, avg, ans, avg - ans);

        if(avg != ans)
            Debugger.Break();
    }
}
}