proper translation of multi-dimensional array to JSON (and back) - json

Is there a standard way of translating multi-dimensional arrays to JSON and back? Does this depend on the language and the relationship between byte ordering and rows/columns/pages etc.? I am working in Matlab. In Matlab, an array of integers, with values of say 1 through 10 and a shape of 5 x 2, would have the following 2-D layout:
1 6
2 7
3 8
4 9
5 10
as opposed to:
1 2
3 4
5 6
7 8
9 10
The question is, if I translate the 2-D array into a JSON string, should I get:
[[1,2,3,4,5],[6,7,8,9,10]]
or
[[1,6],[2,7],[3,8],[4,9],[5,10]]
My preference would be the first case, since it is sequential in terms of memory access.
So the explicit question is: in which order are n-d arrays written?
By memory - innermost should be sequential in memory
high to low dimensions - innermost is lowest dimension (which in this case matches the memory for Matlab)
low to high dimensions - innermost is highest dimension

If you say you have a 5x2 array, in JS you probably mean you have an array of length 5, each element of which is an array of length 2. So the "layout" you get is:
[ [a00, a01],
[a10, a11],
[a20, a21],
[a30, a31],
[a40, a41] ]
Decoding that in terms of 1..10 is entirely up to you.
an array of integers, with values of say 1 through 10, with a shape of 5 x 2
There is no single correct way to put values 1..10 into an array 5x2. So [[1,6],[2,7],[3,8],[4,9],[5,10]] would be just as good as [[1,2],[3,4],[5,6],[7,8],[9,10]].
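To make the round trip concrete, here is a minimal Python sketch (numpy and the standard json module standing in for the Matlab side), showing that a 5x2 array filled column-major serializes row by row:

import json
import numpy as np

# Fill a 5x2 array with 1..10 in column-major (Fortran/Matlab) order,
# matching the layout described in the question.
a = np.arange(1, 11).reshape(5, 2, order="F")

# tolist() nests by the outermost dimension, so each inner list is one row.
print(json.dumps(a.tolist()))   # [[1, 6], [2, 7], [3, 8], [4, 9], [5, 10]]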


LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
[Figure 10.4, attached as two images]
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway.  Or, you can run that code in a simulator, or mentally, to understand how it works.
The bottom line is that while LC-3 can compute anything (it is Turing complete), it can't do much in any one instruction.  For arithmetic & logic, it can do add, not, and.  That's pretty much it!  But that's enough: note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by applying NOT to both inputs first and then NAND; etc.)
For example, LC-3 cannot multiply, divide, take a modulus, or right shift directly: each of those operations takes many instructions and, in the general case, some looping construct.  Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction.  These are very inefficient for larger operands; there are much more efficient algorithms, but they are also substantially more complex, so they increase program complexity well beyond the repetitive approach.
That subroutine goes backwards through the user's input string.  It takes a string length count in R1 as a parameter supplied by the caller (not shown).  It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII range 0x30..0x39 to numeric values in the range 0..9.  Here they do it with masking, which also works.  The subtraction approach integrates better with error detection (checking whether the character is a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-last digit (moving backwards through the user's input string), converting it to binary using the mask approach.  That yields a number between 0 and 9, which is used as an index into the first table, Lookup10.  The value obtained from the table at that index is the index × 10, so this table is a × 10 table.  The same approach is used for the third digit (the first in the string, i.e. the last going backwards), except it uses the 2nd table, which is a × 100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer.  It moves forwards through the string, and for every new digit, it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on for longer strings.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
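For concreteness, here is a minimal sketch of that forward atoi loop, written in Python rather than LC-3 assembly (ascii_to_int is an illustrative name, not a library routine):

def ascii_to_int(s):
    value = 0
    for ch in s:
        digit = ord(ch) - ord('0')    # subtraction approach: '0'..'9' -> 0..9
        # (the masking approach would be: digit = ord(ch) & 0x0F)
        value = value * 10 + digit    # shift the running value one decimal place left
    return value

print(ascii_to_int("456"))   # 456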
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
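Here is that 4-instruction sequence sketched in Python; each statement corresponds to one LC-3 ADD:

def times10(n):
    t = n + n       # n × 2
    t = t + t       # n × 4
    t = t + n       # n × 5
    return t + t    # n × 10

print(times10(456))   # 4560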
So, if you want to use this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (use any of the above suggestions).
There are other approaches as well if you study atoi.  You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000: a different factor for each successive digit.  That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.

FFT output for a signal with 2 cosf() cycles

I am transforming a signal using the ZeroFFT library. The results I get from it are not what I would intuitively expect.
As a test, I feed the FFT algorithm a buffer that contains two full cycles of a cosine, sampled over 512 samples and fed as int16_t values. What I expected to get back is 256 amplitudes, with the values [ 0, 4095, 0, 0, ..., 0 ].
Instead, this is the result:
2 2052 4086 2053 0 2 2 2 1 2 2 2 4 4 3 4...
And it gets weirder! If I feed it the same signal, but shifted (so sinf() over 0 .. 4*pi instead of the cosf() function), I get a completely different result: 4 10 2 16 2 4 4 4 2 2 2 3 2 4 3 4
This raises the questions:
1. Doesn't a sine signal and cosine signal with same period, contain exactly the same frequencies?
2. If I feed it a buffer with exactly 2 cycles of cosine, wouldn't the Fourier transform result in all zeros, except for 1 frequency?
I generate my test signal as:
static void setup_reference(void)
{
    for (int i = 0; i < CAPTURESZ; ++i)
    {
        const float phase = 2 * 3.14159f * i / 256.0f;
        reference_wave[i] = (int16_t) (cosf(phase) * 0x7fff);
    }
}
And call the ZeroFFT function as:
ZeroFFT(reference_wave, CAPTURESZ);
Note: the ZeroFFT docs state that a Hanning window is applied.
Windowing causes some spectral leakage. With the Hanning window applied, the input is no longer a pure two-cycle cosine, so its energy spreads into the neighbouring bins; that is why you see 2052 4086 2053 around bin 2 instead of a single spike.
If I feed it a buffer with exactly 2 cycles of cosine, wouldn't the Fourier transform result in all zeros, except for 1 frequency?
Yes, if you do it without windowing. Actually two frequencies: both the positive frequency that you expect, and the equivalent negative frequency, though not all FFT functions include the negative frequencies in their output (for real input, the result is Hermitian-symmetric; there is no extra information in the negative frequencies). For practical reasons, since neither the input signal nor the FFT calculation is exact, you may not get exactly zero everywhere else either, but it should be close; that's mainly a concern for floating-point output.
By the way, I don't mean that windowing is bad; it's just that in this special case (perfectly periodic input) it didn't work out in your favour.
As for the sine wave, the magnitudes of the result should be similar (within reason - exactness shouldn't be expected), but the comments on the FFT function you used mention
The complex portion is discarded, and the real values are returned.
While phase shifts would not change the magnitudes much, they do change the phases of the results, and therefore also their real components.
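A small numpy sketch of this effect (rfft plus a Hann window stand in for ZeroFFT here): the cosine and the sine produce the same peak magnitude, but very different real parts.

import numpy as np

N = 512
n = np.arange(N)
cos_wave = np.cos(2 * np.pi * 2 * n / N)   # two full cycles
sin_wave = np.sin(2 * np.pi * 2 * n / N)   # same frequency, shifted phase

for name, wave in (("cos", cos_wave), ("sin", sin_wave)):
    spectrum = np.fft.rfft(wave * np.hanning(N))
    k = int(np.argmax(np.abs(spectrum)))
    print(name, "peak bin:", k,
          "magnitude:", round(abs(spectrum[k]), 1),
          "real part:", round(spectrum[k].real, 1))

# The cosine peaks at bin 2 with a large positive real part; the sine has
# the same magnitude there, but the value is almost purely imaginary, so an
# FFT that returns only the real part reports nearly zero for it.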

Reduction of odd number of elements CUDA

It seems that it is only possible to do a reduction on an even number of elements. For example, suppose I need to sum up numbers. When I have an even number of elements, it goes like this:
1 2 3 4
1+2
3+3
6+4
But what to do when I have, for instance, 1 2 3 4 5? Is the last iteration the sum of three elements, 6+4+5, or what? I saw the same question here, but couldn't find the answer.
A parallel reduction will add pairs of elements first (read each column below as one step):
1  1+3  4+6
2  2+4
3
4
Your example with an odd number of elements would typically be realized by zero-padding up to a power-of-2 size (here 8) and reducing as before:
1  1+5  6+3  9+6
2  2+0  2+4
3  3+0
4  4+0
5
0
0
0
That is to say, typically a parallel reduction will work with a power-of-2 set of threads, and at most one threadblock (the last one) will have less than a full complement of data to work with. The usual method to handle this is to zero-pad the data out to the threadblock size. If you study the CUDA parallel reduction sample code, you'll find examples of this.
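A plain-Python sketch of that zero-padding scheme (the inner loop plays the role of the threads in one reduction step):

def tree_reduce_sum(values):
    n = 1
    while n < len(values):
        n *= 2                # round up to a power of 2
    data = list(values) + [0] * (n - len(values))   # zero-pad
    while n > 1:
        n //= 2
        for i in range(n):    # in CUDA, each i would be one thread
            data[i] += data[i + n]
    return data[0]

print(tree_reduce_sum([1, 2, 3, 4, 5]))   # 15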

Pascal memory issues

I have a problem that requires searching, and saving certain values to prevent an infinite loop. Every possible state of the problem is expressed as a unique 8-digit code in base 6 (all digits are 0-5). When the program evaluates a position, I want a boolean to be set to true so that the position is not evaluated again. However, an array 1..55555555 is too large in memory, and converting the 8-digit code to decimal takes too much time. Also, not all combinations are possible in the problem; 11 11 11 11, 11 11 55 12 and others are not valid, and I'd rather not spend extra memory on them. So, is there a way to store the value "true" at a block of memory with address e.g. 23451211, and, when the evaluating process is called, check whether 23451211 is true or unassigned?
6 to the power 8 = 1679616 possible states.
To mark each state as used or not, you need one bit per state, so about 209952 bytes suffice.
In recent versions of Free Pascal, bit-packed structures are declared as follows:
var
  arr : bitpacked array [0..6*6*6*6*6*6*6*6-1] of boolean;
and arr[x] will give true or false.
The conversion time from base 6 to binary (not decimal!) will probably be shorter than trying to use large swaths of memory: ((digit8*6 + digit7)*6 + digit6)*6 + ... and so on down to the last digit.
P.S. FPC does have an exponentiation operator, but not for constants, so that's why 6^8 is written out like that.
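The same idea sketched in Python (illustrative names; a bytearray plays the role of the bit-packed array):

seen = bytearray(6**8 // 8)      # 209952 bytes, one bit per state

def state_index(digits):         # digits: sequence of 8 values in 0..5
    idx = 0
    for d in digits:
        idx = idx * 6 + d        # Horner's rule, base 6
    return idx

def mark(digits):
    i = state_index(digits)
    seen[i // 8] |= 1 << (i % 8)

def is_marked(digits):
    i = state_index(digits)
    return bool(seen[i // 8] & (1 << (i % 8)))

mark([2, 3, 4, 5, 1, 2, 1, 1])
print(is_marked([2, 3, 4, 5, 1, 2, 1, 1]))   # True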

The most efficient way to calculate an integral in a dataset range

I have an array of 10 rows by 20 columns. Each column corresponds to a data set that cannot be fitted with any sort of continuous mathematical function (it's a series of numbers derived experimentally). I would like to calculate the integral of each column between row 4 and row 8, then store the obtained results in a new array (20 rows x 1 column).
I have tried using different scipy.integrate modules (e.g. quad, trapz, ...).
The problem is that, from what I understand, scipy.integrate must be applied to functions, and I am not sure how to convert each column of my initial array into a function. As an alternative, I thought of calculating the average of each column between row 4 and row 8, then multiplying this number by 4 (i.e. 8-4=4, the x-interval) and storing it in my final 20x1 array. The problem is...ehm...that I don't know how to calculate the average over a given range. The questions I am asking are:
Which method is more efficient/straightforward?
Can integrals be calculated over a data set like the one that I have described?
How do I calculate the average over a range of rows?
Since you know only the data points, the best choice is to use trapz (the trapezoidal approximation to the integral, based on the data points you know).
You most likely don't want to convert your data sets to functions, and with trapz you don't need to.
So if I understand correctly, you want to do something like this:
from numpy import *

# x-coordinates for data points
x = array([0, 0.4, 1.6, 1.9, 2, 4, 5, 9, 10])

# some random data: 3 whatever data sets (sharing the same x-coordinates)
y = zeros([len(x), 3])
y[:, 0] = 123
y[:, 1] = 1 + x
y[:, 2] = cos(x / 5.)
print(y)

# compute approximations for integral(dataset, x=0..10) for datasets i=0,1,2
yi = trapz(y, x[:, newaxis], axis=0)
# what happens here: x must be broadcastable to the shape of y;
# newaxis tells numpy to add a new "virtual" axis to x, in effect saying that
# the x-coordinates are the same for each data set

# approximations of the integrals based on the datasets
# (here we also know the exact values, so print them too)
print(yi[0], 123 * 10)
print(yi[1], 10 + 10 * 10 / 2.)
print(yi[2], sin(10. / 5.) * 5.)
To get the sum of the entries 4 to 8 (including both ends) in each column, use
import numpy
a = numpy.arange(200).reshape(10, 20)
a[4:9].sum(axis=0)
(The import and the arange line just set up an example array of the desired shape.)
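Putting the two answers together for the stated task (integrating each column between rows 4 and 8 with the trapezoidal rule, assuming unit spacing between rows; the array below is made up for illustration):

import numpy as np

a = np.arange(200.0).reshape(10, 20)       # stand-in for the 10x20 data
col_integrals = np.trapz(a[4:9], axis=0)   # trapezoidal rule over rows 4..8
print(col_integrals.shape)                 # (20,): one integral per column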