Convert 0 to 1 and Vice Versa - language-agnostic

I was asked in an interview: how do you convert 0 to 1 and 1 to 0? I answered:
Simple if and switch
Bit flipping.
Are there any other approaches?

Simple arithmetic:
x = 1 - x;
Actually, there are infinitely many polynomials that map 1 to 0 and vice versa. For example:
x = x * x * x * x * x - x * x * x * x + x * x - 2 * x + 1;

A few obvious possibilities:
!n
1-n
n^1
n==0
n!=1
n<1
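A quick sanity check of the expressions above, in Python (my own snippet; the boolean ones are converted back with int()):
for n in (0, 1):
    print(n, 1 - n, n ^ 1, int(not n), int(n == 0), int(n != 1), int(n < 1))
# prints "0 1 1 1 1 1 1", then "1 0 0 0 0 0 0": every expression flips the bit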

Lookup table:
int[] swap = { 1, 0 };
And later:
x = swap[x];

Take a paper clip. Straighten it out. It's a 1. Bend it to meet its ends. It's a 0. To make it a 1, straighten it.

They probably expected you to use bitwise NOT.

Some trig:
COS(PI * N / 2)^2
In Python (the result is 0 or 1 only up to floating-point rounding):
import math
math.cos(math.pi * n / 2) ** 2
I can't believe people forgot modulus:
(3 + n) % 2

I guess you could do ABS(VAR - 1), but I think your approaches are more elegant.

This one isn't the best, but it works:
pow(0, n);

This should work for any two numbers...
(EDIT: looking at the other answers I may have misread the question... but I still like my answer :-)
public class X
{
    public static void main(final String[] argv)
    {
        int x = Integer.parseInt(argv[0]);
        int y = Integer.parseInt(argv[1]);

        // swap x and y without a temporary variable
        x += y;
        y = x - y;
        x = x - y;

        System.out.println(x);
        System.out.println(y);
    }
}

I've used -~-n in JavaScript.
It converts 1 to -1, which in two's complement is all 1 bits; the tilde then flips the bits to all 0 bits, which is 0, and the second negative sign does not affect the 0. On the other hand, if n is 0, the first negative sign has no effect, the tilde flips the bits to give -1, and the second negative sign converts -1 to 1.
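The same trick works on Python integers, where ~m is defined as -m - 1, so -~-n is just -((-n) - 1) = 1 - n (a quick check of my own):
for n in (0, 1):
    print(n, '->', -~-n)   # 0 -> 1, 1 -> 0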


DCT using FFT results in complex result

I'm trying to implement a DCT10 according to this paper: https://www.researchgate.net/publication/330405662_PittPack_Open-Source_FFT-Based_Poisson%27s_Equation_Solver_for_Computing_With_Accelerators (section "Neumann Boundary Condition").
However, I have the problem that after performing the FFT and half-sample shifting, the result is not purely real (which I think it should be, right?). Therefore, when truncating the imaginary part, the mentioned reverse transform will not reproduce my original values.
Here is my Matlab code (DCT in first dimension):
function X_dct = dct_type2(x_sig)
    N = size(x_sig);

    % shuffle to prepare for FFT
    x_hat = zeros(N);
    for m = 1 : N(2)
        for n = 1 : (N(1) / 2)
            x_hat(n, m) = x_sig((2 * n) - 1, m);
            x_hat(N(1) - n + 1, m) = x_sig(2 * n, m);
        end
    end

    % perform FFT
    X_hat_dft = fft(x_hat, N(1), 1);

    % apply shifting by half-sample
    X_dct = zeros(N);
    for m = 1 : N(2)
        for k = 1 : N(1)
            X_dct(k, m) = 2 * exp(-1i * (pi * (k-1)) / (2 * N(1))) * X_hat_dft(k, m);
        end
    end
end
Can somebody explain what the problem is here? Or is my assumption wrong that the result should be purely real?
So it turns out that it is correct to drop the non-zero imaginary part using this technique, even though it intuitively appeared wrong to me.
That the reverse transform didn't recover the original values was merely a scaling issue of the frequency components.
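For illustration, here is a 1-D NumPy sketch of the same construction (names of my own choosing; it keeps only the real part and checks the result against scipy.fft.dct):
import numpy as np
from scipy.fft import dct   # reference DCT-II for comparison

def dct2_via_fft(x):
    N = len(x)
    # shuffle: even-indexed samples first, odd-indexed samples reversed at the back
    v = np.concatenate([x[0::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    # half-sample phase shift; the imaginary part is dropped on purpose
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

x = np.random.rand(8)
print(np.allclose(dct2_via_fft(x), dct(x, type=2)))   # True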

Why divide by 65536 twice during conversion back in binary scaling

My understanding of binary scaling is that you can represent a floating-point value with an integer value. So to represent the float 1.2 in a short (2-byte integer), you simply compute 1.2 * power(2,16), which gives 78643, and converting it back would simply be dividing by power(2,16).
According to the link https://en.wikipedia.org/wiki/Binary_scaling, the following:
For instance, to represent 1.2 and 5.6 as B16 one multiplies them by 2^16, giving 78643 and 367001.
Multiplying these together gives
28862059643
To convert it back to B16, divide it by 2^16.
This gives 440400 in B16, which when converted back to a floating-point number (by dividing again by 2^16, but holding the result as floating point) gives 6.71999. The correct floating-point result is 6.72.
What I don't understand is why we need to divide by 65536 (power(2,16)) twice when converting back.
Let x = 1.2, y = 5.6
Now let x1 = x * 2^16, y1 = y * 2^16
Let z1 = x1 * y1
= (x * 2^16) * (y * 2^16)
= (x * y) * 2^16 * 2^16
When the values are multiplied, each one brings a 2^16 with it. Then you need to remove one of them to return it to B16, and another to return to floating point.
This isn't the case for addition, because (x * 2^16) + (y * 2^16) = (x + y) * 2^16. Division would be a bad idea, because the 2^16 factors would cancel out, giving you a floating point value stored as an integer.
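A tiny Python sketch of that bookkeeping, using the numbers from the question (SCALE and the variable names are mine):
SCALE = 1 << 16   # 2**16

x, y = 1.2, 5.6
x1, y1 = int(x * SCALE), int(y * SCALE)   # 78643, 367001: each carries one factor of 2**16
z1 = x1 * y1                              # 28862059643: carries 2**16 twice
z_b16 = z1 // SCALE                       # 440400: one 2**16 removed, back to B16
z_float = z_b16 / SCALE                   # 6.7199...: second 2**16 removed, back to float
print(z_float)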

Given this set of points, what would be the mathematical function for this and the Big O notation?

x=2, y=1
x=3, y=3
x=4, y=6
x=5, y=10
x=6, y=15
x=7, y=21
x=8, y=28
I know that f(x) = f(x-1) + (x-1)
But... is that the correct mathematical function? What would the Big O notation be?
The correct (or at least, significantly more efficient than recursive) equation would be
f(x) = x * (x - 1) / 2
Looks like homework. You should mark it with the homework tag.
Did you mean f(x) = f(x-1) + (x-1)?
To solve for the function:
http://en.wikipedia.org/wiki/Recurrence_relation#Solving
To get the complexity:
http://en.wikipedia.org/wiki/Master_theorem
Yes, the function is right; the difference between consecutive y values increases by 1 each time.
Edited: Thanks for the comment by trutheality
For the growth of the function you can expand y like this:
y = 1 + 2 + 3 + ... + (n - 1)
A sum of the form 1 + 2 + 3 + ... + n is O(n^2), so
y = O(n^2)
The way to correctly state the problem is:
f(x) = f(x - 1) + (x - 1)
f(1) = 0
You want to solve f(x) in terms of x.
There are many ways to solve these kinds of recursive formulas. I like to use Wolfram Alpha, it has an easy interface.
Wolfram Alpha query "f(x)=f(x-1)+(x-1)"
That gives you the precise answer; in big-O notation you would say the function f is in O(x^2).
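As a quick cross-check (my own Python snippet), the closed form f(x) = x(x-1)/2 reproduces both the recurrence and the sample points:
def f_rec(x):
    return 0 if x == 1 else f_rec(x - 1) + (x - 1)

def f_closed(x):
    return x * (x - 1) // 2

print(all(f_rec(x) == f_closed(x) for x in range(1, 20)))   # True
print([f_closed(x) for x in range(2, 9)])   # [1, 3, 6, 10, 15, 21, 28]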

How to store a symmetric matrix?

Which is the best way to store a symmetric matrix in memory?
It would be good to save half of the space without compromising the speed and complexity of the structure too much. This is a language-agnostic question, but if you need to make some assumptions, just assume it's a good old plain programming language like C or C++.
It seems like something that makes sense only if there is a way to keep things simple, or when the matrix itself is really big, am I right?
Just for the sake of formality, I mean that this assertion is always true for the data I want to store:
matrix[x][y] == matrix[y][x]
Here is a good method to store a symmetric matrix; it requires only N(N+1)/2 elements of memory:
int fromMatrixToVector(int i, int j, int N)
{
    if (i <= j)
        return i * N - (i - 1) * i / 2 + j - i;
    else
        return j * N - (j - 1) * j / 2 + i - j;
}
For some triangular matrix
0 1 2 3
4 5 6
7 8
9
The 1D representation (stored in a std::vector, for example) looks as follows:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
And the call fromMatrixToVector(1, 2, 4) returns 5, so the matrix entry is vector[5] -> 5.
For more information see http://www.codeguru.com/cpp/cpp/algorithms/general/article.php/c11211/TIP-Half-Size-Triangular-Matrix.htm
I find that many high performance packages just store the whole matrix, but then only read the upper triangle or lower triangle. They might then use the additional space for storing temporary data during the computation.
However if storage is really an issue then just store the n(n+1)/2 elements making the upper triangle in a one-dimensional array. If that makes access complicated for you, just define a set of helper functions.
In C to access a matrix matA you could define a macro:
#define A(i, j, dim) (((i) <= (j)) ? matA[(i)*(dim) + (j)] : matA[(j)*(dim) + (i)])
then you can access your array nearly normally.
Well, I would try a triangular matrix, like this:
int[][] sym = new int[rows][];
for (int i = 0; i < rows; ++i) {
    sym[i] = new int[i + 1];
}
But then you will have to face the problem of someone wanting to access the "other side". E.g. they want to access [0][10], but in your case this value is stored in [10][0] (assuming a 10x10 matrix).
The probably "best" way is the lazy one: don't do anything until the user requests. So you could load the specific row if the user types something like print(matrix[4]).
If you want to use a one-dimensional array, the code would look something like this:
int[] matrix = new int[(rows * (rows + 1)) >> 1];
int z;
matrix[((z = (x < y ? y : x)) * (z + 1) >> 1) + (y < x ? y : x)] = yourValue;
You can get rid of the multiplications if you create an additional look-up table:
int[] matrix = new int[(rows * (rows + 1)) >> 1];
int[] lookup = new int[rows];
for (int i = 0; i < rows; i++)
{
    lookup[i] = (i * (i + 1)) >> 1;
}
matrix[lookup[x < y ? y : x] + (x < y ? x : y)] = yourValue;
If you're using something that supports operator overloading (e.g. C++), it's pretty easy to handle this transparently. Just create a matrix class that checks the two subscripts, and if the second is greater than the first, swap them:
template <class T>
class sym_matrix {
    std::vector<std::vector<T> > data;
public:
    T operator()(int x, int y) {
        if (y > x)
            return data[y][x];
        else
            return data[x][y];
    }
};
For the moment I've skipped over everything else, and just covered the subscripting. In reality, to handle use as both an lvalue and an rvalue correctly, you'll typically want to return a proxy instead of a T directly. You'll want a ctor that creates data as a triangle (i.e., for an NxN matrix, the first row will have N elements, the second N-1, and so on -- or, equivalently 1, 2, ...N). You might also consider creating data as a single vector -- you have to compute the correct offset into it, but that's not terribly difficult, and it will use a bit less memory, run a bit faster, etc. I'd use the simple code for the first version, and optimize later if necessary.
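As a sketch of that single-vector variant (Python, with names of my own: the same subscript swap, plus the packed-offset computation):
class SymMatrix:
    """N x N symmetric matrix packed into n(n+1)/2 lower-triangle slots."""
    def __init__(self, n):
        self.n = n
        self.data = [0] * (n * (n + 1) // 2)

    def _index(self, x, y):
        if x < y:                     # always land in the lower triangle
            x, y = y, x
        return x * (x + 1) // 2 + y   # offset of row x, column y (row-major)

    def __getitem__(self, xy):
        return self.data[self._index(*xy)]

    def __setitem__(self, xy, value):
        self.data[self._index(*xy)] = value

m = SymMatrix(4)
m[1, 3] = 42
print(m[3, 1])   # 42: both subscripts map to the same packed slot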
You could use a staggered array (or whatever they're called) if your language supports it, and when x < y, switch the position of x and y. So...
Pseudocode (somewhat Python style, but not really) for an n x n matrix:
matrix[n][]
for i from 0 to n-1:
    matrix[i] = some_value_type[i + 1]
[next, assign values to the elements of the half-matrix]
And then when referring to values....
if x < y:
    return matrix[y][x]
else:
    return matrix[x][y]

We know log_add, but how to do log_subtract?

Multiplying two numbers in log space means adding them:
log_multiply(x, y) = log( exp(x) * exp(y) )
= x + y
Adding two numbers in log space means you do a special log-add operation:
log_add(x, y) = log( exp(x) + exp(y) )
which is implemented in the following code, in a way that doesn't require us to take the two exponentials (and lose runtime speed and precision):
double log_add(double x, double y) {
    if (x == neginf)
        return y;
    if (y == neginf)
        return x;
    return max(x, y) + log1p(exp(-fabs(x - y)));
}
But here is the question:
Is there a trick to do it for subtraction as well?
log_subtract(x, y) = log( exp(x) - exp(y) )
without having to take the exponents and lose precision?
double log_subtract(double x, double y) {
// ?
}
How about
double log_subtract(double x, double y) {
    if (x <= y)
        return nan("");   // error!! computing the log of a non-positive number
    if (y == neginf)
        return x;
    return x + log1p(-exp(y - x));
}
That's just based on some quick math I did...
The library functions for exp and log lose precision for extreme values.
log1p gets you halfway there, but what you need is a function that treats the error for both the log and the exp parts.
See this article: http://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
The title is "Accurately Computing log(1 - exp(-|a|))".
The article discusses how to seamlessly merge different algorithms to create good error bounds for a larger range of inputs.
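A Python rendering of that recipe (the split between expm1 and log1p at a = ln 2 is the one the note recommends; the function names are mine):
import math

def log1mexp(a):
    # computes log(1 - exp(-a)) accurately, for a > 0
    if a <= math.log(2):
        return math.log(-math.expm1(-a))   # 1 - exp(-a) is small: expm1 keeps precision
    else:
        return math.log1p(-math.exp(-a))   # exp(-a) is small: log1p keeps precision

def log_subtract(x, y):
    if y >= x:
        raise ValueError("log of a non-positive number")
    return x + log1mexp(x - y)

print(log_subtract(math.log(5.0), math.log(3.0)))   # ~0.6931 == log(5 - 3)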