Direction vs sign of bitwise shift/rotation - function

For bitwise shift (or rotation) operations, we usually have two operators, for instance
x << n
x >> n
for left or right shift of x by n bits.
We want to define a single function
bitshift(x, n)
Before that, we have to determine which shift is to be used for positive and which for negative n - in other words, what the "sign" of each shift (or rotation) direction is.
Is there a definition or convention for that?
(Please note that this question has nothing to do with signed/unsigned types)
UPDATES
Please also note that I am not asking for implementation details of this function, even though they might be somewhat related.
There are similar functions in Scheme/Lisp-like languages, like ash, which shift left for positive n.

Since shifting right by k places is equivalent to multiplying by 2^-k, and shifting left is equivalent to multiplying by 2^k, I think that should give you a hint.
Note: The reason I would argue for this way of looking at it is that multiplication is commonly considered, in some sense, a more fundamental operation than division, although you could certainly argue the other way around.
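For what it's worth, a minimal C sketch of such a bitshift(x, n) under that convention might look like this (positive n shifts left, as ash does; the unsigned type and the range check are my own choices, not part of the question):

#include <stdint.h>

/* Positive n: shift left (multiply by 2^n).
   Negative n: shift right (divide by 2^n).
   Unsigned math avoids C's undefined behaviour for negative shift counts. */
uint32_t bitshift(uint32_t x, int n)
{
    if (n >= 32 || n <= -32)   /* everything shifted out */
        return 0;
    return (n >= 0) ? (x << n) : (x >> -n);
}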

You can use a negative number as the argument.
For example, with
bitshift(x, n)
pass n=2 if you want to shift left by two positions,
and pass n=-2 if you want to shift right by two positions.

Related

Why are there no asin2() and acos2() functions similar to atan2()?

From my understanding, the atan2() function exists in programming languages because atan() itself cannot always determine the correct theta since the output is restricted to -pi/2 to pi/2.
If this is the case, then the same problem applies to both asin() and acos(), both of which also have restricted ranges, so then why are there no asin2() and acos2() functions?
First off, note that the syntaxes of the two arctan functions are atan(y/x) and atan2(y, x). This distinction is important, because by not performing the division you provide additional information, most importantly the individual signs of x and y. If you know the individual x and y coordinates, the particular solution to the atan function can be found (i.e. the solution which takes into account the quadrant that (x,y) is in).
If you go from tan(θ) = y/x to sin(θ) = y/sqrt(x²+y²), then the inverse operation asin takes y and sqrt(x²+y²) and combines that to obtain some information about the angle. Here it doesn't matter whether we perform the division ourself or let some hypothetical asin2 function handle it. The denominator is always positive, so the divided argument contains just as much information as separate numerator and denominator contain. (At least in an IEEE environment where division by zero leads to a correctly-signed infinity.)
If you know the y coordinate and the hypotenuse sqrt(x²+y²) then you know the sine of the angle, but you cannot know the angle itself, since you cannot distinguish between negative and positive x values. Likewise, if you know the x coordinate and the hypotenuse, you know the cosine of the angle but you cannot know the sign of the y value.
So asin2 and acos2 are not mathematically feasible, at least not in an obvious way. If you had some kind of sign encoded into the hypotenuse, things might be different, but I can think of no situation where such a sign would arise naturally.
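To make the ambiguity concrete, here is a small C illustration (my own example, not part of the original answer): the points (1, 1) and (-1, 1) share the same y and the same hypotenuse, so asin(y / hypot(x, y)) returns the same value for both, while atan2(y, x) distinguishes them.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double pts[2][2] = { { 1.0, 1.0 }, { -1.0, 1.0 } };
    for (int i = 0; i < 2; i++) {
        double x = pts[i][0], y = pts[i][1];
        /* asin sees only y and the (always positive) hypotenuse */
        printf("x=%5.1f y=%5.1f  asin=%8.4f  atan2=%8.4f\n",
               x, y, asin(y / hypot(x, y)), atan2(y, x));
    }
    return 0;   /* asin: 0.7854 both times; atan2: 0.7854 and 2.3562 */
}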
Because asin2(y, x) and acos2(y, x) would each take the same parameters as atan2(y, x) and each give the same answer. Each would be equally valid, but we only need one such function.
The confusion arises from the name of atan2. It's a function that, given x and y, computes the angle (made by the line from the origin to that point) with the positive x-axis. A name like angle_from(x, y) would arguably have been more appropriate.
There are times when a function like "acos2" is needed, for example when performing rotations of vectors in 3D space. Under those circumstances, I hard-code my own acos2 function which simply performs the following checks:
x_perp = sqrt(x*x + y*y)               ! distance from the z-axis
r      = sqrt(x*x + y*y + z*z)         ! distance from the origin
if (x_perp .gt. 0.0d0) then
   phi = acos(x/x_perp)                ! azimuth from acos alone...
else
   phi = 0.0d0                         ! ...the point lies on the z-axis
endif
if (y .lt. 0.0d0) phi = 2.0d0*pi - phi ! phase shift needed when y < 0
theta = acos(z/r)                      ! polar angle
where theta and phi are the usual spherical coordinates and x, y, z are the Cartesian coordinates. The problem arises when y is negative: there needs to be a phase shift in phi. There is no such problem for theta.
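For comparison, the same angles can be computed in C with atan2 doing the quadrant work; this is only a sketch of the equivalent computation (my own, not the original code), not a replacement for it:

#include <math.h>

/* theta, phi from Cartesian (x, y, z); atan2(y, x) folds the
   x_perp > 0 check and the y < 0 phase shift into one call.
   It returns phi in (-pi, pi], so shift it into [0, 2*pi) to
   match the convention used above. */
void to_spherical(double x, double y, double z, double *theta, double *phi)
{
    double r = sqrt(x*x + y*y + z*z);
    *theta = acos(z / r);
    *phi = atan2(y, x);
    if (*phi < 0.0)
        *phi += 2.0 * M_PI;   /* M_PI: common but non-standard macro */
}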
I will explain in SIMPLE TERMS this way.
Picture the unit circle and its four quadrants for the following explanation.
Task: Choose a function that will track the correct angle across a range -180 < θ < 180
Trial 1:
sin() is positive in the first and second quadrants: sin(30) = sin(150) = 0.5. It won't be easy to track a quadrant change with sin().
Therefore, asin2() is not feasible.
Trial 2:
cos() is positive in the first and fourth quadrants: cos(60) = cos(300) = 0.5. Again, it won't be easy to track a quadrant change with cos().
Therefore, acos2() is again not feasible.
Trial 3:
tan() is positive in the first and third quadrants, and in an interesting order.
It is positive in the 1st quadrant, negative in the 2nd, positive in the 3rd, negative in the 4th, and positive again in the wrapped-around 1st quadrant, such that tan(45) = 1, tan(135) = -1, tan(225) = 1, tan(315) = -1, and tan(360+45) = 1. Hurray! We can track the quadrant change.
Notice that the unambiguous range is -180 < θ < 180. Also, note in my 45-degree-increment example above, if the sequence is 1,-1,.. the angle goes counter-clockwise, and if the sequence is -1,1,.. it goes clockwise. This idea should resolve directionality.
Therefore, atan2() BECOMES OUR CHOICE.

Shifting left in 2's complement and multiplication

I'm implementing a simple (virtual) ALU and some other chips (adder, multiplier etc.).
I'm using the 2's complement representation for my numbers.
For multiplication of x and y, two 16-bit numbers, I thought I'd use left shifts along these lines (this is of course performed without an actual loop):
set sum[0..15]=0
set x'=x
for i=0...15 //(y[0] is LSB and y[15] is MSB)
add x' to sum if y[i]=1 and shift x' left.
(Is this the standard way?)
My problem is with the left shifts:
If there's an i such that x[i]=1, at some point the MSB of x' will become 1, which makes the value negative.
That's a problem since for example 2*2 using the method above gives "-4".
So, my actual question is: when shifting left do I also need to take into consideration the sign bit?
Yes, you do.
A simple approach could be to record the operands' signs, convert them to their absolute (unsigned) values, perform the math, and then negate the result if exactly one operand was negative.
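A sketch of that approach in C (my own illustration of the suggestion above, with 16-bit operands and a 32-bit result):

#include <stdint.h>

int32_t mul16(int16_t x, int16_t y)
{
    int negative = (x < 0) != (y < 0);              /* sign of the result  */
    uint32_t ux = (uint32_t)(x < 0 ? -(int32_t)x : x);
    uint32_t uy = (uint32_t)(y < 0 ? -(int32_t)y : y);
    uint32_t sum = 0;
    for (int i = 0; i < 16; i++)                    /* shift-and-add loop  */
        if (uy & (1u << i))
            sum += ux << i;
    return negative ? -(int32_t)sum : (int32_t)sum; /* restore the sign    */
}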

what is the meaning of "<<" in TCL?

I know the "<<" is a bit operation. but I do not understand what it exactly functions in TCL, and when should we use it?
can anyone help me on this?
The << operator in Tcl's expressions is an arithmetic bit shift left. It's exceptionally similar to the equivalent in C and many other languages, and would be used in all the same places (it's logically equivalent to a multiply by a suitable power of 2, but it's usually advisable to use a shift when thinking about bits and a multiply when thinking about numbers).
Note that one key difference with many other languages (from Tcl 8.5 onwards) is that it does not “drop bits off the front”; the language implementation automatically uses wider number representations as necessary so that information is never lost. Bits are dropped by using a separate binary mask operation (e.g., & ((1 << $numBits) - 1)).
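The same masking idiom, written in C for concreteness (in C the shift is already fixed-width, so the mask only narrows it further; in Tcl you would put the equivalent expression inside expr, as shown above; this is my own illustration, not Tcl code):

#include <stdint.h>

/* Keep only the low numBits bits of a shifted value, mirroring
   the & ((1 << $numBits) - 1) mask mentioned above. */
uint32_t shift_into_width(uint32_t x, int n, int numBits)
{
    uint32_t mask = (numBits >= 32) ? 0xFFFFFFFFu : ((1u << numBits) - 1u);
    return (x << n) & mask;
}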
There are a number of uses for the << shift-left operator. Some that come to mind are:
Bit-by-bit processing. Shift a number and observe the highest-order bit, etc. It comes in handier than you might think.
If you append a zero to a number in the decimal system, you effectively multiply it by 10; shifting left by one bit effectively multiplies by 2. This translates into a low-level bit-shift assembly instruction, which typically costs fewer cycles than a multiplication. This is used for efficiency, for example in the gaming industry. Shift twice (<< 2) to multiply by 4, and so on.
I am sure there are many others.
The << operation is not much different from C's, for instance, and it's used when you need to shift the bits of an integer value to the left. This can occasionally be useful when doing subtle number crunching like implementing a hash function or deserialising something from an input bytestream (but note that [binary scan] covers almost all of what's needed for this sort of thing). For more general info refer to this Wikipedia article or something like this; this is not really Tcl-related.
The '<<' is a left bit shift. You must apply it to an integer. This arithmetic operator will shift the bits to the left.
For example, if you want to shift the number 1 twice to the left in the Tcl interpreter tclsh, type:
expr { 1 << 2 }
The command will return 4.
Pay special attention to the maximum integer the interpreter holds on your platform.

Resource(s) for learning bitwise operation?

I was recently asked a question, "how do you multiply without using the multiplication operator, without any sort of looping statements or explicit addition" and realized I wasn't familiar with bitwise operation at all.
There is obviously Wikipedia, but I need something with more of an explanation geared toward a newbie. There's also this hack guide, but I'm not at the level of grasping it yet.
I don't mind if you point out a chapter in a book, as I have access to a good library through Safari Books and other resources.
Knuth, Volume 2 - Seminumerical Algorithms
The crux of this comes down to a "half adder" and a "full adder". A half adder adds two bits of input to produce a single-bit result, and a single-bit carry. A full adder adds three bits of input (two normal inputs plus a carry from a lower bit) to produce a single-bit result and a single-bit carry.
In any case, the result is based on a truth table for addition. For a half adder, that is: 0+0=0, 0+1=1, 1+0=1, 1+1=0+carry.
So, the "normal" part of the result is the XOR of the inputs. The "carry' part of the result is the AND of the inputs. A full adder is pretty much the same, but left as the infamous "exercise for the reader".
Putting those together, you use a half-adder for the least significant bit, and full adders for the other bits to add N bits of input.
Once you can do addition, there are a couple of ways of doing multiplication. The easy (and slow) way to multiply NxM is to add N to itself M times. The faster (but somewhat more difficult to understand) way is to shift and add. For example, Nx5 = Nx4 + Nx1. You can produce NxB, where B = 2^L, by shifting N left by L bits.
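A compact C sketch of both ideas (my own illustration; it uses loops for clarity, whereas the original interview question also rules out loops, which an unrolled fixed-width chain of full adders or recursion would address):

#include <stdint.h>

/* Addition from the truth table above: XOR gives the sum bits,
   AND gives the carries, which are shifted up and fed back in
   until no carries remain. No + operator is used. */
uint32_t add_bits(uint32_t a, uint32_t b)
{
    while (b != 0) {
        uint32_t carry = (a & b) << 1;
        a ^= b;
        b = carry;
    }
    return a;
}

/* Shift-and-add multiplication: each set bit L of m contributes
   n << L, e.g. N x 5 = (N << 2) + (N << 0). */
uint32_t mul_bits(uint32_t n, uint32_t m)
{
    uint32_t product = 0;
    for (int l = 0; l < 32; l++)
        if (m & (1u << l))
            product = add_bits(product, n << l);
    return product;
}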

Repeated application of functions

Reading this question got me thinking: For a given function f, how can we know that a loop of this form:
while (x > 2)
x = f(x)
will stop for any value x? Is there some simple criterion?
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
Specifically, can we prove this for sqrt and for log?
For these functions, a proof that ceil(f(x)) < x for x > 2 would suffice. You could do one iteration to arrive at an integer, and then proceed by simple induction.
For the general case, probably the best idea is to use well-founded induction to prove this property. However, as Moron pointed out in the comments, this could be impossible in the general case and the right ordering is, in many cases, quite hard to find.
Edit, in reply to Amnon's comment:
If you wanted to use well-founded induction, you would have to define another strict order that is well-founded. In the case of the functions you mentioned this is not hard: you can take x << y if and only if ceil(x) < ceil(y), where << is a symbol for this new order. This order is of course well-founded on numbers greater than 2, and both sqrt and log are decreasing with respect to it -- so you can apply well-founded induction.
Of course, in general case such an order is much more difficult to find. This is also related, in some way, to total correctness assertions in Hoare logic, where you need to guarantee similar obligations on each loop construct.
There's a general theorem for when the sequence of iterations will converge. (A convergent sequence may not stop in a finite number of steps, but it is getting closer to a target. You can get as close to the target as you like by going far enough out in the sequence.)
The sequence x, f(x), f(f(x)), ... will converge if f is a contraction mapping. That is, there exists a positive constant k < 1 such that for all x and y, |f(x) - f(y)| <= k |x-y|.
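For instance, sqrt is a contraction on the region relevant here (x, y >= 2): |sqrt(x) - sqrt(y)| = |x - y| / (sqrt(x) + sqrt(y)) <= |x - y| / (2 sqrt(2)), so the theorem applies with k = 1/(2 sqrt(2)) < 1. (This worked step is mine, not part of the original answer.)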
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
If we're talking about floats here, that's not true. If, for all x > n, f(x) is strictly less than x, the iteration will reach n at some point (because there are only a limited number of floating-point values between any two numbers).
Of course this means you need to prove that f(x) is actually less than x using floating point arithmetic (i.e. proving it is less than x mathematically does not suffice, because then f(x) = x may still be true with floats when the difference is not enough).
There is no general algorithm to determine whether that loop will terminate for a given function f and starting value x; the halting problem is reducible to that problem.
For sqrt and log, we can safely do that because we happen to know the mathematical properties of those functions: sqrt approaches 1, and log eventually goes negative. So the condition x > 2 has to become false at some point.
Hope that helps.
In the general case, all that can be said is that the loop will terminate when it encounters some x_i ≤ 2. That doesn't mean that the sequence will converge, nor does it even mean that the sequence stays below 2. It only means that the sequence contains a value that is not greater than 2.
That said, any sequence containing a subsequence that converges to a value strictly less than two will (eventually) halt. That is the case for the sequence x_{i+1} = sqrt(x_i), since it converges to 1. In the case of y_{i+1} = log(y_i), the sequence will contain a value less than 2 before becoming undefined over the reals (it is well defined on the extended complex plane, C*, but I don't think it will in general converge, except at whatever stable points may exist, i.e. where z = log(z)). Ultimately what this means is that you need to perform some upfront analysis on the sequence to better understand its behavior.
The standard test for convergence of a sequence x_i to a point z is that, given ε > 0, there is an n such that for all i > n, |x_i - z| < ε.
As an aside, consider the Mandelbrot set, M. The test for whether a particular point c in C belongs to M is whether the sequence z_{i+1} = z_i² + c stays bounded; it becomes unbounded exactly when some |z_i| > 2 occurs. Some elements of M give convergent sequences (such as 0), but many do not (such as -1).
Sure. For all positive numbers x, the following inequality holds:
log(x) <= x - 1
(this is a pretty basic result from real analysis; it suffices to observe that the second derivative of log is always negative for all positive x, so the function is concave down, and that x-1 is tangent to the function at x = 1). From this it follows essentially immediately that your while loop must terminate within the first ceil(x) - 2 steps -- though in actuality it terminates much, much faster than that.
A similar argument will establish your result for f(x) = sqrt(x); specifically, you can use the fact that:
sqrt(x) <= x/(2 sqrt(2)) + 1/sqrt(2)
for all positive x.
If you're asking whether this result holds for actual programs, instead of mathematically, the answer is a little bit more nuanced, but not much. Basically, many languages don't actually have hard accuracy requirements for the log function, so if your particular language implementation had an absolutely terrible math library this property might fail to hold. That said, it would need to be a really, really terrible library; this property will hold for any reasonable implementation of log.
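If you want to see how fast the loop terminates in practice, here is a quick numerical check (my own snippet; the exact counts depend on the math library and the starting value):

#include <stdio.h>
#include <math.h>

/* Count iterations of f until the value drops to 2 or below. */
int steps_until_leq2(double x, double (*f)(double))
{
    int n = 0;
    while (x > 2.0) {
        x = f(x);
        n++;
    }
    return n;
}

int main(void)
{
    double x0 = 1.0e9;
    printf("log : %d steps from %g\n", steps_until_leq2(x0, log), x0);
    printf("sqrt: %d steps from %g\n", steps_until_leq2(x0, sqrt), x0);
    return 0;
}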
I suggest reading this Wikipedia entry, which provides useful pointers. Without additional knowledge about f, nothing can be said.