I know the "<<" is a bit operation. but I do not understand what it exactly functions in TCL, and when should we use it?
can anyone help me on this?
The << operator in Tcl's expressions is an arithmetic bit shift left. It's exceptionally similar to the equivalent in C and many other languages, and would be used in all the same places (it's logically equivalent to a multiply by a suitable power of 2, but it's usually advisable to use a shift when thinking about bits and a multiply when thinking about numbers).
Note that one key difference with many other languages (from Tcl 8.5 onwards) is that it does not “drop bits off the front”; the language implementation automatically uses wider number representations as necessary so that information is never lost. Bits are dropped by using a separate binary mask operation (e.g., & ((1 << $numBits) - 1)).
There are a number of uses for the << shift-left operator. Some that come to my mind are:
Bit-by-bit processing. Shift a number and observe the highest-order bit, etc. It comes in handy more often than you might think.
If you append a zero to a number in the decimal number system, you effectively multiply it by 10; shifting bits left effectively means multiplying by 2. This actually translates into a low-level assembly bit-shift instruction, which takes fewer compute cycles than a multiplication by 2. This is used for efficiency in the gaming industry. Shift twice (<< 2) to multiply by 4, and so on (see the sketch after this list).
I am sure there are many others.
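To make the shift-as-multiply point concrete, here is a minimal C sketch (the values are arbitrary):

#include <stdio.h>

int main(void) {
    int x = 5;
    /* Shifting left by k multiplies by 2^k: 5 << 3 == 5 * 8 */
    printf("%d %d\n", x << 3, x * 8);        /* prints: 40 40 */
    /* Masking keeps only the low bits, as mentioned in the first answer */
    printf("%d\n", 0xCF & ((1 << 4) - 1));   /* low 4 bits of 0xCF: 15 */
    return 0;
}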
The << operation is not much different from C's, for instance. It's used when you need to shift the bits of an integer value to the left. This can be occasionally useful when doing subtle number crunching like implementing a hash function or deserialising something from an input bytestream (but note that [binary scan] covers almost all of what's needed for this sort of thing). For more general info, refer to this Wikipedia article or something similar; this is not really Tcl-related.
The '<<' is a left bit shift. You must apply it to an integer. This arithmetic operator shifts the bits to the left.
For example, if you want to shift the number 1 twice to the left in the Tcl interpreter tclsh, type:
expr { 1 << 2 }
The command will return 4.
Pay special attention to the maximum integer the interpreter can hold on your platform.
I was recently asked a question, "How do you multiply without using the multiplication operator, without any sort of looping statements or explicit addition?", and realized I wasn't familiar with bitwise operations at all.
There is obviously Wikipedia, but I need something with more of an explanation geared toward a newbie. There's also this hacks guide, but I'm not at the level of grasping it yet.
I don't mind if you point out a chapter in a book, as I have access to a good library through Safari Books and other resources.
Knuth, Volume 2 - Seminumerical Algorithms
The crux of this comes down to a "half adder" and a "full adder". A half adder adds two bits of input to produce a single-bit result, and a single-bit carry. A full adder adds three bits of input (two normal inputs plus a carry from a lower bit) to produce a single-bit result and a single-bit carry.
In any case, the result is based on a truth table for addition. For a half adder, that is: 0+0=0, 0+1=1, 1+0=1, 1+1=0+carry.
So, the "normal" part of the result is the XOR of the inputs. The "carry' part of the result is the AND of the inputs. A full adder is pretty much the same, but left as the infamous "exercise for the reader".
Putting those together, you use a half-adder for the least significant bit, and full adders for the other bits to add N bits of input.
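In code, the same XOR/AND decomposition gives a compact loop; here is a minimal C sketch of addition using only bitwise operations (the function name is just illustrative):

unsigned bitwise_add(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned carry = a & b;   /* bits that generate a carry      */
        a = a ^ b;                /* sum of the bits, ignoring carry */
        b = carry << 1;           /* carries feed into the next bit  */
    }
    return a;                     /* e.g. bitwise_add(13, 29) == 42  */
}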
Once you can do addition, there are a couple of ways of doing multiplication. The easy (and slow) way to multiply NxM is to add N to itself M times. The faster (but somewhat more difficult to understand) way is to shift and add. For example, Nx5 = Nx4 + Nx1. You can produce NxB, where B = 2^L, by shifting N left by L bits.
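Putting shift-and-add into code, a sketch that multiplies by summing shifted copies of N (one copy for each set bit of M), reusing bitwise_add from the sketch above:

unsigned bitwise_add(unsigned a, unsigned b);  /* defined in the sketch above */

unsigned shift_add_multiply(unsigned n, unsigned m) {
    unsigned result = 0;
    while (m != 0) {
        if (m & 1)                             /* this power of 2 is in m */
            result = bitwise_add(result, n);
        n <<= 1;                               /* n times the next power  */
        m >>= 1;
    }
    return result;                             /* e.g. 6 * 7 == 42        */
}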
I'd like to write a program that lets users draw points, lines, and circles as though with a straightedge and compass. Then I want to be able to answer the question, "are these three points collinear?" To answer correctly, I need to avoid rounding error when calculating the points.
Is this possible? How can I represent the points in memory?
(I looked into some unusual numeric libraries, but I didn't find anything that claimed to offer both exact arithmetic and exact comparisons that are guaranteed to terminate.)
Yes.
I highly recommend Introduction to constructions, which is a good basic guide.
Basically you need to be able to compute with constructible numbers - numbers that are either rational, or of the form a + b sqrt(c) where a, b, c were previously created (see page 6 of that PDF). This could be done with an algebraic data type (e.g. data C = Rational Integer Integer | Root C C C in Haskell, where Root a b c = a + b sqrt(c)). However, I don't know how to perform tests with that representation.
Two possible approaches are:
Constructible numbers are a subset of algebraic numbers, so you can use algebraic numbers.
All algebraic numbers can be represented using polynomials of which they are roots. The operations are computable, so if you represent a number a with polynomial p and b with polynomial q (p(a) = q(b) = 0), then it is possible to find a polynomial r such that r(a+b) = 0. This is done in some CASes like Mathematica, for example. See also: Computational algebraic number theory - chapter 4
Use Tarski's test and represent numbers by formulas. It is slow (doubly exponential or so), but it works :) Example: to represent sqrt(2), use the formula x^2 - 2 && x > 0. You can write equations for lines there, check if points are collinear, etc. See A suite of logic programs, including Tarski's test
If you turn to computable numbers, then equality, collinearity, etc. become undecidable.
I think the only way this would be possible is if you used a symbolic representation, as opposed to trying to represent coordinate values directly -- so you would have to avoid trying to coerce values like sqrt(2) into some numerical format. You will be dealing with irrational numbers that are not finitely representable in binary, decimal, or any other positional notation.
To expand on Jim Lewis's answer slightly, if you want to operate on points that are constructible from the integers with exact arithmetic, you will need to be able to operate on representations of the form:
a + b sqrt(c)
where a, b, and c are either rational numbers, or representations in the form given above. Wikipedia has a pretty decent article on the subject of what points are constructible.
Answering the question of exact equality (as necessary to establish collinearity) with such representations is a rather tricky problem.
If you try to compare co-ordinates for your points, then you have a problem. Leaving aside collinearity for a moment, how about just working out whether two points are the same or not?
Supposing that one has given co-ordinates, and the other is a compass-and-straightedge construction starting from certain other co-ordinates, you want to determine with certainty whether they're the same point or not. Either way, the answer is a theorem of Euclidean geometry; it's not something you can just measure. You can prove they aren't the same by spotting some difference in their co-ordinates (for example, by computing decimal places of each until you encounter a difference). But in general, proving they are the same cannot be done by approximate methods. Compute as many decimal places as you like of some expansions of 1/sqrt(2) and sqrt(2)/2, and you can prove they're very close together, but you won't ever prove they're equal. That takes algebra (or geometry).
Similarly, to show that three points are collinear you will need theorem-proving software. Represent the points A, B, C by their constructions, and attempt to prove the theorem "A, B and C are collinear". This is very hard - your program will prove some theorems but not others. Much easier is to ask the user for a proof that they are collinear, and then verify (or refute) that proof, but that's probably not what you want.
In general, constructible points may have an arbitrarily complex symbolic form, so you must use a symbolic representation to work with them exactly. As Stephen Canon noted above, you often need numbers of the form a + b*sqrt(c), where a and b are rational and c is an integer. All numbers of this form form a closed set under arithmetic operations. I have written some C++ classes (see rational_radical1.h) to work with these numbers if that is all you need.
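To make the closed-set idea concrete, here is a minimal C sketch of exact multiplication and equality on numbers a + b*sqrt(c), with a and b rational and one fixed square-free integer c (all names are hypothetical; this is not the rational_radical1.h code):

#include <stdio.h>

typedef struct { long num, den; } Rat;          /* rational in lowest terms */
typedef struct { Rat a, b; long c; } Quad;      /* a + b*sqrt(c)            */

static long gcd(long x, long y) {
    while (y) { long t = x % y; x = y; y = t; }
    return x < 0 ? -x : x;
}

static Rat rat(long n, long d) {                /* normalise n/d */
    long g = gcd(n, d);
    if (d < 0) { n = -n; d = -d; }
    return (Rat){ n / g, d / g };
}

static Rat rat_add(Rat x, Rat y) { return rat(x.num * y.den + y.num * x.den, x.den * y.den); }
static Rat rat_mul(Rat x, Rat y) { return rat(x.num * y.num, x.den * y.den); }

static Quad quad_mul(Quad x, Quad y) {
    /* (a1 + b1*r)(a2 + b2*r) = (a1*a2 + b1*b2*c) + (a1*b2 + a2*b1)*r, r = sqrt(c) */
    Rat a = rat_add(rat_mul(x.a, y.a), rat_mul(rat_mul(x.b, y.b), rat(x.c, 1)));
    Rat b = rat_add(rat_mul(x.a, y.b), rat_mul(y.a, x.b));
    return (Quad){ a, b, x.c };
}

static int quad_eq(Quad x, Quad y) {            /* exact: compare components */
    return x.a.num == y.a.num && x.a.den == y.a.den
        && x.b.num == y.b.num && x.b.den == y.b.den;
}

int main(void) {
    Quad half_root2 = { rat(0, 1), rat(1, 2), 2 };  /* sqrt(2)/2 */
    Quad two        = { rat(2, 1), rat(0, 1), 2 };
    Quad root2      = { rat(0, 1), rat(1, 1), 2 };
    /* sqrt(2)/2 * 2 equals sqrt(2) exactly -- no rounding anywhere */
    printf("%d\n", quad_eq(quad_mul(half_root2, two), root2));  /* prints 1 */
    return 0;
}

Addition is component-wise and division works by rationalising the denominator; because the rationals are kept normalised, exact equality reduces to a plain component-wise comparison.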
It is also possible to construct numbers which are sums of any number of terms of rational multiples of radicals. When dealing with more than a single radicand, the numbers are no longer closed under multiplication and division, so you will need to store them as variable length rational coefficient arrays. The time complexity of operations will then be quadratic in the number of terms.
To go even further, you can construct the square root of any given number, so you could potentially have nested square roots. Here, the representations must be tree-like structures to deal with root hierarchy. While difficult to implement, there is nothing in principle preventing you from working with these representations. I'm not sure just what additional numbers can be constructed, but beyond a certain point, your symbolic representation will be expressive enough to handle very large classes of numbers.
Addendum
Found this Google Books link.
If the grid axes are integer-valued, then the answer is fairly straightforward: the points are either exactly collinear or they are not.
Typically, however, one works with real numbers (well, floating points) and then draws the rounded values on the screen, which does exist in integer space. In this case you have no choice but to pick a tolerance and use it to determine collinearity. Keep it small and the users will never know the difference.
You seem to be asking, in effect, "Can the normal mathematics (integer or floating point) used by computers be made to represent real numbers perfectly, with no rounding errors?" And, of course, the answer to that is "No." If you want theoretical correctness, then you will be stuck with the much harder problem of symbolic manipulation and coding up the equivalent of the inferences that are done in geometry. (In short, I'm agreeing with Steve Jessop, above.)
Some thoughts in the hope that they might help.
The sort of constructions you're talking about will require multiplication and division, which means that to preserve exactness you'll have to use rational numbers, which are generally easy to implement on top of a suitable sort of big integer (i.e., of unbounded magnitude). (Common Lisp has these built in, and there must be other languages that do too.)
Now, you need to represent square roots of arbitrary numbers, and these have to be mixed in.
Therefore, a number is one of: a rational number, a rational number multiplied by a square root of a rational number (or, alternately, just the square root of a rational), or a sum of numbers. In order to prove anything, you're going to have to get these numbers into some sort of canonical form, which for all I can figure offhand may be annoying and computationally expensive.
This of course means that the users will be restricted to rational points and cannot use arbitrary rotations, but that's probably not important.
I would recommend not trying to make it perfectly exact.
The first reason for this is what you are asking about here: the rounding error and all the other issues that come with floating-point calculations.
The second one is that you have to round your input anyway, as the mouse and screen work with integers. So, initially all user input will be integers, and your output will be integers.
Besides, from a usability point of view, it's easier if the user can click in the neighborhood of another point (on a line, for example) and have the interface consider that they are clicking on the point itself.
Although I know the basic concepts of binary representation, I have never really written any code that uses binary arithmetic and operations.
I want to know:
What are the basic concepts any programmer should know about binary numbers and arithmetic? And,
In what "practical" ways can binary operations be used in programming? I have seen some "cool" uses of shift operators and XOR etc., but are there some typical problems where using binary operations is an obvious choice?
Please give pointers to some good reference material.
If you are developing lower-level code, it is critical that you understand the binary representation of various types. You will find this particularly useful if you are developing embedded applications or if you are dealing with low-level transmission or storage of data.
That being said, I also believe that understanding how things work at a low level is useful even if you are working at much higher levels of abstraction. I have found, for example, that my ability to develop efficient code is improved by understanding how things are represented and manipulated at a low level. I have also found such understanding useful in working with debuggers.
Here is a short-list of binary representation topics for study:
numbering systems (binary, hex, octal, decimal, ...)
binary data organization (bits, nibbles, bytes, words, ...)
binary arithmetic
other binary operations (AND, OR, XOR, NOT, SHL, SHR, ROL, ROR, ...)
type representation (boolean, integer, float, struct, ...)
bit fields and packed data
Finally...here is a nice set of Bit Twiddling Hacks you might find useful.
Unless you're working with lower level stuff, or are trying to be smart, you never really get to play with binary stuff.
I've been through a computer science degree, and I've never used any of the binary arithmetic stuff we learned since my course ended.
Have a squizz here: http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/BinaryMath.html
You must understand bit masks.
Many languages and situations require the use of bit masks, for example flags in arguments or configs.
PHP has its error level which you control with bit masks:
error_reporting = E_ALL & ~E_NOTICE
Or simply checking if an int is odd or even:
isOdd = myInt & 1
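The same masking idiom carries over to any language with bitwise operators; here is a small C sketch (the flag names are made up) of combining, clearing, and testing flags:

#include <stdio.h>

enum {
    OPT_VERBOSE = 1 << 0,   /* each option occupies one bit */
    OPT_DRY_RUN = 1 << 1,
    OPT_FORCE   = 1 << 2
};

int main(void) {
    unsigned opts = OPT_VERBOSE | OPT_FORCE;       /* OR combines flags  */
    opts &= ~OPT_VERBOSE;                          /* AND-NOT clears one */
    printf("force:   %s\n", (opts & OPT_FORCE)   ? "on" : "off");
    printf("verbose: %s\n", (opts & OPT_VERBOSE) ? "on" : "off");
    printf("7 is %s\n", (7 & 1) ? "odd" : "even"); /* the parity trick above */
    return 0;
}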
I believe basic know-how of binary operations like AND, OR, XOR, and NOT would be handy, as most programming languages support these operations in the form of bitwise operators.
These operations are also used in image processing and other areas in graphics.
One important use of the XOR operation that I can think of is parity checking. Check this: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html
cheers
The following are things I regularly appreciate knowing in my quite conventional programming work:
Know the powers of 2 up to 2^16, and know that 2^32 is about 4.3 billion. Know them well enough so that if you see the number 2147204921 pop up somewhere your first thought is "hmm, that looks pretty close to 2^31" -- that's a very effective module for your bug radar.
Be able to do simple arithmetic; e.g. convert a hexadecimal digit to a nybble and back.
Have some vague idea of how floating-point numbers are represented in binary.
Understand standard conventions that you might encounter in other people's code related to bit twiddling (flags get ORed together to make composite values and AND checks if one's set, shift operators pack and unpack numbers into different bytes, XOR something twice and you get the same something back, that kind of thing.)
Further knowledge is mostly gravy unless you work with significant performance constraints or do other less common work.
At the absolute bare minimum you should be able to implement a bit mask solution. The tasks associated with bit mask operations should ensure that you at least understand binary at a superficial level.
From the top of my head, here are some examples of where I've used bitwise operators to do useful stuff.
A piece of JavaScript that needed one of those "check all" boxes was something along these lines:
var check = true;
for (var i = 0; i < elements.length; i++)
    check &= elements[i].checked;
checkAll.checked = check;
Calculate the corner points of a cube.
Vec3f m_Corners[8];

void corners(float a_Size) {
    for (size_t i = 0; i < 8; i++) {
        m_Corners[i] = a_Size * Vec3f(axis(i, Vec3f::X), axis(i, Vec3f::Y), axis(i, Vec3f::Z));
    }
}

float axis(size_t a_Corner, int a_Axis) const {
    return ((a_Corner >> a_Axis) & 1) == 1
        ? -.5f
        : +.5f;
}
Draw a Sierpinski triangle
for (int y = 0; y < 512; y++)
    for (int x = 0; x < 512; x++)
        if (x & y) pixels[x + y * w] = someColor;
        else       pixels[x + y * w] = someOtherColor;
Finding the next power of two
int next = 1 << (int)ceil(log(number) / log(2));
Checking if a number is a power of two
bool powerOfTwo = number && !(number & (number - 1));
The list can go on and on, but for me these are (except for Sierpinski) everyday examples. Once you understand and work with this stuff, though, you'll encounter it in more and more places, such as the corners of a cube.
You don't specifically mention (nor rule out!-) floating point binary numbers and arithmetic, so I won't miss the opportunity to flog one of my favorite articles ever (seriously: I sometimes wish I could make passing a strict quiz on it a pre-req of working as a programmer...;-).
The most important thing every programmer should know about binary numbers and arithmetic is this: every number in a computer is represented in some kind of binary encoding, and all arithmetic on a computer is binary arithmetic.
The consequences of this are many:
Floating point "bugs" when doing math with IEEE floating point binary numbers (Which is all numbers in javascript, and quite a few in JAVA, and C)
The upper and lower bounds of representable numbers for each type
The performance cost of multiplication/division/square root etc operations (for embedded systems
Precision loss, and accumulation errors
and more. This is stuff you need to know even if you never do a bitwise xor, or not, or whatever in your life. You'll still run into these things.
This really depends on the language you're using. Recent languages such as C# and Java abstract the binary representation from you -- this makes working with binary difficult and is not usually the best way to do things anyway in these languages.
Middle and low level languages like C and C++, however, require you to understand quite a bit about how the numbers are stored underneath -- especially regarding endianness.
Binary knowledge is also useful when implementing a cross-platform protocol of some sort... for example, on x86 machines, byte order is little-endian, but most network protocols want big-endian numbers. Therefore you have to realize you need to do the conversion for things to go smoothly. Many RFCs, such as this one -> https://www.rfc-editor.org/rfc/rfc4648 require binary knowledge to understand.
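As a small illustration of that byte-order conversion, here is a sketch using the POSIX htonl/ntohl functions:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl, ntohl (POSIX) */

int main(void) {
    uint32_t host = 0x12345678;
    uint32_t net  = htonl(host);               /* host order -> big-endian */
    unsigned char *p = (unsigned char *)&net;
    /* Network order is fixed, so this prints 12 34 56 78 on any machine */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
    printf("round trip: 0x%08x\n", ntohl(net)); /* 0x12345678 again */
    return 0;
}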
In short, it's completely dependent on what you're trying to do.
Billy3
It's handy to know the numbers 256 and 65536. It's handy to know how two's complement negative numbers work.
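For the two's complement part, a quick way to see what's going on:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t  s = -1;
    uint8_t u = (uint8_t)s;   /* same bit pattern, read as unsigned */
    printf("-1 as unsigned 8-bit: %u (0x%02X)\n", (unsigned)u, (unsigned)u); /* 255, 0xFF */
    printf("negating 5 via ~5 + 1: %d\n", (int)(int8_t)(~5 + 1));           /* -5        */
    return 0;
}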
Maybe you won't run into a lot of binary. I still use it pretty often, but maybe out of habit.
A good familiarity with bitwise operations should make you more facile with boolean algebra, and I think that's important for every programmer--you want to be able to quickly simplify complex logic expressions.
The absolute minimum is that "2" is not a binary digit, and that 10b is smaller than 3.
If you never do low-level programming (like C in embedded systems), never have to use a debugger, and never have to work with real numbers, then I suppose you could get by without knowing binary. But knowing binary will make you a stronger programmer, even if indirectly.
Once you venture into those areas you will need to know binary (and its "sister" base, hexadecimal). Without knowing it:
Embedded systems programming would be impossible.
Debugging would be hard because you wouldn't know what you were looking at in memory.
Numerical calculations with decimals would give you answers you don't understand.
I learned to twiddle bits back when C and asm were still used for "mainstream" programming. Although I no longer have much use for that knowledge, I recently used it to solve a real-world business problem.
We use a fax service that posts a message back to us when the fax has been sent or failed after x number of retries. The only way I had to identify the fax was a 15 character field. We wanted to consolidate this into one URL for all of our clients. Before we consolidated, all we had to fit in this field was the FaxID PK (32 bit int) column which we just sent as a string.
Now we had to identify the client (a 4 character code) and the database (32 bit int) underneath the client. I was able to do this using base 64 encoding. Without understanding the binary representation of numbers and characters, I probably would never have even thought of this solution.
Some useful information about the number system.
Binary | base 2
Hexadecimal | base 16
Decimal | base 10
Octal | base 8
These are the most common.
Converting them is fairly easy.
112 base 8 = (1 x 8^2) + (1 x 8^1) + (2 x 8^0)
74 base 10 = (7 x 10^1) + (4 x 10^0)
AND, OR, XOR, etc. are used in logic gates. Search for Boolean algebra; it is something well worth the time to know.
Say, for instance, you have 11001111 base 2 and you want to extract only the last four bits.
Truth table for AND:
P | Q | R
T | T | T
T | F | F
F | F | F
F | T | F
You can use 11001111 base 2 AND 00001111 base 2 = 00001111 base 2
There are plenty of resources on the internet.
Alright, so maybe I shouldn't have shrunk this question sooo much... I have seen the post on the most efficient way to find the first 10000 primes. I'm looking for all possible ways. The goal is to have a one stop shop for primality tests. Any and all tests people know for finding prime numbers are welcome.
And so:
What are all the different ways of finding primes?
Some prime tests only work with certain numbers, for instance, the Lucas–Lehmer test only works for Mersenne numbers.
Most prime tests used for big numbers can only tell you that a certain number is "probably prime" (or, if the number fails the test, it is definitely not prime). Usually you can continue the algorithm until you have a very high probability of a number being prime.
Have a look at this page and especially its "See Also" section.
The Miller-Rabin test is, I think, one of the best tests. In its standard form it gives you probable primes, though it has been shown that if you apply the test to a number beneath 3.4*10^14 and it passes the test for each of the parameters 2, 3, 5, 7, 11, 13 and 17, it is definitely prime.
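For concreteness, here is a hedged C sketch of that deterministic variant; it leans on the unsigned __int128 extension (GCC/Clang) to keep the modular multiplication from overflowing:

#include <stdio.h>
#include <stdint.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((unsigned __int128)a * b % m);
}

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = mulmod(r, b, m);
        b = mulmod(b, b, m);
        e >>= 1;
    }
    return r;
}

/* One Miller-Rabin round with witness a, where n - 1 = d * 2^s, d odd. */
static int passes_round(uint64_t a, uint64_t d, int s, uint64_t n) {
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return 1;
    for (int i = 1; i < s; i++) {
        x = mulmod(x, x, n);
        if (x == n - 1) return 1;
    }
    return 0;   /* a is a witness: n is definitely composite */
}

int is_prime(uint64_t n) {
    static const uint64_t bases[] = {2, 3, 5, 7, 11, 13, 17};
    if (n < 2) return 0;
    for (int i = 0; i < 7; i++) {
        if (n == bases[i]) return 1;
        if (n % bases[i] == 0) return 0;
    }
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    for (int i = 0; i < 7; i++)
        if (!passes_round(bases[i], d, s, n)) return 0;
    return 1;   /* deterministic for n < 3.4*10^14 with these bases */
}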
The AKS test was the first deterministic, proven, general, polynomial-time test. However, to the best of my knowledge, its best implementation turns out to be slower than other tests unless the input is ridiculously large.
For a given integer, the fastest primality check I know is:
Take a list of the numbers from 2 up to the square root of the integer.
Loop through the list, taking the remainder of the integer divided by the current number.
If the remainder is zero for any number in the list, then the integer is not prime.
If the remainder was non-zero for all numbers in the list, then the integer is prime.
It uses significantly less memory than The Sieve of Eratosthenes and is generally faster for individual numbers.
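A minimal C sketch of those steps (comparing d <= n / d avoids both floating-point sqrt and overflow):

int is_prime_trial(unsigned n) {
    if (n < 2) return 0;
    for (unsigned d = 2; d <= n / d; d++)   /* i.e. d <= sqrt(n)         */
        if (n % d == 0) return 0;           /* zero remainder: not prime */
    return 1;                               /* no divisor found: prime   */
}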
The Sieve of Eratosthenes is a decent algorithm:
Take the list of positive integers 2 to any given Ceiling.
Take the next item in the list (2 in the first iteration) and remove all multiples of it (beyond the first) from the list.
Repeat step two until you reach the given Ceiling.
Your list is now composed purely of primes.
There is a functional limit to this algorithm in that it exchanges speed for memory. When generating very large lists of primes the memory capacity needed skyrockets.
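Here is a hedged C sketch of those sieve steps for a modest ceiling:

#include <stdio.h>
#include <stdlib.h>

void print_primes_up_to(unsigned ceiling) {
    char *composite = calloc(ceiling + 1, 1);     /* 0 = not yet crossed off */
    if (composite == NULL) return;
    for (unsigned p = 2; p <= ceiling / p; p++)   /* p runs up to sqrt(ceiling) */
        if (!composite[p])
            for (unsigned m = p * p; m <= ceiling; m += p)
                composite[m] = 1;                 /* cross off the multiples */
    for (unsigned n = 2; n <= ceiling; n++)
        if (!composite[n]) printf("%u ", n);      /* whatever is left is prime */
    printf("\n");
    free(composite);
}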
#akdom's question to me:
Looping would work fine on my previous suggestion, and you don't need to do any calculations to determine if a number is even; in your loop, simply skip every even number, as shown below:
// Assuming theInteger is the number to be tested for primality.
// First check whether theInteger is divisible by 2. If not, run this loop.
// This loop skips all even numbers. (Note i * i <= theInteger rather than
// i < sqrt(theInteger), so perfect squares like 9 are not missed.)
for (int i = 3; i * i <= theInteger; i += 2)
{
    if (theInteger % i == 0)
    {
        // Getting here denotes that theInteger is not prime;
        // somehow indicate that some number, i, divides it, and break.
        break;
    }
}
A Rutgers grad student recently found a recurrence relation that generates primes. The difference of its successive numbers will generate either primes or 1's.
a(1) = 7
a(n) = a(n-1) + gcd(n,a(n-1)).
It makes a lot of crap that needs to be filtered out. Benoit Cloitre also has this recurrence that does a similar task:
b(1) = 1
b(n) = b(n-1) + lcm(n,b(n-1))
then the ratio of successive numbers, minus one [b(n)/b(n-1)-1] is prime. A full account of all this can be read at Recursivity.
For the sieve, you can do better by using a wheel instead of adding one each time; check out the Improved Incremental Prime Number Sieves. Here is an example of a wheel: take 2 and 5 as the primes whose multiples we want to skip. Their wheel is [2,4,2,2].
In your algorithm using the list from 2 to the root of the integer, you can improve performance by only testing odd numbers after 2. That is, your list only needs to contain 2 and all odd numbers from 3 to the square root of the integer. This cuts the number of times you loop in half without introducing any more complexity.
#theprise
If I were wanting to use an incrementing loop instead of an instantiated list (problems with memory for massive numbers...), what would be a good way to do that without building the list?
It doesn't seem like it would be cheaper to do a divisibility check for the given integer (X % 3) than just the check for the normal number (N % X).
If you're wanting to find a way of generating prime numbers, this has been covered in a previous question.