An explanation of this chart in a nutshell would help.
I am not able to understand how the illustration defines this data value, which is a number.
I read it as:
Can start with an optional "-"
Then either
0
OR
1-9 followed by zero or more 0-9
Then an optional "." followed by one or more 0-9
Then an optional "e" or "E" (scientific notation)
Optionally followed by either "+" or "-"
Then 1 or more 0-9
Generally, you follow the line from left to right. When it forks in two or more directions, you can take any one path. A green circle indicates a constant; a green box indicates a choice from 2 or more values. Often, a branch offers no choice, in which case you just keep going straight. Eventually, all divergent branches meet up again to proceed to the next choice.
The most confusing part is deciding what each intersection means. For that, you need to look carefully at the curve. You either peel away from the path, or you merge into the path; sometimes, the direction you are moving determines which is happening.
For example, starting at the far left, we immediately hit a branch which says we can start with a - or do nothing. After the choice is made, the two branches come together again, then immediately branch apart again: we can choose a 0 or a non-zero digit. If we choose the 0 branch, we do nothing until the next branch (which chooses between a . and nothing). If we choose the digit branch, we take a non-zero digit, then either branch up to the decision point following the 0, or pick any digit and decide again.
Basically, this is a drawing of a state machine corresponding to a regular expression like
-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?
More intuitively: an optional negative sign, followed by a single 0 or an integer with no leading 0, followed by an optional fractional part (allowing unlimited trailing 0s), followed by an optional integer exponent prefixed with either e or E and an optional sign.
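As a quick sanity check, here is a small Python sketch (the variable names are mine) that tries that regex against a few inputs:

import re

# "." is escaped and the optional sign is a character class, so both are taken literally
NUMBER = re.compile(r'-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?')

for s in ["0", "-12", "3.14", "1e10", "-0.5E-3", "012", "1.", ".5"]:
    print(s, bool(NUMBER.fullmatch(s)))  # fullmatch: the whole string must be a number
# The last three are rejected: leading zero, trailing dot, missing integer part.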
I'm reading a book by David Patterson and John Hennessy titled Computer Organization and Design. In the RISC-V instruction set, which the book is about, there are two instruction formats related to jumping - SB-type and UJ-type. The former uses a 12-bit constant to represent the offset (in bytes) to jump from the current instruction, and the latter uses a 20-bit constant to represent the same. Then the author says the following:
Since the program counter (PC) contains the address of the current instruction, we can branch (SB-type) within ±2^10 words of the current instruction, or jump (UJ) within ±2^18 words of the current instruction, if we use the PC as the register to be added to the address.
I don't understand how they get those 2^10 and 2^18. Since the constant the instructions use is in two's complement, it can represent values from -2^11 to 2^11 - 1 in the first case and from -2^19 to 2^19 - 1 in the second case. These constants represent bytes, but we want to know how many words we can jump over, so we need to divide the maximum number of bytes by four; the maximum we can get is 2^11 / 2^2 = 2^9 words in the first case and 2^17 in the second one.
Could someone please take a look at my calculations above and point out what I'm missing or what's wrong with my reasoning?
UPDATE:
Probably I didn't understand the author correctly. Might it be the case that they mean the lower bound (-2^10) and the upper bound (+2^10)? So they mean that we can never jump beyond 2^10 words from the current instruction?
Good Day everyone!
I am trying to solve this exercise for learning purposes. Can someone guide me in solving these 3 questions?
I tried the 1st question, the addition of two binary numbers separated by '+'. I attempted it by representing each number in unary, with the respective number of 1's or 0's, e.g. 5 = 1 1 1 1 1 or 0 0 0 0 0, adding them, and producing the result in the same representation. But I have no clue how to add two numbers that are actually written in binary and separated by '+'. Will the head of the Turing machine move from the left, reach the plus sign, and then move between the left and right sides of the '+'? But how will the addition itself be performed? As far as my limited knowledge goes, a TM cannot simply add binary numbers; we have to build some logic for representing them, as in the case of simple addition of two numbers. The same question applies to the comparison of two binary numbers.
Regards
The following program, inspired by the edX / MITx course Paradox and Infinity, shows how to perform binary addition with a Turing machine, where the numbers to be added are input to the Turing machine and are separated by a blank.
The Turing Machine
uses the second number as a counter
decrements the second number by one
increments the first number by one
till the second number becomes 0.
The following animation of the simulation of the Turing machine shows how 13 (binary 1101) and 5 (binary 101) are added to yield 18 (binary 10010).
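Not a Turing machine, but here is a short Python sketch of the same counting idea (the helper names are mine), to make the increment/decrement loop explicit:

def increment(bits):
    # add 1 to a binary string, e.g. '1101' -> '1110'
    bits = list(bits)
    i = len(bits) - 1
    while i >= 0 and bits[i] == '1':   # flip trailing 1s to 0
        bits[i] = '0'
        i -= 1
    if i < 0:
        return '1' + ''.join(bits)     # all 1s: the number grows by one digit
    bits[i] = '1'
    return ''.join(bits)

def decrement(bits):
    # subtract 1 from a non-zero binary string, e.g. '101' -> '100'
    bits = list(bits)
    i = len(bits) - 1
    while bits[i] == '0':              # flip trailing 0s to 1
        bits[i] = '1'
        i -= 1
    bits[i] = '0'
    return ''.join(bits).lstrip('0') or '0'

def add(a, b):
    while b != '0':                    # use the second number as a counter
        a = increment(a)
        b = decrement(b)
    return a

print(add('1101', '101'))              # 13 + 5 -> '10010' (18), as in the animation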
I'll start with problems 2 and 3 since they are actually easier than problem 1.
We'll assume we have valid input (non-empty binary strings on both sides with no leading zeroes), so we don't need to do any input validation. To check whether the numbers are equal, we can simply bounce back and forth across the = symbol and cross off one digit at a time. If we find a mismatch at any point, we reject. If we have a digit remaining on the left and can't find one on the right, we reject. If we run out of digits on the left and still have some on the right, we reject. Otherwise, we accept.
Q T Q' T' D
q0 0 q1 X right // read the next (or first) symbol
q0 1 q2 X right // of the first binary number, or
q0 = q7 = right // recognize no next is available
q1 0 q1 0 right // skip ahead to the = symbol while
q1 1 q1 1 right // using state to remember which
q1 = q3 = right // symbol we need to look for
q2 0 q2 0 right
q2 1 q2 1 right
q2 = q4 = right
q3 X q3 X right // skip any crossed-out symbols
q3 0 q5 X left // in the second binary number
q3 1,b rej 1 left // then, make sure the next
q4 X q4 X,b right // available digit exists and
q4 0,b rej 0,b left // matches the one remembered
q4 1 q5 X left // otherwise, reject
q5 X q5 X left // find the = while ignoring
q5 = q6 = left // any crossed-out symbols
q6 0 q6 0 left // find the last crossed-out
q6 1 q6 1 left // symbol in the first binary
q6 X q0 X right // number, then move right
// and start over
q7 X q7 X right // we ran out of symbols
q7 b acc b left // in the first binary number,
q7 0,1 rej 0,1 left // make sure we already ran out
// in the second as well
This TM could first sanitize input by ensuring both binary strings are non-empty and contain no leading zeroes (crossing off any it finds).
To do "greater than", you could easily do the following:
1. Check to see if the length of the first binary number (after removing leading zeroes) is greater than, equal to, or less than the length of the second binary number (after removing leading zeroes). If the first one is longer than the second, accept. If the first one is shorter than the second, reject. Otherwise, continue to step 2.
2. Check for equality as in the other problem, but accept if at any point you have a 1 in the first number and find a 0 in the second. This works because we know there are no leading zeroes, the numbers have the same number of digits, and we are checking digits in descending order of significance. Reject if you find the other mismatch or if you determine the numbers are equal.
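Here is a rough Python rendering of that two-step comparison over plain strings (not a TM; the helper names are mine), just to pin down the logic:

def strip_leading_zeroes(s):
    return s.lstrip('0') or '0'

def greater_than(a, b):
    # True if binary string a > binary string b
    a, b = strip_leading_zeroes(a), strip_leading_zeroes(b)
    if len(a) != len(b):               # step 1: compare lengths
        return len(a) > len(b)
    for x, y in zip(a, b):             # step 2: scan from the most significant digit
        if x != y:
            return x == '1'            # the first mismatch decides
    return False                       # equal numbers are not greater

print(greater_than('1010', '111'))     # True:  10 > 7
print(greater_than('0101', '101'))     # False: equal after stripping leading zeroes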
To add numbers, the problem says to increment and decrement, but I feel like just adding with carry isn't going to be significantly harder. An outline of the procedure is this:
Begin with carry = 0.
Go to least significant digit of first number. Go to state (dig=X, carry=0)
Go to least significant digit of second number. Go to state (sum=(X+Y+carry)%2, carry=(X+Y+carry)/2)
Go after the second number and write down the sum digit.
Go back and continue the process until one of the numbers runs out of digits.
Then, continue with whatever number still has digits, adding just those digits and the carry.
Finally, erase the original input and copy the sum backwards to the beginning of the tape.
An example of the distinct steps the tape might go through:
#1011+101#
#101X+101#
#101X+10X#
#101X+10X=#
#101X+10X=0#
#10XX+10X=0#
#10XX+1XX=0#
#10XX+1XX=00#
#1XXX+1XX=00#
#1XXX+XXX=00#
#1XXX+XXX=000#
#XXXX+XXX=000#
#XXXX+XXX=0000#
#XXXX+XXX=00001#
#XXXX+XXX=0000#
#1XXX+XXX=0000#
#1XXX+XXX=000#
#10XX+XXX=000#
#10XX+XXX=00#
#100X+XXX=00#
#100X+XXX=0#
#1000+XXX=0#
#1000+XXX=#
#10000XXX=#
#10000XXX#
#10000XX#
#10000X#
#10000#
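For reference, here is a rough Python sketch of the same add-with-carry idea over plain strings (not a TM; the function name is mine):

def add_binary(a, b):
    # schoolbook binary addition, least significant digit first
    i, j, carry = len(a) - 1, len(b) - 1, 0
    digits = []
    while i >= 0 or j >= 0 or carry:
        x = int(a[i]) if i >= 0 else 0     # treat a missing digit as 0
        y = int(b[j]) if j >= 0 else 0
        total = x + y + carry
        digits.append(str(total % 2))      # the sum digits are produced "backwards"
        carry = total // 2
        i, j = i - 1, j - 1
    return ''.join(reversed(digits))       # copy the sum back in the right order

print(add_binary('1011', '101'))           # '10000' (11 + 5 = 16), as in the trace above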
There are two ways to solve the addition problem. Assume your input tape is in the form ^a+b$, where ^ and $ are symbols telling you you've reached the front and back of the input.
You can increment b and decrement a by 1 each step until a is 0, at which point b will be your answer. This is assuming you're comfortable writing a TM that can increment and decrement.
You can implement a full adding TM, using carries as you would if you were adding binary numbers on paper.
For either option, you need code to find the least significant bit of both a and b. The problem specifies that the most significant bit is first, so you'll want to start at + for a and $ for b.
For example, let's say we want to increment 1011$. The algorithm we'll use is: find the least significant unmarked digit. If it's a 0, replace it with a 1. If it's a 1, replace it with a 0 and move left.
Start by finding $, moving the read head there. Move the read head to the left.
You see a 1. Write a 0. Move the read head to the left.
You see a 1. Write a 0. Move the read head to the left.
You see a 0. Write a 1.
Return the read head to $. The binary number is now 1100$.
To compare two numbers, you need to keep track of which values you've already looked at. This is done by extending the alphabet with "marked" characters. 0 could be marked as X, and 1 as Y, for example. X means "there's a 0 here, but I've seen it already."
So, for equality, we can start at ^ for a and = for b. (Assuming the input looks like ^a=b$.) The algorithm is to find the next unmarked bit of a and of b and compare them. The first time you get two different values, halt and reject. If one side runs out of unmarked bits (you hit = or $) while the other still has some, halt and reject; if both run out at the same time, halt and accept.
Let's look at input ^11=10$:
Read head starts at ^.
Move the head right until we find an unmarked bit.
Read a 1. Write Y. Tape reads ^Y1=10$. We're in a state that represents having read a 1.
Move the head right until we find =.
Move the head right until we find an unmarked bit.
Read a 1. This matches the bit we read before. Write a Y.
Move the head left until we find ^.
Go to step 2.
This time, we'll read a 1 in a and read the 0 in b. We'll halt and reject.
Hope this helps to get you started.
I was wondering how to construct a Turing Machine for
A<B<C<D...<N
with all numbers (A,B,C,D,...,N) being positive binary numbers.
These are a couple examples of how the machine should work:
1001 - Accepts because there is only one number
0<1 - Accepts
0010<1000<0001 - Doesn't accept because 1000!<0001
0100<1010<1010<1000 - Doesn't accept because 1010!<1010
I've tried methods that work for comparing only two numbers, but I can't seem to find a way to compare multiple numbers (it should work for any number of inputs).
Here is a high-level block diagram for solving this problem. You can implement these blocks by using block feature of JFLAP.
Block Descriptions
Done? : This block decides whether all comparisons are done; if yes, it accepts; otherwise, it goes on to compare the next two numbers.
A<B : This block is responsible for comparing two binary numbers: the one whose first digit the cursor is pointing at, and the next one. You can use '<' as the separator between A and B and the next number.
cleanup: During the comparison, you might have marked 0's and 1's as something else. This block cleans everything up and prepares for the next comparison.
Hopefully, this gives you an idea to solve the problem.
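If it helps to see the control flow outside of JFLAP, here is a plain Python sketch of the same loop over strings (compare adjacent pairs, then move on), with helper names of my own choosing:

def less_than(a, b):
    # True if binary string a < binary string b (leading zeroes allowed)
    a, b = a.lstrip('0') or '0', b.lstrip('0') or '0'
    if len(a) != len(b):
        return len(a) < len(b)
    return a < b                       # same length: lexicographic order is numeric order

def accepts(tape):
    numbers = tape.split('<')          # '<' is the separator, as in the question
    for left, right in zip(numbers, numbers[1:]):
        if not less_than(left, right): # one failed comparison rejects
            return False
    return True                        # the "Done?" block: all comparisons passed

print(accepts('1001'))                      # True: only one number
print(accepts('0<1'))                       # True
print(accepts('0010<1000<0001'))            # False: 1000 is not < 0001
print(accepts('0100<1010<1010<1000'))       # False: 1010 is not < 1010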
I need to compare binary strings but I consider them to be the same if one is a circular shift of the other. For example, I need the following to be the same:
10011, 11100, 01110, 11001
I think it'll be easier to store them in a canonical way and just check for equivalence. All strings to be compared have the same length (possibly more than 100 bits), so I'll keep the length unchanged and define the canonical form as the smallest binary number we can get from a circular shift. For example, 00111 is the canonical form of the binary strings shown above.
My question is, given a binary string, how can I circular shift it to get the smallest binary number without checking all possible shifts?
If other representation would be better, I'd be happy to receive suggestions.
I'll add that flipping (reversing) the string also doesn't matter, so the canonical form of 010011 is 001101 (if defined as described above), but if flipping is allowed, the canonical form should be 001011 (which I prefer). A possible solution is just to compute the canonicalization of the string and of the flipped string and choose the smaller.
If it helps, I'm working in MATLAB with binary vectors, but no need for code, an explanation of a way to solve that will be enough, thanks!
Find the longest sequence of 0s; if it is longer than 1, shift until the first of those 0s is the msb.
Oops, an edge case would be 00100: if you have two equal longest sequences of zeros, you'd want the only possible 1 as the lsb as well.
Find the longest sequence of 1s; if it is longer than 1, shift until the last of those 1s is the lsb.
If there are no consecutive 1s or 0s, shift until the lsb is 1.
That said, it's only five shifts, so the brute-force way you know will work could easily be less effort...
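For what it's worth, here is what the brute-force approach looks like in Python (the question mentions MATLAB, but the idea carries over directly): generate every rotation of the string and of its reversal, and take the smallest:

def canonical(bits):
    # smallest string over all rotations of bits and of its reversal
    def rotations(s):
        return [s[i:] + s[:i] for i in range(len(s))]
    # '0' < '1', so string order matches numeric order for equal-length strings
    return min(rotations(bits) + rotations(bits[::-1]))

print(canonical('10011'))              # '00111', as in the question
print(canonical('010011'))             # '001011' (reversal allowed, the preferred form above)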
When people talk about the use of "magic numbers" in computer programming, what do they mean?
Magic numbers are any numbers in your code whose meaning isn't immediately obvious to someone with very little knowledge of it.
For example, the following piece of code:
sz = sz + 729;
has a magic number in it and would be far better written as:
sz = sz + CAPACITY_INCREMENT;
Some extreme views state that you should never have any numbers in your code except -1, 0 and 1 but I prefer a somewhat less dogmatic view since I would instantly recognise 24, 1440, 86400, 3.1415, 2.71828 and 1.414 - it all depends on your knowledge.
However, even though I know there are 1440 minutes in a day, I would probably still use a MINS_PER_DAY identifier since it makes searching for them that much easier. Who's to say that the capacity increment mentioned above wouldn't also be 1440, and you end up changing the wrong value? This is especially true for the low numbers: the chance of dual use of 37197 is relatively low; the chance of using 5 for multiple things is pretty high.
Use of an identifier means that you wouldn't have to go through all your 700 source files and change 729 to 730 when the capacity increment changed. You could just change the one line:
#define CAPACITY_INCREMENT 729
to:
#define CAPACITY_INCREMENT 730
and recompile the lot.
Contrast this with magic constants which are the result of naive people thinking that just because they remove the actual numbers from their code, they can change:
x = x + 4;
to:
#define FOUR 4
x = x + FOUR;
That adds absolutely zero extra information to your code and is a total waste of time.
"magic numbers" are numbers that appear in statements like
if days == 365
Assuming you didn't know there were 365 days in a year, you'd find this statement meaningless. Thus, it's good practice to assign all "magic" numbers (numbers that have some kind of significance in your program) to a constant,
DAYS_IN_A_YEAR = 365
And from then on, compare to that instead. It's easier to read, and if the earth ever gets knocked out of alignment, and we gain an extra day... you can easily change it (other numbers might be more likely to change).
There's more than one meaning. The one given by most answers already (an arbitrary unnamed number) is a very common one, and the only thing I'll say about that is that some people go to the extreme of defining...
#define ZERO 0
#define ONE 1
If you do this, I will hunt you down and show no mercy.
Another kind of magic number, though, is used in file formats. It's just a value included as typically the first thing in the file which helps identify the file format, the version of the file format and/or the endian-ness of the particular file.
For example, you might have a magic number of 0x12345678. If you see that magic number, it's a fair guess you're seeing a file of the correct format. If you see, on the other hand, 0x78563412, it's a fair guess that you're seeing an endian-swapped version of the same file format.
The term "magic number" gets abused a bit, though, referring to almost anything that identifies a file format - including quite long ASCII strings in the header.
http://en.wikipedia.org/wiki/File_format#Magic_number
Wikipedia is your friend (Magic Number article)
Most of the answers so far have described a magic number as a constant that isn't self-describing. Being a little bit of an "old-school" programmer myself, I'd say that back in the day we described magic numbers as any constant that is assigned some special purpose that influences the behaviour of the code. For example, the number 999999 or MAX_INT or something else completely arbitrary.
The big problem with magic numbers is that their purpose can easily be forgotten, or the value used in another perfectly reasonable context.
As a crude and terribly contrived example:
int i = 0;
while (i != 99999)
{
    DoSomeCleverCalculationBasedOnTheValueOf(i);
    if (escapeConditionReached)
    {
        i = 99999;
    }
}
Whether a constant is used or named isn't really the issue. In the case of my awful example, the value influences behaviour, but what if we need to change the value of "i" while looping?
Clearly in the example above, you don't NEED a magic number to exit the loop. You could replace it with a break statement, and that is the real issue with magic numbers, that they are a lazy approach to coding, and without fail can always be replaced by something less prone to either failure, or to losing meaning over time.
Anything that doesn't have a readily apparent meaning to anyone but the application itself.
if (foo == 3) {
// do something
} else if (foo == 4) {
// delete all users
}
Magic numbers are special values of certain variables which cause the program to behave in a special manner.
For example, a communication library might take a Timeout parameter and it can define the magic number "-1" for indicating infinite timeout.
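A hypothetical sketch of that pattern in Python (the names and API here are made up, not from any real library):

INFINITE_TIMEOUT = -1                  # magic value meaning "wait forever"

def receive(connection, timeout=INFINITE_TIMEOUT):
    if timeout == INFINITE_TIMEOUT:    # the special value changes the behaviour entirely
        return connection.wait_forever()
    return connection.wait(timeout)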
The term magic number is usually used to describe some numeric constant in code. The number appears without any further description and thus its meaning is esoteric.
The use of magic numbers can be avoided by using named constants.
Using numbers in calculations other than 0 or 1 that aren't defined by some identifier or variable. Naming them not only makes the number easy to change in several places by changing it in one place, but also makes it clear to the reader what the number is for.
In simple and true words, a magic number is a three-digit number in which the sum of the squares of the first two digits equals the square of the third one.
Example: 202,
since 2*2 + 0*0 = 2*2.
Now, write a program in Java to accept an integer and print whether it is a magic number or not.
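The exercise asks for Java, but just to pin down the condition implied by the example above, here is a quick Python sketch:

def is_magic(n):
    # three-digit check matching the example: d1^2 + d2^2 == d3^2 (202: 4 + 0 == 4)
    if not 100 <= n <= 999:
        return False
    d1, d2, d3 = n // 100, (n // 10) % 10, n % 10
    return d1 * d1 + d2 * d2 == d3 * d3

print(is_magic(202))   # True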
It may seem a bit banal, but there IS at least one real magic number in every programming language.
0
I argue that it is THE magic wand to rule them all in virtually every programmer's quiver of magic wands.
FALSE is inevitably 0
TRUE is not(FALSE), but not necessarily 1! Could be -1 (0xFFFF)
NULL is inevitably 0 (the pointer)
And most compilers allow it unless their typechecking is utterly rabid.
0 is the base index of array elements, except in languages that are so antiquated that the base index is '1'. One can then conveniently code for(i = 0; i < 32; i++), and expect that 'i' will start at the base (0), and increment to, and stop at 32-1... the 32nd member of an array, or whatever.
0 is the end of many programming language strings. The "stop here" value.
0 is likewise built into the X86 instructions to 'move strings efficiently'. Saves many microseconds.
0 is often used by programmers to indicate that "nothing went wrong" in a routine's execution. It is the "not-an-exception" code value. One can use it to indicate the lack of thrown exceptions.
Zero is the answer most often given by programmers to the amount of work it would take to do something completely trivial, like change the color of the active cell to purple instead of bright pink. "Zero, man, just like zero!"
0 is the count of bugs in a program that we aspire to achieve. 0 exceptions unaccounted for, 0 loops unterminated, 0 recursion pathways that cannot actually be taken. 0 is the asymptote that we're trying to achieve in programming labor, girlfriend (or boyfriend) "issues", lousy restaurant experiences and general idiosyncrasies of one's car.
Yes, 0 is a magic number indeed. FAR more magic than any other value. Nothing ... ahem, comes close.