GIF color table formula

In the GIF specification noted here:
http://www.w3.org/Graphics/GIF/spec-gif89a.txt
it gives the following formula for calculating the color table size:
3 x 2^(Size of Global Color Table+1)
Given that they use 'x' instead of '*', am I correct in assuming '^' does not mean XOR? If that is the case, what does '^' mean?
Thank you.

^ is commonly used for exponentiation, and 2 is a very common base for it.
The Size of Global Color Table field is a three-bit value, which, in combination with the +1, means that the color table holds between 2 and 256 colors. That indeed matches the GIF format.
(In C, you'd write this as 6 << Size_of_global_color_table.)
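To make that concrete, here is a small C sketch that tabulates the formula for all eight possible values of the field (the variable names are invented for the example):

#include <stdio.h>

int main(void)
{
    /* The three-bit "Size of Global Color Table" field can be 0..7. */
    for (int size_field = 0; size_field <= 7; size_field++) {
        int entries = 1 << (size_field + 1);  /* 2^(size+1) colors    */
        int bytes   = 3 * entries;            /* 3 bytes (R,G,B) each */
        /* Equivalent shortcut: 3 * 2^(n+1) == 6 << n */
        printf("field %d -> %3d colors, %4d bytes (6<<n = %4d)\n",
               size_field, entries, bytes, 6 << size_field);
    }
    return 0;
}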

^ means power. So it is 2 raised to the power of (size of global colour table + 1). A power of 2 can be computed cheaply with a left shift, so you don't need the pow() API. Just do the following: 2 << global_colour_table_size. For example, 2^3 is equal to 2 << 2. In general, 2^n is equal to 2 << (n-1).
You can download the decoder logic and details from the following link - http://www.tune2wizard.com/gif-decoder/

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem; rather, could someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
[Figure 10.4, attached as two images]
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or, you can run that code in a simulator, or trace it mentally, to understand how it works.
The bottom line is that LC-3, while it can compute anything (it is Turing complete), can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough. Note that modern hardware can do everything with only one kind of logic gate, namely NAND, which is a binary operator: NAND is directly available; NOT comes from giving NAND the same operand for both inputs; AND from doing NOT after NAND; OR from applying NOT to both inputs first and then NAND; and so on.
For example, LC-3 cannot multiply, divide, take a modulus, or right-shift directly: each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repeated addition, and division/modulus by repeated subtraction. These are very inefficient for larger operands. There are much more efficient algorithms, but they are also substantially more complex, so they increase program complexity well beyond the repetitive approach.
That subroutine goes backwards through the user input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII range 0x30..0x39 to numeric values in the range 0..9. Here they do it with masking, which also works. The subtraction approach integrates better with error detection (checking for an invalid digit character, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the second-to-last digit (moving backwards through the user's input string), converting it to binary with the same masking approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index is essentially the index × 10, so this is a ×10 table. The same approach is used for the third digit (the first in the string, or the last going backwards), except it uses the second table, which is a ×100 table.
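As a rough illustration (not the LC-3 code itself), the same table-based idea looks like this in C, assuming a fixed three-digit input; the array and function names are invented for the sketch:

#include <stdio.h>

/* The "tables" the question asks about: precomputed multiples. */
static const int Lookup10[10]  = {0,10,20,30,40,50,60,70,80,90};
static const int Lookup100[10] = {0,100,200,300,400,500,600,700,800,900};

int ascii3_to_binary(const char *s)   /* s points to exactly 3 digits */
{
    int value = s[2] & 0x0F;          /* ones digit: mask off the 0x30 */
    value += Lookup10[s[1] & 0x0F];   /* tens digit via the x10 table  */
    value += Lookup100[s[0] & 0x0F];  /* hundreds via the x100 table   */
    return value;
}

int main(void)
{
    printf("%d\n", ascii3_to_binary("456"));  /* prints 456 */
    return 0;
}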
The standard approach for string-to-binary conversion is called atoi (look it up), standing for ASCII to integer. It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, it computes 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456; and so on for longer strings.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
So, if you want to use this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (use any one of the above suggestions); see the sketch below.
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repeated addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.
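Putting the forward atoi approach together with the shift-and-add ×10 trick, a minimal C sketch might look like this (LC-3 has no shift instruction, but adding a register to itself doubles it, so each doubling below maps to one ADD):

#include <stdio.h>

int my_atoi(const char *s)
{
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        /* value *= 10, computed as (value*4 + value)*2, i.e. (n*5)*2.
           On LC-3 that is 4 ADDs: double, double, add saved copy, double. */
        value = ((value << 2) + value) << 1;
        value += *s & 0x0F;            /* add this digit's numeric value */
        s++;
    }
    return value;
}

int main(void)
{
    printf("%d\n", my_atoi("456"));    /* prints 456 */
    return 0;
}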

Output_precision(); Octave

Hi, after calling this code (Octave) I get an answer with 7 digits of precision, but I need only 6. It is worth mentioning that on a different data set the output is normal (with 6 digits):
output_precision(6);
Prev
output:
Prev =
0.1855318
0.2181108
0.1796457
I know this is a little late but I wanted to add an answer for anyone with the same question in the future.
According to the function reference for output_precision(), the argument passed to the function (in this case, 6) specifies the minimum number of significant figures, which only guarantees that future numeric output won't have fewer than that number of significant figures.
From what I've seen, if you use output_precision(new_val) before displaying an array (e.g., Prev in the question), then Octave will round the element with the fewest digits before the decimal point to new_val significant figures, and then all other elements will be rounded to have the same number of digits after the decimal point as that initial rounded result. If you use a statement to output a single value instead of an array, then the output is just rounded to new_val significant figures. However, I don't know if this behavior is guaranteed.
Here's a short example of what I mean:
% array defined with values having 5 digits after decimal
F = [401.51670 313.70753 -88.55225 188.50067 280.21988 354.51821 54.51821 350];
output_precision(4)
F
output_precision(6)
F
Output:
F =
401.52 313.71 -88.55 188.50 280.22 354.52 54.52 350.00
F =
401.5167 313.7075 -88.5523 188.5007 280.2199 354.5182 54.5182 350.0000
It can be a little quirky if you try to round the values too much. When I used output_precision(3) and then output F, the numbers were actually rounded as if my system's default precision, 5, was still active. However, when I used elements with only 2 or 3 digits after the decimal to define another array, it displayed as expected with output_precision(3).
Check out Octave Forge if you ever need docs for Octave features. It's not perfect, but it's something. Hope this was helpful.

what is the logic behind converting a number in base X to base 10

For example, 1234 (base 5), when converted into base 10, is computed as:
1x5^3 + 2x5^2 + 3x5^1 + 4
The question is: what is the logic behind doing this? Please explain. Thanks in advance.
Each digit is weighted according to its place value. A number in base 10 (that's a decimal number) is calculated in the same way: e.g. if 1234 is in base 10, it can be calculated as 1*10^3 + 2*10^2 + 3*10^1 + 4*10^0. Each position carries a weight of base^position because moving a digit one place to the left multiplies its contribution by the base. In short, every number in any base can be converted to decimal following the above pattern, even if the number is already in base 10.
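To make the pattern concrete, here is a small C sketch that applies it digit by digit (the function name is invented for the example):

#include <stdio.h>

/* Convert a digit string in the given base to its decimal value.
   Each pass multiplies the running value by the base, which is what
   gives the earlier digits their higher place values. */
int to_decimal(const char *digits, int base)
{
    int value = 0;
    for (; *digits; digits++)
        value = value * base + (*digits - '0');
    return value;
}

int main(void)
{
    printf("%d\n", to_decimal("1234", 5));  /* 1*125 + 2*25 + 3*5 + 4 = 194 */
    return 0;
}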

What does caffe do with the mean-binary file?

In the caffe input layer one can define a mean image that holds the mean values of all the images used. From the ImageNet example: "The model requires us to subtract the image mean from each image, so we have to compute the mean".
My question is: What is the implementation of this subtraction? Is it simply :
used_image = original_image - mean_image
or
used_image = mean_image - original_image
or
used_image = |original_image - mean_image|^2
If it is one of the first two, then how are negative pixels handled? Since the pictures are usually stored in uint8, the subtraction would simply wrap around, e.g.
200 - 255 = 201
Why do I need to know this? I ran tests, and I know that the second or the third example would work better.
It's the first one, a trivial normalization step. Using the second instead wouldn't really matter: the weights would invert.
There are no "negative pixels", per se: this is simply integer input to the matrix operations. You are welcome to interpret this as a visual alteration of some sort, but the arithmetic doesn't care.
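In practice the subtraction happens in floating point (Caffe's data blobs are float), so negative values are simply kept and uint8 wraparound never occurs. A minimal C sketch of the idea:

#include <stdio.h>
#include <stdint.h>

/* Subtract a mean image from an input image, pixel by pixel.
   Working in float keeps negative results intact: no uint8 wraparound. */
void subtract_mean(const uint8_t *image, const float *mean,
                   float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = (float)image[i] - mean[i];
}

int main(void)
{
    uint8_t image[3] = {200, 10, 128};
    float   mean[3]  = {255.0f, 5.0f, 128.0f};
    float   out[3];
    subtract_mean(image, mean, out, 3);
    printf("%.1f %.1f %.1f\n", out[0], out[1], out[2]);  /* -55.0 5.0 0.0 */
    return 0;
}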

distributive property for product of maxterms

I am unsure how to use the Distributive property on the following function:
F = B'D + A'D + BD
I understand that F = xy + x'z would become (xy + x')(xy + z), but I'm not sure how to do this with three terms of two variables each.
Also another small question:
I was wondering how to know what number a minterm is without having to consult (or memorise) the table of minterms.
For example how can I tell that xy'z' is m4?
When you're trying to use the distributive property there, what you're doing is converting minterms to maxterms. This is actually very related to your second question.
To tell that xy'z' is m4, treat the variables as binary digits where false is 0 and true is 1. xy'z' then becomes 100, binary for the decimal 4. That's really all a K-map/minterm table is doing when it assigns a number.
Now an important extension of this: the number of possible combinations is 2^(number of variables). If you have 3 variables, there are 2^3 = 8 combinations, so the possible min/maxterm numbers run from 0 to 7. Here's the cool part: any index that isn't a minterm of the function is a maxterm of it, and vice versa.
So, if you have variables x and y, and you have the expression xy', you can see that as 10, or m2. Because the numbers go from 0-3 with 2 variables, m2 implies M0, M1, and M3. Therefore, xy'=(x+y)(x+y')(x'+y').
In other words, the easiest way to apply the distributive property in either direction is to note which minterms or maxterms you're dealing with, and just switch to the other; see the sketch below.
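A tiny C sketch of that complement trick, assuming the function is given as a set of minterm numbers (the names are invented for the example):

#include <stdio.h>

/* Given the minterm numbers of a function of `vars` variables,
   print the maxterm numbers: every index that is not a minterm. */
void print_maxterms(const int *minterms, int count, int vars)
{
    int total = 1 << vars;                  /* 2^vars combinations */
    for (int i = 0; i < total; i++) {
        int is_minterm = 0;
        for (int j = 0; j < count; j++)
            if (minterms[j] == i) is_minterm = 1;
        if (!is_minterm) printf("M%d ", i);
    }
    printf("\n");
}

int main(void)
{
    int m[] = {2};               /* F = xy' over 2 variables, i.e. m2 */
    print_maxterms(m, 1, 2);     /* prints: M0 M1 M3 */
    return 0;
}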