I have some sets of combinations and I want to compute the intersection of, say, two of them. Then I want to represent the intersected result as a ZDD.
I am thinking about using the CUDD package to do this.
An example:
All the 4-bit strings having hamming distance >= 2 with 1100 =
{ 0001, 0010, 0011, 0101, 0110, 0111, 1001, 1010, 1011 }
All the 4-bit strings having hamming distance >= 2 with 0000 =
{ 0011, 0101, 0110, 1001, 1010, 0111, 1011, 1101, 1110 }
Intersected elements of the set (what I want):
{ 0011, 0101, 0110, 1010, 1001 }
From what I understand, I first need to express those sets of combinations as Boolean functions (e.g. f = a b c d), build their corresponding BDDs, convert them to ZDDs, and then compute the intersection? Could someone experienced with the CUDD package please help?
Your reasoning is correct. You can first build BDDs corresponding to the two string sets, convert them to ZDDs, and then build the intersection (logical AND).
However, you can also first compute the intersection (logical AND) and then translate the result to a ZDD.
It is, however, not clear what you mean by "find out the intersection" - what do you want to do with it? Print out all the elements? Count the number of elements? Depending on what your aim is, the translation to ZDDs may be unnecessary (or the use of CUDD altogether).
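If you do want to see the whole pipeline in CUDD, here is a minimal C sketch under assumptions of my own (the MSB-first variable mapping, the helper distance_set, and printing with Cudd_zddPrintMinterm are choices I made, not anything the question prescribes). It builds the two characteristic BDDs for the 4-bit example, ANDs them, converts the result to a ZDD, and prints its elements:

#include <stdio.h>
#include "cudd.h"

#define NVARS 4

static int popcount4(unsigned v) { int c = 0; for (; v; v >>= 1) c += v & 1; return c; }

/* Characteristic BDD of { x : hamming(x, pattern) >= 2 }, built as a sum of minterms. */
static DdNode *distance_set(DdManager *dd, unsigned pattern)
{
    DdNode *f = Cudd_ReadLogicZero(dd);
    Cudd_Ref(f);
    for (unsigned x = 0; x < (1u << NVARS); x++) {
        if (popcount4(x ^ pattern) < 2) continue;
        DdNode *m = Cudd_ReadOne(dd);
        Cudd_Ref(m);
        for (int i = 0; i < NVARS; i++) {
            DdNode *v = Cudd_bddIthVar(dd, i);   /* variable i = bit i, MSB first */
            DdNode *lit = ((x >> (NVARS - 1 - i)) & 1) ? v : Cudd_Not(v);
            DdNode *t = Cudd_bddAnd(dd, m, lit);
            Cudd_Ref(t);
            Cudd_RecursiveDeref(dd, m);
            m = t;
        }
        DdNode *t = Cudd_bddOr(dd, f, m);
        Cudd_Ref(t);
        Cudd_RecursiveDeref(dd, f);
        Cudd_RecursiveDeref(dd, m);
        f = t;
    }
    return f;
}

int main(void)
{
    DdManager *dd = Cudd_Init(NVARS, NVARS, CUDD_UNIQUE_SLOTS, CUDD_CACHE_SLOTS, 0);
    Cudd_zddVarsFromBddVars(dd, 1);             /* one ZDD variable per BDD variable */

    DdNode *f = distance_set(dd, 0xC);          /* distance >= 2 from 1100 */
    DdNode *g = distance_set(dd, 0x0);          /* distance >= 2 from 0000 */

    DdNode *both = Cudd_bddAnd(dd, f, g);       /* the intersection, still as a BDD */
    Cudd_Ref(both);

    DdNode *z = Cudd_zddPortFromBdd(dd, both);  /* convert the intersection to a ZDD */
    Cudd_Ref(z);

    printf("Elements of the intersection:\n");
    Cudd_zddPrintMinterm(dd, z);                /* one line per element */

    Cudd_RecursiveDerefZdd(dd, z);
    Cudd_RecursiveDeref(dd, both);
    Cudd_RecursiveDeref(dd, g);
    Cudd_RecursiveDeref(dd, f);
    Cudd_Quit(dd);
    return 0;
}

Equivalently, as noted above, you could convert each BDD to a ZDD first and then intersect with Cudd_zddIntersect; the resulting set is the same.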
Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
[Figure 10.4, attached as two images]
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or you can run that code in a simulator, or mentally, to understand how it works.
The bottom line is that while LC-3 can do anything (it is Turing complete), it can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough — let's note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by providing NAND with the same operand for both inputs; AND by doing NOT after NAND; OR by using NOT on both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply, divide, take a modulus, or right-shift directly — each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction. These are very inefficient for larger operands; there are much more efficient algorithms, but they are substantially more complex, so they increase program complexity well beyond the repetitive-operation approach.
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII character range 0x30..0x39 to numeric values in the range 0..9. Here they do it with masking, which also works. The subtraction approach integrates better with error detection (checking whether the character is a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
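In C (rather than LC-3, purely for illustration) the two conversions look like this:

int digit_by_subtraction(char c) { return c - '0'; }   /* '7' (0x37) - '0' (0x30) = 7; easy to range-check first */
int digit_by_masking(char c)     { return c & 0x0F; }  /* 0x37 & 0x0F = 7; no validity check, but a single AND */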
The subroutine then obtains the 2nd-last digit (moving backwards through the user's input string), converting it to binary with the same masking approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is basically the index × 10, so this table is a ×10 table. The same approach is used for the third digit (the first in the string, or the last going backwards), except it uses the 2nd table, which is a ×100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit, it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, it first obtains 4; then, because there is another digit, 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on.
The advantage of this approach is that it can handle any number of digits (up to overflow). The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
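Written in C (just to pin down the idea; this is not the LC-3 code you would submit), the forward atoi-style loop is:

#include <stdio.h>

/* Forward conversion: for each digit, multiply the running value by 10, then add the digit. */
int my_atoi(const char *s)
{
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (*s - '0');
        s++;
    }
    return value;
}

int main(void)
{
    printf("%d\n", my_atoi("456"));   /* prints 456: ((4*10)+5)*10+6 */
    return 0;
}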
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping. Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions. Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2 (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
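In C, that 4-instruction sequence is just four additions (each line below corresponds to one LC-3 ADD):

/* n*10 computed as (n*4 + n)*2, using additions only */
int times10(int n)
{
    int n2 = n + n;     /* n*2 */
    int n4 = n2 + n2;   /* n*4 */
    int n5 = n4 + n;    /* n*5 */
    return n5 + n5;     /* n*10 */
}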
So, if you want to use this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (using any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10 — e.g. n × 1000 = n × 10 × 10 × 10.
Hi, after calling this code (Octave) I get an answer with 7 digits of precision, but I need only 6. It is worth mentioning that on a different data set the output is normal (with 6 digits):
output_precision(6);
Prev
Output:
Prev =
0.1855318
0.2181108
0.1796457
I know this is a little late but I wanted to add an answer for anyone with the same question in the future.
According to the function reference for output_precision(), the argument passed to the function (in this case, 6) specifies the minimum number of significant figures, which only guarantees that future numeric output won't have fewer than that number of significant figures.
From what I've seen, if you use output_precision(new_val) before displaying an array (e.g., Prev in the question), then Octave will round the element with the fewest digits before the decimal point to new_val significant figures, and all other elements will then be rounded to have the same number of digits after the decimal point as that initial rounded result. If you output a single value instead of an array, the output is just rounded to new_val significant figures. However, I don't know if this behavior is guaranteed.
Here's a short example of what I mean:
% array defined with values having 5 digits after decimal
F = [401.51670 313.70753 -88.55225 188.50067 280.21988 354.51821 54.51821 350];
output_precision(4)
F
output_precision(6)
F
Output:
F =
401.52 313.71 -88.55 188.50 280.22 354.52 54.52 350.00
F =
401.5167 313.7075 -88.5523 188.5007 280.2199 354.5182 54.5182 350.0000
It can be a little quirky if you try to round the values too much. When I used output_precision(3) and then output F, the numbers were actually rounded as if my system's default precision, 5, was still active. However, when I used elements with only 2 or 3 digits after the decimal to define another array, it displayed as expected with output_precision(3).
Check out Octave Forge if you ever need docs for Octave features. It's not perfect, but it's something. Hope this was helpful.
I have been trying to reverse a quite simple-looking function.
The function is presented in assembly:
(Argument is loaded into AX)
AND AX, 0xFFFE (round down to even number)
MUL AX (Multiply AX by AX ; the result is represented as DX:AX)
XOR AX,DX
The function can be described as: H(X) = F(X & 0xFFFE); F(X) = ((X * X) mod 2^16) xor ((X * X) div 2^16)
I calculated all of the values from 1 to 2^16 and plotted them in MATLAB in order to "see" some function.
Can anyone help me find an answer to this? (when given y what is the argument x).
It might be that for some values there is more than one answer, so narrowing it down is my goal.
Thanks,
Or.
It's a hash function.
You can't reverse a hash function, because the whole point of it is that it's a one-way function.
The multiply is clearly reversible, it's the xor that's not. By combining the low and high part of the multiplication you lose information.
As you can see in the plot there are some white spaces; since there are only 2^16 positions in that plot, that means there are different input values that hash to the same output value.
This is common in a hash function.
The only way to 'reverse' it is to build a lookup table that translates output values into possible input values. However, you will find that for some output values there is more than one possible input value.
An even number times an even number is always a multiple of 4.
So the low 2 bits of the low half of the product are always 0; ergo, the low 2 bits of the result are simply bits 16 and 17 of the full 32-bit multiplication.
Bits 2..15 of the result are bits 2..15 of the product xor'd with bits 18..31.
A quick simulation shows 24350 unique outputs, so on average about 1.35 even inputs (0.35 duplicates) map to each output value, which is not bad.
The maximum number of collisions is 6, but most numbers don't collide.
For all those numbers that don't collide you can uniquely lookup your input value in the lookup table (all this disregarding odd input values obviously).
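Since the output space is only 16 bits, the lookup-table idea is easy to try out. Here is a brute-force C sketch (the name hash16 and the command-line interface are mine, not part of the original code) that lists every even input producing a given output; running it once per output value is equivalent to precomputing the full table:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* H(X) = F(X & 0xFFFE), F(X) = ((X*X) mod 2^16) xor ((X*X) div 2^16) */
static uint16_t hash16(uint16_t x)
{
    uint32_t e = x & 0xFFFEu;                        /* AND AX, 0xFFFE */
    uint32_t p = e * e;                              /* MUL AX -> DX:AX */
    return (uint16_t)((p & 0xFFFFu) ^ (p >> 16));    /* XOR AX, DX */
}

int main(int argc, char **argv)
{
    uint16_t target = (uint16_t)strtoul(argc > 1 ? argv[1] : "0", NULL, 0);

    /* Scanning even inputs is enough: the low bit is masked away anyway. */
    int found = 0;
    for (uint32_t x = 0; x < 0x10000u; x += 2) {
        if (hash16((uint16_t)x) == target) {
            printf("H(0x%04X) = 0x%04X\n", (unsigned)x, (unsigned)target);
            found++;
        }
    }
    printf("%d even preimage(s) found\n", found);
    return 0;
}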
Right now I'm preparing for my AP Computer Science exam, and I need some help understanding how to convert between decimal, hexadecimal, and binary values by hand. The book that I'm using (Barron's) includes an example but does not explain it very well.
What are the formulas that one should use for conversion between these number types?
Are you happy that you understand number bases? If not, then you will need to read up on this or you'll just be blindly following some rules.
Plenty of books would spend a whole chapter or more on this...
Binary is base 2, Decimal is base 10, Hexadecimal is base 16.
So Binary uses digits 0 and 1, Decimal uses 0-9, Hexadecimal uses 0-9 and then we run out so we use A-F as well.
So the position of a decimal digit indicates units, tens, hundreds, thousands... these are the "powers of 10"
The position of a binary digit indicates units, 2s, 4s, 8s, 16s, 32s...the powers of 2
The position of hex digits indicates units, 16s, 256s...the powers of 16
For binary to decimal, add up each 1 multiplied by its 'power', so working from right to left:
1001 binary = 1*1 + 0*2 + 0*4 + 1*8 = 9 decimal
For binary to hex, you can either work out the total number in decimal and then convert to hex, or you can convert each 4-bit group into a single hex digit:
1101 binary = 13 decimal = D hex
1111 0001 binary = F1 hex
For hex to binary, reverse the previous example - it's not too bad to do in your head because you just need to work out which of 8,4,2,1 you need to add up to get the desired value.
For decimal to binary, it's more of a long-division-style problem: find the biggest power of 2 that does not exceed your input, set the corresponding binary bit to 1, and subtract that power of 2 from the original decimal number. Repeat until you have zero left.
E.g. for 87:
the powers of two are 1, 2, 4, 8, 16, 32, 64, ...; the highest one that fits into 87 is 64
64 is 2^6 so we set the relevant bit to 1 in our result: 1000000
87 - 64 = 23
the next highest power of 2 smaller than 23 is 16, so set the bit: 1010000
repeat for 4,2,1
final result 1010111 binary
i.e. 64+16+4+2+1 = 87 in decimal
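Here is the same subtract-the-largest-power-of-2 procedure written out in C, just to pin down the steps (on the exam you would of course do this by hand):

#include <stdio.h>

/* Decimal to binary by repeatedly subtracting the largest power of 2 that fits. */
void print_binary(unsigned n)
{
    unsigned p = 1;
    while (p <= n / 2) p *= 2;   /* largest power of 2 not exceeding n (for n >= 1) */

    while (p > 0) {
        if (n >= p) { putchar('1'); n -= p; }   /* this power of 2 is in the number */
        else          putchar('0');
        p /= 2;
    }
    putchar('\n');
}

int main(void)
{
    print_binary(87);   /* prints 1010111, matching the worked example above */
    return 0;
}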
For hex to decimal, it's like binary to decimal, only you multiply by 1,16,256... instead of 1,2,4,8...
For decimal to hex, it's like decimal to binary, only you are looking for powers of 16, not 2. This is the hardest one to do manually.
This is a very fundamental question whose detailed answer, at an entry level, could very well run to a couple of pages. Try to Google it :-)
I have a MySQL table with a field which is an unsigned tinyint (max value: 255).
Typical change in the requirements. We would need to create a new field because of a bunch of records in that table. But that would be very expensive for the application (lots of changes, a lot of work).
So we are thinking to combine the new value with the old value.
Basically in an unsigned tinyint (max value: 255), we need to store:
an integer that can be 1, 2, 3 or 4
an integer that can span from 1 to 30 (limits included)
The requirement is to get and set the 'combined' value with an algorithm as easy as possible.
How would you do that?
If possible I would like not to use any binary representation.
Thanks,
Dan
You could use multiples of 32 to represent 1-4 and add the 1-30 on top.
[1,1] would be 33
[1,2] would be 34
[1,30] would be 62
[2,1] would be 65
[2,30] would be 94
[4,1] would be 129
[4,30] would be 158
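A minimal sketch of the encode/decode in C (the helper names pack, unpack_a and unpack_b are mine; the same arithmetic can be done directly in SQL or in your application code):

#include <stdio.h>

/* a is 1..4, b is 1..30; the combined value fits in an unsigned tinyint (max 158). */
unsigned char pack(int a, int b)  { return (unsigned char)(a * 32 + b); }
int unpack_a(unsigned char v)     { return v / 32; }
int unpack_b(unsigned char v)     { return v % 32; }

int main(void)
{
    unsigned char v = pack(2, 30);
    printf("%d -> a=%d b=%d\n", (int)v, unpack_a(v), unpack_b(v));   /* 94 -> a=2 b=30 */
    return 0;
}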
This would work and be unambiguous, but in general I really think you should not resort to a hack like this. Add the column and change your code. What will you do with the next requirements change? In the end, your software will be a collection of hacks and can't be maintained anymore.
Let's call the two values x and y.
While storing the numbers, perform these steps:
1. Multiply x by 100.
2. Add y to the result of step 1.
3. Store the result of step 2 in the column.
Thus, if x were 3 and y were 15, I would get 315 as the result. You can decode that easily by first extracting the last two digits from the number (the remainder mod 100) to get y, and then dividing by 100 (integer division) to get x.
But because you have to fit the combined value within 255, you should choose an appropriate multiplier that is less than 100: any multiplier from 31 to 56 keeps the maximum combined value (4 × multiplier + 30) within 255 and still decodes unambiguously.