Change in the requirements - store 2 different values in a single database table field - mysql

I have a MySQL table with a field which is an unsigned tinyint (max value: 255).
A typical change in the requirements. We would need to create a new field, but because of the number of records already in that table, that would be very expensive for the application (lots of changes, a lot of work).
So we are thinking to combine the new value with the old value.
Basically in an unsigned tinyint (max value: 255), we need to store:
an integer that can be 1, 2, 3 or 4
an integer that can span from 1 to 30 (limits included)
The requirement is to get and set the 'combined' value with an algorithm as easy as possible.
How would you do that?
If possible I would like not to use any binary representation.
Thanks,
Dan

You could use multiples of 32 to represent 1-4 and add the 1-30 on top.
[1,1] would be 33
[1,2] would be 34
[1,30] would be 62
[2,1] would be 65
[2,30] would be 94
[4,1] would be 129
[4,30] would be 158
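A minimal sketch of the get/set arithmetic under this scheme, in plain integer math (the function names are illustrative):

int encode(int x, int y) { return x * 32 + y; }  // x in 1..4, y in 1..30
int decodeX(int v) { return v / 32; }            // integer division recovers x
int decodeY(int v) { return v % 32; }            // remainder recovers y, since y <= 30 < 32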
This would work and be unambiguous, but in general I really think you should not resort to a hack like this. Add the column and change your code. What will you do with the next requirements change? In the end, your software will be a collection of hacks that can no longer be maintained.

Let's call the two values x and y.
While storing the numbers perform these steps:
Multiply x by 100.
Add y to the result of step 1.
Store the result of step 2 in the column.
Thus, if x were 3 and y 15, you would get 315 as the result. You can decode that easily: the last two digits (315 % 100) give you y, and integer division by 100 gives you x.
But because you have to fit the combined number within 255, you should choose an appropriate multiplier that is less than 100; any multiplier greater than 30 works, since y never exceeds 30 (for example, 50 gives a maximum of 4 × 50 + 30 = 230).
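Decoding with modulus and integer division, sketched with a smaller multiplier that fits the 255 limit (illustrative variable names):

int m = 50;          // must exceed the largest y (30); max stored value is 4 * 50 + 30 = 230
int v = 3 * m + 15;  // encode x = 3, y = 15 -> 165
int y = v % m;       // 15
int x = v / m;       // 3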

Related

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
(Figure 10.4, two images attached.)
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or, you can run that code in a simulator, or trace it mentally, to understand how it works.
The bottom line is that while LC-3 can compute anything (it is Turing complete), it can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough; note that modern hardware does everything with only one logic gate, namely NAND, which is a binary operator (so NAND is directly available; NOT by providing NAND with the same operand for both inputs; AND by doing NOT after NAND; OR by using NOT on both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply, divide, take a modulus, or right-shift directly; each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction. These are super inefficient for larger operands, and the much more efficient algorithms are also substantially more complex, so they greatly increase program complexity beyond the repetitive-operation approach.
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ASCII character to numeric value using subtraction: moving the character values from the ASCII range 0x30..0x39 to numeric values in the range 0..9. Here they do it with masking, which also works. The subtraction approach integrates better with error detection (checking whether the character is a valid digit, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-to-last digit (moving backwards through the user's input string), converting it to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is the index × 10, so this table is a × 10 table. The same approach is used for the third digit (the first in the string, or the last going backwards), except it uses the 2nd table, which is a × 100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the next digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, it computes 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
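Expressed in C-like code rather than LC-3 assembly (a sketch only; on LC-3 each doubling would be an ADD of a register to itself, and the function names here are made up):

unsigned times10(unsigned n) {
    return ((n << 2) + n) << 1;  // (n*4 + n) * 2 == n * 10
}

unsigned my_atoi(const char *s, unsigned len) {
    unsigned value = 0;
    for (unsigned i = 0; i < len; i++)
        value = times10(value) + (s[i] & 0x0F);  // mask the ASCII digit down to 0..9
    return value;
}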
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you will have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repetitive addition. Or keep a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.

How to do bit insertion

How to insert bits at a certain position?
For example, if I have an integer 115 (i.e. b1110011).
And I want to insert the integer 3 (i.e. b11) at position 3 from the right. In this case, let's assume that I know the size of the binary value that I want to insert; for example, I know that 3 is 2 binary digits.
Therefore the resulting binary will be 475 (i.e. b111011011)
I have a solution in mind, but it seems complicated. It involves first shifting the bits 2 digits to the left to make room for the new bits, then setting the rightmost 2+position bits to zero. That leaves me with 111000000.
Then I OR the result with the rightmost 2+position bits of the original value, in which the top two positions have been replaced by the bits I want to insert (i.e. 11011).
Therefore in the end I will have 111000000 | 11011 == 111011011.
But I think my solution is too complicated. Thus, is there any better way to do this operation?
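For what it's worth, that shift/mask/OR sequence is essentially the standard way to do it, and it becomes manageable once wrapped in a helper. A generic sketch (the function name and parameters are illustrative):

#include <cstdint>

// Insert the `width` low bits of `bits` into `value`, starting at bit `pos` from the right.
uint32_t insertBits(uint32_t value, uint32_t bits, unsigned pos, unsigned width) {
    uint32_t low  = value & ((1u << pos) - 1);       // bits below the insertion point
    uint32_t high = value >> pos;                    // bits at and above it
    return (((high << width) | bits) << pos) | low;  // splice `bits` in between
}

// insertBits(115, 3, 3, 2) == 475, i.e. b1110011 -> b111011011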

MySQL round in query, wrong result

I have a question about a query that I'm running on a MySQL Server (v5.5.50-0+deb8u1).
SELECT 12 - (SELECT qty FROM Table WHERE id = 5213) AS Amount
so the Amount value is 12 - 8.5500000000000007 = 3.4499999999999993.
But if I run the query:
SELECT qty FROM Table WHERE id = 5213
it returns 8.55, which is the correct number stored in the record, so I was expecting the first query to return 3.45.
The "qty" column in the table "Table" is a DOUBLE.
How is that possible? How can I get the right result from the query?
thanks in advance
Well, that's just the way floating-point numbers are.
Floating-point numbers sometimes cause confusion because they are
approximate and not stored as exact values. A floating-point value as
written in an SQL statement may not be the same as the value
represented internally.
This statement holds true for many programming languages as well. Some numbers don't even have an exact representation. Here's something from the Python manual:
The problem is easier to understand at first in base 10. Consider the
fraction 1/3. You can approximate that as a base 10 fraction:
0.3 or, better,
0.33 or, better,
0.333 and so on. No matter how many digits you’re willing to write down, the result will never be exactly 1/3, but will be an
increasingly better approximation of 1/3.
In the same way, no matter how many base 2 digits you’re willing to
use, the decimal value 0.1 cannot be represented exactly as a base 2
fraction. In base 2, 1/10 is the infinitely repeating fraction 0.0001100110011001100110011...
So, in short, doing a float1 = float2 type of comparison is generally a bad idea, but everyone keeps forgetting it.
You can define the 'qty' column as DECIMAL(10,2) instead.
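The same artifact is easy to reproduce outside MySQL; a quick illustrative C++ check (the printed digits come from the nearest binary double):

#include <cstdio>

int main() {
    double qty = 8.55;                 // 8.55 has no exact binary representation
    std::printf("%.17g\n", qty);       // prints 8.5500000000000007
    std::printf("%.17g\n", 12 - qty);  // prints 3.4499999999999993
}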

How would I convert a number into binary bits, then truncate or enlarge their size, and then insert into a bit container?

As the title of the question says, I want to take a number (declared preferably as int or char or std::uint8_t), convert it into its binary representation, then truncate or pad it to a certain variable number of bits, and then insert it into a bit container (preferably std::vector<bool>, because I need a variable container size matching the variable number of bits). For example, I have int a = 2, b = 3. And let's say I have to write these as three bits and six bits respectively into the container. So I have to put 010 and 000011 into the bit container. So, how would I go from 2 to 010 or 3 to 000011 using normal STL methods? I tried every possible thing that came to my mind, but I got nothing. Please help. Thank you.
You can use a combination of 'shifting' (>>) and 'bit-wise and' (&).
First let's look at the bitwise &: for instance, if you have an int a = 7 and you do the &-operation on it with 13, you will get 5. Why?
Because & gives 1 at position i iff both operands have a 1 at position i. So we get:
  00...000111  // binary 7
& 00...001101  // binary 13
-------------
  00...000101  // binary 5
Next, by using the shift operation >> you can shift the binary representation of your ints. For instance 5 >> 1 is 2. Why?
Because each position gets displaced by 1 to the right. The rightmost bit "falls out". Hence we have:
00...00101 //binary for 5
shift by 1 to the right gives:
00...00010 // binary for 2
Another example: 13 (01101) shifted by 2 is 3 (00011). I hope you get the idea.
Hence, by repeatedly shifting and doing & with 1 (00..0001), you can read out the binary representation of a number.
Finally, you can use this 1 to set the corresponding position in your vector<bool>. Assuming you want the representation you show in your post, you will have to fill in your vector from the back. So you could, for instance, do something along these lines:
unsigned int f = 13;                     // the number we want to convert
std::size_t size = 6;                    // the container size you want to use
std::vector<bool> binRepr(size, false);  // requires #include <vector>
for (std::size_t currBit = 0; currBit < size; currBit++) {
    binRepr[size - 1 - currBit] = (f >> currBit) & 1;  // fill from the back
}
If the container is smaller than the binary representation of your int, the container will contain the truncated number. If it is larger, it will fill in with 0s.
I'm using an unsigned int since for an int you would still have to take care of negative numbers (for positive numbers it should work the same) and we would have to dive into the two's complement representation, which is not difficult, but requires a bit more bit-fiddling.
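Wrapped in a small helper, a quick check of both the padding and the truncation cases (illustrative only):

#include <cstddef>
#include <iostream>
#include <vector>

std::vector<bool> toBits(unsigned int f, std::size_t size) {
    std::vector<bool> binRepr(size, false);
    for (std::size_t currBit = 0; currBit < size; ++currBit)
        binRepr[size - 1 - currBit] = (f >> currBit) & 1;
    return binRepr;
}

int main() {
    for (bool b : toBits(13, 6)) std::cout << b;  // 001101 (padded to 6 bits)
    std::cout << '\n';
    for (bool b : toBits(13, 3)) std::cout << b;  // 101 (truncated to 3 bits)
    std::cout << '\n';
}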

smallest mysql type that accommodates single decimal

Database newbie here. I'm setting up a MySQL table. One of the fields will accept a value in increments of 0.5, e.g. 0.5, 1.0, 1.5, 2.0, ..., 200.5, etc.
I've tried int but it doesn't capture the decimals.
`value` int(10),
What would be the smallest type that can accommodate this value, considering it only ever has a single decimal place?
I was also considering that, because the fractional part is always 0.5 if present at all, I could store it in a separate boolean field, so I would have 2 fields instead. Is this a stupid or overcomplicated idea? I don't know if it really saves me any memory, and it might get slower now that I'm accessing 2 fields instead of 1:
`value` int(10),
`half` bool, //or something similar to boolean
What are your suggestions guys? Is the first option better, and what's the smallest data type in that case that would get me the 0.5?
You'll want to look at the DECIMAL(P,S) type.
For that, P is the precision and S is the scale. You can think of P as how many digits there are in total (both before and after the decimal point), and S as how many of the digits are after the decimal point.
So for instance, to store from -999.99 to 999.99, you'd need 5 digits of precision and a scale of 2, therefore you'd use DECIMAL(5, 2).
In your case, you'd need a DECIMAL(n, 1), where n is how many digits you need before the decimal point + 1 for the decimal.
You can use
DECIMAL(N,1)
Where N is the precision that represents the max number of significant digits that can be stored.
Apart from using DECIMAL, you could store ten times the value in an INT column (which I would think could be more efficient):
0.5 => 5
201.5 => 2015
etc.
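The conversion on the application side is then just a scale by 10 in each direction (a sketch; the helper names are made up):

#include <cmath>

int toStored(double value)    { return static_cast<int>(std::lround(value * 10)); }  // 201.5 -> 2015
double fromStored(int stored) { return stored / 10.0; }                              // 2015 -> 201.5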