I'm having some problems with the MySQL CONCAT() function. I ran the following:
SELECT CONCAT(now(),now())
And this is what I got back:
323031342d30352d30352031343a33393a3535323031342d30352d30352031343a33393a3535
Not sure exactly what is going on. Has anyone seen this before? This happens when concatenating anything (columns, strings, MySQL functions like NOW()).
My server version is 5.1.63 - SUSE MySQL RPM and the client version is libmysql - mysqlnd 5.0.7-dev - 091210 - $Revision: 304625 $
Looks like a hexadecimal representation of printable ASCII characters:
hex:  32 30 31 34 2d 30 35 2d 30 35 20 31 34 3a 33 39 3a 35 35
char:  2  0  1  4  -  0  5  -  0  5     1  4  :  3  9  :  5  5
I can't explain why the client is displaying character data as hexadecimal; I'd investigate the possibility that there's a mismatch in character set encoding.
Possibly the MySQL client library is using latin1 while the application is using a different encoding, but we'd expect that to affect all character expressions, not just CONCAT() expressions.
Actually, it's more likely the client is displaying hexadecimal for binary strings, and the value returned from CONCAT() is being reported as a binary string.
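One quick way to test that theory (just a check, not a definitive diagnosis) is to ask the server what character set it attributes to the expression; CHARSET() should report 'binary' if a binary string is what the client is receiving:
SELECT CHARSET(CONCAT(NOW(), NOW()));
-- 'binary' here would explain the hex display; a nonbinary charset would point back at an encoding mismatch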
Here is an excerpt from MySQL 5.1 documentation for CONCAT() function:
Returns the string that results from concatenating the arguments. May
have one or more arguments. If all arguments are nonbinary strings,
the result is a nonbinary string. If the arguments include any binary
strings, the result is a binary string. A numeric argument is
converted to its equivalent binary string form; if you want to avoid
that, you can use an explicit type cast, as in this example:
SELECT CONCAT(CAST(int_col AS CHAR), char_col);
So the workaround might be to convert the value of NOW() to a character string, either with an explicit CAST or with the DATE_FORMAT() function, e.g.
CONCAT(DATE_FORMAT(NOW(),'%Y-%m-%d %H:%i:%s'),DATE_FORMAT(NOW(),'%Y-%m-%d %H:%i:%s'))
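For completeness, here is a sketch of the CAST-based variant from the documentation excerpt, applied to NOW() (the column alias is just for illustration):
SELECT CONCAT(CAST(NOW() AS CHAR), CAST(NOW() AS CHAR)) AS joined;
-- the result should now be a nonbinary string, so the client should display it as readable text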
I am trying to store the number 0.0015 in the database. I have tried FLOAT and INTEGER, but I am getting zero, not the exact figure I entered. Is there a datatype which can store such a value?
Normally you'd use DECIMAL (aka NUMERIC) with a specified precision and scale; here are the docs for it. FLOAT should also work, but you need to be aware of floating-point arithmetic quirks, so DECIMAL is preferred.
Here's a dbfiddle example
If you see your data as 0, then it's either an issue with how you're inserting it (maybe importing from a file) or your client is rounding the display down to 0 and needs to be tweaked. As you can see from the dbfiddle above, it works perfectly fine.
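A minimal sketch of the DECIMAL approach (the table and column names here are made up for illustration):
CREATE TABLE rates (value DECIMAL(10,4));  -- 10 total digits, 4 of them after the decimal point
INSERT INTO rates (value) VALUES (0.0015);
SELECT value FROM rates;                   -- returns 0.0015 exactly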
This number (0.0015) is not exactly representable in binary floating point. See the following example in Python:
Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
>>> x = 0.0015
>>> (x + 1) - 1
0.0015000000000000568
This means that storing it in MySQL as a floating-point type (or handling it in any other language that converts the number to binary floating point) will show representation errors. You can use numeric types that don't convert the value to binary floating point, like DECIMAL or NUMERIC.
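The same experiment can be run directly in MySQL; the E0 suffix forces DOUBLE (binary floating-point) arithmetic, while the DECIMAL cast keeps exact decimal arithmetic (the exact digits shown in the DOUBLE case may vary by version, but some representation error is expected):
SELECT (0.0015E0 + 1) - 1;                       -- DOUBLE: shows a tiny representation error
SELECT (CAST(0.0015 AS DECIMAL(10,4)) + 1) - 1;  -- DECIMAL: 0.0015 exactly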
This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulo operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal integers to binary?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when the program runs, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
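Sticking with SQL since that's the flavour of the rest of this thread, here's a small illustration of that display-side step: the stored value is the same bits either way, and BIN()/CONV() merely render it as different character strings:
SELECT BIN(14);              -- '1110'
SELECT CONV(14, 10, 2);      -- '1110' (general base conversion)
SELECT CONV('1110', 2, 10);  -- '14', i.e. back to a decimal rendering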
Indeed, like you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary integers, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain the binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary value of 1 by the binary value of ten and adding the binary value of 4.
Decimal 149 would be the binary value of 1 multiplied by the binary value of one hundred, plus the binary value of 4 multiplied by the binary value of ten, plus the binary value of 9.
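A quick way to see that build-up in action, again in SQL for consistency with the rest of the thread (the multiplications and additions happen in binary; BIN() only renders the result as text):
SELECT BIN(1 * 10 + 4);            -- decimal 14  -> '1110'
SELECT BIN(1 * 100 + 4 * 10 + 9);  -- decimal 149 -> '10010101'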
Decimals are misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal number; in the source code it is just the two characters 1 and 4 written one after the other.
We know that characters are just representations of some binary value:
1 for 00110001
4 for 00110100
The full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to get
14 = 00001110
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set
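A tiny sketch of that character-to-value step, once more in SQL (ASCII() returns the character code; subtracting the code of '0' turns a digit character into its numeric value):
SELECT ASCII('1'), ASCII('4');                                           -- 49, 52 (00110001, 00110100)
SELECT (ASCII('1') - ASCII('0')) * 10 + (ASCII('4') - ASCII('0'));       -- 14
SELECT BIN((ASCII('1') - ASCII('0')) * 10 + (ASCII('4') - ASCII('0')));  -- '1110'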
If I use a file comparison tool like fc in Windows, I can choose between ASCII and binary comparison.
What is the actual difference between these two comparisons? If I compare two ASCII files, don't I want the binary data of the files to be identical?
WARNING: this is a 5-year-old loose remembrance of knowledge from uni.
Binary comparison means you compare the bytes exactly, while ASCII comparison treats the data as text. To put it in a simple case: the character 'A' is a representation of 01000001, but that same 8-bit value is also the integer 65, so A = 65 at the binary level. So the string "A+A" and the byte values 65 43 65 (43 is the ASCII code for '+') are equivalent in binary, but as ASCII text "A+A" and "65 43 65" are not the same. This is a very loose explanation and I'm sure I missed a lot, but that should sum it up loosely.
For a text file you want ASCII comparison because you write in ASCII characters. For, say, a program state saved to a file, you want binary to get a direct byte-for-byte comparison.
I was just asking myself if this is standard, because I was setting a column to type CHAR(40) to store a SHA1 value. Is this true, or do I have to pay more attention when I do this in case I work with a MySQL database other than my own?
Thanks
EDIT
The best possible answer is that SHA1 just works that way. I thought it returned 160 bits and some other config setting converted it into a 40-character string, but it always returns that 40-digit hex string. See the docs.
SHA1 returns 40 characters, yes: 160 bits rendered as 40 hex digits, at 4 bits per digit.
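A quick check (the input string is arbitrary; LENGTH() confirms the 40-character width you would size a CHAR(40) column for):
SELECT SHA1('hello');          -- a 40-character hex string
SELECT LENGTH(SHA1('hello'));  -- 40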
I have this fragment of MSSQL -
CONVERT(INT, HASHBYTES('MD5', {some_field}))
...and I'd really like a MySQL equivalent. I'm pretty sure the HASHBYTES('MD5', ...) bit is the same as MySQL's MD5(...) - it's the CONVERT(INT, ...) bit that's really puzzling me.
Thanks.
From the MySQL manual entry for the MD5() function:
The value is returned as a string of 32 hex digits, or NULL if the argument was NULL.
The MSSQL CONVERT() function which you quote above converts its varbinary argument to a signed 32-bit integer by truncating to the 4 lowest-order bytes. This is a bit of a nuisance because MySQL arithmetic works to 64-bit precision.
We must therefore take the rightmost 8 digits of MySQL's hex representation (representing the 4 lowest-order bytes) and convert to decimal using MySQL's CONV() function, then sign-extend the result:
CONV(RIGHT(MD5('foo'),8), 16, 10) ^ 0x80000000 - 0x80000000
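A hedged usage sketch, reusing the {some_field} placeholder from the question (some_table is likewise a placeholder); note that MySQL evaluates ^ before the subtraction, so the XOR flips the sign bit first and the subtraction then sign-extends:
SELECT CONV(RIGHT(MD5(some_field), 8), 16, 10) ^ 0x80000000 - 0x80000000 AS md5_as_int
FROM some_table;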