MySQL AES_DECRYPT of mediumblob column with charset latin1

I have a database with three tables, each containing a column encrypted via AES_ENCRYPT.
One table has a VARCHAR(255) column with charset latin1. It contains some encrypted text.
Applying AES_DECRYPT to an UNHEX of the column value shows the real value properly.
Another table has a MEDIUMBLOB column with charset utf8. It contains some large encrypted text.
Applying just AES_DECRYPT and then casting the result to CHAR displays the original value properly too.
But the third table has a column which is also a MEDIUMBLOB, but with charset latin1. It contains large stringified JSON data.
Now when I apply just AES_DECRYPT('<column_name>', '') it outputs NULL. Applying UNHEX, or casting the column to other types before the decryption, did not change anything.
For the first two tables, just applying AES_DECRYPT without the conversion also output something.
But the third table does not output anything; it just shows NULL.
Any idea what is happening here? It would be very helpful if someone with DB expertise could point me in the right direction: why is the output NULL, and what needs to be done in the query to get the real output?
EDIT:
The DB columns are populated by a microservice which uses the Java Hibernate ColumnTransformer for the write and read:
write = AES_ENCRYPT(, ), read = AES_DECRYPT(, )
The values posted via this service are also returned properly in the GET response. But the same query does not output the third column's value and prints NULL as described.
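A note for anyone diagnosing something similar: AES_DECRYPT returns NULL whenever the key or the ciphertext bytes don't line up, so a first step is to check what the column actually stores. A minimal sketch (secret_col, third_table, and 'the_key' are placeholders, not the real names from the question):
SELECT HEX(secret_col) FROM third_table LIMIT 1; -- inspect the stored bytes: raw ciphertext or hex text?
SELECT AES_DECRYPT(UNHEX(secret_col), 'the_key') FROM third_table LIMIT 1; -- if the blob holds hex-encoded ciphertext
SELECT CAST(AES_DECRYPT(secret_col, 'the_key') AS CHAR) FROM third_table LIMIT 1; -- if it holds raw ciphertext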

With a table `encryption` that has `json`, `encrypted`, and `decrypted` columns, and these commands:
UPDATE encryption SET `encrypted` = AES_ENCRYPT(CONVERT(`json` USING latin1), "key");
UPDATE encryption SET `decrypted` = AES_DECRYPT(encrypted, "key");
This works well for me.
However, BLOBs don't have character sets...
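For the read side, the inverse would be a sketch like this (same hypothetical table and key as above):
SELECT CONVERT(AES_DECRYPT(`encrypted`, "key") USING latin1) AS plain FROM encryption;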

Related

How can I insert arbitrary binary data into a VARCHAR column?

I have a MySQL table with a VARCHAR(100) column, using the utf8_general_ci collation.
I can see rows where this column contains arbitrary byte sequences (i.e. data that contains invalid UTF8 character sequences), but I can't figure out how to write an UPDATE or INSERT statement that allows this type of data to be entered.
For example, I've tried the following:
UPDATE DataTable SET Data = CAST(BINARY(X'16d7a4fca7442dda3ad93c9a726597e4') AS CHAR(100)) WHERE Id = 1;
But I get the error:
Incorrect string value: '\xFC\xA7D-\xDA:...' for column 'Data' at row 1
How can I write an INSERT or UPDATE statement that bypasses the destination column's collation, allowing me to insert arbitrary byte sequences?
Have you considered using one of the BLOB data types instead of VARCHAR? I believe that would take a lot of the pain away from your use case.
EDIT: Alternatively, there are the HEX and UNHEX functions, which MySQL supports. HEX takes either a string or a numeric argument and returns the hexadecimal representation of your argument as a string. UNHEX does the inverse, taking a hexadecimal string and returning a binary string.
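As a sketch, you could store the hex text (which is plain ASCII and therefore passes UTF-8 validation) and decode it on the way out:
UPDATE DataTable SET Data = HEX(X'16d7a4fca7442dda3ad93c9a726597e4') WHERE Id = 1; -- stores the hex digits as ordinary text
SELECT UNHEX(Data) FROM DataTable WHERE Id = 1; -- recovers the original bytes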
The short answer is that it shouldn't be possible to insert values with invalid UTF8 characters into VARCHAR column declared to use UTF8 characterset.
That's the design goal of MySQL, to disallow invalid values. When there's an attempt to do that, MySQL will return either an error or a warning, or (more leniently?) silently truncate the supplied value at the first invalid character encountered.
The more usual variety of characterset issues are with MySQL performing a characterset conversion when a characterset conversion isn't required.
But the issue you are reporting is that invalid characters were inserted into a UTF8 column. It's as if a latin1 (ISO-8859) encoding was supplied, and a characterset conversion was required, but was not performed.
As far as working around that... I believe it was possible in earlier versions of MySQL. I believe it was possible to cast a value to BINARY, and then wrap that in CONVERT( ... USING UTF8), and MySQL wouldn't perform a validation of the characterset. I don't know if that's still possible with the current MySQL Connectors.
If it is possible, then that's (IMO) a bug in the Connector.
The only way I can think of getting around that characterset check/validation would be to get the MySQL server to trust the client, and determine that no check of the characterset is required. (That would also mean the MySQL server wouldn't be doing a characterset conversion: the client would be lying to the server, telling the server that it's supplying valid UTF8 characters.)
Basically, the client would be telling the server "Hey server, I'm going to be sending UTF8 character encodings".
And the server says "Okay. I'll not do any characterset conversion then, since we match. And I'll just trust that what you send is valid UTF8".
And then the client mischievously chuckles to itself, "Heh, heh, I lied. I'm actually sending character encodings that aren't valid UTF8".
And I think it's much more likely to be able to achieve such mischief using prepared statements with the old school MySQL C API (mysql_stmt_prepare, mysql_stmt_execute), supplying invalid UTF8 encodings as values for string bind parameters. (The onus is really on the client to supply valid values for bind parameters.)
You could base64-encode your value beforehand so you can generate valid SQL with it:
UPDATE DataTable SET Data = from_base64('mybase64-encoded-representation-of-my-value') WHERE Id = 1;
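If you want MySQL itself (5.6+) to produce that encoding, TO_BASE64 is the counterpart of FROM_BASE64; for example, with the byte string from the question:
SELECT TO_BASE64(X'16d7a4fca7442dda3ad93c9a726597e4');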

MySQL does not identify character '?' on select

I got a table in MySQL with the following columns:
id name email address borningDate
I have a form in a HTML page that submits this data to a servlet, responsible for saving it at the database. Due to charset issues (already fixed), I saved a row like this, when trying to store letters with accents:
19 ? ? ? 2015-03-01
and now I want to delete this row.
Yeah, doing this:
DELETE FROM table WHERE id=19;
works fine. My didactic question is: why, if I try something like this:
DELETE FROM table WHERE name='?';
it returns 0 rows affected, as if it can't see ? as a valid character?
Try doing
SELECT id, HEX(name), HEX(email), HEX(address), borningDate FROM table
This will tell you what's actually in the database. It probably isn't actually ASCII question marks. The question marks are probably substitution characters applied when MySQL tries to convert the column's character set to the connection's character set.
To manage this more specifically, do SHOW CREATE TABLE table and look for the character set being used for the text columns. This probably shows up at the end of the table definition as something like DEFAULT CHARSET utf8 or some such thing. But it might be specified in the column definition.
Once you know the character set, issue the command SET NAMES charset, for example, SET NAMES utf8. Then reissue your commands and see if you get better results than the ? substitution character. That assumes, of course, that the client program you are using can handle the character set mentioned.
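Once HEX() has revealed the actual stored bytes, you can also match on them directly. For example, if name really did contain three literal ASCII question marks (0x3F each), a sketch like this would delete the row without relying on the connection character set (backticks because `table` here stands in for your real table name):
SET NAMES utf8;
DELETE FROM `table` WHERE HEX(name) = '3F3F3F';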

"Data too long for column" - why?

I've written a MySQL script to create a database for hypothetical hospital records and populate it with data. One of the tables, Department, has a column named Description, which is declared as type varchar(200). When executing the INSERT command for Description I get an error:
error 1406: Data too long for column 'Description' at row 1.
All the strings I'm inserting are less than 150 characters.
Here's the declaration:
CREATE TABLE Department(
...
Description varchar(200)
...);
And here's the insertion command:
INSERT INTO Department VALUES
(..., 'There is some text here',...), (..., 'There is some more text over here',...);
By all appearances, this should be working. Anyone have some insight?
Change column type to LONGTEXT
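For the Department table from the question, that would be something like:
ALTER TABLE Department MODIFY Description LONGTEXT;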
I had a similar problem when migrating an old database to a new version.
Switch the MySQL mode to not use STRICT.
SET @@global.sql_mode= 'NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
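You can verify the mode currently in effect with:
SELECT @@global.sql_mode;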
There is a hard limit on how much data can be stored in a single row of a MySQL table, regardless of the number of columns or the individual column lengths.
As stated in the official documentation:
The maximum row size constrains the number (and possibly size) of columns because the total length of all columns cannot exceed this size. For example, utf8 characters require up to three bytes per character, so for a CHAR(255) CHARACTER SET utf8 column, the server must allocate 255 × 3 = 765 bytes per value. Consequently, a table cannot contain more than 65,535 / 765 = 85 such columns.
Storage for variable-length columns includes length bytes, which are assessed against the row size. For example, a VARCHAR(255) CHARACTER SET utf8 column takes two bytes to store the length of the value, so each value can take up to 767 bytes.
Here you can find the InnoDB table limitations.
In MySQL, if you have a VARCHAR, change it to TEXT, because its maximum size is 65,535 bytes.
If you already have TEXT, change it to LONGTEXT, but only if you need more than 65,535.
The maximum size of LONGTEXT is 4,294,967,295 bytes.
VARCHAR has its own limits. Maybe try changing the datatype to TEXT.
Turns out, as is often the case, it was a stupid error on my part. The way I was testing this, I wasn't rebuilding the Department table after changing the data type from varchar(50) to varchar(200); I was just re-running the insert command, still with the column as varchar(50).
If your source data is larger than your target field and you just want to cut off any extra characters, without turning off strict mode or changing the target field's size, then just cut the data down to the size you need with LEFT(field_name, size).
INSERT INTO Department VALUES
(..., LEFT('There is some text here',30),...), (..., LEFT('There is some more text over here',30),...);
I used "30" as an example of your target field's size.
In some of my code, it's easy to get the target field's size and do this. But if your code makes that hard, then go with one of the other answers.
For me, the column was defined as type BIT (i.e. "boolean").
When I tried to set the column value to "1" via the UI (Workbench), I got a "Data too long for column" error.
Turns out that there is a special syntax for setting BIT values, which is:
b'1'
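For example (the table and column names here are made up):
UPDATE my_table SET my_flag = b'1' WHERE id = 1;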
With Hibernate you can create your own UserType, so that's what I did for this issue. Something as simple as this:
public class BytesType implements org.hibernate.usertype.UserType {
    private final int[] SQL_TYPES = new int[] { java.sql.Types.VARBINARY };
    // ... plus the remaining UserType methods to implement: sqlTypes(),
    // returnedClass(), nullSafeGet(), nullSafeSet(), deepCopy(), equals(), etc.
}
There is of course more to implement when extending your own UserType, but I just wanted to throw that out there for anyone looking for other methods.
Very old question, but I tried everything suggested above and still could not get it resolved.
It turned out that I had an AFTER INSERT/UPDATE trigger on the main table which tracked changes by inserting the record into a history table with a similar structure. I had increased the size of the main table's column but forgot to change the size of the history table's column, and that created the problem.
I made the same change in the history table and the error was gone.
I tried creating a table with a field of 200 characters and added two rows with nearly 160 characters each, and it works fine. Are you sure your rows are less than 200 characters?
I had a similar problem when storing a hashed password in a table. Changing the maximum column length didn't help. The fix turned out to be simple: it was necessary to drop the previously created table from the database and then re-run the code, so the table is recreated with the new allowed length.
If you're using DataTypes.STRING (e.g. in Sequelize), just pass how long the string can be, like DataTypes.STRING(1000).
In my case this error occurred because I was entering data of the wrong type: for example, for a LONG-type column I tried to enter a string. So please check that the data you are entering and the column type match.
For me, the problem was updating a "boolean" (BIT) column value.
When I tried to set the column value to 1 in MySQL Workbench, I was getting a "Data too long for column" error.
It turns out there is a special syntax for setting boolean values, which is:
UPDATE `DBNAME`.`TABLE_NAME` SET `FIELD_NAME` = false WHERE (`ID` = 'ID_VALUE'); -- false for 0
UPDATE `DBNAME`.`TABLE_NAME` SET `FIELD_NAME` = true WHERE (`ID` = 'ID_VALUE'); -- true for 1
I had a different problem which gave the same error, so I'll make a quick recap, as this error seems to have quite different sources and the message does not help much to track down the root cause.
Common sources for INSERT / UPDATE
Size of value in row
This is exactly what the error is complaining about. Maybe it's just that.
You can:
increase the column size: for long strings you can try to use TEXT, MEDIUMTEXT or LONGTEXT
trim the value that is too long: you can use tools from the language you're using to build the query or directly in SQL with LEFT(value,size) or RIGHT(...) or SUBSTRING(...)
Beware that there is a maximum row size in a MySQL table as reported by this answer. Check documentation and InnoDB engine limitations.
Datatype Mismatch
One or more rows are of the wrong datatype.
Common sources of error are:
ENUM
BIT: don't use 1 but b'1'
Data outlier
In a long list of inserts one can easily miss a row with a field that doesn't adhere to the column's type, like an ENUM value generated from a string.
Python Django
Check if you have simple_history (django-simple-history) enabled; after a change in a column's size, the history table must be updated too.

UTF-8 data in Latin1 database: can it be saved?

I have a rails app that receives data from an Android device. I noticed that some of the data, when in Japanese, is not saved correctly. It shows up as literal question marks (not the diamond ones) in the MySQL client and in the rails website.
It turns out that the database that I have connected to the rails app is set to Latin1. Rails is set to UTF-8.
I read a lot about character encodings, but everything I found assumes the data is still somewhat readable. Mine however is only literal question marks. Also, trying to convert the data to UTF-8 using several methods from the web doesn't change a thing. I suspect that the data was converted to question marks when it was written to the database.
Sample output from the MySQL console:
select * from foo where bar = "foobar";
+-------+------+------------------------+---------------------+---------------------+
| id | name | bar | created_at | updated_at |
+-------+------+------------------------+---------------------+---------------------+
| 24300 | ???? | foobar | 2012-01-23 05:04:22 | 2012-01-23 05:04:22 |
+-------+------+------------------------+---------------------+---------------------+
1 row in set (0.00 sec)
The input data, that my rails app got from the Android client was:
name = 爆笑笑話
This input data has been verified to exist in the rails app before saving to the database. So it's not mangled in the Android client or during transfer to the server. Is there any chance I can get this data back? Or is it completely lost?
It's actually very easy to think that data is encoded in one way when it is actually encoded in some other way. This is because any attempt to directly retrieve the data will first convert it to the character set of your database connection and then to the character set of your output medium. Therefore, you should first verify the actual encoding of your stored data through either SELECT BINARY name FROM foo WHERE bar = 'foobar' or SELECT HEX(name) FROM foo WHERE bar = 'foobar'.
Where the character 爆 is expected, you will likely find either of the following byte sequences:
0xe78886, indicating that your column actually contains UTF-8 encoded data: this usually happens when the character set of the database connection over which the text was originally inserted was set to latin1 but actually UTF-8 encoded data was sent.
You must be seeing ? characters when fetching the data because something between the data storage and the display has been unable to transcode those bytes (however, given that MySQL thinks they represent 爆 and those characters are likely available in most character sets, it's unlikely that it's occurring within MySQL itself—unless you're explicitly adjusting the encoding information during retrieval).
Anyway, if this is the case, you need to drop the encoding information from the column and then tell MySQL that the data is actually encoded as UTF-8. As documented under ALTER TABLE Syntax:
Warning 
The CONVERT TO operation converts column values between the character sets. This is not what you want if you have a column in one character set (like latin1) but the stored values actually use some other, incompatible character set (like utf8). In this case, you have to do the following for each such column:
ALTER TABLE t1 CHANGE c1 c1 BLOB;
ALTER TABLE t1 CHANGE c1 c1 TEXT CHARACTER SET utf8;
The reason this works is that there is no conversion when you convert to or from BLOB columns.
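After those two ALTERs (applied to this question's foo.name column rather than the manual's t1.c1), the stored bytes are unchanged but MySQL now labels them correctly, so a re-check should render the text as expected:
SELECT name, HEX(name) FROM foo WHERE bar = 'foobar';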
0x3f, indicating that the database does actually contain the literal character ? and your original data has been lost: this doesn't happen easily, since MySQL usually throws error 1366 if implicit transcoding results in loss of data. Perhaps there was some explicit transcoding in your insert statement?
In this case, you need to convert the storage encoding to a suitable format, then update or re-insert the data:
ALTER TABLE foo CONVERT TO CHARACTER SET utf8;
UPDATE foo SET name = _utf8 '爆笑笑話' WHERE bar = 'foobar';

Problem retrieving Strings from varbinary columns using Hibernate and MySQL

Here's my scenario. I save a bunch of Strings containing Asian characters in MySQL using Hibernate. These strings are written to varbinary columns. Everything works fine during the save operation, and the DB contains the correct values (sequences of bytes). If I query (again using Hibernate) for the Strings that I saved, I get the correct results. But when Hibernate fills the entity to which the Strings belong with the values from the DB, I get different values than the ones I used in the query that retrieved them. Instead of receiving the correct values, I receive a bunch of U+FFFD replacement characters.
For example: if I store "하늘" in the DB and then I query for it, the resulting String will be \uFFFD\uFFFD\uFFFD\uFFFD\uFFFD\uFFFD.
The DB connection has the following parameters set: useUnicode=true&characterEncoding=UTF-8.
I've tried using the following configurations for Hibernate but that didn't solve the problem:
- connection.useUnicode = true
- connection.characterEncoding = UTF-8
By the way, this all works fine if the MySQL columns are of type varchar.
What am I missing? Any suggestions?
Thanks
Set the connection character set to be binary too:
SET NAMES 'binary';
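With the result character set forced to binary, the server skips its own transcoding of result sets, so the driver receives the raw UTF-8 bytes and can decode them itself. In a plain MySQL session that would look like this (texts is a hypothetical table, since the question doesn't name one):
SET NAMES 'binary';
SELECT name FROM texts; -- results now arrive as raw bytes, untouched by server-side conversion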