I am trying to GET, POST, and PUT JSON data to a MySQL server (a Docker container) that uses utf8 as the charset. Within this JSON data are strings that should contain "\n". When the data is used in my application, these "\n" sequences produce line breaks, which is what I want.
But when I try to insert this into my database it throws an error: ERROR 3140 (22032) at line x: Invalid JSON text: "Invalid encoding in string."
Solutions I found suggested adding additional backslashes ("\\n"). While this works for inserting, it doesn't produce the line breaks I wanted in my application. Is there some way around this without changing the application?
SQL Query:
INSERT INTO table (id, json) VALUES (1,'{"data": "name: ab \n cost: 5 $ | time: 1 s"}')
\n is an escape sequence in SQL, so you're sending an actual newline character rather than a backslash followed by the letter n, which is what JSON mandates. It's more obvious with this other query:
mysql> select '{"data": "name: ab \n cost: 5 $ | time: 1 s"}' as test;
+----------------------------------------------+
| test                                         |
+----------------------------------------------+
| {"data": "name: ab
 cost: 5 $ | time: 1 s"} |
+----------------------------------------------+
1 row in set (0.00 sec)
To insert a literal \ in MySQL you need to escape it with another \:
mysql> select 'one\\ntwo' as test;
+----------+
| test     |
+----------+
| one\ntwo |
+----------+
1 row in set (0.00 sec)
Please note this is only an issue when you type literal strings in your code. It won't affect data coming from other sources (e.g. HTML forms or databases) unless you fail to process it somewhere along the way (e.g. by injecting it into raw SQL rather than using prepared statements).
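If the statement is built from application code, the escaping rule can be sketched in Python (a minimal sketch; with a real driver you would pass the value through a parameterized query rather than building the literal by hand):

```python
import json

# A value containing a real newline character.
payload = {"data": "name: ab \n cost: 5 $ | time: 1 s"}

# json.dumps turns the newline into the two-character sequence \n,
# which is the encoding a JSON column expects inside the document.
encoded = json.dumps(payload)
print(encoded)  # {"data": "name: ab \n cost: 5 $ | time: 1 s"}

# To paste that JSON into a raw SQL string literal, every backslash
# must be doubled so MySQL's string parser leaves the \n intact.
sql_literal = encoded.replace("\\", "\\\\").replace("'", "''")
print("INSERT INTO table (id, json) VALUES (1, '" + sql_literal + "')")
```

With a prepared statement the driver does this doubling for you, which is why the problem only appears with hand-built literals.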
I am using MySQL json data type to store a JSON string.
In the JSON string are two fields: entry_time and entry_date. The values for these fields are stored in ISO 8601 format as follows:
entry_date:2017-02-15T00:00:00.00Z
entry_time:0000-00-00T04:35:51.29Z
I am trying to create a virtual column from these two attributes. Since MySQL 5.7 has NO_ZERO_DATE set I cannot see a way to extract these values out as date and time columns. Here is what I have tried:
alter table odh
add entry_time time GENERATED ALWAYS AS (REPLACE(json_extract(jdoc,'$.entry_time'),'0000-00-00T',''))
But I cannot get this to work either. Any help would be greatly appreciated!
There's another problem beyond the NO_ZERO_DATE problem: MySQL doesn't use ISO 8601 for dates, it uses ISO 9075, i.e. SQL dates (really it uses all sorts of formats; MySQL date handling is pretty chaotic). There are some subtle differences, the biggest being that 9075 doesn't have the T in the middle. Most date APIs deal with this by making the T optional, but not MySQL.
mysql> select time('1234-01-01T04:35:51.29Z');
+---------------------------------+
| time('1234-01-01T04:35:51.29Z') |
+---------------------------------+
| 00:12:34                        |
+---------------------------------+
1 row in set, 1 warning (0.00 sec)
Rather than giving an error, or ignoring the T, MySQL gave some desperately wrong answer.
mysql> select time('1234-01-01 04:35:51.29Z');
+---------------------------------+
| time('1234-01-01 04:35:51.29Z') |
+---------------------------------+
| 04:35:51.29                     |
+---------------------------------+
1 row in set, 1 warning (0.00 sec)
Now it's fine.
Simplest thing to do is to replace the T with a space. Then TIME() will work.
mysql> select time(replace('0000-00-00T04:35:51.29Z', 'T', ' '));
+----------------------------------------------------+
| time(replace('0000-00-00T04:35:51.29Z', 'T', ' ')) |
+----------------------------------------------------+
| 04:35:51.29                                        |
+----------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
The date part will be handled by DATE(), because MySQL will just ignore the rest.
mysql> select date('2017-02-15T00:00:00.00Z');
+---------------------------------+
| date('2017-02-15T00:00:00.00Z') |
+---------------------------------+
| 2017-02-15                      |
+---------------------------------+
1 row in set, 1 warning (0.01 sec)
Rather than storing separate date and time columns, I'd strongly recommend you combine them into a single datetime column, especially if they're supposed to represent a single event (i.e. 4:35:51.29 on 2017-02-15). This will make it much easier to do comparisons, and you can always extract the date and time parts with DATE() and TIME().
Not sure, but you might find some of this useful. Assuming you start with a JSON string brought over from the database:
$jsonstring = '{
    "entries": [{
        "entry_date": "2017-02-15T00:00:00.00Z",
        "entry_time": "0000-00-00T04:35:51.29Z"
    }, {
        "entry_date": "2017-02-16T00:00:00.00Z",
        "entry_time": "0000-00-00T05:21:08.12Z"
    }, {
        "entry_date": "2017-02-17T00:00:00.00Z",
        "entry_time": "0000-00-00T07:14:55.04Z"
    }]
}';
The following code will convert it to a single field:
$json = json_decode($jsonstring, true);

// display the entries before conversion
echo "<pre>";
print_r($json['entries']);
echo "</pre>";

$myArray = [];
for ($i = 0; $i < count($json['entries']); $i++) {
    // date part (first 10 chars) + time part (from the T onward)
    $myArray[] = substr($json['entries'][$i]['entry_date'], 0, 10) .
                 substr($json['entries'][$i]['entry_time'], 10);
}

// display the combined values after conversion
echo "<pre>";
print_r($myArray);
echo "</pre>";
The result will look like this:
Array
(
    [0] => 2017-02-15T04:35:51.29Z
    [1] => 2017-02-16T05:21:08.12Z
    [2] => 2017-02-17T07:14:55.04Z
)
I'm executing the following statement:
select left(column, 400) from table into outfile 'test';
I've also tried using substring function (with same results).
When I go to download the file and get a character count:
wc -c < test
I get 409 characters as a return.
Can someone assist me in figuring out why I'm getting an incorrect count?
The database table is set to utf8 and the column is longtext.
When I run the following it still doesn't give me correct length of characters:
select length(left(column, 400)) from table where id in (1,2,3,4);
+--------------------------+
| length(left(column,400)) |
+--------------------------+
|                      402 |
|                      403 |
|                      412 |
|                      401 |
+--------------------------+
The command wc -c counts bytes, despite the c in the switch (wc -m counts characters). With the database in UTF-8, MySQL's LEFT() counts characters, while LENGTH() counts bytes, which is why the query above also reports more than 400. Since UTF-8 can use more than one byte per character, I expect the first 400 characters in column include about 8 characters that take 2 bytes (or fewer if some take 3 bytes). There's probably a newline at the end as well.
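The character/byte difference is easy to reproduce outside MySQL; a quick Python sketch with made-up sample text:

```python
# 400 characters, 100 of which (the é) take 2 bytes in UTF-8.
text = "café" * 100

print(len(text))                  # 400 characters -- what LEFT() counts
print(len(text.encode("utf-8")))  # 500 bytes -- what `wc -c` counts
```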
I am trying to fetch rows from my database by checking if the json in one of their fields contains a specific id.
Example: col(kats): [2,4,7,9]
I am trying to do so by using the following query
SELECT column FROM table WHERE column REGEXP '(\[|\,)1(\]|\,)'
The Problem: MySQL returns 1 for every row in the table.
MySQL requires that backslash \ characters intended to reach the regex engine (here, the ones escaping the following [ and ]) be escaped themselves in the string literal. Thus, you must double-escape [ and ] as \\[ and \\].
From the docs:
Because MySQL uses the C escape syntax in strings (for example, “\n” to represent the newline character), you must double any “\” that you use in your REGEXP strings.
The rest of your pattern is basically correct, except that the comma , does not require escaping.
1 does not match:
> SELECT '[2,4,7,9]' REGEXP '(\\[|,)1(\\]|,)';
+--------------------------------------+
| '[2,4,7,9]' REGEXP '(\\[|,)1(\\]|,)' |
+--------------------------------------+
|                                    0 |
+--------------------------------------+
1 row in set (0.00 sec)
But 2 does match:
> SELECT '[2,4,7,9]' REGEXP '(\\[|,)2(\\]|,)';
+--------------------------------------+
| '[2,4,7,9]' REGEXP '(\\[|,)2(\\]|,)' |
+--------------------------------------+
|                                    1 |
+--------------------------------------+
1 row in set (0.00 sec)
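The behaviour is easy to verify outside MySQL. A Python sketch of the pattern as the regex engine sees it, after MySQL's string-literal layer has stripped one level of backslashes:

```python
import re

# What the regex engine receives from '(\\[|,)1(\\]|,)' and '(\\[|,)2(\\]|,)'
pattern_1 = r"(\[|,)1(\]|,)"
pattern_2 = r"(\[|,)2(\]|,)"

print(re.search(pattern_1, "[2,4,7,9]"))  # None: no 1 in the array
print(re.search(pattern_2, "[2,4,7,9]"))  # a match: 2 is present
```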
Is there a way to disable escape characters in a MySQL query? For example, for the following table:
mysql> select * from test1;
+------------------------+-------+
| name                   | value |
+------------------------+-------+
| C:\\media\data\temp\   |     1 |
| C:\\media\data\temp    |     2 |
| /unix/media/data/temp  |     3 |
| /unix/media/data/temp/ |     4 |
+------------------------+-------+
I want the following to be a valid query:
mysql> select * from test1 where name='C:\\media\data\temp\';
I know that I can instead use
mysql> select * from test1 where name='C:\\\\media\\data\\temp\\';
But I am building this query using my_snprintf(), so there instead I have to use
C:\\\\\\\\media\\\\data\\\\temp\\\\
...and so on!
Is there a way to disable escape characters for a single MySQL query?
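The blow-up at each layer is mechanical, so it can be sketched (a hypothetical Python stand-in for the my_snprintf layering, assuming each layer doubles backslashes):

```python
# The literal value stored in the table (as displayed by mysql).
path = "C:\\\\media\\data\\temp\\"   # C:\\media\data\temp\
assert path.count("\\") == 5

# Layer 1: MySQL string-literal escaping doubles every backslash.
sql = path.replace("\\", "\\\\")

# Layer 2: a printf-style formatter that also treats \ specially
# doubles them again, giving the 4x growth described above.
fmt = sql.replace("\\", "\\\\")
print(fmt.count("\\"))  # 20 backslashes for 5 real ones
```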
You can disable backslash escapes by setting NO_BACKSLASH_ESCAPES in the SQL mode:
-- save mode & disable backslashes
SET @old_sql_mode=@@sql_mode;
SET @@sql_mode=CONCAT_WS(',', @@sql_mode, 'NO_BACKSLASH_ESCAPES');
-- run the query
SELECT 'C:\\media\data\temp\';
-- enable backslashes
SET @@sql_mode=@old_sql_mode;
For tabular output in the MySQL command line, the “boxing” around columns enables one column value to be distinguished from another. For non-tabular output (such as is produced in batch mode or when the --batch or --silent option is given), special characters are escaped in the output so they can be identified easily. Newline, tab, NUL, and backslash are written as \n, \t, \0, and \\. The --raw option disables this character escaping.
In MySQL, when I try to insert a backslash into my table, it does not accept it and gives me the content without the backslash.
id is set to auto increment:
Code:
INSERT INTO gender (sex, date) VALUES (
'male are allowed \ female are not allowed', "2012-10-06")
How do I insert a literal backslash?
Notes about escape sequences:
Escape Sequence   Character Represented by Sequence
\0                An ASCII NUL (0x00) character.
\'                A single quote (“'”) character.
\"                A double quote (“"”) character.
\b                A backspace character.
\n                A newline (linefeed) character.
\r                A carriage return character.
\t                A tab character.
\Z                ASCII 26 (Control+Z). See note following the table.
\\                A backslash (“\”) character.
\%                A “%” character. See note following the table.
\_                A “_” character. See note following the table.
You need to escape your backslash:
INSERT INTO gender (sex, date)
VALUES ('male are allowed \\ female are not allowed', "2012-10-06")
Reference (with the list of all characters you must escape for mysql)
Handling backslashes and other control characters with MySQL's LOAD DATA INFILE tool:
Step 1, create your table:
mysql> create table penguin (id int primary key, chucknorris VARCHAR(4000));
Query OK, 0 rows affected (0.01 sec)
Step 2, create your file to import and put this data in there.
1 spacealiens are on route
2 scramble the nimitz\
3 \its species 8472
4 \\\\\\\\\\\\\\\\\\
5 Bonus characters:!@#$%^&*()_+=-[]\|}{;'":/.?>,< anything but tab
Step 3, insert into your table:
mysql> load data local infile '/home/el/foo/textfile.txt' into table penguin
fields terminated by '\t' lines terminated by '\n'
(@col1, @col2) set id=@col1, chucknorris=@col2;
Query OK, 4 rows affected, 1 warning (0.00 sec)
Records: 4 Deleted: 0 Skipped: 0 Warnings: 1
Step 4, and of course, it causes this strange output:
mysql> select * from penguin;
+----+-----------------------------------------------------------------+
| id | chucknorris                                                     |
+----+-----------------------------------------------------------------+
|  1 | spacealiens are on route                                        |
|  2 | scramble the nimitz                                             |
|  3 |                                                                 |
|  4 | \\\\\\\\\                                                       |
|  5 | Bonus characters:!@#$%^&*()_+=-[]|}{;'":/.?>,< anything but tab |
+----+-----------------------------------------------------------------+
Step 5, analyze the warning:
mysql> show warnings;
+---------+------+--------------------------------------------------------+
| Level   | Code | Message                                                |
+---------+------+--------------------------------------------------------+
| Warning | 1262 | Row 2 was truncated; it contained more data than there |
|         |      | were input columns                                     |
+---------+------+--------------------------------------------------------+
1 row in set (0.00 sec)
Step 6, think about exactly what went wrong:
The backslash to the left of nimitz caused the MySQL load data parser to concatenate the end of line 2 with the beginning of line 3. It then bumped up against a tab and put 'scramble the nimitz\n3' into row 2.
The rest of row 3 is skipped because the extra words 'its species 8472' do not fit anywhere; that produces the warning you see above.
Row 4 had 18 backslashes, so there is no problem, and it shows up as 9 backslashes because each pair was unescaped. Had there been an odd number, the error on row 2 would have happened to row 4.
The bonus characters on row 5 came through normally. Everything is allowed except tab.
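That halving on row 4 can be sketched in Python (a hypothetical stand-in for the parser, assuming the default behaviour of collapsing each backslash-plus-character pair to the bare character):

```python
import re

def strip_one_escape_level(s):
    # Each \<char> pair collapses to <char>, mimicking LOAD DATA's
    # default ESCAPED BY '\\' handling.
    return re.sub(r"\\(.)", r"\1", s)

print(len(strip_one_escape_level("\\" * 18)))  # 9: half the backslashes survive
```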
Step 7, reset table penguin:
mysql> delete from penguin;
Step 8, load into your table with the fields escaped by clause:
mysql> load data local infile '/home/el/foo/textfile.txt' into table penguin
fields terminated by '\t' escaped by '\b'
lines terminated by '\n' (@col1, @col2) set id=@col1,
chucknorris=@col2;
Query OK, 5 rows affected (0.00 sec)
Records: 5 Deleted: 0 Skipped: 0 Warnings: 0
Step 9, select from your table, interpret the results:
mysql> select * from penguin;
+----+------------------------------------------------------------------+
| id | chucknorris                                                      |
+----+------------------------------------------------------------------+
|  1 | spacealiens are on route                                         |
|  2 | scramble the nimitz\                                             |
|  3 | \its species 8472                                                |
|  4 | \\\\\\\\\\\\\\\\\\                                               |
|  5 | Bonus characters:!@#$%^&*()_+=-[]\|}{;'":/.?>,< anything but tab |
+----+------------------------------------------------------------------+
5 rows in set (0.00 sec)
And now everything is as we expect. The backslash at the end of line 2 does not escape the newline. The backslash before i on row 3 doesn't do anything. The 18 backslashes on row 4 are not escaped. And the bonus characters come through ok.
You can use this code:
$yourVariable = addcslashes($_POST["your param"], "\\");
For example, in my web form I want to insert a local directory path:
$localAddress = addcslashes($_POST["localAddress"], "\\");