MySQL 5.7 JSON column update

I am using MySQL 5.7. I have a table with a JSON column.
MySQL [test_db]> select * from mytable;
+----+-------+---------------------+
| id | name  | hobby               |
+----+-------+---------------------+
|  1 | Rahul | {"Game": "Cricket"} |
|  2 | Sam   | null                |
+----+-------+---------------------+
Here, for the row with id = 2, I want to insert data. I did:
update mytable set hobby = JSON_SET(hobby, '$.Game', 'soccer') where id = 2;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
It seems like the data was inserted properly, but when I checked:
MySQL [test_db]> select * from mytable;
+----+-------+---------------------+
| id | name  | hobby               |
+----+-------+---------------------+
|  1 | Rahul | {"Game": "Cricket"} |
|  2 | Sam   | null                |
+----+-------+---------------------+
the data was not inserted. Can anybody give me a hint as to what I am missing here?
Thanks.

hobby is NULL, and you can't set a property on NULL, so use an IF() expression to convert NULL to an empty object first (or initialize hobby as an empty object instead of NULL):
UPDATE mytable
SET hobby = JSON_SET(IF(hobby IS NULL, '{}', hobby), '$.Game', 'soccer')
WHERE id = 2;
Alternatively, use COALESCE:
UPDATE mytable
SET hobby = JSON_SET(COALESCE(hobby, '{}'), '$.Game', 'soccer')
WHERE id = 2;
See dbfiddle here.
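If you would rather never hit the NULL case at all, you can backfill the existing rows and store an empty object for new ones. A minimal sketch (the 'Alex' row is purely hypothetical; MySQL 5.7 does not allow a DEFAULT value on JSON columns, so the empty object has to be written explicitly):
-- backfill existing NULLs so later JSON_SET calls need no IF/COALESCE guard
UPDATE mytable
SET hobby = JSON_OBJECT()   -- equivalent to '{}'
WHERE hobby IS NULL;
-- for new rows, write the empty object at insert time
INSERT INTO mytable (name, hobby) VALUES ('Alex', JSON_OBJECT());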


My mysql statement to query by primary key sometimes returns more than one row, so what happened?

My schema is this:
CREATE TABLE `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_name` varchar(10) NOT NULL,
`account_type` varchar(10) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=latin1
INSERT INTO user VALUES (1, "zhangsan", "premiumv"), (2, "lisi", "premiumv"), (3, "wangwu", "p"), (4, "maliu", "p"), (5, "hengqi", "p"), (6, "shuba", "p");
I have the following 6 rows in the table:
+----+-----------+--------------+
| id | user_name | account_type |
+----+-----------+--------------+
|  1 | zhangsan  | premiumv     |
|  2 | lisi      | premiumv     |
|  3 | wangwu    | p            |
|  4 | maliu     | p            |
|  5 | hengqi    | p            |
|  6 | shuba     | p            |
+----+-----------+--------------+
Here is the MySQL statement to query the table by id:
SELECT * FROM user WHERE id = floor(rand()*6) + 1;
I expect it to return one row, but the actual result is unpredictable. It will return 0 rows, 1 row, or sometimes more than one row. Can somebody help clarify this? Thanks!
You're testing each row against a different random number, so sometimes multiple rows will match. To fix this, calculate the random number once in a subquery.
SELECT u.*
FROM user AS u
JOIN (SELECT floor(rand()*6) + 1 AS r) AS r
ON u.id = r.r
This method of selecting a random row from a table seems like a poor design. If there are any gaps in the id sequence (which can happen easily -- MySQL doesn't guarantee that they'll always be sequential, and deleting rows will leave gaps) then it could return an empty result. The usual way to select a random row from a table is with:
SELECT *
FROM user
ORDER BY RAND()
LIMIT 1
The WHERE part must be evaluated for each row to see if there is a match. Because of this, the rand() function is evaluated for every row. Getting an inconsistent number of rows seems reasonable.
If you add LIMIT 1 to your query, rows near the end of the table become less likely to be returned, because the scan stops at the first match.
It's because the WHERE condition id = floor(rand()*6) + 1 is evaluated against every row in the table to see whether it matches, and the random value can be different for each row it is checked against.
You can test this with a table that has repeated values in the column used in the WHERE clause and see the result:
select * from test;
+------+------+
| id   | name |
+------+------+
|    1 | a    |
|    2 | b    |
|    1 | c    |
|    2 | d    |
|    1 | e    |
|    2 | f    |
+------+------+
select * from test where id = floor(rand()*2) + 1;
+------+------+
| id   | name |
+------+------+
|    1 | a    |
|    2 | d    |
|    1 | e    |
+------+------+
In the above example, the expression floor(rand()*2) + 1 returns 1 when matching against the first row (with name = 'a'), so it is included in the result set. But then it returns 2 when matching against the fourth row (with name = 'd'), so it is also included in the result set even though its value of id is different from the value of the first row in the result set.
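Another way to make the comparison use a single random value, equivalent in effect to the subquery join shown above, is to compute it into a user variable first (a small sketch):
-- evaluate the random id once, then compare every row against that one value
SET @r = FLOOR(RAND() * 6) + 1;
SELECT * FROM user WHERE id = @r;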

Mysql query, allowing duplicates

I have a table named Data(id, url). One of the APIs in my project returns a list of ids (there could be duplicate ids in this list). For the sake of this question let's assume this list is (1, 1, 2, 3, 4, 4).
I am trying to find the urls associated with these ids.
My first and naive attempt was to use an IN clause:
SELECT url from Data where id in ( 1, 1, 2, 3, 4, 4);
This returns four rows, i.e. the urls for ids 1, 2, 3 and 4.
What I want is six rows, one for each specified id (duplicate rows need to be retained).
I understand that the IN clause is not helpful in this situation. Could anyone please point me in the right direction?
I could fire a query for each individual id by iterating over the list, but that's a last resort for me.
UPDATE: Adding more details about the table
mysql> desc Data;
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | int(11)     | NO   | PRI | NULL    | auto_increment |
| url   | varchar(11) | YES  |     | NULL    |                |
+-------+-------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
mysql> select * from Data;
+----+-------+
| id | url   |
+----+-------+
|  1 | a.com |
|  2 | b.com |
|  3 | c.com |
|  4 | d.com |
+----+-------+
4 rows in set (0.00 sec)
mysql> select url from Data where id in(1,1,2,3,4,4);
+-------+
| url   |
+-------+
| a.com |
| b.com |
| c.com |
| d.com |
+-------+
4 rows in set (0.00 sec)
What I want is:
+-------+
| url   |
+-------+
| a.com |
| a.com |
| b.com |
| c.com |
| d.com |
| d.com |
+-------+
It's not pretty, but
select url
from Data
inner join
( SELECT id FROM (
SELECT 1 as id UNION ALL
SELECT 1 as id UNION ALL
SELECT 2 as id UNION ALL
SELECT 3 as id UNION ALL
SELECT 4 as id UNION ALL
SELECT 4 as id
) as list_table ) as table2
on (Data.id = table2.id);
I found pretty much no way to select values from a list or join a table to a list, but you could check out this SO Question
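As an aside, on MySQL 8.0.19 or later (newer than the versions discussed here) the UNION ALL chain can be written more compactly with a table value constructor; a sketch:
-- VALUES builds the id list as a derived table, duplicates included
SELECT d.url
FROM (VALUES ROW(1), ROW(1), ROW(2), ROW(3), ROW(4), ROW(4)) AS ids (id)
JOIN Data AS d ON d.id = ids.id;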
This works fine:
SELECT url
from table1
where id in ( 1, 1, 2, 3, 4, 4);
Demo
Note that IN only returns duplicate rows if the table itself contains duplicate id values, so the uniqueness on id has to go first. To drop the primary key:
ALTER TABLE Data DROP PRIMARY KEY;
(The index backing a primary key is always named `PRIMARY`, so ALTER TABLE Data DROP INDEX `PRIMARY`; does the same thing. Since id is AUTO_INCREMENT, MySQL will not drop the primary key unless that attribute is removed or the column keeps some other key.)

MySQL, how to merge duplicate table entries [duplicate]

Possible Duplicate:
How can I remove duplicate rows?
Remove duplicates using only a MySQL query?
I have a large table with ~14M entries. The table type is MyISAM, not InnoDB.
Unfortunately, I have some duplicate entries in this table, which I found with the following query:
SELECT device_serial, temp, tstamp, COUNT(*) c FROM up_logs GROUP BY device_serial, temp, tstamp HAVING c > 1
To avoid these duplicates in the future, I want to convert my current index to a unique constraint with this SQL:
ALTER TABLE up_logs DROP INDEX UK_UP_LOGS_TSTAMP_DEVICE_SERIAL,
                    ADD UNIQUE INDEX UK_UP_LOGS_TSTAMP_DEVICE_SERIAL (`tstamp`, `device_serial`);
But before that, I need to clean up my duplicates!
My question is: how can I keep only one entry of my duplicated entries? Keep in mind that my table contains 14M entries, so I would like to avoid loops if possible.
Any comments are welcome!
Creating a new unique key over the columns that need to be unique will automatically clean the table of any duplicates.
ALTER IGNORE TABLE `table_name`
ADD UNIQUE KEY `key_name`(`column_1`,`column_2`);
The IGNORE keyword keeps the statement from aborting on the first duplicate-key error; the offending duplicate rows are simply deleted.
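Note that ALTER IGNORE TABLE was removed in MySQL 5.7, so on newer servers the duplicates have to be deleted before the unique key is added. A sketch of a multi-table DELETE that keeps one row per (tstamp, device_serial) pair, assuming up_logs has a unique id column (it is not shown in the question):
-- delete every row that has an older twin (lower id) with the same key columns
DELETE newer
FROM up_logs AS newer
JOIN up_logs AS older
  ON older.tstamp = newer.tstamp
 AND older.device_serial = newer.device_serial
 AND older.id < newer.id;
After that, the ADD UNIQUE KEY from above succeeds and prevents new duplicates.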
Since MySQL allows subqueries in UPDATE/DELETE statements, but not if they refer to the table you want to update, I'd create a copy of the original table first. Then:
DELETE FROM original_table
WHERE id NOT IN(
SELECT id FROM copy_table
GROUP BY column1, column2, ...
);
But I could imagine that copying a table with 14M entries takes some time... selecting the items to keep when copying might make it faster:
INSERT INTO copy_table
SELECT * FROM original_table
GROUP BY column1, column2, ...;
and then
DELETE FROM original_table
WHERE id IN(
SELECT id FROM copy_table
);
It has been some time since I last used MySQL and SQL in general, so I'm quite sure there is something with better performance, but this should work ;)
This is how you can delete duplicate rows. I'll write out my example and you'll need to apply it to your code. I have an actors table with an ID, and I want to delete the rows with a repeated first_name.
mysql> select actor_id, first_name from actor_2;
+----------+-------------+
| actor_id | first_name  |
+----------+-------------+
|        1 | PENELOPE    |
|        2 | NICK        |
|        3 | ED          |
....
|      199 | JULIA       |
|      200 | THORA       |
+----------+-------------+
200 rows in set (0.00 sec)
-Now I use a variable called @a to get the ID when a row has the same first_name as the previous one (NULL if it doesn't).
mysql> select if(first_name=@a,actor_id,null) as first_names,@a:=first_name from actor_2 order by first_name;
+---------------+----------------+
| first_names   | @a:=first_name |
+---------------+----------------+
|          NULL | ADAM           |
|            71 | ADAM           |
|          NULL | AL             |
|          NULL | ALAN           |
|          NULL | ALBERT         |
|           125 | ALBERT         |
|          NULL | ALEC           |
|          NULL | ANGELA         |
|           144 | ANGELA         |
...
|          NULL | WILL           |
|          NULL | WILLIAM        |
|          NULL | WOODY          |
|            28 | WOODY          |
|          NULL | ZERO           |
+---------------+----------------+
200 rows in set (0.00 sec)
-Now we can get only the duplicate IDs:
mysql> select first_names from (select if(first_name=@a,actor_id,null) as first_names,@a:=first_name from actor_2 order by first_name) as t1;
+-------------+
| first_names |
+-------------+
|        NULL |
|          71 |
|        NULL |
...
|          28 |
|        NULL |
+-------------+
200 rows in set (0.00 sec)
-The final step, let's DELETE!
mysql> delete from actor_2 where actor_id in (select first_names from (select if(first_name=@a,actor_id,null) as first_names,@a:=first_name from actor_2 order by first_name) as t1);
Query OK, 72 rows affected (0.01 sec)
-Now let's check our table:
mysql> select count(*) from actor_2 group by first_name;
+----------+
| count(*) |
+----------+
|        1 |
|        1 |
|        1 |
...
|        1 |
+----------+
128 rows in set (0.00 sec)
It works. If you have any questions, write me back.

How to get one row as a result when querying two tables following FNF?

I have a MySQL database with a few tables. They look something like this -
The food table:
+----------+------------+--------------+
| username | date       | food         |
+----------+------------+--------------+
| test123  | 2012-09-16 | rice         |
| test123  | 2012-09-16 | pizza        |
| test123  | 2012-09-16 | french fries |
| test123  | 2012-09-16 | burger       |
+----------+------------+--------------+
The main table:
+----------+------------+----------------+---------------+-------------+-------------+
| username | date       | water_quantity | water_chilled | smoked_what | smoke_count |
+----------+------------+----------------+---------------+-------------+-------------+
| test123  | 2012-09-16 |              1 | no            | cigarettes  |          20 |
+----------+------------+----------------+---------------+-------------+-------------+
When I use the query SELECT * FROM main,food WHERE main.date=food.date;, I get four rows as a result. How can I get the results in a single row? Ultimately, when I encode the results into JSON, I want it to look something like this:
[
    {
        "username":"test123",
        "date":"2012-09-16",
        "water_quantity":"1",
        "water_chilled":"no",
        "smoked_what":"cigarettes",
        "smoke_count":"20",
        {
            "food":"rice",
            "food":"pizza",
            "food":"french fries",
            "food":"burger",
        },
    }
]
or something similar. I am a newbie to MySQL, databases in general, and JSON. Thanks in advance for the help.
select m.*, GROUP_CONCAT(food SEPARATOR ',') AS food FROM main m INNER JOIN food f ON f.username = m.username and f.date = m.date;
Of course you can change which fields you select to control the output, but this will solve your duplication issue.
As for the nested list of foods within the result set, you can use GROUP_CONCAT
SEE: http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat
I will see if I can recreate it for a demo.
DEMO:
mysql> create table main (id INT NOT NULL AUTO_INCREMENT, username varchar(12) NOT NULL, date DATETIME, water_quality INT, water_chilled CHAR(3), smoked_what varchar(32), smoke_count INT, primary key (id));
Query OK, 0 rows affected (0.04 sec)
mysql> create table food (id INT NOT NULL AUTO_INCREMENT, username varchar(12) NOT NULL, date DATETIME, food varchar(32), primary key (id));
Query OK, 0 rows affected (0.04 sec)
mysql> insert into food VALUES (1,'test123','2012-09-16','rice'),(2,'test123','2012-09-16','pizza'),(3,'test123','2012-09-16','french fries'),(4,'test123','2012-09-16','burger');
Query OK, 4 rows affected (0.00 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> insert into main VALUES (1, 'test123', '2012-09-16', 1, 'no', 'cigarettes', 20);
Query OK, 1 row affected (0.00 sec)
mysql> select m.*, GROUP_CONCAT(food SEPARATOR ',') AS food FROM main m INNER JOIN food f ON f.username = m.username and f.date = m.date;
+----+----------+---------------------+---------------+---------------+-------------+-------------+----------------------------------+
| id | username | date                | water_quality | water_chilled | smoked_what | smoke_count | food                             |
+----+----------+---------------------+---------------+---------------+-------------+-------------+----------------------------------+
|  1 | test123  | 2012-09-16 00:00:00 |             1 | no            | cigarettes  |          20 | rice,pizza,french fries,burger   |
+----+----------+---------------------+---------------+---------------+-------------+-------------+----------------------------------+
1 row in set (0.00 sec)
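One caveat about the query above: it aggregates without a GROUP BY, so it only behaves as intended while main matches a single row, and a server running with ONLY_FULL_GROUP_BY will reject it outright. Grouping by main's primary key gives one concatenated food list per main row:
SELECT m.*, GROUP_CONCAT(f.food SEPARATOR ',') AS food
FROM main m
INNER JOIN food f ON f.username = m.username AND f.date = m.date
GROUP BY m.id;
Selecting m.* is still allowed under ONLY_FULL_GROUP_BY because every column of main is functionally dependent on its primary key id.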

What's wrong with this simple query?

I did a very simple query yesterday but this morning I couldn't remember how I did it, and whatever I tried doesn't work.
I want to do a simple SELECT COUNT(*) and then update table TEST. We want to count how many values from column start (table1) are between the values in columns txStart and txEnd (from table TEST).
The SELECT COUNT(*) alone works well.
mysql> SELECT COUNT(*) FROM table1, TEST where table1.start BETWEEN TEST.txStart AND TEST.txEnd;
+----------+
| COUNT(*) |
+----------+
|    95149 |
+----------+
1 row in set (0.03 sec)
The UPDATE never happened.
mysql> UPDATE TEST
SET rdc_1ips =
SELECT COUNT(*) FROM table1, TEST WHERE table1.start between TEST.txStart AND TEST.txEnd;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT COUNT(*) FROM table1, TEST WHERE table1.start between TEST.txStart A' at line 1
Here is a preview of table1 and table TEST:
mysql> SELECT * from table1 limit 2;
+----+--------+------+---------+---------+------+-------+
| id | strand | chr  | start   | end     | name | name2 |
+----+--------+------+---------+---------+------+-------+
|  1 | -      |    1 | 2999997 | 3000096 | NULL | NULL  |
|  2 | +      |    1 | 2999998 | 3000097 | NULL | NULL  |
+----+--------+------+---------+---------+------+-------+
mysql> SELECT * FROM TEST;
+------+-----------+--------------+-------+---------+---------+----------+
| chr  | pos_start | name         | name2 | txStart | txEnd   | rdc_1ips |
+------+-----------+--------------+-------+---------+---------+----------+
|    1 |   3204575 | NM_001011874 | Xkr4  | 3204562 | 3661579 |        0 |
+------+-----------+--------------+-------+---------+---------+----------+
Put the sub-select in brackets.
Additionally, I would give the inner TEST table an alias (just to be sure):
UPDATE TEST
SET rdc_1ips =
    (SELECT COUNT(*) FROM table1, TEST t2
     WHERE table1.start between t2.txStart AND t2.txEnd)
But I'm not sure if this will work even if the syntax is correct. MySQL has some obnoxious limitations when it comes to selecting from the same table that you want to update.
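A form that sidesteps that limitation (a sketch, assuming the goal is a per-row count of table1.start values falling inside each TEST row's range) correlates the subquery with the row being updated instead of joining TEST inside it:
UPDATE TEST
SET rdc_1ips = (
    SELECT COUNT(*)
    FROM table1
    WHERE table1.start BETWEEN TEST.txStart AND TEST.txEnd
);
Here TEST appears in the subquery only as a correlated reference, not in its FROM clause, so MySQL's restriction on selecting from the table being updated does not apply.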