I am stuck with a problem where I have a table with a JSON column like this:
ID|VALUE
1 |{"a":"text1234","b":"default"}
2 |{"a":"text1234","b":"default"}
3 |{"a":"text1234","b":"text234"}
4 |{"a":"text1234","b":"default2"}
5 |{"a":"text1234","b":"default2"}
I would like to get all rows where the value of "b" is duplicated, so with the table above I would get rows 1, 2, 4 and 5.
I tried to group rows by value->b
$value_ids = ProductsAttributesValues::groupBy("value->b")->get();
but when I dd($value_ids) the rows are not grouped by the b value (e.g. "default"), and I can't find a way to group them so I can count them. Or is there a better way of doing this?
Try the json_extract function:
select count(id) dup_count, json_extract(`value`,"$.b") as dup_value
from test
group by json_extract(`value`,"$.b")
having dup_count>1
;
-- result set:
+-----------+------------+
| dup_count | dup_value  |
+-----------+------------+
|         2 | "default"  |
|         2 | "default2" |
+-----------+------------+
-- to get the id involved:
select id,dup_count,dup_value
from (select id,json_extract(`value`,"$.b") as dup_v
from test) t1
join
(select count(id) dup_count, json_extract(`value`,"$.b") as dup_value
from test
group by json_extract(`value`,"$.b")
having dup_count>1) t2
on t1.dup_v=t2.dup_value
;
-- result set:
+------+-----------+------------+
| id   | dup_count | dup_value  |
+------+-----------+------------+
|    1 |         2 | "default"  |
|    2 |         2 | "default"  |
|    4 |         2 | "default2" |
|    5 |         2 | "default2" |
+------+-----------+------------+
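For reference, here is a minimal setup that reproduces the result sets above. The table and column names are assumed from the question, and MySQL 5.7+ is assumed for the JSON type (a plain TEXT column holding valid JSON also works with json_extract):
DROP TABLE IF EXISTS test;
CREATE TABLE test (
    id INT NOT NULL PRIMARY KEY,
    `value` JSON
);
INSERT INTO test (id, `value`) VALUES
    (1, '{"a":"text1234","b":"default"}'),
    (2, '{"a":"text1234","b":"default"}'),
    (3, '{"a":"text1234","b":"text234"}'),
    (4, '{"a":"text1234","b":"default2"}'),
    (5, '{"a":"text1234","b":"default2"}');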
Here are the queries that can do your task.
/*Extract value of "b" - Step 1*/
DROP TEMPORARY TABLE IF EXISTS d1;
CREATE TEMPORARY TABLE d1
SELECT
ID, `VALUE`, SUBSTR(VALUE FROM POSITION(',"b":' IN VALUE)+5 FOR 1000) AS v
FROM mytest
;
/*Extract value of "b" - Step 2*/
DROP TEMPORARY TABLE IF EXISTS d2;
CREATE TEMPORARY TABLE d2
SELECT
ID, LEFT(v, LENGTH(v)-1) AS b
FROM
d1
;
ALTER TABLE d2 ADD INDEX b(b);
/* Search for duplicates */
DROP TEMPORARY TABLE IF EXISTS duplicates;
CREATE TEMPORARY TABLE duplicates
SELECT
b, COUNT(b) AS b_count
FROM
d2
GROUP BY b HAVING COUNT(b)>1
;
ALTER TABLE duplicates ADD INDEX b(b);
/* Display for duplicates */
SELECT
d2.ID, d2.b
FROM
d2
INNER JOIN duplicates ON d2.b=duplicates.b
;
This should give you:
1 "default"
2 "default"
4 "default2"
5 "default2"
I want to sort rows according to the same column value.
Suppose this is a table
id name topic
1 A t
2 B a
3 c t
4 d b
5 e b
6 f a
I want result something like this.
id name topic
1 A t
3 c t
2 B a
6 f a
4 d b
5 e b
As you can see, these are ordered neither by topic nor by id; they are sorted by whichever topic appears first: t comes first, so the t rows are sorted first, then a comes second, so the a rows follow, and then b.
If you apply ORDER BY topic it sorts a, b, t (or t, b, a with DESC), but the required result is t, a, b.
Any suggestions?
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table
(id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
,topic CHAR(1) NOT NULL
);
INSERT INTO my_table VALUES
(1,'t'),
(2,'a'),
(3,'t'),
(4,'b'),
(5,'b'),
(6,'a');
SELECT x.*
FROM my_table x
JOIN
( SELECT topic, MIN(id) id FROM my_table GROUP BY topic ) y
ON y.topic = x.topic
ORDER
BY y.id,x.id;
+----+-------+
| id | topic |
+----+-------+
| 1 | t |
| 3 | t |
| 2 | a |
| 6 | a |
| 4 | b |
| 5 | b |
+----+-------+
You can use a CASE expression in ORDER BY.
Query
select * from `your_table_name`
order by
case `topic`
when 't' then 1
when 'a' then 2
when 'b' then 3
else 4 end
, `name`;
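For instance, applied to the sample data from the question (a sketch assuming a table named your_table_name with id, name and topic columns), this returns the rows in the order 1, 3, 2, 6, 4, 5:
DROP TABLE IF EXISTS your_table_name;
CREATE TABLE your_table_name
(id INT NOT NULL PRIMARY KEY
,name VARCHAR(10)
,topic CHAR(1) NOT NULL
);
INSERT INTO your_table_name VALUES
(1,'A','t'),
(2,'B','a'),
(3,'c','t'),
(4,'d','b'),
(5,'e','b'),
(6,'f','a');
SELECT * FROM your_table_name
ORDER BY
    CASE topic
        WHEN 't' THEN 1
        WHEN 'a' THEN 2
        WHEN 'b' THEN 3
        ELSE 4 END
    , name;
Note that the CASE order is hard-coded, so this variant only works when the desired topic order is known in advance, unlike the MIN(id) join above.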
Copy the first 2 rows of the same table and insert them with an edited column, as shown below.
Table 1 (ID is auto increment)
ID | CL1 | CL2 | CL3
1 | A | text1 | NULL
2 | B | text2 | NULL
Table 2
ID | CL3
21 | 45
24 | 63
Converted Table 1
ID | CL1 | CL2 | CL3
1 | A | text1 | NULL
2 | B | text2 | NULL
3 | A | text1 | 45
4 | B | text2 | 63
I know how to copy and insert all the rows with one column duplicated, but changing a column to a different value is the problem.
Below is the query to copy all fields with 1 column changed:
INSERT INTO table1 (col1, col2, col3)
SELECT col1, col2, 1
FROM table1 LIMIT 2;
For example: we now have table2, which holds the values for table1's CL3. Can we get the data from another table and insert it while copying?
Assuming you want the first 2 records from one table combined with the values from the first 2 rows of another table, I think you will need to add a sequence number to each one and join based on that.
Something like the following, but it won't be quick!
INSERT INTO table1 (ID, CL1, CL2, CL3)
SELECT NULL, a.CL1, a.CL2, b.CL3
FROM
(
SELECT CL1, CL2, @cnt1:=@cnt1 + 1 AS cnt
FROM table1
CROSS JOIN (SELECT @cnt1:=0) sub0
ORDER BY ID
LIMIT 2
) a
INNER JOIN
(
SELECT CL3, @cnt2:=@cnt2 + 1 AS cnt
FROM table2
CROSS JOIN (SELECT @cnt2:=0) sub0
ORDER BY ID
LIMIT 2
) b
ON a.cnt = b.cnt
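For reference, here is a throwaway setup matching the example tables in the question (column types are assumed) that the query above can be run against. Running the INSERT ... SELECT should then produce the four rows shown as "Converted Table 1":
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 (
    ID INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    CL1 VARCHAR(10),
    CL2 VARCHAR(10),
    CL3 INT
);
INSERT INTO table1 (CL1, CL2, CL3) VALUES
    ('A', 'text1', NULL),
    ('B', 'text2', NULL);
DROP TABLE IF EXISTS table2;
CREATE TABLE table2 (
    ID INT NOT NULL PRIMARY KEY,
    CL3 INT
);
INSERT INTO table2 (ID, CL3) VALUES
    (21, 45),
    (24, 63);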
I need to make a query that moves values of only one column one row up ↑ at a time:
+------------+----------------+
| anotherCOL | values_to_loop |
+------------+----------------+
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 6 |
| 7 | 7 |
| 8 | 8 |
| 9 | 9 |
| 10 | 10 |
+------------+----------------+
So, the next time I run the query, it should look like this:
+------------+----------------+
| anotherCOL | values_to_loop |
+------------+----------------+
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
| 5 | 6 |
| 6 | 7 |
| 7 | 8 |
| 8 | 9 |
| 9 | 10 |
| 10 | 1 |
+------------+----------------+
I need to loop the values of only one MYSQL COLUMN, as in move the values one ROW UP ↑ each time I run the query.
Note: the tables provided are just illustrative; the actual data is different.
Here's how you can do it within a single UPDATE query:
UPDATE tbl a
INNER JOIN (
SELECT values_to_loop
FROM (SELECT * FROM tbl) c
ORDER BY anotherCOL
LIMIT 1
) b ON 1 = 1
SET a.values_to_loop =
IFNULL(
(SELECT values_to_loop
FROM (SELECT * FROM tbl) c
WHERE c.anotherCOL > a.anotherCOL
ORDER BY c.anotherCOL
LIMIT 1),
b.values_to_loop
)
It works as follows:
Updates all records of tbl
Joins with a derived table to retrieve the top value of values_to_loop (the one that will go to the bottom)
Sets the new value of values_to_loop to the corresponding value from the next row (c.anotherCOL > a.anotherCOL ... LIMIT 1)
Notes:
This works even if there are gaps in anotherCOL (eg: 1, 2, 3, 6, 9, 15)
It is required to use (SELECT * FROM tbl) instead of tbl because you're not allowed to use the table that you're updating in the update query
Faster query when there are no gaps in anotherCOL
If there are no gaps for values in anotherCOL you can use the query below that should work quite fast if you have an index on anotherCOL:
UPDATE tbl a
LEFT JOIN tbl b on b.anotherCOL = a.anotherCOL + 1
LEFT JOIN (
SELECT values_to_loop
FROM tbl
WHERE anotherCOL = (select min(anotherCOL) from tbl)
) c ON 1 = 1
SET a.values_to_loop = ifnull(
b.values_to_loop,
c.values_to_loop
)
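If you want to try either variant, here is a throwaway table matching the illustration in the question (structure assumed). Each run of either UPDATE should then shift values_to_loop up by one row, with the old top value wrapping around to the bottom:
DROP TABLE IF EXISTS tbl;
CREATE TABLE tbl (
    anotherCOL INT NOT NULL PRIMARY KEY,
    values_to_loop INT
);
INSERT INTO tbl (anotherCOL, values_to_loop) VALUES
    (1,1),(2,2),(3,3),(4,4),(5,5),
    (6,6),(7,7),(8,8),(9,9),(10,10);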
I've created a sample table and added both a SELECT to get the looped values and an UPDATE to loop the values in the table. I also use a @start_value variable to hold the "1", which might be a different value in your data. Try this:
/* Demo table. A regular table is used here because MySQL does not allow
   a TEMPORARY table to be referred to more than once in the same query. */
DROP TABLE IF EXISTS temp_table;
CREATE TABLE temp_table
(other_col INT, loop_col INT);
INSERT INTO temp_table (other_col, loop_col) VALUES (1,1);
INSERT INTO temp_table (other_col, loop_col) VALUES (2,2);
INSERT INTO temp_table (other_col, loop_col) VALUES (3,3);
INSERT INTO temp_table (other_col, loop_col) VALUES (4,4);
INSERT INTO temp_table (other_col, loop_col) VALUES (5,5);
/* Lowest value of the looped column; it goes back to the bottom row. */
SET @start_value = (SELECT MIN(loop_col) FROM temp_table);
/* Preview the shifted values. */
SELECT T1.other_col, IFNULL(T2.loop_col, @start_value)
FROM temp_table T1
LEFT JOIN temp_table T2
ON T1.loop_col = T2.loop_col - 1;
/* Same self join, but this time update the values in place.
   T2 is a materialized copy, so the join sees the original values. */
UPDATE temp_table T1
LEFT JOIN (SELECT * FROM temp_table) T2
ON T1.loop_col = T2.loop_col - 1
SET T1.loop_col = IFNULL(T2.loop_col, @start_value);
SELECT *
FROM temp_table;
Let me know if it works for you.
Step by step:
1 - created a temp_table with values 1 to 5
2 - set a @start_value which keeps the lowest value of the column you need to loop through
3 - select all rows from temp_table self left joined with the same temp_table; the join condition is on loop_col - 1 so it can shift the rows up
4 - the same self left join, but this time it updates the values in place too.
Please note that when a row gets a NULL (the last row cannot match anything), it should get the start_value instead.
Perhaps these are what you had in mind:
-- Assumes values_to_loop holds exactly the values 1..10:
update T
set values_to_loop = mod(values_to_loop, 10) + 1;
-- General version: take the next higher value, or wrap around to the minimum.
-- The derived tables avoid MySQL's restriction on selecting from the table being updated.
update T
set values_to_loop =
coalesce(
(
select min(t2.values_to_loop) from (select * from T) t2
where t2.values_to_loop > T.values_to_loop
),
(
select min(t3.values_to_loop) from (select * from T) t3
)
);
I have 2 tables which contain many columns. example of my tables:
table1
_______________________________________________
| a | b | c | d | e | f | g | h | i | ... | z |
-----------------------------------------------
table2
_______________________________________________
| a | b | c | d | e | f | g | h | i | ... | z |
-----------------------------------------------
And now, I want to copy or insert a record from table1 to table 2. This is my query :
INSERT INTO table2
SELECT table1.* FROM table1
WHERE table1.b = '1'
I don't get any errors with this query, but what I want is to insert a record with all columns except column 'a' from table1 into table2.
I can do it by this query :
INSERT INTO table2 (b,c,d,...) // it takes a long line
SELECT table1.b,table1.c,table1.d,... FROM table1 // it takes a long line
WHERE table1.b = '1'
But this is not an efficient way to write the query, because I am only excluding 1 column.
Is there a more efficient way?
You could try duplicating the table with a CREATE TABLE ... LIKE statement, copy the data over, then alter it to drop the columns you don't want, and do the INSERT INTO ... SELECT with that new table. Then drop that new table (or use a temp table).
CREATE TABLE table3 LIKE table1;            -- copies the structure only
INSERT INTO table3 SELECT * FROM table1;    -- copy the data as well
ALTER TABLE table3 DROP COLUMN x;
INSERT INTO table2 SELECT * FROM table3;
DROP TABLE table3;
and so forth.
What I have is two columns, specialid and date, in tblSpecialTable. My table has duplicate specialids, and I want to delete the rows with the older date where the specialids are duplicated.
See my example:
mysql> SELECT * FROM test;
+------+---------------------+
| id | d |
+------+---------------------+
| 1 | 2011-06-29 10:48:41 |
| 2 | 2011-06-29 10:48:44 |
| 3 | 2011-06-29 10:48:46 |
| 1 | 2011-06-29 10:48:52 |
| 2 | 2011-06-29 10:48:53 |
| 3 | 2011-06-29 10:48:55 |
+------+---------------------+
mysql> DELETE t1 FROM test t1 INNER JOIN test t2 ON t1.id = t2.id AND t1.d < t2.d;
Query OK, 3 rows affected (0.00 sec)
mysql> SELECT * FROM test;
+------+---------------------+
| id | d |
+------+---------------------+
| 1 | 2011-06-29 10:48:52 |
| 2 | 2011-06-29 10:48:53 |
| 3 | 2011-06-29 10:48:55 |
+------+---------------------+
See also http://dev.mysql.com/doc/refman/5.0/en/delete.html
You have to use a "double-barrelled" match on the combination of fields from another query.
DELETE FROM tblSpecialTable
WHERE CONCAT(specialid, date) IN (
SELECT CONCAT(specialid, date)
FROM (
SELECT specialid, MIN(date) AS date, COUNT(*)
FROM tblSpecialTable
GROUP BY 1
HAVING COUNT(*) > 1) x
)
Use a tmp table with the specialid column set to UNIQUE, then use the SQL below (IGNORE skips the older duplicates once the newest row for each specialid has been inserted):
insert ignore into tmp (specialid, date) select specialid, date from tblSpecialTable order by date desc;
DELETE FROM tblSpecialTable
WHERE (specialid, date) NOT IN
    (SELECT specialid, max_date
     FROM (SELECT specialid, MAX(date) AS max_date
           FROM tblSpecialTable
           GROUP BY specialid) x)
This isn't a fancy single query, but it does the trick:
CREATE TABLE tmp as SELECT * FROM tblspecialtable ORDER BY date DESC;
DELETE FROM tblspecialtable WHERE 1;
INSERT INTO tblspecialtable SELECT * FROM tmp GROUP BY specialid;
DROP TABLE tmp;
The first line creates a temporary table where the values are ordered by date, most recent first. The second makes room in the original table for the fixed values. The third consolidates the values, and since the GROUP BY command goes from the top down, it takes the most recent first. The final line removes the temporary table. The end result is the original table containing unique values of specialid with only the most recent dates.
Also, if you are programmatically accessing your MySQL table, it would be best to check whether an id exists first and then use an UPDATE to change the date, or else add a new row if there is no existing specialid. You should also consider making specialid UNIQUE if you don't want duplicates.
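As a sketch of that last suggestion (column names taken from the question; the key name uq_specialid and the value 42 are just placeholders), a unique key plus INSERT ... ON DUPLICATE KEY UPDATE lets MySQL do the exists-check for you. Note the unique key can only be added once the existing duplicates have been removed:
ALTER TABLE tblSpecialTable ADD UNIQUE KEY uq_specialid (specialid);
-- Insert a new specialid, or just refresh the date if it already exists.
INSERT INTO tblSpecialTable (specialid, date)
VALUES (42, NOW())
ON DUPLICATE KEY UPDATE date = NOW();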