Can I use GROUP_CONCAT to update a table? I have 2 tables:
id | label
------------------------------
1 | ravi,rames,raja
------------------------------
2 | ravi
------------------------------
3 | ravi,raja
------------------------------
4 | null
------------------------------
5 | null
------------------------------
6 | rames
------------------------------
and
id | values
------------------------------
12 | raja
------------------------------
13 | rames
------------------------------
14 | ravi
------------------------------
And I want the result to look like the following table:
id | label
------------------------------
1 | 12,13,14
------------------------------
2 | 14
------------------------------
3 | 14,12
------------------------------
4 | null
------------------------------
5 | null
------------------------------
6 | 13
------------------------------
But with the following query:
SELECT `table1`.`id`, GROUP_CONCAT(`table2`.`id` ORDER BY `table2`.`id`) AS label
FROM `table1`
JOIN `table2` ON FIND_IN_SET(`table2`.`values`, `table1`.`nos`)
GROUP BY `table1`.`id`;
I'm getting:
id | label
------------------------------
1 | 12,13,14
------------------------------
2 | 14
------------------------------
3 | 12,14
------------------------------
6 | 13
------------------------------
I want to keep the NULL values; otherwise the order of the rows will be broken. Please help.
You just need a LEFT JOIN to preserve the nulls:
SELECT `table1`.`id`, GROUP_CONCAT(`table2`.`id` ORDER BY `table2`.`id`) AS label
FROM `table1`
LEFT JOIN `table2` ON FIND_IN_SET(`table2`.`values`, `table1`.`nos`)
GROUP BY `table1`.`id`;
However, I recommend against updating a table to include comma-separated values in a column. It forces you to use FIND_IN_SET() when querying it, and breaks the ability to index the column, affecting the performance of your queries. The more sustainable action would be to normalize table1 so that it doesn't include a comma-separated column.
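For example, a normalized design might look like this (a sketch, not part of the original answer; the junction table name and constraints are assumptions):
-- Hypothetical junction table: one row per (table1, table2) pairing,
-- replacing the comma-separated list stored in table1.
CREATE TABLE table1_table2 (
  table1_id INT NOT NULL,
  table2_id INT NOT NULL,
  PRIMARY KEY (table1_id, table2_id),
  FOREIGN KEY (table1_id) REFERENCES table1 (id),
  FOREIGN KEY (table2_id) REFERENCES table2 (id)
);
With that structure, the comma-separated list can still be produced at read time with GROUP_CONCAT() over an ordinary indexed join, and FIND_IN_SET() is no longer needed.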
Update:
To use GROUP_CONCAT() in an UPDATE statement, you would use a syntax like the following. Substitute your correct table and column names, and in your case, you probably want to replace the entire JOIN subquery with your SELECT statement.
UPDATE
tbl_to_update
JOIN (SELECT id, GROUP_CONCAT(concatcolumn) AS label FROM tbl GROUP BY id) tbl_concat
ON tbl_to_update.id = tbl_concat.id
SET tbl_to_update.column_to_update = tbl_concat.label
WHERE <where condition>
So in your case:
UPDATE
table1
INNER JOIN (SELECT `table1`.`id`, GROUP_CONCAT(`table2`.`id` ORDER BY `table2`.`id`) AS label
            FROM `table1`
            LEFT JOIN `table2` ON FIND_IN_SET(`table2`.`values`, `table1`.`nos`)
            GROUP BY `table1`.`id`) t_concat
ON table1.id = t_concat.id
SET table1.nos = t_concat.label
If the date, item, and category are the same in the table,
I'd like to treat those rows as the same result and return the first n groups (e.g. if n is 3, then LIMIT 0, 3).
------------------------------------------
id | date | item | category | ...
------------------------------------------
101 | 20220201| pencil | stationery | ... <---
------------------------------------------ | treat as same result
105 | 20220201| pencil | stationery | ... <---
------------------------------------------
120 | 20220214| desk | furniture | ...
------------------------------------------
125 | 20220219| tongs | utensil | ... <---
------------------------------------------ | treat as same
129 | 20220219| tongs | utensil | ... <---
------------------------------------------
130 | 20220222| tongs | utensil | ...
expected results (if n is 3)
-----------------------------------------------
id | date | item | category | ... rank
-----------------------------------------------
101 | 20220201| pencil | stationery | ... 1
-----------------------------------------------
105 | 20220201| pencil | stationery | ... 1
-----------------------------------------------
120 | 20220214| desk | furniture | ... 2
-----------------------------------------------
125 | 20220219| tongs | utensil | ... 3
-----------------------------------------------
129 | 20220219| tongs | utensil | ... 3
The problem is that I have to return the individual rows of each group as well.
If I had only one column to group by, I could compare the id value with the original table, but I don't know what to do with multiple columns.
Is there any way to solve this problem?
For reference, I tried using user variables to compare each row with the previous values, but I couldn't use that approach because it was too slow:
SELECT
*,
IF(@prev_date = date AND @prev_item = item AND @prev_category = category, @ranking, @ranking := @ranking + 1) AS sameRow,
@prev_item := item,
@prev_date := date,
@prev_category := category,
@ranking
FROM ( SELECT ...
I'm using MySQL 8.0, and the id values are not continuous because I have to ORDER BY before GROUP BY.
If I understand correctly, you can try the DENSE_RANK() window function, setting its ORDER BY to the columns you group on.
If the date column represents the ordering, I would put it first:
SELECT *
FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY date, item, category) AS rnk
FROM T
) t1
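If you only need the first n groups (n = 3 in the question), the rank from the derived table can be filtered in an outer WHERE clause; a sketch based on the query above:
SELECT *
FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY date, item, category) AS rnk
FROM T
) t1
WHERE t1.rnk <= 3; -- n = 3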
Window functions come in very handy in this situation. But for those of us still using MySQL 5.7, where functions such as ROW_NUMBER() don't exist, we have to either resort to a user variable and reset its value every time before the main statement, or define the user variable directly in the statement.
Method 1
set @row_id = 0; -- remember to reset row_id to 0 every time before the main query below
select id, date, item, category, rank
from testtb
join (
select date, item, category, (@row_id := @row_id + 1) as rank
from (select date, item, category from testtb group by date, item, category) t1
) t2
using (date, item, category);
Method 2
select id, date, item, category, rank
from testtb
join (
select date, item, category, (@row_id := @row_id + 1) as rank
from (select date, item, category from testtb group by date, item, category) t1,
(select @row_id := 0) as n
) t2
using (date, item, category);
I have a temporary table I've derived from a much larger table.
+-----+----------+---------+
| id | phone | attempt |
+-----+----------+---------+
| 1 | 12345678 | 15 |
| 2 | 87654321 | 0 |
| 4 | 12345678 | 16 |
| 5 | 12345678 | 14 |
| 10 | 87654321 | 1 |
| 11 | 87654321 | 2 |
+-----+----------+---------+
I need to find the id (unique) corresponding to the highest attempt made on each phone number. Phone and attempt are not unique.
SELECT id, MAX(attempt) FROM temp2 GROUP BY phone
The above query does not return the id for the corresponding max attempt.
Try this:
select
t.*
from temp2 t
inner join (
select phone, max(attempt) attempt
from temp2
group by phone
) t2 on t.phone = t2.phone
and t.attempt = t2.attempt;
It will return the rows with the maximum attempt for each phone number.
Note that it will return multiple ids for a phone if several rows tie for that phone's maximum attempt.
As an alternative to the answer given by @GurV, you could also solve this using a correlated subquery:
SELECT t1.*
FROM temp2 t1
WHERE t1.attempt = (SELECT MAX(t2.attempt) FROM temp2 t2 WHERE t2.phone = t1.phone)
This has the advantage of being a bit less verbose. But I would probably go with the join option because it will scale better for large data sets.
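If you are on MySQL 8.0 or later, a window function is a third option (a sketch, not part of the original answers, using the same temp2 table):
SELECT id, phone, attempt
FROM (
SELECT t.*, MAX(t.attempt) OVER (PARTITION BY t.phone) AS max_attempt
FROM temp2 t
) x
WHERE attempt = max_attempt;
Like the two queries above, it still returns multiple ids when several rows tie for a phone's maximum attempt.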
I have a table (simplified) that looks like this:
id | name | selfreference | selfreference-name
------ | -------| --------------| ------------------
1 | Vienna | |
2 | Wien | | Vienna
3 | Виена | | Vienna
The selfreference column refers to the id numbers of the same table. In the above example, both Wien and Виена refer to the same city, so the value of their selfreference column should be equal to 1.
In other words, I need to do something like
update `places`
set `places`.`selfreference` =
(select `places`.`id` from `places` where `places`.`name` = `places`.`selfreference-name`)
but the SELECT statement above is obviously wrong. I am at a loss how to proceed.
Any tips would be greatly appreciated.
All best,
Tench
Edit: the desired output would look like this:
id | name | selfreference | selfreference-name
------ | -------| --------------| ------------------
1 | Vienna | |
2 | Wien | 1 | Vienna
3 | Виена | 1 | Vienna
It could be that you need a self join. Check with a SELECT first:
select a.*, b.*
from `places` as a
inner join `places` as b
  on b.`name` = a.`selfreference-name`;
and then, if the query above gives you the right result:
update `places` as a
inner join `places` as b
  on b.`name` = a.`selfreference-name`
set a.`selfreference` = b.`id`;
The following query does the job:
UPDATE places p1
INNER JOIN places p2 ON p1.`name` = p2.`selfreference-name`
SET p2.selfreference = p1.id;
p2 -> the instance of table places that will be updated.
p1 -> the instance of table places from which the id of the matching selfreference-name is taken.
I have a table
--------------------
ID | Name | RollNO
--------------------
1 | A | 18
--------------------
2 | B | 19RMK2
--------------------
3 | C | 20
--------------------
My second table is
-----------------------
OldRollNo | NewRollNo
-----------------------
18 | 18RMK1
-----------------------
19 | 19RMK2
-----------------------
20 | 20RMK3
-----------------------
21 | 21RMK4
-----------------------
22 | 22RMK5
-----------------------
I want the resulting table like
----------------------------------
ID | Name | RollNo | LatestRollNo
----------------------------------
1 | A | 18 | 18RMK1
----------------------------------
2 | B | 19RMK2 | 19RMK2
----------------------------------
3 | C | 20 | 20RMK3
----------------------------------
What would the SELECT query look like? This is just a replica of my problem. I have used a CASE statement in the SELECT query, but since my table has a large number of records, it takes too much time. In my second table the OldRollNo column is unique. One more thing: in the resulting LatestRollNo column, if the newly assigned RollNo is already present in RollNo, it should be copied exactly into LatestRollNo. I only have to look up the RollNo values that are old.
Thanks.
Try something like this:
select t1.ID
     , t1.Name
     , t1.RollNO
     , coalesce(n.NewRollNo, o.NewRollNo) as LatestRollNo
from t1
left join t2 o on t1.RollNO = o.OldRollNo
left join t2 n on t1.RollNO = n.NewRollNo
It sounds like your issue is performance, not logic; something like this should hopefully allow appropriate index usage, assuming you have the appropriate indexes on t2.OldRollNo and t2.NewRollNo.
The problem with OR or CASE in a WHERE clause is that they don't always lend themselves to efficient queries; hopefully this will be a bit more useful in your case.
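For reference, those indexes could be created roughly like this (the index names are made up; substitute your real table name for t2):
CREATE INDEX idx_t2_oldrollno ON t2 (OldRollNo);
CREATE INDEX idx_t2_newrollno ON t2 (NewRollNo);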
select f.ID, f.name, f.RollNo, s.NewRollNo as "Latest RollNo"
from FirstTable f
inner join SecondTable s
  on f.RollNo = s.OldRollNo or f.RollNo = s.NewRollNo
select t.id, t.name, t.rollno, coalesce(tt.newrollno, t.rollno) as latestrollno
from table1 t
left join table2 tt on t.rollno = tt.oldrollno
You need to use an inner join:
SELECT t1.ID, t1.Name, t1.RollNo, t2.NewRollNo AS LatestRollNo
FROM Table1 t1
INNER JOIN Table2 t2
ON t1.RollNo = t2.OldRollNo OR t1.RollNo = t2.NewRollNo
Let's say we have this query
SELECT * FROM table
And this result from it.
id | user_id
------------
1 | 1
------------
2 | 1
------------
3 | 2
------------
4 | 1
How could I get the count of how often each user_id appears, as an additional field (without some major SQL query)?
id | user_id | count
--------------------
1 | 1 | 3
--------------------
2 | 1 | 3
--------------------
3 | 2 | 1
--------------------
4 | 1 | 3
We currently compute this value in code, but we are adding sorting to this table and I would like to be able to sort in the SQL query.
BTW, if this is not possible without some major trick, we are just going to skip sorting on that field.
You'll just want to add a subquery on the end, I believe:
SELECT
t.id,
t.user_id,
(SELECT COUNT(*) FROM table WHERE user_id = t.user_id) AS `count`
FROM table t;
SELECT o.id, o.user_id, (
SELECT COUNT(id)
FROM table i
WHERE i.user_id = o.user_id
GROUP BY i.user_id
) AS `count`
FROM table o
I suspect this query is not a performance monster, but it should work.
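If you are on MySQL 8.0 or later, a window function gives the same count without a correlated subquery (a sketch, assuming the table really is named table, hence the backticks):
SELECT
id,
user_id,
COUNT(*) OVER (PARTITION BY user_id) AS `count`
FROM `table`
ORDER BY `count` DESC, id; -- e.g. sorting on the new count field, as the question asks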