MySQL/SQLite insert same random # to multiple rows

I want to insert a randomly generated number (of 9 digits) into multiple rows, BUT it needs to be the same for all rows matched by the query.
update products set tag_seed=( SELECT ABS(RANDOM() % 999999999) ) where [...];
Partially works... Each row gets a different random number. I need them to be the same.

This is logical, since a new random number is generated for every row the UPDATE touches. The easiest solution is to generate a random integer once, store it in a user-defined variable, and use that variable in your query:
SET @rand := FLOOR(RAND() * 1000000000);
update products set tag_seed = @rand where [...];
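User variables like @rand exist only in MySQL. For SQLite, which has none, a minimal sketch of the same idea (the temp-table name _seed is arbitrary; products/tag_seed are the names from above) is to materialize the value first:
-- compute one random value below 10^9 once, then reuse it for every matched row
CREATE TEMP TABLE _seed AS SELECT ABS(RANDOM() % 1000000000) AS v;
UPDATE products SET tag_seed = (SELECT v FROM _seed) WHERE [...];
DROP TABLE _seed;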

Related

Updating batch of rows depending on last row of previous batch

I have the following data:
create table #Test(
Id int
,JobNo int
)
insert into #Test
values
(1,100) ,(2,100)
,(3,101),(4,104)
,(5,105),(6,106)
My requirement is that I need to update batches of rows sequentially. Say the batch size here is 2: for rows with Id 3 to 4, I need to take the JobNo value of 100 from the first batch and increment it by 1. Likewise, for rows with Id 5 to 6, I need to update JobNo to 102.
Expected Output is -
Id,JobNo
1,100
2,100
3,101
4,101
5,102
6,102
I am able to achieve this using a while loop and a counter, but I am wondering if it can be done via partitioning and self-joins. I am not able to find the right partitioning criteria to divide the rows into equal batches, and even if I partition, I don't know how to sequentially add up the values. A recursive CTE, perhaps? Just pondering.
Try this:
DECLARE @batchSize int = 2;

UPDATE t_curr
-- anchor each row to its counterpart in the first batch, whose pre-update
-- JobNo is the base; (Id - 1) / @batchSize is the 0-based batch offset
SET JobNo = t_base.JobNo + ((t_curr.Id - 1) / @batchSize)
FROM #Test t_curr
JOIN #Test t_base ON t_base.Id = ((t_curr.Id - 1) % @batchSize) + 1;
Let me know if this is not what you need.
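Since you were pondering whether partitioning can replace the loop: on SQL Server 2012+ one possible sketch is an updatable CTE that derives the batch number and the first row's JobNo with window functions, so no self-join or explicit loop is needed:
DECLARE @batchSize int = 2;

WITH batches AS (
    SELECT JobNo,
           (Id - 1) / @batchSize AS batchNo,                  -- 0-based batch index
           FIRST_VALUE(JobNo) OVER (ORDER BY Id) AS baseJobNo -- JobNo of the first row
    FROM #Test
)
UPDATE batches SET JobNo = baseJobNo + batchNo;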

Loop to check duplicate random numbers in MYSQL

How do I create a query in MySQL that compares a newly generated random number with the previously generated ones, and generates another random number if it already exists?
You can try something like this to generate a random number that has not been used yet:
SELECT mycol
FROM (SELECT FLOOR(RAND() * 99999) AS mycol FROM yourtable) AS candidates
WHERE mycol NOT IN (SELECT PreviousRand FROM yourtable)
LIMIT 1
(CAST(... AS INT) is not valid MySQL syntax, and a SELECT alias cannot be referenced in its own WHERE clause, hence FLOOR and the derived table. This draws one candidate per row of yourtable and returns the first free one; if every candidate collides, no row comes back and you simply retry.)
I am assuming you are storing the previously generated values in the table.
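Since the title asks for a loop: if you would rather retry inside the database until a free value turns up, a minimal sketch is a stored procedure (the procedure name is made up; yourtable/PreviousRand are the placeholders from above):
DELIMITER //
CREATE PROCEDURE next_unique_rand(OUT p_val INT)
BEGIN
  REPEAT
    -- draw candidates until one is not in the table yet
    SET p_val = FLOOR(RAND() * 99999);
  UNTIL (SELECT COUNT(*) FROM yourtable WHERE PreviousRand = p_val) = 0
  END REPEAT;
END //
DELIMITER ;

-- usage:
CALL next_unique_rand(@v);
SELECT @v;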

Update SET with variable column names, variable values in variable rows PDO

I'm trying to update a certain column of a certain row WHERE id is a certain value. The thing is, the number/names of the columns are variable, and so are their respective ids.
For example:
UPDATE table SET column1="hello" WHERE id = 5
UPDATE table SET column2="cucumber" WHERE id = 6
How can I do a single mysql query in PDO to do this?
First thing I tried is...
UPDATE table SET column1="hello", column4="bye" WHERE id IN(5, 6)
But that query will update BOTH of those columns in rows where it finds BOTH of those ids, and that's not what I'm looking for. Is it only possible to do this query by query?
Keep in mind that the argument after SET is variable, so the columns to be updated, their values and their respective ids are also variable.
A solution where you can just purely bind values would be great, but if I have to build the query string with escaped variables, then that's OK too.
Thank you.
You can do this:
UPDATE `table` t1 JOIN `table` t2
    ON t1.id = 5 AND t2.id = 6
SET t1.column1 = 'hello',
    t2.column2 = 'cucumber';
Or, if you want to do this on a single column:
UPDATE `table`
SET column2 = CASE id
        WHEN 5 THEN 'hello'
        WHEN 6 THEN 'cucumber'
    END
WHERE id IN (5, 6);
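If each row needs a different column touched (as in the column1/column4 example above), a hedged sketch is a single statement in which every CASE falls back to the column's own value, so the other row is left untouched; each literal can become a ? placeholder when you prepare the statement with PDO:
UPDATE `table`
SET column1 = CASE id WHEN 5 THEN 'hello' ELSE column1 END,
    column4 = CASE id WHEN 6 THEN 'bye' ELSE column4 END
WHERE id IN (5, 6);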

Avoid row was cut by GROUP_CONCAT error on insert without changing group_concat_max_len

I have an insert that uses a GROUP_CONCAT. In certain scenarios, the insert fails with Row XX was cut by GROUP_CONCAT. I understand why it fails but I'm looking for a way to have it not error out since the insert column is already smaller than the group_concat_max_len. I don't want to increase group_concat_max_len.
drop table if exists a;
create table a (x varchar(10), c int);
drop table if exists b;
create table b (x varchar(10));
insert into b values ('abcdefgh');
insert into b values ('ijklmnop');
-- contrived example to show that insert column size varchar(10) < 15
set session group_concat_max_len = 15;
insert into a select group_concat(x separator ', '), count(*) from b;
This insert produces the error Row 2 was cut by GROUP_CONCAT().
I'll try to provide a few clarifications:
The data in table b is not known in advance, so there is no way to simply set group_concat_max_len to a value greater than 18.
I do know the insert column size.
Why group_concat 4 GB of data when you want the first x characters?
When the concatenated string is longer than 10 chars, it should insert the first 10 characters.
Thanks.
Your example GROUP_CONCAT is probably cooking up this value:
abcdefgh, ijklmnop
That is 18 characters long, including the separator.
Can you try something like this?
set session group_concat_max_len = 4096;
insert into a
select left(group_concat(x separator ', '), 10),
       count(*)
from b;
This will trim the GROUP_CONCAT result for you.
You can temporarily set group_concat_max_len if you need to, then set it back.
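For example, a small sketch that saves and restores the session value around the statement:
SET @old_len := @@session.group_concat_max_len;
SET SESSION group_concat_max_len = 4096;
insert into a select left(group_concat(x separator ', '), 10), count(*) from b;
SET SESSION group_concat_max_len = @old_len;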
I don't know MySQL very well, nor whether there is a good reason to do this in the first place, but you could keep a running total of the concatenated length and limit the GROUP_CONCAT() to rows where that total is under a certain maximum. You'll still need to set group_concat_max_len high enough to handle the longest single value (or use CASE logic to substring the values so they stay under the maximum length you want).
Something like this:
SELECT SUBSTRING(GROUP_CONCAT(col1 SEPARATOR ', '), 1, 10)
FROM (SELECT *
      FROM (SELECT col1,
                   @lentot := COALESCE(@lentot, 0) + CHAR_LENGTH(col1) AS lentot
            FROM Table1) sub
      WHERE lentot < 25) sub2
Demo: SQL Fiddle
I don't know if it's SQL Fiddle being quirky or if there's a problem with the logic, but sometimes I get no output when running this. I'm not big on MySQL, so it could definitely be me missing something. It doesn't seem like it should require two subqueries, but the filtering didn't work as expected unless it was nested like that.
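One likely cause of the intermittent empty result is that @lentot is a session variable and keeps its value between runs, so every execution after the first starts above the cutoff. A sketch that resets it inline (the init derived table is the usual idiom for this):
SELECT SUBSTRING(GROUP_CONCAT(col1 SEPARATOR ', '), 1, 10)
FROM (SELECT *
      FROM (SELECT col1,
                   @lentot := @lentot + CHAR_LENGTH(col1) AS lentot
            FROM Table1
            CROSS JOIN (SELECT @lentot := 0) init) sub
      WHERE lentot < 25) sub2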
Actually, a better way is to use DISTINCT.
I had to add two new fields to an existing stored procedure. Their values were obtained by a LEFT JOIN and could therefore be NULL, so a single concatenated value was duplicated, in some cases more than 100 times. Because a group with that new field contained many NULL values, GROUP_CONCAT exceeded the maximum length (in my case 16384). De-duplicating the input kept the result under the limit.
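In GROUP_CONCAT syntax that is just (a minimal sketch against the contrived tables above; it only helps when the input actually contains duplicates):
SELECT GROUP_CONCAT(DISTINCT x SEPARATOR ', ') FROM b;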

UPDATE based on DISTINCT in same table

In my MySQL table I have 192 rows with the same value in the field number; the value is 548.
I need to update these 192 rows with new values calculated by MySQL's RAND() function.
Each row should get a different, randomly calculated value.
I tried this solution, but after the update I still have duplicate rows with the same value...
UPDATE `tbl`
SET number = FLOOR(100 +(RAND() * 150))
WHERE
EXISTS (SELECT DISTINCT number)
AND number = 548;
update tbl set number = FLOOR(100 +(RAND() * 150)) where number = 548;
No need for the EXISTS check or DISTINCT. If no row has that number, the update will simply do nothing.
SQLFiddle Demo
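One caveat: FLOOR(100 + (RAND() * 150)) has only 150 possible outcomes, so 192 rows are guaranteed to contain duplicates no matter what. If the values really must all differ, a hedged sketch is to hand out distinct sequential values in a random row order (the range and the variable name are just examples):
SET @n := 99;
UPDATE tbl
SET number = (@n := @n + 1)   -- each matched row gets the next value
WHERE number = 548
ORDER BY RAND();              -- assignment order is randomized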