Set row as another row - MySQL

I have this query:
mysql_query("
UPDATE users SET
`clicks_yesterday` = `clicks_today`, `clicks_today` = 0
");
My question is: how can I make it so that, whenever I run the query above, `clicks_yesterday` gets the value of `clicks_today`?
Regards

That's how you do it: in MySQL, the assignments in an UPDATE are evaluated in the order they are written (left to right). But if you want to be entirely sure that things are assigned properly, split it into two queries:
UPDATE users SET clicks_yesterday = clicks_today;
UPDATE users SET clicks_today = 0;

For MySQL, what you have written works. The behaviour differs between databases, however, so it's important to test.
These are my tests for this question:
CREATE TABLE `duals` (
`one` int(11) DEFAULT NULL,
`two` int(11) DEFAULT NULL
);
insert into duals values (1, 2);
select * from duals;
+------+------+
| one  | two  |
+------+------+
|    1 |    2 |
+------+------+
update duals set one = two, two = 0;
select * from duals;
+------+------+
| one  | two  |
+------+------+
|    2 |    0 |
+------+------+
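Since the behaviour really does differ between engines, here is a minimal sketch of the difference, using Python's bundled sqlite3 module as a stand-in: SQLite follows the SQL standard and evaluates every right-hand side against the old row values, while MySQL evaluates assignments left to right.

```python
import sqlite3

# SQLite (standard semantics): every RHS in an UPDATE sees the *old* row.
# MySQL: assignments are applied left to right, so later RHS see new values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE duals (one INTEGER, two INTEGER)")
conn.execute("INSERT INTO duals VALUES (1, 2)")

# Assignments that reference each other expose the difference:
conn.execute("UPDATE duals SET one = two, two = one")
row = conn.execute("SELECT one, two FROM duals").fetchone()

# SQLite swaps the columns -> (2, 1).
# MySQL would set one = 2 first, then two = one = 2 -> (2, 2).
print(row)  # (2, 1) under SQLite
```

Under MySQL the same UPDATE would leave both columns equal to 2, which is why splitting the statement in two is the safest cross-database option.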

Related

INSERT INTO WHERE LIKE Condition

I am working on a trigger which needs INSERT INTO with WHERE LIKE logic.
I have one table :
Table test;
idDocument = varchar(32), idUnit = varchar(3)
------------------------------
| idDocument   | idUnit      |
------------------------------
| AA/2021/KK   | NULL        |
| AA/2021/JJ   | NULL        |
| BB/2021/KK   | NULL        |
| CC/2021/JB   | NULL        |
------------------------------
How can I INSERT INTO using a WHERE ... LIKE condition? My query still errors.
My query:
INSERT INTO test ('idUnit') Values ('111') WHERE idDocument LIKE
'%KK%'
Normally to update existing rows with a new value you'd do something like this:
UPDATE test SET idUnit='111' WHERE idDocument LIKE '%KK%'
This will not insert data, it will only alter existing data.
Note:
INSERT is specifically for adding new rows of data
UPDATE is exclusively for updating existing rows with new data
You can't conditionally add new rows. You either add them or you don't. You can conditionally update or delete them.
Don't think about it in terms of inserting new data, always think in terms of rows and columns which is how SQL works.
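As a quick sanity check of the UPDATE ... WHERE ... LIKE approach, here is a sketch runnable with Python's bundled sqlite3 module (the SQL involved is portable):

```python
import sqlite3

# Recreate the question's table and apply the answer's UPDATE:
# only rows whose idDocument matches the pattern get the new idUnit,
# and no new rows are created.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (idDocument VARCHAR(32), idUnit VARCHAR(3))")
conn.executemany(
    "INSERT INTO test (idDocument) VALUES (?)",
    [("AA/2021/KK",), ("AA/2021/JJ",), ("BB/2021/KK",), ("CC/2021/JB",)],
)

conn.execute("UPDATE test SET idUnit = '111' WHERE idDocument LIKE '%KK%'")

rows = conn.execute(
    "SELECT idDocument, idUnit FROM test ORDER BY idDocument"
).fetchall()
print(rows)
```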

mysql On Duplicate value in field, insert new row with new value

I want to add a new record to a table if a duplicate value is entered in a unique field. I don't want to update the existing one, but want to add a new record by modifying the unique field's value.
Is this possible in MySQL?
EDIT:
Edited after a user comment on this post:
You need write table locking on both of those two processes.
A WRITE lock has the following features:
Only the session that holds the lock on a table can read and write data in that table.
Other sessions cannot read data from or write data to the table until the WRITE lock is released.
Also look at SQL UNIQUE Constraint
BEFORE EDIT:
Yes, it is possible, and it took me a while to figure it out. I built this on your input, comparing values such as test1, test2, etc., where test is always the same and has a trailing number, as you specified.
It can be done as a MySQL TRANSACTION in 4 steps.
Let's say you have a table testT where name is unique, to ensure we have no doubles.
| id | name |
| --- | ----- |
| 1 | test1 |
| 2 | test3 |
And you want to insert a new item with name test1, which we set as:
SET @newName = 'test1';
Then we need to check whether it already exists in the table:
SELECT @check := COUNT(*) FROM testT WHERE name = @newName;
We do a count here to get a true/false value and save it as @check so we can compare it later. This will result in 1, as test1 already exists in the table.
Next we do another select to get the highest number of test* and store it as @number. This query selects all the tests and does a SUBSTRING after the first 4 letters, giving us the numbers that follow (the 99999999999 length argument is just to be sure we don't miss any digits). In our case the result is only "3", because the last record in the table is "test3".
SELECT
  @number := SUBSTRING(name, 5, 99999999999)
FROM testT;
Now we can do an insert:
INSERT INTO testT(name)
VALUES
(
  IF(@check = 0, @newName, CONCAT(LEFT(@newName, 4), RIGHT(@number, 1) + 1))
);
This tries to insert our @newName into the table under an IF condition: if @check is 0 it will insert @newName as-is; if not, it takes the word test out of the string, appends the highest @number from earlier, and adds 1 to it.
So the result for @newName = 'test1' is below. If you change this to @newName = 'test3' the result would be the same: a new insert of test4.
**Schema (MySQL v5.7)**
SET @newName = 'test1';
---
**Query #1**
SELECT * FROM testT
ORDER BY id;
| id | name |
| --- | ----- |
| 1 | test1 |
| 2 | test3 |
| 3 | test4 |
---
And if you change it to ANY test* whose number does not already exist, it will be inserted normally. In the case below: @newName = 'test6'
SET @newName = 'test6';
**Query #1**
SELECT * FROM testT
ORDER BY id;
| id | name |
| --- | ----- |
| 1 | test1 |
| 2 | test3 |
| 3 | test6 |
This way an insert will always be made.
You can play with this here: View on DB Fiddle, just by changing SET @newName = 'test6'.
I am no expert and it took me a couple of hours to figure this out, as I wanted to know whether it was even possible.
I would appreciate it if any other user can suggest another way or improve my method.
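For comparison, the same "rename on duplicate" idea can be sketched in application code. This is my own sketch (the helper name next_name is not from the post), again using Python's bundled sqlite3 as a stand-in; it also sidesteps the RIGHT(@number, 1) single-digit limitation by taking the true maximum suffix.

```python
import re
import sqlite3

def next_name(conn, new_name):
    """Return new_name, or prefix + (highest existing suffix + 1) on duplicate."""
    prefix = re.match(r"[a-zA-Z]+", new_name).group(0)  # e.g. "test"
    exists = conn.execute(
        "SELECT COUNT(*) FROM testT WHERE name = ?", (new_name,)
    ).fetchone()[0]
    if not exists:
        return new_name
    # Find the highest numeric suffix among existing prefix* names.
    rows = conn.execute(
        "SELECT name FROM testT WHERE name LIKE ?", (prefix + "%",)
    ).fetchall()
    highest = max(int(n[len(prefix):]) for (n,) in rows)
    return f"{prefix}{highest + 1}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testT (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.executemany("INSERT INTO testT (name) VALUES (?)", [("test1",), ("test3",)])

# 'test1' already exists, so it becomes 'test4' (highest suffix 3, plus 1).
conn.execute("INSERT INTO testT (name) VALUES (?)", (next_name(conn, "test1"),))
names = [n for (n,) in conn.execute("SELECT name FROM testT ORDER BY id")]
print(names)  # ['test1', 'test3', 'test4']
```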

How to insert the default value in temporary tables in MySQL?

I want to create a temporary table from a SELECT statement in MySQL. It involves several JOINs, and it can produce NULL values that I want MySQL to take as zeroes. It sounds like an easy problem (simply default to zero), but MySQL (5.6.12) fails to apply the default value.
For example, take the following two tables:
mysql> select * from TEST1;
+------+------+
| a    | b    |
+------+------+
|    1 |    2 |
|    4 |   25 |
+------+------+
2 rows in set (0.00 sec)
mysql> select * from TEST2;
+------+------+
| b    | c    |
+------+------+
|    2 |  100 |
|    3 |  100 |
+------+------+
2 rows in set (0.00 sec)
A left join gives:
mysql> select TEST1.*,c from TEST1 left join TEST2 on TEST1.b=TEST2.b;
+------+------+------+
| a    | b    | c    |
+------+------+------+
|    1 |    2 |  100 |
|    4 |   25 | NULL |
+------+------+------+
2 rows in set (0.00 sec)
Now, if I want to save these values in a temporary table (changing NULL to zero), this is the code I would use:
mysql> create temporary table TEST_JOIN (a int, b int, c int default 0 not null)
select TEST1.*,c from TEST1 left join TEST2 on TEST1.b=TEST2.b;
ERROR 1048 (23000): Column 'c' cannot be null
What am I doing wrong? The worst part is that this code used to work before I did a system-wide upgrade (I don't remember which version of MySQL I had, but surely it was lower than my current 5.6). It used to produce the behavior I would expect: if it's NULL, use the default, not the frustrating error I'm getting now.
From the documentation of 5.6 (unchanged since 4.1):
Inserting NULL into a column that has been declared NOT NULL. For
multiple-row INSERT statements or INSERT INTO ... SELECT statements,
the column is set to the implicit default value for the column data
type. This is 0 for numeric types, the empty string ('') for string
types, and the “zero” value for date and time types. INSERT INTO ...
SELECT statements are handled the same way as multiple-row inserts
because the server does not examine the result set from the SELECT to
see whether it returns a single row. (For a single-row INSERT, no
warning occurs when NULL is inserted into a NOT NULL column. Instead,
the statement fails with an error.)
My current workaround is to store the NULL values in the temporary table and then replace them with zeroes, but that seems rather cumbersome with many columns (and terribly inefficient). Is there a better way to do it?
BTW, I cannot simply ignore some columns in the query (as suggested for another question), because it's a multi-row query.
IFNULL(`my_column`,0);
That would set NULLs to 0. Other values stay as is.
Just wrap your values/column names with IFNULL and it will convert them to whatever default value you put into the function. E.g. 0. Or "european swallow", or whatever you want.
Then you can keep strict mode on and still handle NULLs gracefully.
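A minimal check of the IFNULL approach, using Python's bundled sqlite3 as a stand-in for MySQL (SQLite also has IFNULL; the table names follow the question):

```python
import sqlite3

# The NULL produced by the LEFT JOIN is replaced at SELECT time,
# so the NOT NULL column never sees a NULL and no error occurs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TEST1 (a INT, b INT);
    CREATE TABLE TEST2 (b INT, c INT);
    INSERT INTO TEST1 VALUES (1, 2), (4, 25);
    INSERT INTO TEST2 VALUES (2, 100), (3, 100);

    CREATE TEMPORARY TABLE TEST_JOIN (a INT, b INT, c INT NOT NULL DEFAULT 0);
    INSERT INTO TEST_JOIN
    SELECT TEST1.a, TEST1.b, IFNULL(TEST2.c, 0)
    FROM TEST1 LEFT JOIN TEST2 ON TEST1.b = TEST2.b;
""")
rows = conn.execute("SELECT * FROM TEST_JOIN ORDER BY a").fetchall()
print(rows)  # [(1, 2, 100), (4, 25, 0)]
```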

Fastest way to diff datasets and update/insert lots of rows into large MySQL table?

The schema
I have a MySQL database with one large table (5 million rows say). This table has several fields for actual data, an optional comment field, and fields to record when the row was first added and when the data is deleted. To simplify to one "data" column, it looks a bit like this:
+----+------+---------+---------+---------+
| id | data | comment | created | deleted |
+----+------+---------+---------+---------+
|  1 | val1 | NULL    |       1 |       2 |
|  2 | val2 | nice    |       1 |    NULL |
|  3 | val3 | NULL    |       2 |    NULL |
|  4 | val4 | NULL    |       2 |       3 |
|  5 | val5 | NULL    |       3 |    NULL |
+----+------+---------+---------+---------+
This schema allows us to look at any past version of the data thanks to the created and deleted fields e.g.
SET @version = 1;
SELECT data, comment FROM MyTable
WHERE created <= @version AND
      (deleted IS NULL OR deleted > @version);
+------+---------+
| data | comment |
+------+---------+
| val1 | NULL    |
| val2 | nice    |
+------+---------+
The current version of the data can be fetched more simply:
SELECT data, comment FROM MyTable WHERE deleted IS NULL;
+------+---------+
| data | comment |
+------+---------+
| val2 | nice    |
| val3 | NULL    |
| val5 | NULL    |
+------+---------+
DDL:
CREATE TABLE `MyTable` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`data` varchar(32) NOT NULL,
`comment` varchar(32) DEFAULT NULL,
`created` int(11) NOT NULL,
`deleted` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `data` (`data`,`comment`)
) ENGINE=InnoDB;
Updating
Periodically a new set of data and comments arrives. This can be fairly large, half a million rows say. I need to update MyTable so that this new data set is stored in it. This means:
"Deleting" old rows. Note the "scare quotes" - we don't actually delete rows from MyTable. We have to set the deleted field to the new version N. This has to be done for all rows in MyTable that are in the previous version N-1, but are not in the new set.
Inserting new rows. All rows that are in the new set and are not in version N-1 in MyTable must be added as new rows with the created field set to the new version N, and deleted as NULL.
Some rows in the new set may match existing rows in MyTable at version N-1 in which case there is nothing to do.
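The three cases above can be sketched as plain set operations before any SQL is involved; the rows here are made up for illustration (a row's identity is its data plus comment, matching the schema's key):

```python
# Version N-1's live rows and the incoming data set, as (data, comment) pairs.
current = {("val2", "nice"), ("val3", None), ("val5", None)}
incoming = {("val3", None), ("val5", "now commented"), ("val6", None)}

to_delete = current - incoming   # set deleted = N on these rows
to_insert = incoming - current   # insert these with created = N
unchanged = current & incoming   # nothing to do

# Note that changing only a comment shows up as one "deletion" plus one
# insertion, since row identity covers both columns.
print(sorted(to_delete))  # [('val2', 'nice'), ('val5', None)]
print(sorted(to_insert))  # [('val5', 'now commented'), ('val6', None)]
print(sorted(unchanged))  # [('val3', None)]
```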
My current solution
Given that we have to "diff" two sets of data to work out the deletions, we can't just read over the new data and do insertions as appropriate. I can't think of a way to do the diff operation without dumping all the new data into a temporary table first. So my strategy goes like this:
-- temp table uses MyISAM for speed.
CREATE TEMPORARY TABLE tempUpdate (
`data` char(32) NOT NULL,
`comment` char(32) DEFAULT NULL,
PRIMARY KEY (`data`),
KEY (`data`, `comment`)
) ENGINE=MyISAM;
-- Bulk insert thousands of rows
INSERT INTO tempUpdate VALUES
('some new', NULL),
('other', 'comment'),
...
-- Start transaction for the update
BEGIN;
SET @newVersion = 5; -- Worked out out-of-band
-- Do the "deletions". The join selects all non-deleted rows in MyTable for
-- which the matching row in tempUpdate does not exist (tempUpdate.data is NULL)
UPDATE MyTable
LEFT JOIN tempUpdate
ON MyTable.data = tempUpdate.data AND
MyTable.comment <=> tempUpdate.comment
SET MyTable.deleted = @newVersion
WHERE tempUpdate.data IS NULL AND
MyTable.deleted IS NULL;
-- Delete all rows from the tempUpdate table that match rows in the current
-- version (deleted is null) to leave just new rows.
DELETE tempUpdate.*
FROM MyTable RIGHT JOIN tempUpdate
ON MyTable.data = tempUpdate.data AND
MyTable.comment <=> tempUpdate.comment
WHERE MyTable.id IS NOT NULL AND
MyTable.deleted IS NULL;
-- All rows left in tempUpdate are new so add them.
INSERT INTO MyTable (data, comment, created)
SELECT DISTINCT tempUpdate.data, tempUpdate.comment, @newVersion
FROM tempUpdate;
COMMIT;
DROP TEMPORARY TABLE IF EXISTS tempUpdate;
The question (at last)
I need to find the fastest way to do this update operation. I can't change the schema for MyTable, so any solution must work with that constraint. Can you think of a faster way to do the update operation, or suggest speed-ups to my existing method?
I have a Python script for testing the timings of different update strategies and checking their correctness over several versions. It's fairly long, but I can edit it into the question if people think it would be useful.
One possible speed-up is for the loading step: LOAD DATA INFILE.
As far as my experience with audit logging goes, you'll be better off with two tables, e.g.:
yourtable (id, col1, col2, version) -- pkey on id
yourtable_logs (id, col1, col2, version) -- pkey on (id, version)
Then add an update trigger on yourtable, which inserts the previous version in yourtable_logs.
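A sketch of that trigger idea, trimmed to one data column and written for SQLite via Python's sqlite3 module (MySQL's trigger syntax differs slightly, e.g. it requires FOR EACH ROW):

```python
import sqlite3

# Before each update, the trigger copies the previous version of the row
# into yourtable_logs, so yourtable always holds only the current version.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourtable (id INTEGER PRIMARY KEY, col1 TEXT, version INT);
    CREATE TABLE yourtable_logs (id INT, col1 TEXT, version INT,
                                 PRIMARY KEY (id, version));

    CREATE TRIGGER yourtable_audit
    BEFORE UPDATE ON yourtable
    BEGIN
        INSERT INTO yourtable_logs (id, col1, version)
        VALUES (OLD.id, OLD.col1, OLD.version);
    END;
""")
conn.execute("INSERT INTO yourtable VALUES (1, 'val1', 1)")
conn.execute("UPDATE yourtable SET col1 = 'val2', version = 2 WHERE id = 1")

current_rows = conn.execute("SELECT * FROM yourtable").fetchall()
log_rows = conn.execute("SELECT * FROM yourtable_logs").fetchall()
print(current_rows)  # [(1, 'val2', 2)]
print(log_rows)      # [(1, 'val1', 1)]
```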

Update one MySQL table with values from another

I'm trying to update one MySQL table based on information from another.
My original table looks like:
id | value
------------
1  | hello
2  | fortune
3  | my
4  | old
5  | friend
And the tobeupdated table looks like:
uniqueid | id | value
---------------------
1        |    | something
2        |    | anything
3        |    | old
4        |    | friend
5        |    | fortune
I want to update id in tobeupdated with the id from original based on value (strings stored in VARCHAR(32) field).
The updated table will hopefully look like:
uniqueid | id | value
---------------------
1        |    | something
2        |    | anything
3        | 4  | old
4        | 5  | friend
5        | 2  | fortune
I have a query that works, but it's very slow:
UPDATE tobeupdated, original
SET tobeupdated.id = original.id
WHERE tobeupdated.value = original.value
This maxes out my CPU and eventually leads to a timeout with only a fraction of the updates performed (there are several thousand values to match). I know matching by value will be slow, but this is the only data I have to match them together.
Is there a better way to update values like this? I could create a third table for the merged results, if that would be faster?
I tried MySQL - How can I update a table with values from another table?, but it didn't really help. Any ideas?
UPDATE tobeupdated
INNER JOIN original ON (tobeupdated.value = original.value)
SET tobeupdated.id = original.id
That should do it; it's really doing exactly what yours is. However, I prefer JOIN syntax for joins rather than multiple WHERE conditions; I think it's easier to read.
As for it running slowly: how large are the tables? You should have indexes on tobeupdated.value and original.value.
EDIT:
we can also simplify the query
UPDATE tobeupdated
INNER JOIN original USING (value)
SET tobeupdated.id = original.id
USING is shorthand for when both tables of a join have an identically named column, such as id, i.e. an equi-join - http://en.wikipedia.org/wiki/Join_(SQL)#Equi-join
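For anyone wanting to test the idea outside MySQL: the JOIN form above is MySQL-specific, but a correlated subquery expresses the same update portably. Here is a sketch using Python's bundled sqlite3 module, with the index on value that the answer recommends:

```python
import sqlite3

# Fill tobeupdated.id from original, matching on the value column.
# Where no match exists the subquery yields NULL, leaving id unset,
# which matches the question's expected result table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE original (id INTEGER, value VARCHAR(32));
    CREATE TABLE tobeupdated (uniqueid INTEGER, id INTEGER, value VARCHAR(32));
    CREATE INDEX idx_original_value ON original (value);

    INSERT INTO original VALUES
        (1, 'hello'), (2, 'fortune'), (3, 'my'), (4, 'old'), (5, 'friend');
    INSERT INTO tobeupdated (uniqueid, value) VALUES
        (1, 'something'), (2, 'anything'), (3, 'old'),
        (4, 'friend'), (5, 'fortune');

    UPDATE tobeupdated
    SET id = (SELECT original.id FROM original
              WHERE original.value = tobeupdated.value);
""")
rows = conn.execute(
    "SELECT uniqueid, id, value FROM tobeupdated ORDER BY uniqueid"
).fetchall()
print(rows)
```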
It depends on how those tables are used, but you might consider putting a trigger on the original table for insert and update. When an insert or update is done, update the second table based on just that one item from the original table. It will be quicker.