MySQL: compare entire row

I have a data transfer tool that transfers information from one database to another. Every hour it will issue an UPDATE on all the rows in a table. I already have an INSERT trigger to dump the data from that one table into a number of other tables. I added an UPDATE trigger to edit the other tables, but the extra processing is making the entire UPDATE process run slowly.
I'd like to wrap the body of the UPDATE trigger in an IF statement that compares the old and new rows, and skips processing if nothing has changed. Is it possible to compare an entire row against another, like this?
IF new = old THEN ...
Or is there no other option than to check each column individually?

If speed is the issue here, I would store either a timestamp of when the row was last edited or a checksum.
Using the latter approach: if you have a table with three columns A, B and C, I would extend the schema with a new column, cksum.
Whenever you insert something, you would store in cksum a value generated with a fast hashing algorithm, for instance MD5. This checksum could be something like
cksum = MD5(CONCAT(A, B, C));
This way, whenever you have to write a row, you only need to compare against the cksum field.
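A minimal sketch of how the UPDATE trigger could use this, assuming hypothetical columns A, B, C, an id column, a cksum column and a foo_log table (none of these names are from the original post):
DELIMITER //
CREATE TRIGGER foo_before_update BEFORE UPDATE ON foo
FOR EACH ROW
BEGIN
  -- recompute the checksum over the new values; CONCAT_WS avoids the
  -- whole expression becoming NULL when one column is NULL
  SET NEW.cksum = MD5(CONCAT_WS('|', NEW.A, NEW.B, NEW.C));
  IF NOT (NEW.cksum <=> OLD.cksum) THEN
    -- only do the expensive propagation when something actually changed
    INSERT INTO foo_log (foo_id, changed_at) VALUES (NEW.id, NOW());
  END IF;
END//
DELIMITER ;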

Sadly, no, you're going to need to compare each column individually. Probably not the answer you were hoping for.
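If you do end up comparing columns one by one, note that plain = returns NULL (not FALSE) when either side is NULL, which can make a real change look like a no-op; the NULL-safe operator <=> avoids that. A fragment for the trigger body, with hypothetical columns a, b, c and a hypothetical procedure standing in for the existing logic:
IF NOT (NEW.a <=> OLD.a AND NEW.b <=> OLD.b AND NEW.c <=> OLD.c) THEN
  CALL propagate_changes(NEW.id);  -- hypothetical: the existing trigger body
END IF;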

Related

re-inserting a table record and updating an auto increment primary index

I'm running MariaDB 5.5.56.
I'm looking to copy an entire row in a table, change one column, then insert the entire row back into the original table (I don't want to have to specify the individual fields because there are a lot of them). The problem I'm running into is how to deal with an auto-increment/primary key column.
example:
create temporary table t_ownership like ownership;
insert into t_ownership (select * from ownership where name='x' LIMIT 1);
update t_ownership set id='something else';
insert into ownership (select * from t_ownership);
I have an auto-increment column "recno" that will create a collision in the database when I try to re-insert the slightly changed record back into the original table.
Something like this seems to work but doesn't result in an insert:
insert into ownership (select * from t_ownership) ON DUPLICATE KEY UPDATE recno=LAST_INSERT_ID(ownership.recno);
The above statement executes without error but does not add a row to table ownership.
So I think I'm close but not quite there...
What would be the best way to do this? I'd like to avoid doing an insert where I manually specify field/values. I just need to regenerate a new A.I. recno column on the insert.
NULL values inserted into auto-increment fields simply get the next auto-increment value, behaving the same as an INSERT that doesn't specify the field; so you should be able to update the source (the temp copy) to have NULL in that field.
However, one potential issue in scenarios like yours is that CREATE TEMPORARY TABLE ... LIKE may produce a table that does not allow you to set such fields to NULL; this would require you to either ALTER the temporary table or create it in a more explicit manner. Either way, it makes code/queries that do not specify columns even more reliant on knowing the columns.
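Putting those two notes together, a sketch for the ownership example (assuming recno is an INT; adjust the type to match the real column):
ALTER TABLE t_ownership MODIFY recno INT NULL;    -- let the temp copy hold NULL
UPDATE t_ownership SET recno = NULL;              -- clear the copied key
INSERT INTO ownership SELECT * FROM t_ownership;  -- recno gets the next auto-increment value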
Personally, I would take this route in the first place.
INSERT INTO theTable([list all but the auto-inc column])
SELECT [list all but the auto-inc column, with any replacements or modifications desired]
FROM ...[original query]...
It accomplishes the task in one query, makes the queries more self-documenting, and costs only a little typing (most of which a decent database browser, or query builder, will do for you).
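For the ownership example, that might look like the following (the two column names are invented for illustration; in practice you would list all columns except recno):
INSERT INTO ownership (name, company)
SELECT name, 'new value'   -- the one modified column
FROM ownership
WHERE name = 'x'
LIMIT 1;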
The only real argument in favor of your current approach is that the table can be changed without necessarily breaking your queries; but that raises the question of whether it would be better for such table changes to break the queries, forcing them to be re-examined. If breakage is not an issue, it is a minor revision; the alternative is queries that remain valid but have the potential to cause unexpected behavior by copying information they were never intended to.

MySQL: Best way to update a large table

I have a table with a huge amount of data. The source of the data is an external API. Every few hours, I need to sync the database so that the changes are up to date from the external API. I am doing a full sync (the API doesn't allow delta sync).
While the sync happens, I want to make sure that the data in the database is still available for reads. So, I am following the steps below:
1) I have a column in the table which acts as a flag for whether or not the data is readable. Only rows with the flag set are read.
2) I insert all the data from the API into the table.
3) Once all the data is written, I delete all the old data in the table with the flag set.
4) After deletion, I update the table and set the flag on all the rows.
Table has around ~50 million rows and is expected to grow. There is a customerId field in the table. Sync usually happens based on customerId by passing it to the api.
My problem is, step 3 and 4 above are taking a lot of time. Queries are something like:
Step 3 --> delete from foo where customer_id=12345678 and flag=1
Step 4 --> update foo set flag=1 where customer_id=12345678
I have tried partitioning the table on customer_id, and it works well where a customer_id has few rows, but for some customer_ids the number of rows in a single partition goes up to ~5 million.
Around 90% of data doesn't change between two syncs. How can I make this fast?
I was thinking of issuing UPDATE queries instead of INSERTs and checking whether each one changed anything; if not, I could issue an INSERT for the same row. This way any updates would be taken care of along with the inserts. But I am not sure whether the operation will block read queries while the update is in progress.
For your setup (read only data, full sync), the fastest way to update the table is to not update at all, but to import the data into a different table and to rename it afterwards to make it the new table.
Create a table like your original table, e.g. use
create table foo_import like foo;
If you have e.g. triggers, add them too.
From now on, let the import API write its (full) sync to this new table.
After a sync is done, swap the two tables:
RENAME TABLE foo TO foo_tmp,
             foo_import TO foo,
             foo_tmp TO foo_import;
It will (literally) just take a second.
This command is atomic: it will wait for transactions that access these tables to finish, it will not present a situation where there is no table foo and it will completely fail (and not do anything) if one of the tables doesn't exist or foo_tmp already exists.
As a final step, empty your import table (that now contains your old data) to be ready for your next import:
truncate foo_import;
This will again just take a second.
The rest of your queries probably assume that flag=1. Until (if at all) you update the code to stop using the flag, you can set its default value to 1 to keep things compatible, e.g. use
alter table foo modify column flag tinyint default 1;
Since you don't have foreign keys, this doesn't have to bother you, but for others with a similar problem it might be useful to know that foreign keys get adjusted by the rename, so keys that referenced foo will reference foo_import afterwards. To make them point to the new foo again, they have to be dropped and recreated. Everything else (e.g. views, queries, procedures) resolves by the current name, so it will always access the current foo.
CREATE TABLE new LIKE real;
Load `new` by whatever means you have; take as long as needed.
RENAME TABLE real TO old, new TO real;
DROP TABLE old;
The RENAME is atomic and "instantaneous"; real is "always" available.
(I don't see the need for flag.)
OR...
Since you are actually updating a chunk of a table, consider these...
If the chunk is small...
Load the new data into a tmp table
DELETE the old rows
INSERT ... SELECT ... to move the new rows in. (Having the new data already in a table is probably the fastest way to achieve this.)
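A minimal sketch of that small-chunk path, assuming the fresh rows for one customer have already been loaded into a hypothetical table tmp_foo:
START TRANSACTION;
DELETE FROM foo WHERE customer_id = 12345678;
INSERT INTO foo SELECT * FROM tmp_foo;  -- tmp_foo holds only this customer's fresh rows
COMMIT;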
If the chunk is big, and you don't want to lock the table for "too long", there are some other tricks. But first, is there some form of unique row number for each row for the customer? (I'm thinking about batch-moving a bunch of rows at a time, but need more specifics before spelling it out.)

Augment and Prune a MySQL table

I need a little advice concerning a MySQL operation:
There is a database A which contains several tables. With a query I selected a set of entries out of this database to copy the results into another table in database B.
Now the table in database B contains the results of my query on database A.
For instance the query is:
SELECT names.name, ages.age FROM A.names names, A.ages ages WHERE ages.name = names.name;
And to copy these results into database B I would run:
INSERT INTO B.persons (SELECT names.name, ages.age FROM A.names names, A.ages ages WHERE ages.name = names.name);
Here's my question: When the data of database A has changed I want to run an "update" on the table of database B.
So, the easy and dirty approach would be: Truncate the table in database B, re-run the query on database A and copy the result back to database B.
But isn't there a smarter way so that only new result rows of that query will be copied and those entries in database B which are not in database A anymore get deleted?
In short: Is there a way to "augment" the table of database B with new entries and "prune" old entries out?
Thanks for your help
I would do two things:
1) Ensure the table in database B has a primary key: an integer or, at minimum, a unique combination of columns.
2) Use logical deletes instead of physical deletes, i.e. have a boolean deleted column.
Point 2 ensures you never have to delete and lose data; you just update the flag, and in your queries add where deleted = 0 or where deleted is null.
When combined with a primary key, it means everything can be handled by a single INSERT ... ON DUPLICATE KEY UPDATE, which will insert new rows and update existing ones, so it can perform your 'deletes' at the same time too.
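Sticking with the persons example, a sketch of that combined pass (assuming name is the primary key and a deleted column has been added; both are assumptions on my part):
UPDATE B.persons SET deleted = 1;  -- tentatively mark everything as gone
INSERT INTO B.persons (name, age, deleted)
SELECT names.name, ages.age, 0
FROM A.names names, A.ages ages
WHERE ages.name = names.name
ON DUPLICATE KEY UPDATE age = VALUES(age), deleted = 0;
-- rows still present in A are inserted or un-deleted; rows that
-- disappeared from A keep deleted = 1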
What you describe sounds like you want to replicate the table. There is no simple quick fix for this. You could of course write some application logic to do it, but it would not be very efficient, as it would have to compare each entry in each table and then delete or update accordingly.
One solution would be to set up a foreign-key relationship between A and B and cascade updates and deletes to B. But this would only partly solve the problem: it would drop rows in B if they were deleted in A, and it would update a key column in B if it were updated in A, but it would not update the other columns. Note also that this requires your table type to be InnoDB.
Another would be to run inserts on B with A's values but use
INSERT ... ON DUPLICATE KEY UPDATE ...
Again, this would work fine for updates but not for deletes.
You could try to setup actual MySQL replication but this is perhaps beyond the scope of your problem and is more involved.
Finally, you could set up the foreign key as described above and write a trigger so that whenever an update is applied to A, the corresponding row in B is also updated. This seems like a plausible solution for you, though not the cleanest, I would admit.
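A sketch of such a trigger, using the names/ages example (and assuming the account has UPDATE rights on database B):
CREATE TRIGGER A.ages_after_update AFTER UPDATE ON A.ages
FOR EACH ROW
  -- push the changed age into the copy in database B
  UPDATE B.persons SET age = NEW.age WHERE name = NEW.name;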
It would seem that a small batch script, run periodically in whichever environment you're on, to duplicate the table would be the best way to achieve what you are looking for.

No data if queries are sent between TRUNCATE and SELECT INTO. Using MySQL InnoDB

Using a MySQL DB, I am having trouble with a stored procedure and event timer that I created.
I made an empty table that gets populated with data from another via SELECT INTO.
Prior to populating, I TRUNCATE the current data. It's used to track only log entries that occur within 2 months from the current date.
This turns a 350k+ row log table into about 750 rows, which really speeds up reporting queries.
The problem is that if a client sends a query precisely between the TRUNCATE statement and the SELECT INTO statement (which has a high probability considering the EVENT is set to run every 1 minute), the query returns no rows...
I have looked into locking the table for reads while this PROCEDURE runs, but LOCK TABLES is not allowed in stored procedures.
Can anyone come up with a workaround that (preferably) doesn't require a remodel?
I really need to be pointed in the right direction here.
Thanks,
Max
I'd suggest an alternate approach instead of truncating the table, and then selecting into it...
You can instead select your new data set into a new table. Next, using a single RENAME command, rename the new table to the existing table and the existing table to some backup name.
RENAME TABLE existing_table TO backup_table, new_table TO existing_table;
This is a single, atomic operation... so it wouldn't be possible for the client to read from the data after it is emptied but before it is re-populated.
Alternately, you could change your TRUNCATE to a DELETE FROM (TRUNCATE causes an implicit commit, so it can't live inside a transaction, while DELETE can), and then wrap it in a transaction along with the SELECT INTO:
START TRANSACTION;
DELETE FROM YourTable;
INSERT INTO YourTable SELECT ...;
COMMIT;

Wrong order of SQL statements on InnoDB from Kettle

In Kettle, I use the following logic in a transformation, given some Strings X and Y as input:
[User Defined Java Expression] Generate ID
[Insert / Update] Update/Insert table set id = generatedId, name=X, company=Y where name = X; don't update the ID column
[Database Value Lookup]select id from table where name = X
The idea is to update existing entries in the table or create new ones, and then get the ID of the relevant row in the next step (which may be an existing one or the newly generated one).
This works fine when executed on MySQL + MyISAM but fails on MySQL + InnoDB, with all other parameters being identical. The last step fails when the row has just been inserted in the second step, but it works for rows that already existed in the database. It seems as if the connection tries to execute the SELECT of the last step before the actual insert has happened.
All parameters are set to default in the MySQL settings (MySQL 5.1 and 5.5 show the same behavior).
So my questions are: What are the relevant parameters in Kettle and/or MySQL? How can I guarantee that this works as expected? I cannot switch back to MyISAM.
Just use the block rows step between the insert step and the next step. Then the step before the block will complete before the next step starts.
Well, after evaluating different possibilities, three seem feasible:
1) Write my own step which performs the select/insert in a transaction
2) Serialize the whole transformation in its properties (makes everything REALLY slow)
3) Use Codek's idea and use the blocking step
I went with the third option for now, as everything else is not possible at the moment.
Make sure the transaction generated by the Update/Insert step is committed and its locks are released before the SELECT operation takes place. It looks like there is a locking problem.