I'm trying to replace some old deprecated code that handles how items are saved in my game. The current method deletes all items, then inserts all items, then inserts equipment (a child table of the items). It has a pretty big performance impact, and I was wondering if changing the INSERTs to INSERT ... ON DUPLICATE KEY UPDATE would have a noticeable impact on performance.
If it does, then I have a follow-up question. My plan is to load the items in with their original inventoryitemid and use that as the key to save with later.
The issue is that the following statement:
INSERT INTO inventoryitems (inventoryitemid, itemid) VALUES (?, ?) ON DUPLICATE KEY UPDATE ...
may miss some conditions. What I would like is for MySQL to INSERT with the default value (auto-increment) if the row doesn't exist, and otherwise UPDATE. At the moment new items are generated with an inventoryitemid of 0, since keys are only generated on INSERT.
tl;dr: I need a way to INSERT ON DUPLICATE KEY UPDATE without having the inventoryitemid beforehand (since new items are generated in-game with inventoryitemid of 0). At the moment I have to specify what the inventoryitemid is beforehand and I might not have access to that information.
First goal and issue
Try to insert a new item that doesn't exist in the database without having the inventoryitemid beforehand.
Item isn't inserted into the database with the next incremented db value.
Second goal (no issue)
Attempting to insert an item into the database with an existing inventoryitemid
Database updates the item in the database successfully (yay!)
Trying out solution: Inserting value with NULL to try to trigger the autoincrement
When you're inserting a new row, specify inventoryitemid = NULL. This will create a new row because it's not a duplicate key, and it will auto-increment the inventoryitemid.
After you insert a new item like this, you can use LAST_INSERT_ID() to get the inventoryitemid that was assigned to it. Then you can send that to the game code so it knows that the new item has this ID. It can then use this ID in future updates, instead of sending NULL.
When you're modifying an existing row, specify the value of its inventoryitemid. Then the ON DUPLICATE KEY UPDATE code will replace it instead of inserting a new row.
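A minimal sketch of both paths, using Python's stdlib sqlite3 for illustration (SQLite's `ON CONFLICT ... DO UPDATE` stands in for MySQL's `ON DUPLICATE KEY UPDATE`, and `cursor.lastrowid` for `LAST_INSERT_ID()`; the `itemid` values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inventoryitems (
        inventoryitemid INTEGER PRIMARY KEY AUTOINCREMENT,
        itemid INTEGER NOT NULL
    )
""")

# New item: pass NULL for the key so the database assigns the next id.
cur = conn.execute(
    "INSERT INTO inventoryitems (inventoryitemid, itemid) VALUES (NULL, ?)",
    (4001,),
)
new_id = cur.lastrowid  # the LAST_INSERT_ID() equivalent; send this back to the game

# Existing item: supply the known key; the upsert clause updates in place.
conn.execute(
    "INSERT INTO inventoryitems (inventoryitemid, itemid) VALUES (?, ?) "
    "ON CONFLICT(inventoryitemid) DO UPDATE SET itemid = excluded.itemid",
    (new_id, 4002),
)
```

The same pattern in MySQL would use `VALUES(itemid)` (or an alias in 8.0+) in the UPDATE clause instead of `excluded.itemid`.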
Related
When I insert data into a brand new table, it will assign a new id via AUTO_INCREMENT. So the first time I perform an insert I get an id of 1. However, if I delete the row and insert new data, the table acts as if there is still a preceding row (the new row will have an id of 2). This behavior concerns me because I feel like the data is still persisting somewhere. Any ideas of what it could be?
Your data is not persisting. MySQL maintains metadata about your table, including the next auto-increment value, and that counter is not reset when you delete rows. You can reset it with:
ALTER TABLE tablename AUTO_INCREMENT = 1
However, be aware that if you are resetting to a value below another valid value in the table, you're asking for trouble.
Alternatively, to delete all rows and reset the counter in one step, you can simply use:
truncate table tablename;
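The counter-survives-delete behaviour can be seen with Python's stdlib sqlite3, which keeps an equivalent counter for `AUTOINCREMENT` tables (table and column names here are arbitrary; the reset statement differs from MySQL's `ALTER TABLE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# AUTOINCREMENT makes SQLite keep a persistent counter (in sqlite_sequence),
# mimicking MySQL's behaviour of not reusing a deleted auto-increment value.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")

cur = conn.execute("INSERT INTO t (v) VALUES ('first')")
first_id = cur.lastrowid          # 1
conn.execute("DELETE FROM t")     # row is gone, but the counter is not

cur = conn.execute("INSERT INTO t (v) VALUES ('second')")
second_id = cur.lastrowid         # 2, not 1: the counter survived the delete

# SQLite's analogue of ALTER TABLE ... AUTO_INCREMENT = 1:
conn.execute("DELETE FROM sqlite_sequence WHERE name = 't'")
```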
This has been discussed before, however I cannot understand the answers I have found.
Essentially I have a table with three columns memo, user and keyid (the last one is primary and AUTO_INC). I insert a pair of values (memo and user). But if I try to insert that same pair again it should not happen.
From what I found out, the methods to do this all depend on a unique key (which I've got, in keyid) but what I don't understand is that you still need to do a second query just to get the keyid of the existing couple (or get nothing, in which case you go ahead with the insertion).
Is there any way to do all of this in a single query? Or am I understanding what I've read (using REPLACE or IGNORE) wrong?
You need to set a UNIQUE KEY on user + memo,
ALTER TABLE mytable
ADD CONSTRAINT unique_user_memo UNIQUE (memo,user)
and then use INSERT IGNORE or REPLACE, according to your needs, when inserting. Your current unique key is the primary key, which is all well and good, but you need a second one in order to disallow the insertion of duplicate data. If you do not create a new unique key on the two columns together, you'll need to run a SELECT query before every insert to check whether the pair already exists.
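A small demonstration of the composite unique key plus ignore-on-duplicate, sketched with Python's stdlib sqlite3 (SQLite's `INSERT OR IGNORE` corresponds to MySQL's `INSERT IGNORE`; the sample values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mytable (
        keyid INTEGER PRIMARY KEY AUTOINCREMENT,
        memo  TEXT NOT NULL,
        user  TEXT NOT NULL,
        UNIQUE (memo, user)     -- the second unique key on the pair
    )
""")

conn.execute("INSERT OR IGNORE INTO mytable (memo, user) VALUES (?, ?)",
             ("hello", "alice"))
# Inserting the same pair again is silently skipped thanks to the unique key.
conn.execute("INSERT OR IGNORE INTO mytable (memo, user) VALUES (?, ?)",
             ("hello", "alice"))

count = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]  # 1
```

No pre-insert SELECT is needed; the unique constraint does the duplicate check in the same statement.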
I have an existing schema with a non-auto-incrementing primary key. The key is used as a foreign key in a dozen other tables.
I have inherited a program with major performance problems. Currently, when a new row is added to this table, this is how a new unique id is created:
1) a query for all existing primary key values is retrieved
2) a random number is generated
3) if the number does not exist in the retrieved values, use it, otherwise goto (2)
The app is multi-threaded and multi-server, so simply grabbing the existing ids once at startup isn't an option. I do not have unique information from the initiating request to grab and convert into a pseudo-unique value (like a member id).
I understand it is theoretically possible to perform surgery on the internals to add autoincrementing to an existing primary key. I understand it would also be possible to systematically drop all foreign keys pointing to this table, then create-rename-insert a new version of the table, then add back foreign keys, but this table format is dictated by a third-party app and if I mess this up then Bad Things happen.
Is there a way to leverage sql/mysql to come up with unique row values?
The closest I have come up with is choosing a number randomly from a large space and hoping it is unique in the database, then retrying when the odd collision occurs.
Ideas?
If the table has a primary key that isn't being used for foreign key references, then drop that primary key. The goal is to make your column an auto-incremented primary key.
So, look for the maximum value and then the following should do what you want:
alter table t modify id int not null auto_increment primary key;
alter table t auto_increment = <maximum value> + 1;
I don't think it is necessary to explicitly set the auto_increment value, but I like to be sure.
I think you can use SELECT MAX(`strange-id-column`) + 1 (note the backticks: with single quotes you would be taking the MAX of a string literal). That value will be unique, and you can run that query inside the same transaction as the INSERT in order to prevent errors.
It seems really expensive to pull back a list of all primary key values (for large sets), then generate a pseudo-random value and verify it's unique by checking it against the list.
One of the big problems I see with this approach is that a pseudo-random number generator will generate the same sequence of values, when the sequence is started with the same seed value.
If that ever happens, then there will be collision after collision after collision until the sequence reaches a value that hasn't yet been used. And the next time it happens, you'd spin through that whole list again, to add one more value.
I don't understand why the value has to be random.
If there's not a requirement for pseudo-randomness, and an ascending value would be okay, here's what I would do if I didn't want to make any changes to the existing table:
I'd create another "id-generator" table that has an auto_increment column, and perform inserts into that table to generate id values.
Instead of running a query to pull back all existing id values from the existing table, I'd perform an INSERT into the "id-generator" table, then a SELECT LAST_INSERT_ID() to retrieve the id of the row I just inserted, and use that as the "generated" id value.
Basically, this emulates an Oracle SEQUENCE object. It wouldn't be necessary to keep all of the rows in the "id-generator" table, so I could periodically DELETE all rows that have an id value less than the maximum.
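The sequence-emulation idea can be sketched with Python's stdlib sqlite3 (the table name `id_generator` is hypothetical; in MySQL you'd use `LAST_INSERT_ID()` where this uses `cursor.lastrowid`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One auto-increment column is all the generator table needs.
conn.execute("CREATE TABLE id_generator (id INTEGER PRIMARY KEY AUTOINCREMENT)")

def next_id(conn):
    """Generate a unique ascending id by inserting into the generator table."""
    cur = conn.execute("INSERT INTO id_generator DEFAULT VALUES")
    generated = cur.lastrowid               # LAST_INSERT_ID() equivalent
    # Old rows are not needed; keep the table to (at most) one row.
    conn.execute("DELETE FROM id_generator WHERE id < ?", (generated,))
    return generated

ids = [next_id(conn) for _ in range(3)]     # [1, 2, 3]
```

Because the id comes from the database's own counter, two concurrent callers can never receive the same value.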
If there is a requirement for pseudo-randomness (shudder) I'd probably just attempt the INSERT as a way to find out if the key exists or not. If the insert fails due to a duplicate key, I'd try again with a different id value.
The repeated sequence from a pseudo-random generator scares me. If I got several collisions in a row, are those from a previously used sequence, or values from a different sequence? I have no way of knowing. And if I abandon the sequence and restart with a new seed, and that seed has been used before, I'm off chasing another series of previously generated values.
For low levels of concurrency (average concurrent ongoing inserts < 1), you can use optimistic locking to achieve a unique id without auto-increment:
Set up a one-row table for this purpose, e.g.:
create table last_id (last_id bigint not null default 0);
To get your next id, retrieve this value in your app code, apply your newId function, and then attempt to update the value, eg:
select last_id from last_id; // In DB
newId = lastId + 1 // In app code
update last_id set last_id=$newId where last_id=$lastId // In DB
Check the number of rows that were updated. If it was 0, another server beat you to it and you should return to step 1.
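The three steps above can be sketched as a retry loop, using Python's stdlib sqlite3 for illustration (the function name `claim_next_id` is mine; `cursor.rowcount` plays the role of checking how many rows the UPDATE affected):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE last_id (last_id INTEGER NOT NULL DEFAULT 0)")
conn.execute("INSERT INTO last_id (last_id) VALUES (0)")

def claim_next_id(conn):
    """Optimistically claim the next id; retry if another writer won the race."""
    while True:
        (last,) = conn.execute("SELECT last_id FROM last_id").fetchone()
        new = last + 1                       # the "newId" function from app code
        cur = conn.execute(
            "UPDATE last_id SET last_id = ? WHERE last_id = ?", (new, last)
        )
        if cur.rowcount == 1:                # nobody beat us to it
            return new
        # rowcount == 0: another server updated first; loop back to step 1

first = claim_next_id(conn)   # 1
second = claim_next_id(conn)  # 2
```

The `WHERE last_id = ?` clause is what makes the locking optimistic: the update only succeeds if the value is still what we read.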
Let me explain my problem. I have a MySQL table in which I store data feeds from, say, 5 different sites, and I update the feeds once daily. I have a primary key FeedId which auto-increments. When I update the feeds from a particular site, I delete that site's previous data from the table and insert the new data. The new data fills the rows previously occupied by the deleted data, and if there are more feeds this time, the rest are appended at the end of the table. But the FeedId is incremented for all the new rows.
What I want is for the feeds stored in the old locations to retain their previous ids, and only the extra ones appended at the end to get new incremented ids. Please help, as I can't figure out how to do that.
A better solution would be to set a unique key on the feed (aside from the auto-incremented key), then use INSERT ... ON DUPLICATE KEY UPDATE:
INSERT INTO feeds (name, url, etc, etc2, `update_count`)
VALUES ('name', 'url', 'etc', 'etc2', 1)
ON DUPLICATE KEY UPDATE
`etc` = VALUES(`etc`),
`etc2` = VALUES(`etc2`),
`update_count` = `update_count` + 1;
The benefit is that you're not incrementing the ids, and you're still doing it in one atomic query. Plus, you're only updating / changing what you need to change. (Note that I included the update_count column to show how to update a field)...
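Here is a runnable sketch of that pattern using Python's stdlib sqlite3 (SQLite's `ON CONFLICT(name) DO UPDATE` is the analogue of the MySQL statement above; the `name`/`url` columns and sample values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feeds (
        FeedId       INTEGER PRIMARY KEY AUTOINCREMENT,
        name         TEXT NOT NULL UNIQUE,   -- the extra unique key on the feed
        url          TEXT,
        update_count INTEGER NOT NULL DEFAULT 0
    )
""")

def upsert_feed(conn, name, url):
    """Insert a feed, or update it in place if the unique name already exists."""
    conn.execute(
        "INSERT INTO feeds (name, url, update_count) VALUES (?, ?, 1) "
        "ON CONFLICT(name) DO UPDATE SET "
        "url = excluded.url, update_count = update_count + 1",
        (name, url),
    )

upsert_feed(conn, "site-a", "http://a.example/old")
upsert_feed(conn, "site-a", "http://a.example/new")  # same FeedId, count bumped

row = conn.execute(
    "SELECT FeedId, url, update_count FROM feeds WHERE name = 'site-a'"
).fetchone()
# row == (1, 'http://a.example/new', 2)
```

The key point for the question: the daily refresh updates rows in place, so existing feeds keep their FeedId and only genuinely new feeds consume new ids.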
Try REPLACE INTO to merge the data.
More information: http://dev.mysql.com/doc/refman/5.0/en/replace.html
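One caveat worth knowing: REPLACE works by deleting the conflicting row and inserting a new one, so an auto-increment id changes on every "merge". A quick demonstration with Python's stdlib sqlite3 (which supports the same `REPLACE INTO` syntax; table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feeds (
        FeedId INTEGER PRIMARY KEY AUTOINCREMENT,
        name   TEXT NOT NULL UNIQUE,
        url    TEXT
    )
""")

conn.execute("REPLACE INTO feeds (name, url) VALUES ('site-a', 'old')")
# The second REPLACE hits the unique 'name', deletes the old row, and
# re-inserts, which consumes a fresh auto-increment value.
conn.execute("REPLACE INTO feeds (name, url) VALUES ('site-a', 'new')")

row = conn.execute(
    "SELECT FeedId, url FROM feeds WHERE name = 'site-a'"
).fetchone()
# row == (2, 'new')  -- note the FeedId changed from 1 to 2
```

If keeping stable ids matters (as in the question above), INSERT ... ON DUPLICATE KEY UPDATE is the better fit than REPLACE.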
I have a many-to-many relation in 3 tables: ProgramUserGroup and Feature are the two main tables, and the link between them is LinkFeatureWithProgramUserGroup, where I have Foreign key relations to the two parent tables.
I have a dataset with inserts: I want to add a new row to ProgramUserGroup, and a related (existing) Feature to the LinkFeatureWithProgramUserGroup table.
When inserting new rows, I'm setting the default id to -1:
<diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
<DataSetUserGroup xmlns="http://tempuri.org/DataSetUserGroup.xsd">
<ProgramUserGroup diffgr:id="ProgramUserGroup1" msdata:rowOrder="0" diffgr:hasChanges="inserted">
<id>-1</id>
<Name>99999999999</Name>
<Active>false</Active>
</ProgramUserGroup>
<LinkFeatureWithProgramUserGroup diffgr:id="LinkFeatureWithProgramUserGroup1" msdata:rowOrder="0" diffgr:hasChanges="inserted">
<id>-1</id>
<Feature_id>7</Feature_id>
<ProgramUserGroup_id>-1</ProgramUserGroup_id>
</LinkFeatureWithProgramUserGroup>
</DataSetUserGroup>
</diffgr:diffgram>
While updating the tables, I get an error:
"The INSERT statement conflicted with the FOREIGN KEY constraint "FK-LinkFeatu-Progr-7DCDAAA2". The conflict occurred in database "x", table "dbo.ProgramUserGroup", column 'id'."
The code for the update is the following:
DataSetUserGroupTableAdapters.LinkFeatureWithProgramUserGroupTableAdapter lfa = new LinkFeatureWithProgramUserGroupTableAdapter();
DataSetUserGroupTableAdapters.ProgramUserGroupTableAdapter pug = new ProgramUserGroupTableAdapter();
pug.Update(dsUserGroup.ProgramUserGroup);
lfa.Update(dsUserGroup.LinkFeatureWithProgramUserGroup);
If I check the ProgramUserGroup table's new row's ID, it has been updated from -1 to the identity value (like 1099), so that part is okay: the new row is inserted.
But in the LinkFeatureWithProgramUserGroup table, the related ProgramUserGroup_id value is still -1; it was not updated at all.
How could I force the update of the link table's keys as well?
I've tried
pug.Update(dsUserGroup.ProgramUserGroup);
dsUserGroup.Merge(dsUserGroup.ProgramUserGroup);
lfa.Update(dsUserGroup.LinkFeatureWithProgramUserGroup);
but that didn't solve the problem :(
Thanks,
b.
Yes, there's a work-around for this. You need to tell your parent table's table-adapter to refresh the data-table after the update operation. This is how you can do that:
Open the properties of ProgramUserGroupTableAdapter -> Default Select Query -> Advanced options, and check the option "Refresh the data table". Save the adapter. Now when you call Update on the table-adapter, the data-table will be refreshed after the update operation and will reflect the latest values from the database table. If the primary key or any other column is set to auto-increment, the data-table will contain those latest values after the update.
Now you can Call the update as pug.Update(dsUserGroup.ProgramUserGroup);
Read the latest values from the ProgramUserGroup columns and assign the respective values into the child table before updating it. This will work exactly the way you want.
(Screenshot of the TableAdapter's advanced options: http://ruchitsurati.net/files/tds1.png)