I have a database which stores user-published articles. The owner can modify their article at any time.
I do want to add a backup feature, in case the user accidentally deletes the content of their article or something else goes wrong when they update it.
For this reason, I have a content column which stores the content of the article, as well as a backup_content column which is intended to keep a copy of the content from before the last update.
The user has a "Restore" button which is meant to replace the new content with the backup. Very much like an "Undo" feature.
My prepared statement to insert/update an article is as follows:
REPLACE INTO custom_pages (name, banner_url, full_url, backup_content, content, updated_on) VALUES (?, ?, ?, content, ?, CURRENT_TIMESTAMP);
Here I tried to put the previous value of content into backup_content and then set content to the new value. However, doing so sets backup_content to NULL.
I've seen a few answers on SO about how to achieve such a copy, but those answers seem to apply strictly to UPDATE and INSERT and don't seem to work in REPLACE queries. I'd prefer one statement over two, and that's where I'm having trouble.
Is there any way to achieve such copy in a single Replace statement?
I would also place my support behind Gordon Linoff's suggestion that you create a continuous update history via triggers and one-to-many related tables.
However, if a significant architectural change is not practical for you right now, you can achieve what you are attempting with INSERT INTO...ON DUPLICATE KEY UPDATE instead of the older REPLACE INTO feature.
Using REPLACE INTO...SELECT FROM may result in more than one access against the table's index, but INSERT INTO...ON DUPLICATE KEY UPDATE should hit it only once.
Since name has a unique index, the presumption is that you never issue a plain UPDATE; instead you always execute an INSERT ... ON DUPLICATE KEY UPDATE, which copies the old value into backup_content whenever the row already exists.
-- Inserting a row which does not yet exist..
INSERT INTO custom_pages (name, banner_url, full_url, content)
VALUES ('uniquename', 'http://example.com', 'http://example.com', 'this is the original content');
-- In practice, you use this format:
-- uniquename already exists, so update necessary fields
INSERT INTO custom_pages (name, banner_url, full_url, content)
VALUES ('uniquename', 'http://example.com', 'http://example.com', 'this is new content')
ON DUPLICATE KEY UPDATE
-- Update from the VALUES() list
banner_url = VALUES(banner_url),
-- Set backup_content to old content BEFORE updating
-- content from VALUES()
backup_content = content,
content = VALUES(content),
updated_on = NOW();
Using this method, you would never use the first INSERT statement without its ON DUPLICATE KEY clause. Instead, always use the second one; rows that don't exist by unique key will be created, those that already exist will be updated.
Here it is in action: http://sqlfiddle.com/#!9/2f687/1
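For the "Restore" button itself, the undo can also stay a single statement. A minimal sketch, assuming pages are looked up by their unique name:

UPDATE custom_pages
SET content = backup_content,
    updated_on = CURRENT_TIMESTAMP
WHERE name = ?;

backup_content is left untouched here, so clicking Restore twice is harmless.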
I think you should re-think your data structure. If you want to preserve history, then use a separate table, not a column. Something like custom_pages_history. You would remove the backup_content column from your table and instead rely on the history table.
Then, define a trigger on inserts and updates to insert a row into the history table.
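A minimal sketch of that idea, assuming a history table and column names along these lines (adjust to your real schema):

CREATE TABLE custom_pages_history (
    history_id  INT AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(255) NOT NULL,
    content     TEXT,
    changed_on  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Archive the previous content every time a page is updated
CREATE TRIGGER custom_pages_archive
BEFORE UPDATE ON custom_pages
FOR EACH ROW
    INSERT INTO custom_pages_history (name, content)
    VALUES (OLD.name, OLD.content);

A similar AFTER INSERT trigger (using NEW.name and NEW.content) would also capture the very first version of each article.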
The advantages of this approach are:
You have complete history of all the articles.
The changes will be timestamped.
A user can go back to any earlier version of the article, if desired.
This doesn't directly answer your question about REPLACE. Instead of a REPLACE, you would do an UPDATE from the history table.
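For example, restoring the most recently archived version might look like this (again assuming the custom_pages_history layout sketched above; it is two statements because a trigger cannot write to a table that the triggering statement is also using):

-- Look up the latest archived content first...
SELECT content INTO @restored
FROM custom_pages_history
WHERE name = 'uniquename'
ORDER BY changed_on DESC, history_id DESC
LIMIT 1;

-- ...then write it back; the update trigger archives the replaced content too
UPDATE custom_pages
SET content = @restored,
    updated_on = CURRENT_TIMESTAMP
WHERE name = 'uniquename';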
Related
I have an old MySQL database where I need to add new columns to some tables (to support new parts of the front-end). But some of the old parts use SQL commands that depend on column count and order instead of column names, e.g.:
INSERT INTO `data` VALUES (null /*auto-id*/, "name", "description", ...)
When I add new columns into this table, I get the error:
1136 - Column count doesn't match value count at row 1
Right now I know about the INSERT which needs to be changed to:
INSERT INTO `data` (`name`, `desc`, ...) VALUES ("name", "description", ...)
The question is: are there any other statements with similar syntax that rely on the order or count of the columns instead of their names? I need to update all the old SQL commands before updating the DB, and a trial & error approach would take far too long.
SELECTs are not a problem, because the front-end uses associative mapping and correctly uses column names everywhere, so new columns will simply be ignored. I'm also sure there are no commands that modify the DB structure (e.g. ALTER TABLE).
You ruled out data structure modifying queries, so this leaves us with insert, update, delete, and select.
Insert you are already aware of.
Update requires each updated field to be specified, so mostly that's ok. However, subqueries may be used in the where clause, and mysql allows multi-table updates, so my points around select do apply.
Delete applies to a whole record, so there is nothing that an extra field would influence. However, subqueries may be used in the where clause, so my points around select do apply.
You tried to rule out select, but you should not. It is not only the final resultset that can be influenced by a new field:
A subquery may use SELECT *, and an extra field may then cause an error in the outer query. For example, the newly introduced field may have the same name as a field in the outer query, leading to an ambiguous field name error.
If SELECT * is used in a UNION, the column counts may no longer match after adding a new field (see the sketch after this list).
Natural joins may also be affected by the introduction of a new field.
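For example, a minimal sketch of the UNION case, using a hypothetical data_archive table that still has the old column layout:

-- Both tables originally had identical columns
SELECT * FROM `data`
UNION ALL
SELECT * FROM `data_archive`;

-- After ALTER TABLE `data` ADD COLUMN `new_col` VARCHAR(64), the two
-- SELECT * lists no longer return the same number of columns and the
-- UNION fails with a "different number of columns" error.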
I have created a table with a conditional row-insertion function, so at times new rows are not inserted. The problem is that even when the row insertion fails, the AUTO_INCREMENT column still increments, and thus the values stored in it end up looking something like this:
Sl No.
1
2
4
7
8
9
It looks really messy. Please help; thanks in advance.
As sspencer7593 has mentioned:
"The behavior of AUTO_INCREMENT is fairly well defined. And it's primarily designed to generate unique values. It's not designed to prevent gaps."
However, as MySQL allows you to assign a custom value to an AUTO_INCREMENT column, a workaround for your scenario would be to assign the value MAX(Sl_No)+1 while inserting the row. That way you ensure the next incremented value is used only when a row is actually inserted.
Typical syntax would look like this:
INSERT INTO TABLENAME (ID, SOMECOLUMN) VALUES ((SELECT MAX(ID)+1 FROM TABLENAME), someValue);
Note: this would prevent the gaps you are seeing from failed insertions and from deleting the last row. If you delete a row in between, you will still see gaps, but I think that should be OK for you.
Can you please add your PHP code and table structure? I think the insert query is being executed even when the condition fails.
This is expected behavior with INSERT ... SELECT, or when an INSERT statement fails or is rolled back. The innodb_autoinc_lock_mode setting can also influence the behavior. We will also see this when a value is supplied for the AUTO_INCREMENT column, or when rows are deleted.
The behavior of AUTO_INCREMENT is fairly well defined. And it's primarily designed to generate unique values. It's not designed to prevent gaps.
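A quick illustration of that behavior with InnoDB's default settings (throwaway names, purely for demonstration):

CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, val INT UNIQUE);
INSERT INTO demo (val) VALUES (10);  -- gets id 1
INSERT INTO demo (val) VALUES (10);  -- fails on the UNIQUE key, but still consumes id 2
INSERT INTO demo (val) VALUES (20);  -- gets id 3, leaving a gap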
Got an answer for this question thanks to juergen d.
This should be the query:
String queryString = "INSERT INTO hcl_candidates(SL_No,candidate,phone,pan,mailid) SELECT MAX(SL_No)+1, ?, ?, ?, ? FROM hcl_candidates";
I've been using MySQL at work, but I'm still a bit of a noob at more advanced queries, and often find myself writing lengthy queries that I feel (or hope) could be significantly shortened.
I recently ran into a situation where I need to create X number of new entries in a table for each entry in another table. I also need to copy a value from each row in the second table into each row I'm inserting into the first.
To be clear, here's pseudocode for what I'm attempting to do:
For each row in APPS
create new row in TOKENS
set (CURRENT)TOKENS.APP_ID = (CURRENT)APPS.APP_ID
Any help is appreciated, even if it boils down to "this isn't possible."
As a note, the tables only share this one field, and I'll be setting other fields statically or via other methods, so simply copying isn't really an option.
You don't need a loop, you can use a single INSERT command to insert all rows at once:
INSERT INTO TOKENS (APP_ID)
SELECT APP_ID
FROM APPS;
If you want to set other values for that row, simply modify the INSERT list and SELECT clause. For example:
INSERT INTO TOKENS (APP_ID, static_value, calculated_value)
SELECT APP_ID, 'something', CONCAT('calculated-', APP_ID)
FROM APPS;
I'd like to build as an experiment a sort of dictionary where any user can suggest new words.
In order to avoid duplicates, I used to run a SELECT query that searches for the word, and if the result set is empty I then do the INSERT INTO.
I feel this method is only worthwhile if you need to warn the user, but in my case I want something faster, automated, and silent.
The very first entry of a word (the very first time a user suggests it) becomes the ID of that word's page, so I don't want to use REPLACE.
I was wondering whether using INSERT IGNORE can be the solution?
INSERT IGNORE will do the trick for you here. You just need to make sure you have a UNIQUE index defined on the column you don't want duplicated.
Another option is INSERT INTO ... ON DUPLICATE KEY UPDATE, which won't insert the value again but will let you update other columns in that row, a counter or timestamp for example.
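A minimal sketch of both options, assuming a words table with the suggested word in a word column and an assumed times_suggested counter:

-- The UNIQUE index is what makes duplicates detectable
ALTER TABLE words ADD UNIQUE INDEX uniq_word (word);

-- Option 1: silently skip the insert when the word already exists
INSERT IGNORE INTO words (word) VALUES ('example');

-- Option 2: keep the original row (and its ID) but update another column
INSERT INTO words (word, times_suggested) VALUES ('example', 1)
ON DUPLICATE KEY UPDATE times_suggested = times_suggested + 1;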
"INSERT INTO" to ignore later duplicates, or "INSERT INTO ... ON DUPLICATE KEY UPDATE" to take new fields from the later duplicates: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
Alright here is the problem... we have posts that are missing a custom field. We just recently received the values for that field.
Is there a way, via phpMyAdmin, to insert into the post_meta table a custom field named "translation" together with its value for each post already published?
I am trying to avoid having to go back to each post and adding this custom field one by one.
Thanks!
Yes, it's doable ... but tricky. You'll have to run an INSERT script against the wp_postmeta table. Remember, the three columns you need to supply are post_id, meta_key, and meta_value.
So if you know the ID of the post and the meta value you want to set, you'd run the following query:
INSERT INTO `wp_postmeta` (post_id, meta_key, meta_value) VALUES (*ID*, 'translation', *VALUE*);
Where *ID* is the id of the post you're attaching the value for and *VALUE* is the meta value of the "translation" field.
Like I said, doable ... but you'll need a separate INSERT query for each post. You could dump all of these into a single text file and run the entire set in one pass if you want, otherwise it might take as much time as it would to just add the key through the WordPress UI in the first place.
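If it helps, the per-post statements can also be generated from the database itself. Here is a hedged sketch that seeds a placeholder value for every published post that doesn't have the key yet (the 'TODO' value is only an assumption; the real per-post values would still need to be filled in afterwards):

INSERT INTO wp_postmeta (post_id, meta_key, meta_value)
SELECT p.ID, 'translation', 'TODO'
FROM wp_posts p
LEFT JOIN wp_postmeta m
       ON m.post_id = p.ID AND m.meta_key = 'translation'
WHERE p.post_status = 'publish'
  AND m.meta_id IS NULL;  -- skip posts that already have the key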
You can easily add the column with phpMyAdmin, but you're probably going to have to read the data with some sort of code and iterate through it, updating the values. phpMyAdmin isn't really the best tool for importing data that's not already in .sql format.