Insert consecutive IDs between other IDs - MySQL

I have the following table:
id    | x | y | z
------+---+---+---
1     |   |   | z
3     |   |   |
6     | x |   |
7     |   | y | z
...   |   |   |
10000 |   |   |
I need to add IDs in between the other IDs that are already there, without deleting the data inside. I can't seem to find any solution; I've tried all sorts of things but ended up making blank rows.
I'm kinda new to SQL altogether.

Using the information you provided in the comment...
Got a backup that I need to import into the current DB; it is made with UPDATE-only queries, so I need to create the rows first so that I can import them.
... and the fact that you are using MySQL, I think there is a simple solution for your problem.
Create a copy of your backup file (to have the original in case it doesn't work as expected), open it in a text editor and replace UPDATE <table_name> with INSERT INTO <table_name> (put the actual name of your table instead of <table_name>).
If some of the rows you want to import already exist in the table, you have the following options to solve the conflicts:
use INSERT IGNORE INTO <table_name> as the replacement string to ignore the rows from the backup (the rows already existing in the table remain unmodified); technically, IGNORE doesn't skip the rows you want to insert; it attempts to insert them and fails because they already exist, but it treats the failures as warnings (they are normally errors);
use REPLACE INTO <table_name> as the replacement string to replace the existing rows with the data from the backup; technically, REPLACE does DELETE followed by INSERT; it is not the best solution if the rows you want to insert are not complete.
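A minimal sketch of the transformation (the table and column names here are hypothetical): a line like
UPDATE mytable SET x = 'a', z = 'b' WHERE id = 6;
in the backup would become
INSERT IGNORE INTO mytable SET id = 6, x = 'a', z = 'b';
Note that the WHERE clause has to be folded into the SET list, since MySQL's INSERT ... SET form accepts no WHERE; a plain search/replace is only enough if the backup's UPDATE statements already carry the key in the SET list.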

Related

Insert only selected columns from MySQL dump file into database

I have a simple (I think) problem.
I have a dump of a MySQL database taken before a disaster.
I need to import from this dump, and replace, only three columns of a single table (over 5000 rows, which is why I'm wary of doing it manually).
What should I do to import them without destroying anything else in the working database?
I am thinking there may be an option to skip columns during import and replace (with an UPDATE command, I think) only the ones I need.
I will be thankful for help :(
------------ UPDATE ---------------
Okay, I used phpMyAdmin: first I ran a SELECT query to get only three columns from the whole table, then I dumped the result, so I now have an SQL file containing only those three columns.
Now, having this dump, can I (I do not know how to name it) edit or change something inside this typical MySQL dump file to make it possible to import these three columns, replacing all the existing values?
I mean - make the existing columns empty, then maybe use "INSERT INTO", but on the whole table?
It is just over 2600 rows and I cannot change them manually, so automation would be better.
As far as I know, this is not possible. You can try to use sed to extract only the table you want - but pulling out specifically 3 columns would be complicated, if not impossible.
Can I restore a single table from a full mysql mysqldump file?
The best way would be, as @Ali said, to just import it into a temp DB and then export the required data/columns to a new dump.
Restore DB to temp db then:
mysql> CREATE TABLE `tempTable` AS SELECT `columnYouWant` FROM `table`;
$> mysqldump yourDB tempTable > temp.sql
Since you updated the question:
You probably want to use REPLACE INTO with a dump generated with the --replace option - though this will delete each row and replace it, not just update the individual columns. If you want just the individual columns, the only way I can think of is with UPDATE. To use UPDATE, your options are:
Multi-table update
UPDATE mydb.mytable AS dest
JOIN tempdb.mytable AS origin USING (prim_key)
SET dest.col1 = origin.col1,
    dest.col2 = origin.col2,
    ...
Then drop the temp database.
Search/Replace Dump
Take your dump and append the INSERT ... ON DUPLICATE KEY UPDATE clause to the end of each insert line (assuming you exported/dumped individual insert commands).
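A sketch of that transformation (the column names are hypothetical, and it assumes one INSERT per row, e.g. a dump made with --skip-extended-insert):
INSERT INTO mytable (prim_key, col1, col2) VALUES (42, 'a', 'b')
ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2);
Existing rows keep their other columns and only get col1 and col2 updated; rows that don't exist yet are inserted as-is.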

Re-inserting a table record and updating an auto-increment primary index

I'm running MariaDB 5.5.56.
I'm looking to copy an entire row in a database, change one column, then insert the entire row back into the original database (I don't want to have to specify the individual fields because there's a lot of them). The problem I'm running into is how to deal with an auto-increment/primary key column.
example:
create temporary table t_ownership like ownership;
insert into t_ownership (select * from ownership where name='x' LIMIT 1);
update t_ownership set id='something else';
insert into ownership (select * from t_ownership);
I have a column "recno" that is an auto-increment, which will create a collision in the database when I try to re-insert the slightly changed record back into the original table.
Something like this seems to work but doesn't result in an insert:
insert into ownership (select * from t_ownership) ON DUPLICATE KEY UPDATE recno=LAST_INSERT_ID(ownership.recno);
The above statement executes without error but does not add a row to table ownership.
So I think I'm close but not quite there...
What would be the best way to do this? I'd like to avoid doing an insert where I manually specify field/values. I just need to regenerate a new A.I. recno column on the insert.
NULL values inserted into auto-increment fields just get the next auto-increment value, behaving equivalently to an INSERT that doesn't specify the field; so you should be able to update the source (the temp copy) to have NULL in that field.
However, one potential issue in scenarios like yours is that CREATE TEMPORARY TABLE ... LIKE can produce a table that will not allow you to set such fields to NULL; this would require you to either ALTER the temporary table or create it in a more explicit manner. Either way, it makes code/queries that do not specify columns even more reliant on knowing the columns.
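A sketch of that approach, assuming recno is an INT primary key (adjust the MODIFY to your actual column definition):
create temporary table t_ownership like ownership;
-- LIKE copies NOT NULL/AUTO_INCREMENT, so relax the column
-- (and drop the copied primary key, since a key column cannot hold NULL)
alter table t_ownership modify recno int null, drop primary key;
insert into t_ownership select * from ownership where name = 'x' limit 1;
update t_ownership set recno = NULL;
-- NULL in the auto-increment column makes ownership assign the next value
insert into ownership select * from t_ownership;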
Personally, I would take this route in the first place.
INSERT INTO theTable([list all but the auto-inc column])
SELECT [list all but the auto-inc column, with any replacements or modifications desired]
FROM ...[original query]...
It accomplishes the task in one query and makes the queries more self-documenting, all at the cost of a little typing (most of which a decent database browser, or query builder, will do for you).
The only real argument in favor of your current approach is that the table can be changed without necessarily breaking your queries; but that raises the question of whether it would be better for such table changes to break the queries, forcing them to be re-examined. If breakage is not an issue, fixing the queries is a minor revision; the alternative is queries that remain valid but can cause unexpected behavior by copying information they were never intended to.
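For instance, a sketch against the question's table (name and share are hypothetical non-key columns):
INSERT INTO ownership (name, share)
SELECT 'something else', share
FROM ownership
WHERE name = 'x'
LIMIT 1;
The recno column is simply omitted, so it receives the next auto-increment value.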

MySQL: Copy from 1 table to another not overwriting existing?

I have two tables:
tableOriginal
tableBackup
They have exactly the same structure.
I want a SQL statement I can run at any time of the day that will copy all the rows from tableOriginal to tableBackup WITHOUT overwriting items in tableBackup. Basically, this command must synchronize tableBackup with tableOriginal.
How do I do that?
INSERT INTO tableBackup SELECT * FROM tableOriginal;
As long as there is no issue with primary keys being updated or replaced by the new incoming data, this should not create a problem for you. However, as you already know, the backup table will have more data after your command, since the command does not delete the data the table already had.
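If colliding primary keys are a concern, a sketch using INSERT IGNORE (described in the first answer on this page) skips rows whose keys already exist in tableBackup instead of failing:
INSERT IGNORE INTO tableBackup
SELECT * FROM tableOriginal;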
Why don't you delete first all the data in tableBackup, then INSERT the data in tableOriginal to tableBackup
DELETE FROM tableBackup;
INSERT INTO tableBackup SELECT * FROM tableOriginal;
Why do we need to delete first?
Because if tableBackup has a unique key, the next insert will fail, since we would be adding data that is already there.
Hope you get what I'm trying to say.

MySQL: compare entire row

I have a data transfer tool that transfers information from one database to another. Every hour it will issue an UPDATE on all the rows in a table. I already have an INSERT trigger to dump the data from that one table into a number of other tables. I added an UPDATE trigger to edit the other tables, but the extra processing is making the entire UPDATE run slowly.
I'd like to wrap the body of the UPDATE trigger in an IF statement that compares the old and new rows, and skips processing if nothing has changed. Is it possible to compare an entire row against another, like this?
IF new = old THEN ...
Or is there no other option than to check each column individually?
If speed is the issue here, I would save either a timestamp of when the row was last edited or a checksum.
Using the latter approach: if you have a table with three columns A, B and C, I would modify the schema to also include a new column, cksum.
Whenever you insert something, you would also store in cksum a value generated with a fast hashing algorithm, for instance MD5. This checksum could be something like
cksum = MD5(CONCAT_WS('|', A, B, C))
(in MySQL, string concatenation is CONCAT/CONCAT_WS, not +). This way, whenever you have to compare, you only have to compare the cksum fields.
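A minimal sketch of the guard inside a BEFORE UPDATE trigger body under this scheme (table and column names hypothetical; note that CONCAT_WS silently skips NULL arguments):
IF MD5(CONCAT_WS('|', NEW.A, NEW.B, NEW.C)) <> OLD.cksum THEN
    SET NEW.cksum = MD5(CONCAT_WS('|', NEW.A, NEW.B, NEW.C));
    -- the row actually changed: do the extra processing here
END IF;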
Sadly, no, you're going to need to compare each column individually. Probably not the answer you were hoping for.
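If you do compare column by column, MySQL's NULL-safe equality operator <=> avoids surprises on nullable columns (a sketch for the trigger body, with hypothetical columns):
IF NOT (NEW.A <=> OLD.A AND NEW.B <=> OLD.B AND NEW.C <=> OLD.C) THEN
    -- something changed: do the processing
END IF;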

mysqldump table without dumping the primary key

I have one table spread across two servers running MySQL 4. I need to merge these into one server for our test environment.
These tables literally have millions of records each, and the reason they are on two servers is how huge they are. Any altering or paging of the tables would be too big a performance hit.
Because they are on a production environment, it is impossible for me to alter them in any way on their existing servers.
The issue is the primary key is a unique auto incrementing field, so there are intersections.
I've been trying to figure out how to use the mysqldump command to ignore certain fields, but the --disable-keys merely alters the table, instead of getting rid of the keys completely.
At this point it's looking like I'm going to need to modify the database structure to utilize a checksum or hash for the primary key as a combination of the two unique fields that actually should be unique... I really don't want to do this.
Help!
To solve this problem, I looked up this question, found @pumpkinthehead's answer, and realized that all we need to do is find+replace the primary key in each row with NULL so that MySQL will use the default auto_increment value instead.
(your complete mysqldump command) | sed -e "s/([0-9]*,/(NULL,/gi" > my_dump_with_no_primary_keys.sql
Original output:
INSERT INTO `core_config_data` VALUES
(2735,'default',0,'productupdates/configuration/sender_email_identity','general'),
(2736,'default',0,'productupdates/configuration/unsubscribe','1'),
Transformed Output:
INSERT INTO `core_config_data` VALUES
(NULL,'default',0,'productupdates/configuration/sender_email_identity','general'),
(NULL,'default',0,'productupdates/configuration/unsubscribe','1'),
Note: This is still a hack; for example, it will fail if your auto-increment column is not the first column, but it solves my problem 99% of the time.
If you don't care what the value of the auto_increment column will be, just load the first file, rename the table, then recreate the table and load the second file. Finally, use
INSERT newly_created_table_name (all, columns, except, the, auto_increment, column)
SELECT all, columns, except, the, auto_increment, column
FROM renamed_table_name
You can create a view of the table without the primary key column, then run mysqldump on that view.
So if your table "users" has the columns: id, name, email
> CREATE VIEW myView AS
SELECT name, email FROM users
Edit: ah I see, I'm not sure if there's any other way then.
Clone your table
Drop the column in the clone
Dump the clone without the structure (but with the -c option to get complete INSERTs)
Import where you want
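A sketch of those steps, with hypothetical names (yourDB, mytable, primary key id):
mysql> CREATE TABLE mytable_clone AS SELECT * FROM mytable;
mysql> ALTER TABLE mytable_clone DROP COLUMN id;
$> mysqldump --no-create-info -c yourDB mytable_clone > clone.sql
--no-create-info skips the structure and -c (--complete-insert) writes the column list into every INSERT; edit the table name inside clone.sql (or rename the clone first) before importing it where you want.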
This is a total pain. I get around this issue by running something like
sed -e "s/([0-9]*,/(/gi" export.sql > expor2.sql
on the dump to get rid of the primary keys and then
sed -e "s/VALUES/(col1,col2,...etc.) VALUES/gi" LinxImport2.sql > LinxImport3.sql
for all of the columns except for the primary key. Of course, you'll have to be careful that ([0-9]*, doesn't replace anything that you actually want.
Hope that helps someone.
SELECT null as fake_pk, `col_2`, `col_3`, `col_4` INTO OUTFILE 'your_file'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM your_table;
LOAD DATA INFILE 'your_file' INTO TABLE your_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
For added fanciness, you can set a BEFORE INSERT trigger on your receiving table that sets the new primary key for each row before the insertion occurs, thereby using regular dumps and still clearing your PK. Not tested, but feeling pretty confident about it.
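An untested sketch of that idea (echoing the answer's own caveat; table and column names hypothetical) - setting NEW.id to NULL in a BEFORE INSERT trigger should let auto_increment assign the next value when the row is actually written:
CREATE TRIGGER force_new_pk BEFORE INSERT ON your_table
FOR EACH ROW
    SET NEW.id = NULL;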
Use a dummy temporary primary key:
Use mysqldump normally with --opt -c. For example, your primary key is 'id'.
Edit the output files and add a column "dummy_id" to the structure of your table with the same type as 'id' (but not a primary key, of course). Then modify the INSERT statements and replace 'id' with 'dummy_id'. Once imported, drop the column 'dummy_id'.
jimyi was on the right track.
This is one of the reasons why auto-increment keys are a PITA. One solution is not to delete data but to add to it.
CREATE VIEW myView AS
SELECT id*10+$x AS id, name, email FROM users
(where $x is a single digit uniquely identifying the source database), either creating the view on the source database (which you hint may not be possible), using an extract routine like the one described by Autocracy, or loading the data into staging tables on the test box.
Alternatively, don't create the table on the test system - instead load the source data into separate tables, then create a view which fetches from them both:
CREATE VIEW users AS
(SELECT * FROM users_on_a) UNION (SELECT * FROM users_on_b)
C.
The solution I've been using is to do a regular SQL export of the data, then remove the primary key from the INSERT statements with a regex find & replace in an editor. Personally I use Sublime Text, but I'm sure TextMate, Notepad++, etc. can do the same.
Then I just run the query in whichever database the data should be inserted into, by pasting it into HeidiSQL's query window or phpMyAdmin. If there's a LOT of data, I save the insert query to an SQL file and use file import instead. Copy & paste with huge amounts of text often makes Chrome freeze.
This might sound like a lot of work, but I rarely spend more than a couple of minutes between the export and the import - probably a lot less than the accepted solution would take. I've used this method on several hundred thousand rows without issue, but I think it would get problematic when you reach the millions.
I like the temporary table route.
create temporary table my_table_copy
select * from my_table;
alter table my_table_copy drop id;
-- use your favorite dumping method for the temporary table
Like the others, this isn't a one-size-fits-all solution (especially given the OP's millions of rows), but even at 10^6 rows it takes only several seconds to run, and it works.