I am trying to insert a large dump of a custom CMS's news section into WordPress. Unfortunately, the columns don't match. Some of them do, sure, like title, date, or content. But WordPress requires a lot of columns that this dump doesn't have. Is there a way to either omit those columns on insert or fill them with dummy (preferably blank) data? Search and replace (even with regular expressions) won't do here, since it is a really huge file and even a simple 'find' takes a lot of time.
You stated that you were given a "table." If it included a schema, create the table and insert the data. Otherwise, create the table based on the data columns and insert your data. This is considered your staging table. You can now write a SELECT statement to select the data from your staging table that will be inserted into your destination table. You will finally prepend an INSERT statement to insert your selected data. It should look something like this:
INSERT INTO destinationTable (fruits, animals, numbers, plants)
SELECT fruits, animals, numbers, '' FROM stagingTable
If you did not have plants in your staging table, you would simply SELECT '' or SELECT NULL for that column. You can then simply drop your staging table.
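A rough end-to-end sketch of that workflow, using the made-up column names from the example above:
-- 1. Staging table with only the columns the dump actually has
CREATE TABLE stagingTable (
    fruits  VARCHAR(100),
    animals VARCHAR(100),
    numbers INT
);
-- 2. Load the dump into stagingTable (its own INSERT statements, LOAD DATA, etc.)
-- 3. Copy into the destination, filling the missing column with blanks
INSERT INTO destinationTable (fruits, animals, numbers, plants)
SELECT fruits, animals, numbers, '' FROM stagingTable;
-- 4. Clean up
DROP TABLE stagingTable;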
Assuming the answer to my clarification is yes, you can insert multiple rows by delimiting them with a comma.
INSERT INTO table (col1, col2)
VALUES (row1val1, row1val2),
(row2val1, row2val2);
If I have a table that has these rows:
animal (primary)
-------
man
dog
cow
and I want to delete all the rows and insert my new rows (that may contain some of the same data), such as:
animal (primary)
-------
dog
chicken
wolf
I could simply do something like:
delete from animal;
and then insert the new rows.
But when I do that, for a split second, 'dog' won't be accessible through the SELECT statement.
I could simply insert ignore the new data and then delete the rest, one by one, but that doesn't feel like the right solution when I have a lot of rows.
Is there a way to insert the new data and then have MySQL automatically delete the rest afterward?
I have a program that selects data from this table every 5 minutes (and the code I'm writing now will be updating this table once every 30 minutes), so I would like to be as accurate as possible at all times, and I would rather have too many rows for a split second than too few rows for the same time.
Note: I know that this may seem unnecessary, but I just feel like if I leave too many of those unlikely possibilities in different places, there will be times when things go wrong.
You may want to use TRUNCATE instead of DELETE here. TRUNCATE is faster than DELETE and resets the table back to its empty state (meaning AUTO_INCREMENT counters are reset to their starting values as well).
Not sure why you're having problems with selecting a value that was deleted and re-added, maybe I'm missing some context. But if you're wiping the table clean, you might want to use truncate instead.
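A minimal sketch of that wipe-and-reload approach, reusing the animal table from the question:
TRUNCATE TABLE animal;
INSERT INTO animal (animal) VALUES ('dog'), ('chicken'), ('wolf');
Keep in mind that TRUNCATE causes an implicit commit in MySQL, so this still leaves the brief window where the table is empty.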
You could add another column, a timestamp, and change the SELECT statement to accommodate this scenario, so that it checks for the latest values.
If this is for school, I would argue that you need a timestamp and that is what your professor is looking for. You shouldn't need to truncate a table to get the latest values; you need to adjust the thinking behind the table and how you are querying the data. Hope this helps!
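A rough sketch of that idea, assuming the animal table from the question and a hypothetical updated_at column:
ALTER TABLE animal ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
-- the program's SELECT would then only look at the most recent batch
SELECT animal
FROM animal
WHERE updated_at = (SELECT MAX(updated_at) FROM animal);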
Check out these:
How to make a mysql table with date and time columns?
Why not update values instead?
My other questions would be:
How are you loading this into the table?
What does that code look like?
Can you change the way you Select from the table?
What values are being "updated", and how do they change in such a way that you need to truncate the entire table?
If you don't want to add a new column, there is another method.
1. First, update the table in a way that marks all existing rows for future deletion. For example:
UPDATE `table_name` SET `animal`=CONCAT('MUST_BE_DELETED_', `animal`)
2. Next, insert the new rows.
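For example, reusing the animal values from the question:
INSERT INTO `table_name` (`animal`) VALUES ('dog'), ('chicken'), ('wolf');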
3. Finally, remove all marked rows:
DELETE FROM `table_name` WHERE `animal` LIKE 'MUST_BE_DELETED_%'
You could implement this by adding an updated_on column of type TIMESTAMP, and you may even utilize some default values, but let's go with an example without them.
I presume the table would look something like this:
CREATE TABLE `my_table` (
`animal` varchar(255) NOT NULL,
`updated_on` timestamp,
PRIMARY KEY (`animal`)
) ENGINE=InnoDB
This is just a dummy table example. What's important are the two queries later on.
You would simply perform a query to insert the data, such as:
insert into my_table(animal)
select animal from my_view where animal = 'dogs'
on duplicate key update
updated_on = current_timestamp;
Please notice that my_view is your table/view/query by which you supply the values to insert into your table. Also notice that you need to have a primary/unique key constraint on your animal column in this example in order for it to work.
Then, you proceed with the following query, to "purge" (delete) the old values:
delete from my_table
where updated_on < (
select *
from (
select max(updated_on) from my_table
) as max_date
);
Please notice that you could create a separate view to obtain this max_date value for updated_on. It should hold the timestamp of the values updated/inserted by the previous query, so you can use it in the WHERE clause to delete the old records that you no longer want/need.
IMPORTANT NOTE:
Since you are doing multiple queries that are supposed to act as a single operation, I'd advise you to run them within a single transaction and to perform a proper rollback on the various potential outcomes (e.g. in case of MySQL exceptions). You might wish to use a stored procedure for that.
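A bare-bones sketch of that, reusing the two statements above and assuming an InnoDB table; the error handling would normally live in the calling code or in a stored procedure handler:
START TRANSACTION;
INSERT INTO my_table (animal)
SELECT animal FROM my_view WHERE animal = 'dogs'
ON DUPLICATE KEY UPDATE updated_on = CURRENT_TIMESTAMP;
DELETE FROM my_table
WHERE updated_on < (
    SELECT * FROM (SELECT MAX(updated_on) FROM my_table) AS max_date
);
COMMIT; -- or ROLLBACK if either statement failed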
I have some words like ["happy","bad","terrible","awesome","happy","happy","horrible",.....,"love"].
These words are large in number, maybe exceeding 100 to 200.
I want to save them all to the DB at once.
I think making a separate DB call for every word is wasteful.
What is the best way to save?
table structure
wordId userId word
You are right that executing repeated INSERT statements to insert rows one at a time, i.e. processing RBAR (row by agonizing row), can be expensive, and excruciatingly slow, in MySQL.
Assuming that you are inserting the string values ("words") into a column in a table, and each word will be inserted as a new row in the table... (and that's a whole lot of assumptions there...)
For example, a table like this:
CREATE TABLE mytable (mycol VARCHAR(50) NOT NULL PRIMARY KEY) ENGINE=InnoDB
You are right that running a separate INSERT statement for each row is expensive. MySQL provides an extension to the INSERT statement syntax which allows multiple rows to be inserted.
For example, this sequence:
INSERT IGNORE INTO mytable (mycol) VALUES ('happy');
INSERT IGNORE INTO mytable (mycol) VALUES ('bad');
INSERT IGNORE INTO mytable (mycol) VALUES ('terrible');
can be emulated with a single INSERT statement:
INSERT IGNORE INTO mytable (mycol) VALUES ('happy'),('bad'),('terrible');
Each "row" to be inserted is enclosed in parens, just as it is in the regular INSERT statement. The trick is the comma separator between the rows.
The trouble with this comes in when there are constraint violations: either the whole statement succeeds or it fails, unlike the individual inserts, where one of them can fail while the other two succeed.
Also, be careful that the size (in bytes) of the statement does not exceed the max_allowed_packet variable setting.
Alternatively, a LOAD DATA statement is an even faster way to load rows into a table. But for a couple of hundred rows, it's not really going to be much faster. (If you were loading thousands and thousands of rows, the LOAD DATA statement could potentially be much faster.)
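For completeness, a hedged sketch of LOAD DATA, assuming the words sit one per line in a file on the client (the file name is made up):
LOAD DATA LOCAL INFILE '/tmp/words.txt'
IGNORE
INTO TABLE mytable
LINES TERMINATED BY '\n'
(mycol);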
It would be helpful to know how you are generating that list of words, but you could do:
insert into your_table (your_column) values ('word1'), ('word2');
Without more info, that is about as much as we can help.
You could add a loop in whatever language is needed to iterate over the list to add them.
I have to import loads of files into a database; the problem is that, over time, the table gained more columns.
The files are all INSERT lines from SQLite, but I need them in MySQL. SQLite doesn't include column names in its SQL files, so the MySQL script crashes when there are more or fewer columns than in the INSERT statement.
Is there a solution for this? Maybe with a join?
The newly added columns are at the end, so the first columns are ALWAYS the same.
Is there any possibility to insert the SQL file into a temporary table, then do a join against an empty table (or one ghost record) to get the right number of columns, and then insert each line from that table into the table I actually want the data in?
The files look like:
INSERT into theTable Values (1,1,Text,2913, txt,);
And if columns were added, the file looks like:
INSERT into theTable Values (1,1,Text,2913, txt,added-Text);
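One rough way the temporary-table idea could look (the column names here are made up, since the real schema isn't shown):
CREATE TEMPORARY TABLE theTable_old_format (
    col1 INT, col2 INT, col3 TEXT, col4 INT, col5 TEXT
);
-- run the old-format INSERT lines against theTable_old_format
-- (after pointing them at that table instead of theTable), then:
INSERT INTO theTable (col1, col2, col3, col4, col5, added_text)
SELECT col1, col2, col3, col4, col5, '' FROM theTable_old_format;
DROP TEMPORARY TABLE theTable_old_format;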
Is there a way to extend/merge one MySQL table's structure into another, while keeping the table data intact?
For example, I have developed something on the local copy of the database, and to transfer all database changes into production I have to copy all new columns etc. into production afterwards.
It would be nice to see all the differences between the databases and have some dump generated based on these differences.
Thanks.
local_table
loc_id, loc_desc, loc_price
production_table
pro_id, pro_desc, pro_price
I want to insert data from production_table into local_table, and if a loc_id is the same as a pro_id I want to ignore it. That way, I'm only inserting the new rows, without replacing/changing the data in local_table:
insert ignore into local_table (loc_id, loc_desc, loc_price)
select pro_id, pro_desc, pro_price
from production_table
left join local_table on loc_id = pro_id
where loc_id is null;
Been searching on Google for a while now without finding the answer to my problem. I have like 10 tables, where 5 of them contain 150 rows. I want to add 15 rows to these 5 tables; is there any simple solution for this? I know it's easy to add the rows manually, but I want to know anyway. What I'm looking for is something like this:
INSERT INTO all_tables VALUES (col1, col2, col3) WHERE row_number() = '150'
Is it possible? Thanks in advance!
You can only target inserts at one table at a time, and that table must always be specified by name. Also, you cannot specify a WHERE clause on an INSERT. Your best bet is probably to write one INSERT and copy and paste it for the rest.
You could:
Loop through a list of the relevant table names.
Run a dynamic query like SELECT COUNT(*) INTO @c1 FROM SpecifiedTable against the relevant table, returning the count into a variable.
If the returned value is 150, run another dynamic query to insert the relevant values into the specified table.
You can find out more about dynamic queries and returning values from them in MySQL here. If this is a once-off, you will probably find it easier to do it manually.
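A small sketch of the dynamic-query part, using a user variable for the count (the table name is just an example, and the exact syntax can vary with MySQL version):
SET @tbl = 'SpecifiedTable';
SET @sql = CONCAT('SELECT COUNT(*) INTO @c1 FROM ', @tbl);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
-- @c1 now holds the row count; the "insert only when it equals 150" step
-- would typically sit in a stored procedure's IF block.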