Adding records in MySQL

I want to add about 1,000 records to my table to populate a database. Inserting each record manually is not at all practical. Is there a proper way to do this?

In MySQL you can insert multiple rows with a single INSERT statement:
INSERT INTO your_table VALUES (data-row-1), (data-row-2), (data-row-3);
If you run a mysqldump on your database, you will see that this is what the output does.
The insert is then run as a single "transaction", so it's much, much faster than running 1,000 individual inserts.
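For example, a minimal sketch with a hypothetical customers table and columns (adjust the names and values to your own schema):
-- insert three rows with one statement
INSERT INTO customers (name, email) VALUES
    ('Alice', 'alice@example.com'),
    ('Bob', 'bob@example.com'),
    ('Carol', 'carol@example.com');
In practice you would generate one such statement for every few hundred or few thousand rows with a small script, rather than one statement per row.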

Related

How to automatically run a query when a table is amended by INSERT/UPDATE/DELETE?

I have a query which basically "syncs" all the data from a table in one database to a replicated table in another database.
Here is the simple query:
TRUNCATE TABLE [Database2].[dbo].[USER_SYNC]
INSERT INTO [Database2].[dbo].[USER_SYNC]
SELECT * FROM [Database1].[dbo].[USER]
Now, after some research, I had a look into using a trigger to do this; however, I read that stored procedures and heavy queries such as this should not be used within a trigger.
Therefore, what is the best way to automatically run this query from within SQL whenever a record in database1 is inserted, amended, or deleted?
And if what I read about triggers was incorrect, then how would I go about creating one for my procedure? Thanks.
If you need to keep two tables in sync, you do not need to truncate one of them on every update, delete, or insert.
Create an identical copy of the user table.
Create ON INSERT, ON UPDATE, and ON DELETE triggers on the original user table.
In each trigger, insert, update, or delete only one row in the duplicate table - the one that was inserted, updated, or deleted in the original user table. This will not be a heavy query. (A sketch follows the links below.)
UPDATE:
http://www.mysqltutorial.org/create-the-first-trigger-in-mysql.aspx
http://dev.mysql.com/doc/refman/5.7/en/trigger-syntax.html
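A minimal sketch of such per-row triggers, assuming hypothetical tables user (the original) and user_sync (the duplicate) with columns id, name, and email; adapt the table and column names to your own schema:
DELIMITER $$

-- mirror new rows into the duplicate table
CREATE TRIGGER user_after_insert
AFTER INSERT ON user
FOR EACH ROW
BEGIN
    INSERT INTO user_sync (id, name, email)
    VALUES (NEW.id, NEW.name, NEW.email);
END$$

-- mirror changes to existing rows
CREATE TRIGGER user_after_update
AFTER UPDATE ON user
FOR EACH ROW
BEGIN
    UPDATE user_sync
    SET name = NEW.name, email = NEW.email
    WHERE id = NEW.id;
END$$

-- mirror deletions
CREATE TRIGGER user_after_delete
AFTER DELETE ON user
FOR EACH ROW
BEGIN
    DELETE FROM user_sync WHERE id = OLD.id;
END$$

DELIMITER ;
Each trigger touches only the single affected row, so the extra cost per statement on the original table stays small.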

How to handle millions of separate insert queries

I have a situation in which I have to insert over 10 million separate records into one table. Normally a batch insert split into chunks does the work for me. The problem, however, is that this 3+ GB file contains over 10 million separate INSERT statements. Since every query takes 0.01 to 0.1 seconds, it will take over 2 days to insert everything.
I'm sure there must be a way to optimize this by either lowering the insert time drastically or somehow importing the data in a different way.
I'm now just using the CLI:
source /home/blabla/file.sql
Note: it's a third party that is providing me this file.
Small update:
I removed all indexes.
Drop the indexes, then re-index when you are done!
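For example, a sketch assuming a hypothetical secondary index idx_field1 on tablename (with InnoDB, the primary key stays either way, so only secondary indexes are worth dropping):
-- drop the secondary index before the bulk load
ALTER TABLE tablename DROP INDEX idx_field1;

-- ... run the import here ...

-- rebuild the index once all rows are in
ALTER TABLE tablename ADD INDEX idx_field1 (field1);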
Maybe you can parse the file data and combine several INSERT queries into one query, like this:
INSERT INTO tablename (field1, field2...) VALUES (val1, val2, ..), (val3, val4, ..), ...
There are some ways to improve the speed of your INSERT statements:
Try to insert many rows at once if this is an option.
An alternative is to create a copy of your target table without indexes, insert the data there, then add the indexes and rename the table.
Maybe use LOAD DATA INFILE, if this is an option.
The MySQL manual has something to say about that, too.
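If the third-party data can be obtained (or converted) as a delimited text file instead of INSERT statements, LOAD DATA INFILE is usually the fastest route. A minimal sketch, with a hypothetical CSV path and the column names from the example above:
-- bulk-load a CSV file directly into the table
LOAD DATA LOCAL INFILE '/home/blabla/data.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(field1, field2);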

MySQL: Copy from 1 table to another not overwriting existing?

I have two tables:
tableOriginal
tableBackup
They have exactly the same structure.
I want a SQL statement I can run at any time of the day that will copy all the rows from tableOriginal to tableBackup WITHOUT overwriting items in tableBackup. Basically, this command must synchronize tableBackup with tableOriginal.
How do I do that?
INSERT INTO tableBackup(SELECT * FROM tableOriginal)
As long as there is no issue with primary keys being updated or replaced by the new incoming data, this should not create a problem for you. However, as you already know, the backup table will have more data after your command, since it does not delete the rows it already had.
Why don't you first delete all the data in tableBackup, then INSERT the data from tableOriginal into tableBackup?
DELETE FROM tableBackup
INSERT INTO tableBackup(SELECT * FROM tableOriginal)
Why do we need to delete first?
Because if tableBackup has unique keys, the next insert will fail, since we would be adding rows that are already there.
Hope you get what I'm trying to say.
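Not mentioned in the answers above, but worth noting: if tableBackup shares tableOriginal's primary (or unique) key, a common MySQL idiom is INSERT IGNORE, which skips rows whose key already exists, so nothing in tableBackup is overwritten and nothing has to be deleted first. A minimal sketch under that assumption:
-- copy only the rows whose key is not already present in tableBackup
INSERT IGNORE INTO tableBackup
SELECT * FROM tableOriginal;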

No data if queries are sent between TRUNCATE and SELECT INTO, using MySQL InnoDB

Using a MySQL DB, I am having trouble with a stored procedure and event timer that I created.
I made an empty table that gets populated with data from another via SELECT INTO.
Prior to populating, I TRUNCATE the current data. It's used to track only log entries that occur within 2 months from the current date.
This turns a 350k+ row log table into about 750 rows, which really speeds up reporting queries.
The problem is that if a client sends a query precisely between the TRUNCATE statement and the SELECT INTO statement (which has a high probability considering the EVENT is set to run every 1 minute), the query returns no rows...
I have looked into locking the table for reads while this PROCEDURE is run, but locks are not allowed in STORED PROCEDURES.
Can anyone come up with a workaround that (preferably) doesn't require a remodel?
I really need to be pointed in the right direction here.
Thanks,
Max
I'd suggest an alternate approach instead of truncating the table, and then selecting into it...
You can instead select your new data set into a new table. Next, using a single RENAME command, rename the new table to the existing table and the existing table to some backup name.
RENAME TABLE existing_table TO backup_table, new_table TO existing_table;
This is a single, atomic operation... so it wouldn't be possible for the client to read from the data after it is emptied but before it is re-populated.
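A minimal sketch of that approach, with hypothetical names: log_full is the 350k+ source table, log_recent is the table the reports read from, and created_at is the timestamp column used for the two-month window:
-- build the new snapshot off to the side
CREATE TABLE log_recent_new LIKE log_recent;

INSERT INTO log_recent_new
SELECT * FROM log_full
WHERE created_at >= DATE_SUB(CURDATE(), INTERVAL 2 MONTH);

-- atomically swap it into place; readers always see a fully populated table
RENAME TABLE log_recent TO log_recent_old,
             log_recent_new TO log_recent;

DROP TABLE log_recent_old;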
Alternatively, you could change your TRUNCATE to a DELETE FROM (TRUNCATE causes an implicit commit, DELETE does not), and then wrap it in a transaction along with the INSERT ... SELECT:
START TRANSACTION;
DELETE FROM YourTable;
INSERT INTO YourTable SELECT ...;
COMMIT;

Maximum values possible in a WHERE IN query

I have a table with over 3,000,000 entries, and I need to delete 500,000 of them with given IDs.
My idea is to create a query like:
DELETE FROM TableName WHERE ID IN (id1, id2, ...........)
which I generate with a simple C# code.
The question is:
Is there a limit to how many values I can put in the IN list of IDs?
And if someone has a better way to perform this delete more efficiently, I'm open to ideas.
If your IDs can't be selected with a simple comparison (as in WHERE ID < 1000000), you could:
INSERT them into a temp table with multi-row inserts, and then
JOIN that temp table to yours in the DELETE (see the sketch below).
But the inserts themselves may become a problem, so you should check that. How could you speed this up?
run the deletes in several batches
insert the IDs into the temp table in batches
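A minimal sketch of that temp-table approach, assuming the IDs are integers and TableName.ID is indexed (the temp-table name ids_to_delete is made up):
-- temp table holding the 500,000 IDs to delete
CREATE TEMPORARY TABLE ids_to_delete (id INT PRIMARY KEY);

-- load the IDs in multi-row batches (generated by the C# code)
INSERT INTO ids_to_delete (id) VALUES (101), (102), (103) /* , ... */;

-- delete only the matching rows via a join
DELETE t
FROM TableName AS t
JOIN ids_to_delete AS d ON d.id = t.ID;

DROP TEMPORARY TABLE ids_to_delete;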
In the end, here is my solution, which works reasonably well:
1. Sorted the IDs (to reduce server paging).
2. Used the C# code to generate queries with 500 IDs each.
3. Sent the queries one by one.
I assume that with queries containing 1000+ IDs, the time the SQL server spent processing each query was slowing me down (after all, any query you run is parsed and optimized before it executes).
I hope this helps someone.