Is there a way to populate (generate in an easy way) dummy data in MySQL Workbench based on an existing database schema?
I know I can select a table, click on "Select Rows - Limit 1000" and start inserting values for each row. But that would be a rather long process because of the fairly complex database schema.
I guess there is something inside of MySQL Workbench to get around this, right?
There's a neat trick to quickly fill a table with dummy data (actually all duplicated rows).
Start with a normal insert query:
insert into t1 values (value1, value2, ...);
That is your base record. Now do another insert with a select:
insert into t1 select * from t1;
Now you have 2 records. Run the same query again for 4 records, again for 8, 16, 32, and so on. Of course you have to take care not to insert duplicate keys (e.g. by trimming the select statement, using an auto-increment value without copying it, or having no indices at all and adding them later).
In MySQL Workbench you can just duplicate this query 20 times in the editor (copy/paste) and run the entire editor content once to get 1 million rows.
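Putting that together, a minimal sketch might look like this (the table and column names are made up for illustration; the auto-increment id is deliberately left out of the copy so no duplicate keys are created):
-- hypothetical table, names are placeholders
CREATE TABLE t1 (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50),
  amount INT
);
-- the base record
INSERT INTO t1 (name, amount) VALUES ('dummy', 1);
-- each run doubles the row count; the auto-increment id is not copied
INSERT INTO t1 (name, amount) SELECT name, amount FROM t1;
INSERT INTO t1 (name, amount) SELECT name, amount FROM t1;
-- ...paste this line about 20 times in total for roughly 1 million rows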
Related
We are using MySQL Workbench 6.2 to migrate data. Our source table and destination table have different numbers of columns. Say, the source table has 16 and destination table has 18.
When we migrate, Workbench says
Error wrong number of columns in target DB.
Do the source and destination tables need to have the same number of columns? Or is there some way to tell Workbench to use default or derived values for the destination table columns?
Maybe you are doing a SELECT INTO query like this in your migration script/step, which depends on the number of columns in the database rather than the actual column names.
SELECT *
INTO newtable [IN externaldb]
FROM table1;
Or
insert into items_ver
select * from items where item_id=2;
Somewhere along the line, your databases have ended up with a different number of columns, so you can't do either of those.
You can either specify the columns like this:
insert into items_ver(column1, column2, column3)
select column1, column2, column3 from items where item_id=2;
Or add the 2 missing columns from the one database to the other, and ensure that they all match up.
In the MySQL Workbench migration wizard (or schema transfer wizard) there is no way to do that. All columns in the source and target databases must match.
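Outside the wizard, a hand-written INSERT ... SELECT can supply fixed or derived values for the extra destination columns. A minimal sketch with placeholder names (only three of the shared columns are shown; extra_a and extra_b stand for the two destination-only columns):
INSERT INTO destination_table (col1, col2, col3, extra_a, extra_b)
SELECT
  col1, col2, col3,         -- the columns both tables share
  'fixed default',          -- constant value for the first extra column
  CONCAT(col1, '-', col2)   -- value derived from the source columns
FROM source_table;
Alternatively, if the extra destination columns allow NULL or have DEFAULT values, they can simply be left out of the column list.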
I have to import loads of files into a database; the problem is that, over time, the table gained more columns.
The files are all INSERT lines from SQLite, but I need them in MySQL. SQLite doesn't include column names in its SQL dumps, so the MySQL script crashes when the table has more or fewer columns than the insert statement.
Is there a solution for this? Maybe using a join?
The newly added columns are at the end, so the first ones are ALWAYS the same.
Is it possible to load the SQL file into a temporary table, then join it against an empty table (or 1 ghost record) to get the right number of columns, and then insert each line from that table into the table I want the data in?
The files look like:
INSERT into theTable Values (1,1,Text,2913, txt,);
And if columns were added, the file looks like
INSERT into theTable Values (1,1,Text,2913, txt,added-Text);
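The staging-table idea from the question could work roughly like this (a sketch only; all column names are placeholders, and the staging table matches the old file layout so the unmodified dump can be replayed against it):
-- staging table whose columns match the OLD file layout (placeholder names)
CREATE TABLE theTable_staging (
  col1 INT,
  col2 INT,
  col3 TEXT,
  col4 INT,
  col5 TEXT
);
-- replay the old dump against the staging table (either create the staging
-- table under the dump's table name in a scratch schema, or rewrite the name)
-- then copy into the real table with an explicit column list; the columns
-- added later are simply left to their defaults (or NULL)
INSERT INTO theTable (col1, col2, col3, col4, col5)
SELECT col1, col2, col3, col4, col5
FROM theTable_staging;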
I used to run this command to insert some rows in a counter table:
insert into `monthly_aggregated_table`
select year(r.created_at), month(r.created_at), count(r.id) from
raw_items r
group by 1,2;
This query is very heavy and takes some time to run (millions of rows), and the raw_items table is MyISAM, so it was causing table locking and writes to it had to wait for the insert to finish.
Now I created a slave server to do the SELECT.
What I would like to do is execute the SELECT on the slave, but take the results and insert them into the master database. Is it possible? How? What is the most efficient way to do this? (The insert used to involve 1.3 million rows.)
I am running MariaDB 10.0.17
You will have to split the action into 2 parts, with a programming language like Java or PHP in between.
First run the select, load the result set into your application, and then insert the data.
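For example, the two halves might look roughly like this (a sketch; the application copies the result set from the slave connection to the master connection, and the column layout of monthly_aggregated_table is assumed to match the three selected values):
-- 1) run on the slave connection: the read-only aggregation
SELECT YEAR(r.created_at), MONTH(r.created_at), COUNT(r.id)
FROM raw_items r
GROUP BY 1, 2;
-- 2) run on the master connection, once per fetched row,
--    with the three values bound by the application
INSERT INTO `monthly_aggregated_table` VALUES (?, ?, ?);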
Another optimization you could do to speed up the select is to add one new column to your table, "ym_created_at", containing a concatenation of year(created_at) and month(created_at). Place an index on that column and then run the updated statement:
insert into `monthly_aggregated_table`
select ym_created_at, count(r.id) from
raw_items r
group by 1;
Easier, and it might be a lot quicker, since no functions are acting on the columns you are grouping by.
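A minimal sketch of how that suggested column and index might be added (the column name comes from the answer, but the type and the use of DATE_FORMAT to fill it are assumptions):
-- add the pre-computed year-month column, e.g. '2015-04'
ALTER TABLE raw_items ADD COLUMN ym_created_at CHAR(7);
-- back-fill it from created_at and index it
UPDATE raw_items SET ym_created_at = DATE_FORMAT(created_at, '%Y-%m');
CREATE INDEX idx_raw_items_ym ON raw_items (ym_created_at);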
I have executed an insert query as follows:
Insert into tablename
select
query1 union query2
Now if I execute the select part of this insert query, it takes around 2-3 minutes. However, the entire insert script takes more than 8 minutes. As far as I know, the insert and corresponding select queries should take almost the same time to execute.
So is there any other factor that could impact the execution time of the insert?
It's not correct that an insert and the corresponding select take the same time; they should not!
The select query just "reads" data and transmits it; if you are running the query in an application (like phpMyAdmin), it very likely limits the query result in order to paginate it, so the select appears faster (as it doesn't fetch all the data).
The insert query must read that data, insert it into the table, update the primary key tree, update every other index on that table, update every view using that table, fire any trigger on that table/column, etc., so the insert performs a LOT more work than the select.
So IT'S normal that the insert is slower than the select; how much slower depends on your tables and DB structure.
You could optimize the insert with some DB-specific options; for MySQL there are options you can read up on, and if you are on DB2 you could create a temp file and then CPYF it into the real one, and so on...
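One MySQL-specific example of such an option (only a sketch, and only effective if the target table is MyISAM) is to suspend non-unique index maintenance for the duration of the bulk insert and rebuild the indexes afterwards:
ALTER TABLE tablename DISABLE KEYS;
-- the INSERT ... SELECT ... UNION ... from above runs here
ALTER TABLE tablename ENABLE KEYS;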
The SQL scripts I've been working with handle close to 40,000 records, and I've noticed a huge increase in execution time when I use an UPDATE command.
In 2 tables that have about 10 fields each, the INSERTs for both combined execute quicker than this UPDATE command:
UPDATE table1
INNER JOIN table2 ON table1.primarykey = table2.primarykey
SET table1.code = table2.code
Really, what the UPDATE is doing is copying the code from one table to the other where an identical record exists. This is because table1 is a staging table between 2 databases, while table2 is a processing table used to insert the staging table's data across multiple tables. Both tables have the same number of records, which is about 40,000.
Now, to me UPDATE should be executing a lot quicker: considering it's only joining 2 identical tables and filling in data for 1 field, it should run quicker than 2 INSERTs where 40,000 records are being created over 10 fields (in other words, inserting 800,000 pieces of data). And I'm running the queries in a SQL console window to avoid PHP timeouts.
Is UPDATE somehow more resource-hungry than INSERT, and is there any way to get it to go faster (apart from changing the fact that I use a separate table for processing; the staging table updates frequently, so I copy the data like a snapshot and work with that. The code field is NULL to begin with, so I only copy over records with a NULL code, meaning records where code is not NULL have already been worked with)?
Is that UPDATE command the actual SQL? Because you need a WHERE clause to avoid updating every record in the table...
Also, INSERT doesn't first need to find the record to update from 2 joined tables.
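For reference, the kind of WHERE clause being hinted at, using the NULL-code convention described in the question (table and column names taken from the question), would look like:
UPDATE table1
INNER JOIN table2 ON table1.primarykey = table2.primarykey
SET table1.code = table2.code
WHERE table1.code IS NULL;  -- only touch rows that have not been processed yet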