MySQL Workbench - wrong number of columns in target DB - mysql

We are using MySQL Workbench 6.2 to migrate data. Our source and destination tables have different numbers of columns: say, the source table has 16 and the destination table has 18.
When we migrate, Workbench reports:
Error: wrong number of columns in target DB.
Do the source and destination tables need to have the same number of columns? Or is there some way to tell Workbench to use default or derived values for the extra destination columns?

Maybe you are doing a SELECT INTO query like this in your migration script/step, which depends on the number of columns in the table rather than the actual column names:
SELECT *
INTO newtable [IN externaldb]
FROM table1;
Or:
INSERT INTO items_ver
SELECT * FROM items WHERE item_id = 2;
Somewhere along the line, your tables have diverged in their number of columns, so you can't do either of those.
You can either specify the columns explicitly, like this:
INSERT INTO items_ver (column1, column2, column3)
SELECT column1, column2, column3 FROM items WHERE item_id = 2;
Or add the 2 missing columns to the narrower table, and ensure that the schemas match up.
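The explicit-column INSERT above can be sketched end to end. This is a minimal Python/sqlite3 demo (MySQL syntax is essentially the same for this statement); the schema and the extra columns `version` and `name_upper` are illustrative, not from the question:

```python
import sqlite3

# Hypothetical schema: the target has two extra columns the source lacks --
# one filled by its DEFAULT, one derived in the SELECT.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE items (item_id INTEGER, name TEXT, price REAL);
    CREATE TABLE items_ver (
        item_id INTEGER, name TEXT, price REAL,
        version INTEGER DEFAULT 1,   -- extra column, filled by its DEFAULT
        name_upper TEXT              -- extra column, derived in the SELECT
    );
    INSERT INTO items VALUES (2, 'widget', 9.99);
""")

# Name the columns explicitly; the unlisted "version" column gets its default.
con.execute("""
    INSERT INTO items_ver (item_id, name, price, name_upper)
    SELECT item_id, name, price, UPPER(name) FROM items WHERE item_id = 2
""")
row = con.execute("SELECT * FROM items_ver").fetchone()
print(row)  # (2, 'widget', 9.99, 1, 'WIDGET')
```

Because the column list is explicit, adding more columns to `items_ver` later does not break this insert.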

In the MySQL Workbench migration wizard (or schema transfer wizard) there is no way to do that: all columns in the source and target tables must match.

Related

Is there a faster query for this mysql query?

I am working with MySQL databases. A particular table has two columns (column1 and column2) and 10,000,000+ rows. I want to get all entries where column1 is in a list of 50,000 numbers. I am currently using this query:
SELECT * FROM db.table WHERE column1 IN (list of 50,000 numbers)
Is there a faster query than this?
I can't speak for MySQL, only SQL Server, but the same principle may apply.
On SQL Server, an IN list has a serious problem: no statistics. Which means that with a non-trivial number of values, the query plan is a table scan.
It is better to create a temporary table, load the IDs into it (and put a unique index on it, which creates statistics), and then JOIN between the two tables. That gives the query optimizer more to work with.
INDEX(column1)
Are there only 2 columns in the table? If not, then don't use SELECT *, but spell out the column names.
Please provide EXPLAIN SELECT ...
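The temp-table-plus-JOIN rewrite suggested above can be sketched as follows. This is a scaled-down Python/sqlite3 illustration (500 IDs against 10,000 rows instead of 50,000 against 10,000,000); all names are illustrative, and the actual performance win depends on the engine's optimizer and statistics:

```python
import sqlite3
import random

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (column1 INTEGER, column2 TEXT)")
con.executemany("INSERT INTO big VALUES (?, ?)",
                [(i, f"row{i}") for i in range(10_000)])

wanted = random.sample(range(10_000), 500)  # stand-in for the 50,000-ID list

# Load the IDs into an indexed temp table, then JOIN -- the planner can use
# the index (and its statistics) instead of scanning a huge IN (...) list.
con.execute("CREATE TEMP TABLE ids (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO ids VALUES (?)", [(i,) for i in wanted])

rows = con.execute("""
    SELECT b.column1, b.column2
    FROM big AS b JOIN ids ON ids.id = b.column1
""").fetchall()
print(len(rows))  # 500
```

On MySQL the same idea works with `CREATE TEMPORARY TABLE` plus a batched multi-row INSERT of the IDs.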

SSIS prevent the insert of data rows from flat file that already exist in the SQL Server table

I need to create an SSIS package in which I read a flat file (provided monthly, with many defined columns) and write the data to an already defined SQL Server table (which already contains a lot of data). In the SQL table design view, the data types include float, datetime, bigint, and varchar (these are already defined and CANNOT be changed).
I need to prevent the insert of any data rows from the flat file that already exist in the SQL Server table. How can I achieve this?
I tried to achieve this using a Lookup transformation, but in Edit Mappings I get an error while creating the relationships: "Cannot map the lookup column because the column is set to a floating point data type". I am able to create the relationships for all other data types, but some rows in the source file differ from the data in the SQL table only in their floating-point values, and the expectation is that those rows will be inserted.
Is there any other simple way to achieve this?
Thanks.
Please try to convert the columns that have problems in mapping using a Data Conversion transformation.
Thanks
Neither SSIS nor SQL bulk load (the SQL feature behind the SSIS load task) permits this out of the box.
You can use the method described by @sasi and, in your lookup, define the SQL query yourself with a SQL cast (the CONVERT keyword). But even if you solve your cast issue this way, you will likely face a performance problem if you load a large amount of data.
There are two ways to deal with it:
The first (the easiest, but quite slow compared to the other option, maybe even slower than your current approach in some conditions) is to run an INSERT statement for each row, like the following:
INSERT target_table (val1, val2, id)
SELECT $myVal1, $myVal2, $myCandidateKey
WHERE NOT EXISTS (SELECT 1 FROM target_table as t WHERE t.id = $myCandidateKey);
The second involves creating a staging table in the target database. This table has the same structure as your target table and is created once, for good. You must also create an index on the column(s) that act as the key determining whether a record has already been loaded. Your process must empty the staging table before each execution, for obvious reasons. Instead of loading the target table with SSIS, you load this staging table. Once the staging table is loaded, you run the following command just once:
INSERT target_table (val1, val2, id)
SELECT stg.val1, stg.val2, stg.id
FROM staging_target_table as stg
WHERE NOT EXISTS (SELECT 1 FROM target_table as t WHERE t.id = stg.id);
This is extremely fast compared to the first solution.
In these examples I assumed that a key (the "id" column) identifies a row, but if you actually want to compare the full row, you will have to add the comparison, like this for the first solution:
INSERT target_table (val1, val2)
SELECT $myVal1, $myVal2
WHERE NOT EXISTS (SELECT 1 FROM target_table as t WHERE t.val1 = $myVal1 and t.val2 = $myVal2);
or like this for the second solution:
INSERT target_table (val1, val2, id)
SELECT stg.val1, stg.val2, stg.id
FROM staging_target_table as stg
WHERE NOT EXISTS (SELECT 1 FROM target_table as t WHERE t.val1 = stg.val1 and t.val2 = stg.val2);
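The staging-table pattern can be demonstrated end to end. This is a minimal Python/sqlite3 sketch of the second (fast) solution with the key-based comparison; the table names mirror the snippets above and the data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target_table (val1 TEXT, val2 TEXT, id INTEGER PRIMARY KEY);
    CREATE TABLE staging_target_table (val1 TEXT, val2 TEXT, id INTEGER);
    INSERT INTO target_table VALUES ('a', 'x', 1), ('b', 'y', 2);
    -- the monthly file: id 2 already exists in the target, id 3 is new
    INSERT INTO staging_target_table VALUES ('b', 'y', 2), ('c', 'z', 3);
""")

# Set-based anti-join: only rows whose key is absent from the target get in.
con.execute("""
    INSERT INTO target_table (val1, val2, id)
    SELECT stg.val1, stg.val2, stg.id
    FROM staging_target_table AS stg
    WHERE NOT EXISTS (SELECT 1 FROM target_table AS t WHERE t.id = stg.id)
""")
ids = [r[0] for r in con.execute("SELECT id FROM target_table ORDER BY id")]
print(ids)  # [1, 2, 3]
```

Only the new row (id 3) is inserted; the duplicate (id 2) is skipped in a single set-based statement rather than row by row.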

How to populate existing database with dummy data in MySQL Workbench?

Is there a way to populate an existing database with (easily generated) dummy data in MySQL Workbench, based on an existing database schema?
I know I can select a table, click "Select Rows - Limit 1000", and start inserting values for each row. But that would be a rather long process because of the fairly complex database schema.
I guess there is something in MySQL Workbench to get around this, right?
There's a neat trick to quickly fill a table with dummy data (actually all duplicated rows).
Start with a normal insert query:
INSERT INTO t1 VALUES (value1, value2, ...);
That is your base record. Now do another insert with a select:
INSERT INTO t1 SELECT * FROM t1;
Now you have 2 records. Run the same query again for 4 records, again for 8, 16, 32, etc. Of course you have to take care not to insert duplicate keys (e.g. by trimming the SELECT statement, using an auto-increment value without copying it, or having no indexes at all and adding them later).
In MySQL Workbench you can just duplicate this query 20 times in the editor (copy/paste) and run the entire editor content once to get 1 million rows.
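The doubling arithmetic is easy to verify: 20 self-inserts starting from one base record give 2^20 rows. A minimal Python/sqlite3 sketch (table and column names are illustrative; the table has no indexes, so the duplicated rows are allowed, as the trick requires):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No keys or indexes, so fully duplicated rows are permitted.
con.execute("CREATE TABLE t1 (value1 TEXT, value2 INTEGER)")
con.execute("INSERT INTO t1 VALUES ('seed', 42)")  # the base record

# Each pass doubles the row count: 1 -> 2 -> 4 -> ... -> 2**20
for _ in range(20):
    con.execute("INSERT INTO t1 SELECT * FROM t1")

count = con.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
print(count)  # 1048576
```

Pasting the `INSERT ... SELECT` statement 20 times in the Workbench editor and running the whole buffer once has the same effect.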

Replace one column of a table with another column of another table in SQL

I have a table with several columns: Table1 (Col A, Col B).
Now I have one more table with one column: Table2 (Col C).
What I want to do is:
Replace Col B of Table1 with Col C of Table2.
Is this possible in SQL? I am using phpMyAdmin to execute queries.
Why do I need to do this?
- I was playing around with the database structure and changed a column's type from text to integer, which messed up the entries in the column.
- Good thing: I have a backup Excel file, so now I am planning to replace the affected column with the original values from the backed-up Excel file.
No can do.
You seem to be making an incorrect assumption, namely that the order of rows in a table is significant. Otherwise what's confusing some of the commenters would be clear to you: there's no information in Table2 to relate it to Table1.
Since you still have the data in Excel, drop Table2 and re-create it with rows that include the key to Table1. Then write a view to join them. Easiest is probably to insert that join result into a third table, then drop the first two and rename the third.
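The join-and-swap repair described above can be sketched as follows. A minimal Python/sqlite3 illustration (MySQL would do the same with `RENAME TABLE`); the key column `a`, the repaired column `c`, and all values are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (a INTEGER PRIMARY KEY, b TEXT);
    INSERT INTO table1 VALUES (1, 'corrupt'), (2, 'corrupt');
    -- table2 re-created WITH the key to table1, loaded from the Excel backup
    CREATE TABLE table2 (a INTEGER PRIMARY KEY, c TEXT);
    INSERT INTO table2 VALUES (1, 'good1'), (2, 'good2');
""")

# Build the repaired table from the join, then swap it in.
con.executescript("""
    CREATE TABLE table3 AS
        SELECT t1.a, t2.c AS b
        FROM table1 AS t1 JOIN table2 AS t2 ON t1.a = t2.a;
    DROP TABLE table1;
    DROP TABLE table2;
    ALTER TABLE table3 RENAME TO table1;
""")
rows = con.execute("SELECT a, b FROM table1 ORDER BY a").fetchall()
print(rows)  # [(1, 'good1'), (2, 'good2')]
```

The join works only because the re-created table2 carries table1's key; without it, rows from the two tables cannot be matched.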

Mysql column count doesn't match value count fix for unattended

I have to import lots of files into a database; the problem is that over time the table gained more columns.
The files are all INSERT lines dumped from SQLite, but I need them in MySQL. SQLite doesn't include column names in its SQL dump files, so the MySQL script crashes when the table has more or fewer columns than the INSERT statement provides values for.
Is there a solution for this? Maybe via a join?
The newly added columns are at the end, so the first columns are ALWAYS the same.
Is there any possibility to insert the SQL file into a temporary table, then do a join against an empty table (or one ghost record) to get the right number of columns, and then insert each row from that table into the table I want the data in?
Files look like:
INSERT INTO theTable VALUES (1,1,'Text',2913,'txt');
And if columns were added, the file is like:
INSERT INTO theTable VALUES (1,1,'Text',2913,'txt','added-Text');
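The temporary-table idea in the question can work without any join: replay the old-format dump against a temp table whose schema matches the old file, then copy into the real table with an explicit column list so the trailing new columns get their defaults. A minimal Python/sqlite3 sketch (column names `c1`..`c5` and `added_text` are placeholders, not the real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- current target table: grew an extra column at the end
    CREATE TABLE theTable (c1 INTEGER, c2 INTEGER, c3 TEXT, c4 INTEGER,
                           c5 TEXT, added_text TEXT DEFAULT '');
    -- temp table matching the OLD (narrower) dump-file format
    CREATE TEMP TABLE old_format (c1 INTEGER, c2 INTEGER, c3 TEXT,
                                  c4 INTEGER, c5 TEXT);
""")

# Replay an old-format INSERT against the temp table instead of theTable.
con.execute("INSERT INTO old_format VALUES (1, 1, 'Text', 2913, 'txt')")

# Copy over with an explicit column list; added_text gets its DEFAULT.
con.execute("""
    INSERT INTO theTable (c1, c2, c3, c4, c5)
    SELECT c1, c2, c3, c4, c5 FROM old_format
""")
row = con.execute("SELECT * FROM theTable").fetchone()
print(row)  # (1, 1, 'Text', 2913, 'txt', '')
```

New-format files, which already have the right number of values, can be replayed directly against `theTable`; only old-format files need the temp-table detour.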