MySQL Workbench: synchronize different databases

I'm using MySQL Workbench to develop the database for my application.
I use at least two databases, for example:
my_local: my local testing database, which is always kept in sync with MySQL Workbench
myserver_database: the final database on the server. Keep in mind that this database is in production and users WILL update it; I can't lose any information stored in it.
Now I can synchronize my database whenever I want, but I can't find a way to push the schema to the final server because the two databases have different names; I get something like:
my_local => N/A
N/A <= myserver_database
In the past I simply renamed the database in MySQL Workbench, but that doesn't seem to work anymore, probably because of a bug.
I want to be able to synchronize the same Workbench schema with different databases, regardless of the database name. I didn't find a way to force the database name, even by modifying the default schema.
Please keep in mind that I'll be doing this a lot, so it's better to avoid tricky or dangerous solutions if possible.

I know this question is quite old, but I was able to do this in Workbench 5.2.40, and there aren't many updated resources online explaining how.
First I dumped a schema-only script of my old database:
mysqldump --no-data myolddb > script.sql
(I only want to sync the schemas; this can be done from Workbench too.)
Now the trick is to modify the script by adding USE mynewdb; as its first line. This way Workbench won't say N/A or complain about the default schema.
In Workbench I created an EER model of mynewdb, which is on my server, then chose "Database -> Synchronize with Any Source" and, in the wizard, selected from "Model Schemadata" to "Script File", using the script I modified initially. The sync wizard then worked like it should.
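For reference, the whole dump-and-prepend step can be scripted in the shell, roughly like this (a sketch assuming GNU sed, with the schema names used above):
mysqldump --no-data myolddb > script.sql
# prepend the target schema name so Workbench maps the script to it instead of showing N/A
sed -i '1i USE mynewdb;' script.sql
After that, point "Database -> Synchronize with Any Source" at script.sql as described.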

Related

Is there a good way to perform SQL dump of MySQL database in DataGrip?

I'm trying to use JetBrains DataGrip as my primary DB tool. However, I still find myself using Sequel Pro for SQL dumps. Here is why:
At the database level, I couldn't find any SQL dump functionality. The only option seems to be "Copy DDL", which copies the schema but not the content.
At the table level, sure, I can export data as SQL INSERTs. But then the only way to do so seems to be exporting from each table separately, which is unacceptable. Another downside: when exporting data as INSERTs, it creates a separate INSERT statement for each row.
I tried to look for plugins but couldn't find any. DataGrip users, if you came up with any solutions, please let me know. Sequel Pro works like a charm, but I really would love to use a single database client at the end of the day.
PS. SSHing to the server and running mysqldump is not an option for me, for various security reasons.
In 2016.2 there is some functionality for this.
2016.3 will integrate with mysqldump.
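As a possible workaround for the no-SSH constraint, note that mysqldump is an ordinary client and can connect to a remote server over TCP, so a dump doesn't strictly require shell access (a sketch with a hypothetical hostname; the MySQL port must be reachable from your machine):
# dump schema and data from the remote server to a local file
mysqldump --host=db.example.com --user=me -p mydb > mydb-dump.sql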

How to execute SQL queries inside MySQL Workbench

I am developing an application using the Django framework. As you may know, the workflow there is that you first describe your objects in Python classes and then synchronize the database.
I made a MySQL Workbench EER diagram. Since then I have continued to develop the application, so the database schema is no longer reflected in the EER diagram or the MySQL Workbench model.
I tried to synchronize them using the built-in "Synchronize with Any Source" feature of MySQL Workbench, but this feature is not working for some reason and causes a segmentation fault. The queries to be executed inside the MySQL Workbench model are displayed, but at the last step I get an empty SQL ALTER script. I tried manually copying the queries into that script and clicking the "Execute" button, but I had no luck with that. I think MySQL Workbench stores the queries internally. Anyway, I submitted the bug to the MySQL Workbench developers and it is now fixed, but not yet released. I am now looking for a workaround while waiting for the next release.
Although I have a specific problem, the question remains generic: is it possible to execute queries against the MySQL Workbench model in order to alter it?
Did you try the "Forward Engineer" option? It allows you to reflect all the changes made to your table relations directly to the database, which is pretty useful functionality.
There are some catches, though, like the inability to preserve existing data every time forward engineering is performed; however, this can be compensated for by entering some example data, which will be shipped with the EER diagram the next time you perform "Forward Engineer".
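For context, the script that Forward Engineer emits is plain SQL along these lines (a hypothetical table; with "Generate INSERT statements for tables" checked, the example data mentioned above is appended as INSERTs):
CREATE SCHEMA IF NOT EXISTS mydb;
USE mydb;
CREATE TABLE IF NOT EXISTS customer (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
-- example data shipped with the model
INSERT INTO customer (id, name) VALUES (1, 'sample row');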

How to generate the whole database script in MySQL Workbench?

I want to dump the whole database. Where do I find the database file?
And is there a way to write the whole database, with all its data, to a text file (like you can in SQL Server)?
How to generate SQL scripts for your database in Workbench
In Workbench Central (the default "Home" tab) connect to your MySQL instance, opening a SQL Editor tab.
Click on the SQL Editor tab and select your database from the SCHEMAS list in the Object Browser on the left.
From the menu select Database > Reverse Engineer and follow the prompts. The wizard will lead you through connecting to your instance, selecting your database, and choosing the types of objects you want to reverse engineer.
When you're all done, you will have at least one new tab called MySQL Model. You may also have a tab called EER Diagram, which is cool but not relevant here.
Click on the MySQL Model tab.
Select Database > Forward Engineer
Follow the prompts. Many options present themselves, including Generate INSERT Scripts for Tables which allows you to script out the data contained within your tables (perfect for lookup tables).
Soon you will see the generated script in front of you. At this point you can Copy to Clipboard or Save to Text File.
The wizard will take you further, but if you just want the script you can stop here.
A word of caution: the scripts are generated with CREATE statements. If you want ALTER, you'll have to (as far as I can tell) manually change the CREATEs to ALTERs.
This is guaranteed to work; I just did it tonight.
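If you prefer the command line, mysqldump produces an equivalent script in one step (a sketch; --routines also includes stored procedures and functions, and --databases adds the CREATE DATABASE statement to the output):
mysqldump --routines --databases mydb > mydb.sql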
Q#1: I would guess that it's somewhere on your MySQL server?
Q#2: Yes, this is possible. You have to establish a connection via Server Administration. There you can clone any table or the entire database.
This tutorial might be useful.
EDIT
Since the provided link is no longer active, here's an SO answer outlining the process of creating a DB backup in Workbench.
In MySQL Workbench 6 the commands have been repositioned, as the "Server Administration" tab is gone.
You now find the "Data Export" option under the "Management" section when you open a standard server connection.
There is a Data Export option in MySQL Workbench.
I found this question by searching Google for "mysql workbench export database sql file". The answers here did not help me, but I eventually did find the answer, so I am posting it here for future generations to find:
Answer
In MySQL Workbench 6.0, do the following:
Select the appropriate database under MySQL Connections.
On the top-left side of the screen, under the MANAGEMENT heading, select "Data Export".
None of these worked for me. I'm using Mac OS 10.10.5 and Workbench 6.3. What worked for me is Database -> Migration Wizard... Follow the steps very carefully.
In the top menu of MySQL Workbench, click on Database and then on Forward Engineer. In the options you are presented with, make sure "Generate INSERT statements for tables" is checked.
Try the export function of phpMyAdmin.
I think there is also a possibility to copy the database files from one server to another, but I do not have a server available at the moment so I can't test it.
Using Windows 10 and MySQL Workbench 8.0:
Go to the Server tab.
Go to Data Export.
This opens the Data Export view.
Select the schema to export under "Tables to Export".
Click the Start Export button.
Surprisingly, Data Export in MySQL Workbench is not just for data; in fact, it is ideal for generating SQL scripts for the whole database (including views, stored procedures, and functions) with just a few clicks. If you want just the scripts and no data, simply select the "Skip table data" option. It can generate separate files or one self-contained file. More details about the feature: http://dev.mysql.com/doc/workbench/en/wb-mysql-connections-navigator-management-data-export.html
In MySQL Workbench: Server > Data Export.
Then follow the instructions; it will generate INSERT statements for all tables' data, with each table getting a .sql file for its contained data.
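If you want the same one-file-per-table layout from the command line, a small shell loop over the table list does it (a sketch with a hypothetical database name; add credentials as needed):
# dump each table of mydb into its own .sql file
for t in $(mysql -N -e 'SHOW TABLES' mydb); do
  mysqldump mydb "$t" > "$t.sql"
done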

Migration from MySQL to PostgreSQL with auto-increments - how?

I'm considering a MySQL to PostgreSQL migration for my web application, but I'm having a really hard time converting my existing MySQL database to PostgreSQL.
I tried :
mysqldump with --compatible=postgresql
migration wizard from EnterpriseDB
PostgreSQL Data Wizard from EMS
DBConvert from DMSoft
and NONE of the above programs do a good job converting my database!
I saw some Perl and Python scripts for converting MySQL to PostgreSQL, but I can't figure out how to use them... (I installed ActivePerl and don't understand what I'm supposed to do next to run the script!)
I use auto-increment fields (as primary keys) all the time, and these are just ignored... I understand that PostgreSQL does auto-increments another way (with sequences), but it can't be THAT hard for migration software to implement that, can it?
Did anybody have better luck converting a MySQL database with auto-increments as primary keys?
I know this is probably not the answer you are looking for, but: I don't believe in "automated" migration tools.
Take your existing SQL scripts that create your database schema, do a search and replace for the necessary data types (AUTO_INCREMENT maps to SERIAL, which does all the sequence handling automagically for you), remove all the "ENGINE=" stuff, and then run the new script against Postgres.
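To illustrate the kind of replacement meant here, take a hypothetical MySQL table and its hand-converted PostgreSQL counterpart:
-- MySQL original
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=InnoDB;
-- PostgreSQL version: AUTO_INCREMENT becomes SERIAL (which creates the sequence), the ENGINE clause goes away
CREATE TABLE users (
  id SERIAL,
  name VARCHAR(100),
  PRIMARY KEY (id)
);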
Dump the old database into flat files and import them into the target.
I have done this several times with sample databases that were intended for MySQL and it really doesn't take that long.
Probably just as long as trying all the different "automated" tools.
Why not use an ETL tool? You don't have to worry about dumps or anything like that.
I have migrated to PostgreSQL and MySQL and have had no problems with the auto-increment fields.
You just need to know the connection credentials, and that's it. I personally use Pentaho (it's open source).
Download Pentaho ETL from http://kettle.pentaho.org/
Unzip and run Pentaho (using the spoon.bat file on Windows, or spoon.sh on Linux).
Create a new Job:
Create a DB connection for the source database (MySQL) using the menu: Tools→Wizard→Create Database Connection (F3).
Create a DB connection for the destination database (PostgreSQL) using the technique described above.
Run the wizard: Tools → Wizard → Copy Tables (Ctrl-F10).
Select the source (left dialog panel) and the destination (right dialog panel). Click Finish.
The job will be generated - run the job.
If you need any help let me know.
Even when you're familiar with all the "PostgreSQL gotchas", doing every step by hand may take a lot of time, especially when your database is big.
Try some other scripts/tools.
I know this is an old question, but I just ran into the same problem migrating from MySQL to Postgres. After trying several migration tools, the very best one I could find, which will migrate your database structure as cleanly as possible, was pgloader (https://github.com/dimitri/pgloader/). It takes care of changing AUTO_INCREMENT columns to Postgres sequences, no problem, and it's super fast.
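For reference, pgloader's basic invocation just takes source and target connection strings (hypothetical credentials and database names):
# migrate schema and data; AUTO_INCREMENT columns become PostgreSQL sequences
pgloader mysql://user:pass@localhost/mydb postgresql://user:pass@localhost/mydb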

Migration from Progress DB to MySQL using Linux

I am trying to replicate a Progress database to MySQL 5.1. I came across a few programs, and a few suggestions on Stack Overflow as well as other websites, that necessitate software like Pro2SQL or other SQL migration tools like the MySQL Migration Toolkit. But the problem I am faced with is that I will be running MySQL on Linux (I am using bash scripting to query the MySQL database). Is there a program for Linux, or some other means?
Currently I am using JDBC to connect and retrieve data, but mapping the database by hand is hard and may create flaws in the long run due to mapping problems. Also, this process will be repeated quite often, for backup.
The MySQL Migration Toolkit would be a good solution, but it doesn't support the Linux command prompt, so I have to implement this in another, better-optimized way. Please suggest what should be done. Thanks a ton for the support.
If it is just about dumping:
If I get your problem right, the solution holds in two lines (if you are following SQL standards):
pg_dump <yourdatabase> > <yourfile.sql>
mysql <yourdatabase> < <yourfile.sql>
With the first line you dump your database; many options exist depending on whether you want to dump tables, content, schemas, etc. See the man page for more details.
With the second line you just load the dump into MySQL.
If it is about mapping:
Take a look at Kettle; it's an open-source ETL tool, it works really well on Linux, and you can automate tasks using crontab.
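For example, a Kettle job can be run headless via its kitchen.sh runner from a crontab entry (hypothetical paths; check the kitchen.sh options for your Kettle version):
# run the migration job every night at 02:00
0 2 * * * /opt/kettle/kitchen.sh -file=/opt/jobs/progress_to_mysql.kjb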
Hope I could help,