mysqlsh to dump and load full schema - mysql

I want to use mysqlsh to do the following:
Dump the FULL schema of a given database (not just tables, but functions, triggers, everything related to this database schema, same as mysqldump -R DATABASE > DATABASE.sql)
Load this full schema into a brand new database I just created (similar to mysql --database=NEWDATABASE < DATABASE.sql)
When I run mysqlsh --execute 'util.dumpTables("DATABASE", [], "SQL/DATABASE", {all:true});', it of course dumps only the tables, and this can easily be loaded into a brand new database with mysqlsh --database=NEWDATABASE --execute 'util.loadDump("SQL/DATABASE", {schema: "NEWDATABASE", ignoreVersion: true, resetProgress: true});'. The problem is that it is missing the functions and stored procedures.
So then I tried mysqlsh --execute 'util.dumpSchemas(["DATABASE"], "DATABASE");', and then load it into a new DB with mysqlsh --database=NEWDATABASE --execute 'util.loadDump("DATABASE", {dryRun: true, ignoreVersion:true});', but I instantly notice that it is trying to load into the original database, not my new database. So how do I load it into a NEW database, one with a totally different name?
In case you are wondering, I am trying to learn how to maximize mysqlsh for my use case. So the old mysqldump is not an option in this case.

I think you will just have to edit the .sql file(s) with a text editor before you try to load it.
This tool is really for dumping schemas and importing them to a different MySQL instance, but leaving the schema names unchanged.
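A minimal sketch of that idea, assuming the dump directory produced by util.dumpSchemas is named DATABASE and that the schema name appears backtick-quoted inside the generated .sql files (the name may also appear in the dump's JSON metadata files, so check those as well before loading):
grep -rl --include='*.sql' 'DATABASE' DATABASE | xargs sed -i 's/`DATABASE`/`NEWDATABASE`/g'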

How to import the data from a data dump (SqLite3) into django?

I have a data dump file (.sql) containing certain data, and I would like to import it into my database (currently SQLite3, but I'm thinking about changing to MySQL) in order to use it on my test website.
Also, I need to know whether it's possible to add the models automatically too. I presume they need to be added manually, but if there is any way to solve this, please suggest it.
There is a way to generate the Django models automatically, given that you have an existing database.
However, this is a shortcut. You may then fine-tune your models and add them to your apps as needed.
Structuring your models into apps might force you to use the model's db_table Meta option.
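Presumably the command meant here is Django's inspectdb, which introspects an existing database and prints model definitions you can paste into your app:
python manage.py inspectdb > models.py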
If at some point you would like to switch databases (Sqlite3 -> MySQL) you can export (dump) your current data to json. Then you could import (load) the data to the new database (after creating the database tables with migrate command). To do this you can use Django management commands:
Dump data
Load data
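For reference, the dump and load steps would look roughly like this (the fixture file name is just a placeholder):
python manage.py dumpdata > data.json
python manage.py migrate
python manage.py loaddata data.json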
I was able to get an alternative answer after researching a bit.
Since I have a PostgreSQL data dump file with the file extension '.sql', I was able to run a single command that imported the whole data dump into my local database, which is PostgreSQL. I'm using pgAdmin 4 as my database management system; psql was installed during the installation of pgAdmin 4, and I added it to the PATH of my command prompt, hence it was accessible.
In order to import the data dump, I used the command provided below,
psql -U <username> -d <database_name> < <file.sql>
The '<' after database_name is necessary, so be sure to include it.
Here <username> is the username of the configured account, <database_name> is the database to which the data dump should be added, and <file.sql> is the file containing the data dump.

MySQL Workbench - How to clone a database on the same server with different name?

I am using MySQL Workbench and I want to clone a database on the same server with a different name. It should duplicate all the table structures and data into the new database.
I know the usual way is probably to use Data Export to generate an SQL script of the database and then run the script against the new database, but I encountered some issues with it.
Anyway, is there any better or easier way to do this?
You can use the Migration Wizard from MySQL Workbench. Just choose the same local connection in both the source and target selection, then change the schema name on the manual editing step. If nothing appears on the manual editing step, click Next and the source and targets will appear. Click slowly on the source database name and edit it to the correct name. Go through to the end and voilà - you have two identical databases with different names. Note you must have already created the target database and granted permissions on it to the MySQL Workbench user.
I tried to do it in MySQL Workbench 8.0. However, I kept receiving an error regarding column-statistics. The main idea is to use mysqldump.exe, located in the installation directory of MySQL Workbench, to export the data. So, assuming a Windows-oriented platform:
Open PowerShell and navigate to the mysqldump.exe directory. In my case the command is:
cd C:\Program Files\MySQL\MySQL Workbench 8.0 CE
Export the database by executing mysqldump with the right arguments:
./mysqldump.exe --host=[hostServerIP] --protocol=tcp --user=[nameOfUser] --password=[yourPassword] --dump-date=FALSE --disable-keys=FALSE --port=[portOfMysqlServer] --default-character-set=utf8 --skip-triggers --column-statistics=0 "[databaseName]"
Without changing the directory, import the exported file (.sql) by using the following command in PowerShell:
Get-Content "[pathToExportedDataFile]" | ./mysql.exe --user=[nameOfUser] --password=[yourPassword] --port=[portOfMysqlServer] --host=[hostServerIP] --database=[nameOfNewDatabase] --binary-mode=1
You can check in the documentation here for more information regarding the mysqldump options.
Please note the following:
Do not forget to replace the values in [] with your own values and to remove the []. Do not remove the quotes ("") where they are present.
Do not swap PowerShell for cmd or something like git-bash, since the above will not work.
As far as step 3 is concerned, I created the new database from MySQL Workbench and then ran the PowerShell command.
First, create a new database using the CREATE DATABASE statement.
Second, export all the database objects and data of the database you want to copy using the mysqldump tool.
Third, import the SQL dump file into the new database.
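A minimal sketch of those three steps from the command line (database and user names here are placeholders):
mysql -u youruser -p -e "CREATE DATABASE newdb"
mysqldump -u youruser -p --routines --triggers olddb > olddb.sql
mysql -u youruser -p newdb < olddb.sql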

retrieve source code create a mysql table

I got a DB in MySQL (that I haven't created); I do not have the code that was used to build it.
I want to know what code was used to create one of the tables in the DB. Is there an option to do so? I need to create the same table but with different data.
Thanks a lot!
P
In MySQL Workbench you can display the DDL for any DB object. Just right-click on it in the schema tree and choose either Copy to Clipboard or Send to SQL Editor, then Create Statement.
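For reference, the same DDL can also be retrieved from any MySQL client (the table name below is a placeholder):
SHOW CREATE TABLE yourtable;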
This is a late answer, but since I don't see any reference to it, I'll suggest you perform a dump of your database. Every decent DBMS now has a tool to do it. With MySQL, from the command line, this would be:
mysqldump -u <username> <database_name> > yourfile.sql
This performs a complete dump of your database in SQL format, enabling you to recreate it elsewhere. No need for any special tool to do it when you need to. Just pass the content of the file to the regular MySQL client.
If you want to get only the database schema without any data, just pass the "--no-data" option.
mysqldump --no-data -u <username> <database_name> > yourfile.sql
You'll now be able to recreate a brand new, virgin database, having all attributes and special features of the previous one, without the data.
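A quick sketch of the restore side (the new database name is a placeholder):
mysql -u <username> -p -e "CREATE DATABASE newdatabase"
mysql -u <username> -p newdatabase < yourfile.sql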

What parameters does MySQL Workbench pass to mysqldump?

I need to write a script to automate MySQL backup of a database. So to determine what I will need, I go into MySQL Workbench, select the Schema, select Data Export, set a couple of controls (at the moment: Export to Self-Contained File & Include Create Schema) and Start Export.
Export Progress shows me command-line:
Running: mysqldump --defaults-file="/tmp/tmpTbhnzh/extraparams.cnf" --user=*** --host=*** --protocol=tcp --port=3306 --default-character-set=utf8 --skip-triggers "<schema-name>"
I need to know what is in that temporary "defaults file" if I'm to replicate whatever it is that MySQL Workbench passes to mysqldump. But the backup completes so quickly and deletes the file that I can't even copy it, of course!
Is there a way I can know just what arguments Workbench is passing to mysqldump so I can know I'm generating a good, robust script? (To be clear: I'm sure I can look up the mysqldump documentation to find arguments corresponding to whatever UI items I fill in explicitly, but I'm wondering what other "goodies" MySQL Workbench might know about and put in the parameters file.)
A bit of digging about in the Python scripts (there's one called wb_admin_export.py) and the answer is... not very exciting: it's your password.
It also includes ignore-tables if there are any to ignore.
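So the temporary defaults file presumably contains something like the following (these contents are an assumption based on the answer above, not captured from Workbench):
[mysqldump]
password=yourpassword
ignore-table=yourschema.sometable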

Import MySQL dump to PostgreSQL database

How can I import an "xxxx.sql" dump from MySQL to a PostgreSQL database?
This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.
This is by far the easiest way of doing it: you need to install it, and then run a simple lisp script (script.lisp) with the following 3 lines:
/* content of the script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;
/*run this in the terminal*/
pgloader script.lisp
And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.
On a side note, make sure you compile pgloader, since at the time of this post the installer has a bug (version 3.2.0).
Mac OS X
brew update && brew install pgloader
pgloader mysql://user@host/db_name postgresql://user@host/db_name
Don't expect that to work without editing. Maybe a lot of editing.
mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.
You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got, instead of just saying "Nothing worked for me."
The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.
The steps:
Download Pentaho ETL from http://kettle.pentaho.org/ (community version)
Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)
Create a new job
Create a database connection for the MySQL source
(Tools -> Wizard -> Create database connection)
Create a database connection for the PostgreSQL source (as above)
Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)
Run the job
You could potentially export to CSV from MySQL and then import CSV into PostgreSQL.
For those Googlers who are in 2015+.
I've wasted all day on this and would like to sum things up.
I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me.
— lanyrd/mysql-postgresql-converter # github.com (Python)
But this alone won't do. When you import your new converted dump file:
# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql
PostgreSQL will complain about the messed-up types coming from MySQL:
psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR: type "mediumint" does not exist
LINE 2: "group_id" mediumint(8) NOT NULL DEFAULT '0',
So you'll have to fix those types manually as per this table.
In short it is:
tinyint(2) -> smallint
mediumint(7) -> integer
# etc.
You can use regex and any cool editor to get it done.
MacVim + Substitute:
:%s!tinyint(\w\+)!smallint!g
:%s!mediumint(\w\+)!integer!g
You can use pgloader.
sudo apt-get install pgloader
Using:
pgloader mysql://user:pass@host/database postgresql://user:pass@host/database
Mac/Win
Download the Navicat trial for 14 days (I don't understand the $1300 price) - full enterprise package:
connect both databases mysql and postgres
menu - tools - data transfer
Connect both DBs on this first screen. While still on this screen, go to General / Options - under Options, on the right side, check "continue on error".
* Note: you probably want to uncheck indexes and keys on the left; you can reassign them easily in Postgres.
At least get your data from MySQL into Postgres!
hope this helps!
I have this bash script to migrate the data. It doesn't create the tables because they are created by migration scripts, so I only need to convert the data. I use a list of the tables to avoid importing data from the migrations and sessions tables. Here it is, just tested:
#!/bin/sh
MUSER="root"
MPASS="mysqlpassword"
MDB="origdb"
MTABLES="car dog cat"
PUSER="postgres"
PDB="destdb"
mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql
sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql
psql -h localhost -d $PDB -U $PUSER -f outputfile.sql
You will get a lot of warnings that you can safely ignore, like this:
psql:outputfile.sql:82: WARNING: nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
With pgloader
Get a recent version of pgloader; the one provided by Debian Jessie (as of 2019-01-27) is 3.1.0 and won't work since pgloader will error with
Can not find file mysql://...
Can not find file postgres://...
Access to MySQL source
First, make sure you can establish a connection to mysqld on the server running MySQL using
telnet theserverwithmysql 3306
If that fails with
Name or service not known
log in to theserverwithmysql and edit the configuration file of mysqld. If you don't know where the config file is, use find / -name mysqld.cnf.
In my case I had to change this line of mysqld.cnf
# By default we only accept connections from localhost
bind-address = 127.0.0.1
to
bind-address = *
Mind that allowing access to your MySQL database from all addresses can pose a security risk, meaning you probably want to change that value back after the database migration.
Make the changes to mysqld.cnf effective by restarting mysqld.
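On a systemd-based distribution that would be something like the following (the service name may differ on your system):
sudo systemctl restart mysql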
Preparing the Postgres target
Assuming you are logged in on the system that runs Postgres, create the database with
createdb databasename
The user for the Postgres database has to have sufficient privileges to create the schema, otherwise you'll run into
permission denied for database databasename
when calling pgloader. I got this error although the user had the right to create databases according to psql > \du.
You can make sure of that in psql:
GRANT ALL PRIVILEGES ON DATABASE databasename TO otherusername;
Again, this might be privilege overkill and thus a security risk if you leave all those privileges with user otherusername.
Migrate
Finally, the command
pgloader mysql://theusername:thepassword@theserverwithmysql/databasename postgresql://otherusername@localhost/databasename
executed on the machine running Postgres should produce output that ends with a line like this:
Total import time ✓ 877567 158.1 MB 1m11.230s
It is not possible to import an Oracle (binary) dump to PostgreSQL.
If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax correct for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the engine definitions from the CREATE TABLE statements, adjust the data types, and a lot of other things).
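A rough sketch of the kind of edits meant, assuming a plain-SQL dump named dump.sql (real dumps usually need considerably more work than this):
sed -i 's/`//g' dump.sql
sed -i 's/ ENGINE=[A-Za-z0-9]*//g' dump.sql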
Here is a simple program to create and load all tables from a MySQL database (honey) into PostgreSQL. Type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:
import MySQLdb
from magic import Connect #Private mysql connect information
import psycopg2

dbx=Connect()
DB=psycopg2.connect("dbname='honey'")
DC=DB.cursor()

mysql='''show tables from honey'''
dbx.execute(mysql); ts=dbx.fetchall(); tables=[]
for table in ts: tables.append(table[0])
for table in tables:
    mysql='''describe honey.%s'''%(table)
    dbx.execute(mysql); rows=dbx.fetchall()
    psql='drop table %s'%(table)
    DC.execute(psql); DB.commit()
    psql='create table %s ('%(table)
    for row in rows:
        name=row[0]; type=row[1]
        if 'int' in type: type='int8'
        if 'blob' in type: type='bytea'
        if 'datetime' in type: type='timestamptz'
        psql+='%s %s,'%(name,type)
    psql=psql.strip(',')+')'
    print psql
    try: DC.execute(psql); DB.commit()
    except: pass
    msql='''select * from honey.%s'''%(table)
    dbx.execute(msql); rows=dbx.fetchall()
    n=len(rows); print n; t=n
    if n==0: continue #skip if no data
    cols=len(rows[0])
    for row in rows:
        ps=', '.join(['%s']*cols)
        psql='''insert into %s values(%s)'''%(table, ps)
        DC.execute(psql,(row))
        n=n-1
        if n%1000==1: DB.commit(); print n,t,t-n
    DB.commit()
As with most database migrations, there isn't really a cut and dried solution.
These are some ideas to keep in mind when doing a migration:
Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
Primary key sequences will be generated differently in each database.
Foreign keys will be pointing to your new sequences.
Indexes will be different and will probably need to be tweaked.
Any stored procedures will have to be rewritten
Schemas. MySQL doesn't use them (at least it didn't when I last used it), PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then enabling them. Also, all of your new sequences will have to be reset to the highest id in each table (see the example after this list). If not, the next record that is inserted will fail with a primary key violation.
Rewriting your code to work with the new database. It should work but probably won't.
Don't forget the triggers. I use create and update date triggers on most of my tables. Each DB handles them a little differently.
Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!
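For the sequence reset mentioned above, a hedged PostgreSQL example (table and column names are placeholders):
SELECT setval(pg_get_serial_sequence('mytable', 'id'), (SELECT MAX(id) FROM mytable));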
I had to do this recently with a lot of large .sql files, approximately 7 GB in size. Even VIM had trouble editing those. Your best bet is to import the .sql into MySQL and then export it as a CSV, which can then be imported into Postgres.
But MySQL's export to CSV is horrendously slow, as it runs a select * from yourtable query. If you have a large database/table I would suggest using some other method. One way is to write a script that reads the SQL inserts line by line and uses string manipulation to reformat them into "Postgres-compliant" insert statements, then executes those statements in Postgres.
I could copy tables from MySQL to Postgres using DBCopy Plugin for SQuirreL SQL Client.
This was not from a dump, but between live databases.
Use your xxx.sql file to set up a MySQL database and make use of FromMySqlToPostgreSql. Very easy to use, short configuration, and it works like a charm. It imports your database with the primary keys, foreign keys and indices set on the tables. You can even import data alone if you set the appropriate flag in the config file.
The FromMySqlToPostgreSql migration tool by Anatoly Khaytovich provides an accurate migration of table data, indices, PKs, FKs... It makes extensive use of the PostgreSQL COPY protocol.
See here too: PG Wiki Page
If you are using phpmyadmin you can export your data as CSV and then it will be easier to import in postgres.
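If you go the CSV route, the import on the Postgres side would look roughly like this in psql (table and file names are placeholders):
\copy mytable FROM 'data.csv' WITH (FORMAT csv, HEADER)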
Take a dump file of the MySQL database.
Use this tool for converting a local MySQL database to a local PostgreSQL database.
Clone the repository into a new folder or the root directory:
git clone https://github.com/AnatolyUss/nmig.git
cd nmig
git checkout v5.5.0
Open config/config.json after the checkout (e.g. with nano).
Add the source database and destination database, along with the username and password:
"source": {
"host": "localhost",
"port": 3306,
"database": "test_db",
"charset": "utf8mb4",
"supportBigNumbers": true,
"user": "root",
"password": "0123456789"
}
"target": {
"host" : "localhost",
"port" : 5432,
"database" : "test_db",
"charset" : "UTF8",
"user" : "postgres",
"password" : "0123456789"
}
After modifying the config/config.json file, run:
npm install
npm run build
npm start
After all these commands finish, you will notice that your MySQL database has been transferred to the PostgreSQL database.