Importing MySQL to Postgres. Permission issues

I have MySQL and Postgres databases. I have been working on a MySQL DB which is populated with my data. Now, to use Heroku, I need to port it to Postgres. These are the steps I followed:
I exported data from my Mysql DB by simple dump command:
mysqldump -u [uname] -p[pass] db_name > db_backup.sql
I logged into my Postgres
sudo su postgres
Now when I try to import the SQL into Postgres, it does not have access to db_backup.sql. I changed the permissions and made the dump file readable and writable by all users, but I still cannot import it.
My question is: what is the correct way to duplicate (both schema and data) from MySQL to Postgres? Also, why am I not able to access the dump file even after changing the permissions? And if I have a dump from MySQL, what are the chances it runs into issues when run against Postgres? (I do not have any procedural stuff in my MySQL, just table creation and data inserts.)
Thanks!
P.S. I am on Mac-Mavericks if that matters

While the primary part of the question was answered by @wildplasser, I thought I would put up the entire answer for people looking at porting MySQL data to Postgres.
After trying out multiple solutions, the easiest and quite smooth solution was this: https://github.com/lanyrd/mysql-postgresql-converter
This worked quite smoothly, with just one problem: it does not port MySQL's auto-increment columns to Postgres sequences. This means that if you have auto-increment primary ids, you will have to change your Postgres schema separately and create serial sequences after the porting is done. Apart from that, it was quite smooth.
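For reference, re-adding the auto-increment behaviour on the Postgres side might look like this (table and column names here are made up):
CREATE SEQUENCE mytable_id_seq OWNED BY mytable.id;
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');
SELECT setval('mytable_id_seq', (SELECT COALESCE(MAX(id), 1) FROM mytable));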
To talk about the permission issue: logging in as the postgres user and trying to access a dump created by the original user failed. The right way is to stay logged in as the original user and use the postgres user only for the DB operation, via the -U postgres option.
E.g.: psql -U postgres databasename < data_base_dump
While for many this must be the obvious way of doing it, I must admit it was one of those eureka moments for me :)

Related

Some files lost in MySQL database. How to re-create them in the proper way?

The problem is that one MYI and one MYD file from a MySQL database have been accidentally deleted. The only file left intact is the FRM one. Only one table from the whole database is damaged that way; all other tables are OK and the database generally works fine, except for the table with the deleted files, which is obviously inaccessible.
There's a full database dump in pure SQL format available.
The question is: how do I re-create these files and this table in a safe and proper manner?
My first idea was to extract the full CREATE TABLE command from the dump and run it on the live database. It's not so easy, as the whole dump file is over 10GB, so any operations on its content are a real pain. Yes, I know about sed and how to use it, but I consider it the option of last resort.
Second and current idea is to create a copy of this database on an independent server, make a dump of the table in question, and then use the resulting SQL file to create the table again on the production server. I'm not very experienced with MySQL administration tasks (well, just basic ones), but to me this option seems safe and reasonable.
Will the second option work as I expect?
Is it the best option, or are there any more recommendable solutions?
Thank you in advance for your help.
The simplest solution is to copy the table you deleted. There's a chance mysqld still has an open file handle to the data files you deleted. On UNIX/Linux/OS X, a file isn't truly deleted while some process still has an open file handle to it.
So you might be able to do this:
mysql> CREATE TABLE mytable_copy LIKE mytable;
mysql> INSERT INTO mytable_copy SELECT * FROM mytable;
If you've restarted MySQL Server since you deleted the files, this won't work. If the server has closed its file handle to the data file, this won't work. If you're on Windows, I have no idea.
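On Linux/OS X you can check up front whether mysqld still holds the deleted files open; a quick sketch (the database name mydatabase is a placeholder, and on Linux lsof marks such files with "(deleted)"):
lsof -c mysqld | grep mydatabase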
The next simplest solution is to restore your existing 10GB dump file to a temporary instance of MySQL Server, as you said. I'd use MySQL Sandbox but some people would use a virtual machine, or if you're using an AWS environment, launch a spot EC2 instance or a small RDS instance.
Then dump just the table you need:
mysqldump -h tempserver mydatabase mytable > mytable.sql
Then restore it to your real server.
mysql -h realserver mydatabase < mytable.sql
(I'm omitting the user and password options; I prefer to put those in .my.cnf anyway.)
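For reference, a minimal ~/.my.cnf carrying those credentials looks like this (values are placeholders; keep the file private, e.g. chmod 600):
[client]
user=youruser
password=yourpassword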

Switching hosts, want to transfer my database

I'm considering switching to a new hosting provider, and I would like to transfer my database for my production site to the new hosting provider. I'm using mysql. What are the steps I would need to take to transfer my db?
Appreciate any help.
Thank you,
Brian
Assuming a relatively simple app (PHP, something like that), one app server, one db server, then briefly:
On the new host, create the database accounts that you're using on the old host's database.
Copy the app code over.
"Lock" your app on the old host so no data changes can occur (if this is feasible.)
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html is your friend. Dump schema and data, and capture it to a file. Here is the command I used to dump the database exampledb with the login example:
mysqldump --add-drop-table -u example -p exampledb > output.sql
(The --add-drop-table makes it easier to re-run the script if you need to later. But it does create a script that will destroy your database, so careful how you run it.)
Now copy (maybe using scp) the output.sql file to your new host.
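For instance (user and hostname here are placeholders):
scp output.sql example@newhost.example.com:~/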
On the new host, run mysql to build the database with the schema and data from the old host. I use a command like this one, assuming user "example" and a database name of "exampledb":
mysql -u example -p exampledb < output.sql
(Be careful to run this ONLY ON THE NEW HOST. It will obliterate your database.)
The nice thing is, you've got a blank slate of a new machine. You can keep trying different things on that machine without breaking anything.
Turn on the app on new host. Test. If it's been a while, you may need to make changes to get your code up to a newer version of the language. (I did in my case. But maybe you were better about keeping your code up to date.)
Shut down app on old host.
Point DNS/router/whatever to new host.
What'd I miss? (Just went through this moving my silly website to a new machine.)
It's pretty simple, especially for just a single database: a mysqldump on the old host, then feed the resulting dump to mysql on the new one.
Generating the .sql file is all you need, because it contains all of the table information, including the index definitions; when you then run through all of the inserts, the indexes get created as well.
If you struggle with command lines, may I suggest using Navicat Lite. It is free, and is the best GUI that I've seen on the market.

Import MySQL dump to PostgreSQL database

How can I import an "xxxx.sql" dump from MySQL to a PostgreSQL database?
This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.
This is by far the easiest way of doing it, you need to install it, and then run a simple lisp script (script.lisp) with the following 3 lines:
/* content of the script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;
/*run this in the terminal*/
pgloader script.lisp
And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.
On a side note: make sure you compile pgloader, since at the time of this post the installer has a bug (version 3.2.0).
Mac OS X
brew update && brew install pgloader
pgloader mysql://user@host/db_name postgresql://user@host/db_name
Don't expect that to work without editing. Maybe a lot of editing.
mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.
You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got instead of saying just "Nothing worked for me."
The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.
The steps:
Download Pentaho ETL from http://kettle.pentaho.org/ (community version)
Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)
Create a new job
Create a database connection for the MySQL source
(Tools -> Wizard -> Create database connection)
Create a database connection for the PostgreSQL source (as above)
Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)
Run the job
You could potentially export to CSV from MySQL and then import CSV into PostgreSQL.
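A rough sketch of that route (the table name and file path are made up; INTO OUTFILE needs the FILE privilege on the MySQL side):
mysql> SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
postgres=# \copy mytable FROM '/tmp/mytable.csv' WITH (FORMAT csv)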
For those Googlers who are in 2015+.
I've wasted all day on this and would like to sum things up.
I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me.
— lanyrd/mysql-postgresql-converter # github.com (Python)
But this alone won't do. When you import your new converted dump file:
# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql
PostgreSQL will complain about the messed-up types from MySQL:
psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR: type "mediumint" does not exist
LINE 2: "group_id" mediumint(8) NOT NULL DEFAULT '0',
So you'll have to fix those types manually as per this table.
In short it is:
tinyint(2) -> smallint
mediumint(7) -> integer
# etc.
You can use regex and any cool editor to get it done.
MacVim + Substitute:
:%s!tinyint(\w\+)!smallint!g
:%s!mediumint(\w\+)!integer!g
You can use pgloader.
sudo apt-get install pgloader
Using:
pgloader mysql://user:pass@host/database postgresql://user:pass@host/database
Mac/Win
Download the Navicat trial for 14 days (I don't understand the $1300 price tag) - full enterprise package:
connect both databases mysql and postgres
menu - tools - data transfer
connect both DBs on this first screen. While still on this screen, open General / Options and, under Options, check "continue on error" on the right side.
Note: you probably want to uncheck indexes and keys on the left; you can recreate them easily in Postgres.
At least get your data from MySQL into Postgres!
hope this helps!
I have this bash script to migrate the data. It doesn't create the tables, because they are created by migration scripts, so I only need to convert the data. I use a list of the tables to avoid importing data from the migrations and sessions tables. Here it is, just tested:
#!/bin/sh
# MySQL source connection settings
MUSER="root"
MPASS="mysqlpassword"
MDB="origdb"
# Only these tables get migrated (this skips the migrations/sessions tables)
MTABLES="car dog cat"
# Postgres target settings
PUSER="postgres"
PDB="destdb"
# Dump data only (--no-create-info), in Postgres-compatible syntax
mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql
# Turn MySQL's LOCK TABLES ... WRITE; / UNLOCK TABLES; pairs into TRUNCATE ... RESTART IDENTITY CASCADE;
sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
# MySQL zero-dates are invalid in Postgres; replace them with NULL
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
# Prepend session settings so MySQL-style backslash escapes are accepted
sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
# Temporarily make casts to boolean implicit so 0/1 values load, then restore the default
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql
psql -h localhost -d $PDB -U $PUSER -f outputfile.sql
You will get a lot of warnings that you can safely ignore, like this one:
psql:outputfile.sql:82: WARNING: nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
With pgloader
Get a recent version of pgloader; the one provided by Debian Jessie (as of 2019-01-27) is 3.1.0 and won't work since pgloader will error with
Can not find file mysql://...
Can not find file postgres://...
Access to MySQL source
First, make sure you can establish a connection to mysqld on the server running MySQL using
telnet theserverwithmysql 3306
If that fails with
Name or service not known
log in to theserverwithmysql and edit the configuration file of mysqld. If you don't know where the config file is, use find / -name mysqld.cnf.
In my case I had to change this line of mysqld.cnf
# By default we only accept connections from localhost
bind-address = 127.0.0.1
to
bind-address = *
Mind that allowing access to your MySQL database from all addresses can pose a security risk, meaning you probably want to change that value back after the database migration.
Make the changes to mysqld.cnf effective by restarting mysqld.
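For example, depending on the init system (the service name may differ between distributions):
sudo systemctl restart mysql
# or, on older systems:
sudo service mysql restart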
Preparing the Postgres target
Assuming you are logged in on the system that runs Postgres, create the database with
createdb databasename
The user for the Postgres database has to have sufficient privileges to create the schema, otherwise you'll run into
permission denied for database databasename
when calling pgloader. I got this error although the user had the right to create databases according to psql > \du.
You can make sure of that in psql:
GRANT ALL PRIVILEGES ON DATABASE databasename TO otherusername;
Again, this might be privilege overkill and thus a security risk if you leave all those privileges with user otherusername.
Migrate
Finally, the command
pgloader mysql://theusername:thepassword@theserverwithmysql/databasename postgresql://otherusername@localhost/databasename
executed on the machine running Postgres should produce output that ends with a line like this:
Total import time ✓ 877567 158.1 MB 1m11.230s
It is not possible to import an Oracle (binary) dump to PostgreSQL.
If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax valid for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the ENGINE definitions from the CREATE TABLE statements, adjust the data types, and a lot of other things).
Here is a simple program to create and load all tables in a MySQL database (honey) into PostgreSQL. Type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:
import MySQLdb
from magic import Connect  # private MySQL connect information
import psycopg2

dbx=Connect()
DB=psycopg2.connect("dbname='honey'")
DC=DB.cursor()

# Collect the table names from the MySQL database
mysql='''show tables from honey'''
dbx.execute(mysql); ts=dbx.fetchall(); tables=[]
for table in ts: tables.append(table[0])

for table in tables:
    # Read the MySQL column definitions for this table
    mysql='''describe honey.%s'''%(table)
    dbx.execute(mysql); rows=dbx.fetchall()
    psql='drop table if exists %s'%(table)
    DC.execute(psql); DB.commit()
    # Build a CREATE TABLE statement with coarse type conversions
    psql='create table %s ('%(table)
    for row in rows:
        name=row[0]; type=row[1]
        if 'int' in type: type='int8'
        if 'blob' in type: type='bytea'
        if 'datetime' in type: type='timestamptz'
        psql+='%s %s,'%(name,type)
    psql=psql.strip(',')+')'
    print psql
    try: DC.execute(psql); DB.commit()
    except: pass
    # Copy the rows, committing every 1000 inserts
    msql='''select * from honey.%s'''%(table)
    dbx.execute(msql); rows=dbx.fetchall()
    n=len(rows); print n; t=n
    if n==0: continue  # skip if no data
    cols=len(rows[0])
    ps=', '.join(['%s']*cols)
    psql='''insert into %s values(%s)'''%(table, ps)
    for row in rows:
        DC.execute(psql,row)
        n=n-1
        if n%1000==1: DB.commit(); print n,t,t-n
    DB.commit()
As with most database migrations, there isn't really a cut and dried solution.
These are some ideas to keep in mind when doing a migration:
Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
Primary key sequences will be generated differently in each database.
Foreign keys will be pointing to your new sequences.
Indexes will be different and will probably need tweaking.
Any stored procedures will have to be rewritten.
Schemas. MySQL doesn't use them (at least not as of when I last used it), PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then re-enabling them. Also, all of your new sequences will have to be reset to the highest id in each table; if not, the next record that is inserted will fail with a primary key violation (see the sketch after this list).
Rewriting your code to work with the new database. It should work but probably won't.
Don't forget the triggers. I use create and update date triggers on most of my tables. Each DB handles them a little differently.
Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!
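To illustrate the disable-load-reset cycle from the data migration point above, a Postgres sketch (table and column names are hypothetical; DISABLE TRIGGER ALL needs superuser rights and also suspends foreign key checks):
ALTER TABLE mytable DISABLE TRIGGER ALL;
-- bulk INSERTs go here
ALTER TABLE mytable ENABLE TRIGGER ALL;
SELECT setval(pg_get_serial_sequence('mytable', 'id'), (SELECT MAX(id) FROM mytable));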
I had to do this recently with a lot of large .sql files, approximately 7 GB in size. Even Vim had trouble editing them. Your best bet is to import the .sql into MySQL and then export it as CSV, which can then be imported into Postgres.
But the MySQL CSV export is horrendously slow, as it runs a select * from yourtable query. If you have a large database/table, I would suggest using some other method. One way is to write a script that reads the SQL inserts line by line and uses string manipulation to reformat them into "Postgres-compliant" insert statements, then executes those statements in Postgres.
I could copy tables from MySQL to Postgres using DBCopy Plugin for SQuirreL SQL Client.
This was not from a dump, but between live databases.
Use your xxx.sql file to set up a MySQL database and make use of FromMySqlToPostgreSql. Very easy to use, short configuration, and it works like a charm. It imports your database with the primary keys, foreign keys and indices set on the tables. You can even import data alone if you set the appropriate flag in the config file.
The FromMySqlToPostgreSql migration tool by Anatoly Khaytovich provides an accurate migration of table data, indices, PKs, FKs... It makes extensive use of the PostgreSQL COPY protocol.
See here too: PG Wiki Page
If you are using phpMyAdmin, you can export your data as CSV and then it will be easier to import into Postgres.
Take a dump file of the MySQL database.
Use this tool for converting a local MySQL database to a local PostgreSQL database.
Clone it into a new folder or your root directory:
git clone https://github.com/AnatolyUss/nmig.git
cd nmig
git checkout v5.5.0
After checkout, open config/config.json (e.g. with nano).
Add the source database and destination database, along with username and password:
"source": {
"host": "localhost",
"port": 3306,
"database": "test_db",
"charset": "utf8mb4",
"supportBigNumbers": true,
"user": "root",
"password": "0123456789"
}
"target": {
"host" : "localhost",
"port" : 5432,
"database" : "test_db",
"charset" : "UTF8",
"user" : "postgres",
"password" : "0123456789"
}
After modification of config/config.json file run:
npm install
npm run build
npm start
After these commands finish, you will see that your MySQL database has been transferred to the PostgreSQL database.

How to take a dump of the server DB in MySQL

I have a database on the server, but as a developer, when we find a bug in the product, we need a dump of the current server database to resolve it quickly. As the DB size is quite large, it is not practical to create a dump and download it every day; that wastes time. So I want to know: is there a tool or method that will give me only the data that is not present on my local machine, so I can integrate that new data into the DB on my localhost? That would save development time. I know of DB diff tools like mysql-diff and Toad for MySQL, but I don't think they will solve the problem, as they are only useful for seeing the differences between two DBs. If they can solve my problem, please let me know how.
Any help to achieve this will be appreciable.
As you're talking about a production database, I'd err on the side of caution, and just use mysqldump to dump out the relevant tables, rather than the whole database.
mysqldump -u dbuser -p -h 127.0.0.1 database_name table1 table2 table_etc
Alternatively, you could try rsync to synchronise the actual database files. You'll need to flush the tables, too - to ensure the data is written to disk, rather than hanging around in buffers.
If you do try the rsync method, just be sure to test it extensively.
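A sketch of that approach (paths and hostnames are made up; the read lock only lasts as long as the session that took it, so keep that session open while the copy runs):
mysql> FLUSH TABLES WITH READ LOCK;
# in another shell, while the lock is held:
rsync -avz /var/lib/mysql/database_name/ devbox:/var/lib/mysql/database_name/
mysql> UNLOCK TABLES;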

How do I register a MySQL database?

Sorry for a noob question regarding MySQL. I downloaded FlightStats to learn about MySQL, but I can't figure out how to register it with my localhost MySQL DB. I know in MS SQL you can simply register any SQL DB using SQL Studio. I tried to Google it but came up with no results; perhaps my search phrase is wrong. I'm searching with "how to register a mysql database", "register a mysql database", etc. How do you register or set up a database from an existing database like FlightStats? I'm using DBVisualizer. Is there a way in DbVis that I'm not aware of to register a database?
Thanks
Edit: sorry for the bad wording. I found this. I have the .myd, .myi and .frm files, and I want to restore them into my local MySQL instance. I've looked at all the answers, but I'm still confused about how to restore the database from those three files.
A little background first. The FlightStats download page linked to in the original question appears to provide zipped tarballs of the binary table storage files from the MySQL data directory. Given that this is considered a viable means of distribution, and combined with the use of MERGE tables, I would surmise that this tarball contains a bunch of MyISAM data files (.myi, .myd). Jack's edit confirms that this is the situation.
This is an atypical means of distributing a MySQL data set, although not at all uncommon when backing up MyISAM storage, and probably not all that unheard of for moving large data sets around; it likely works out considerably more space-efficient than a corresponding dump file. Of course, in SQL Server land, it's pretty common to attach database files into an instance.
Broadly speaking, you'd recover the database as follows:
Locate the MySQL data directory; typically /var/mysql or similar
Create a new directory with the desired database name e.g. flightdata
Extract the .myi, .myd and other files from the tarball into this directory
Make sure the entire directory is owned by the user MySQL runs as (usually mysql) - use chown -R to make sure you get everything
Open a MySQL console
USE <database-name>
SHOW TABLES
You should see some tables listed. In addition, the downloads page linked includes a couple of SQL scripts, which contain SQL commands that you need to run against your database once it's in place. These will cause the merge definitions and table indexes to be rebuilt. You can pipe these into the command-line client, e.g. mysql -u<username> -p<password> <database-name> < <sql-file>.
It may be a good idea to shut down the MySQL server while you're doing this; use e.g. /etc/init.d/mysql stop or similar, and restart once the files are extracted in place.
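Put together, the recovery might look something like this (paths, the tarball name, and the init script are assumptions for a typical Linux setup):
sudo /etc/init.d/mysql stop
sudo mkdir /var/lib/mysql/flightdata
sudo tar -xzf flightstats.tar.gz -C /var/lib/mysql/flightdata
sudo chown -R mysql:mysql /var/lib/mysql/flightdata
sudo /etc/init.d/mysql start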
There's generally a way to import sql files using a GUI database tool. I'm not familiar with DBVisualizer, but as long as you have a MySQL command line client installed you can do it there as well. It's pretty easy:
Create a blank schema. You can do this in your GUI tool or on the command line client. Just use CREATE DATABASE flightstats;, or whatever name you want.
Use the following command line syntax to import/run an sql file on the new schema: mysql -u <username> -p flightstats < /path/to/file.sql
The -p option prompts for a password. I generally set up the database using step 1 as the root user, then GRANT some permissions on it to a new user id, then use that user id to run the SQL file.
This process is pretty much what a GUI tool will do in the background.
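For instance, that root-then-GRANT flow might look like this (user name and password are placeholders):
mysql> CREATE DATABASE flightstats;
mysql> CREATE USER 'flightuser'@'localhost' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON flightstats.* TO 'flightuser'@'localhost';
mysql> FLUSH PRIVILEGES;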
Registering a database? I don't know what that means, but the MySQL GUI tools can help you create a database. Have a look at them, or better, download phpMyAdmin.
Google WAMP for Windows.
Google MAMP for Mac.
Google LAMP for Linux.
Any questions?