Situation
I had a table with 8 columns, then I needed 2 more fields:
company_wieght
server_type_weight
So I ran a migration to add those fields, and now I have 10 columns.
I have no data in there right now.
I want to copy/paste all the data from my staging server back to my local machine. I keep getting an error:
How do I solve this problem?
Is there a way to force-paste the rows and leave the other 2 columns as NULL/blank? That way I could add data to them later via a migration.
I'm a little stuck on this now.
Seems there are several aspects mixed in here. For MySQL Workbench: the source and target column counts in a copy/paste action must be the same; there is no way around it. However, I wouldn't copy data over with copy/paste unless it's a small amount and a one-off. Instead, I'd export the existing data to a CSV file, load that into a spreadsheet (e.g. in OpenOffice), add 2 dummy columns, export it to CSV again, and import it in MySQL Workbench.
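If the spreadsheet detour feels heavy, a tab-separated export plus LOAD DATA with an explicit column list achieves the same result, since columns you don't list keep their default (NULL for the two new nullable ones). A minimal sketch with made-up table/column names; it assumes local_infile is enabled on your local server, and that the exported columns contain no NULLs (the mysql client writes NULLs as the literal text NULL):
# On staging: export the 8 original columns as tab-separated values.
mysql -u user -p staging_db --batch --skip-column-names \
  -e "SELECT id, name, col3, col4, col5, col6, col7, col8 FROM servers" > servers.tsv
# On local: load the file, listing only the 8 exported columns.
# company_weight and server_type_weight are not listed, so they stay NULL.
mysql -u user -p --local-infile=1 local_db \
  -e "LOAD DATA LOCAL INFILE 'servers.tsv' INTO TABLE servers
      (id, name, col3, col4, col5, col6, col7, col8)"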
One way to do it:
Make sure that your new columns are nullable in the migration:
$table->integer('company_weight')->nullable(); // make sure you use nullable()
$table->integer('server_type_weight')->nullable();
Dump the table data on your staging server
$ mysqldump -u<username> -p --no-create-info --compact --skip-comments \
--complete-insert <database> <table> > /path/to/file.sql
Download the resulting file.sql to your local machine
Import the data into your local database:
$ mysql -u<username> -p <database> < /path/to/file.sql
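If you have SSH access to the staging server, you can also skip the intermediate file and pipe the dump straight into your local database. A sketch with placeholder host, database and table names (credential handling is simplified; prefer a ~/.my.cnf or option file in practice):
# Stream the data-only dump from staging directly into the local database.
ssh deploy@staging.example.com \
  "mysqldump -u dbuser -p'secret' --no-create-info --complete-insert --skip-comments staging_db my_table" \
  | mysql -u root -p local_db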
I have a MySQL table with c.1,850 rows and two columns - ID (int - not auto-incrementing) and data (mediumblob). The table is c.400MiB, with many individual entries exceeding 1MiB and some as large as 4MiB. I must upload it to a typical Linux shared-hosting installation.
So far, I have run into a variety of size restrictions. Bigdump, which effortlessly imported the rest of the database, cannot handle this table - stopping at different places, whichever method I have used (various attempts using SQL or CSV). Direct import using phpMyAdmin has also failed.
I now accept that I have to split the table's content in some way, if the import is ever to be successful. But as (for example) the last CSV displayed 1.6m rows in GVIM (when there are only 1,850 rows in the table), I don't even know where to start with this.
What is the best method? And what settings must I use at export to make the method work?
mysqldump -u username -p -v database > db.sql
Upload the SQL file to your server via FTP.
Create a script in a language of your choice (e.g. PHP) that calls system/exec commands to load the SQL file into the MySQL database.
nohup mysql -u username -p newdatabase < db.sql &
This will run the process in the background for you.
You might initially have to run which mysqldump and which mysql to get the absolute paths of the executables.
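If the import still trips over size limits (the blobs make some statements very large), one option is to dump one INSERT per row and split the file into pieces you load one at a time. A sketch with placeholder names, assuming you can run these from a shell (or via the exec script above); you may still need to raise max_allowed_packet on the target server:
# Dump data only, one INSERT per row, so no single statement gets huge.
mysqldump -u username -p --no-create-info --skip-extended-insert database bigtable > bigtable.sql
# Split into ~20 MB chunks on line boundaries, then load them in order.
split -C 20m bigtable.sql chunk_
for f in chunk_*; do
  mysql -u username -p'secret' database < "$f"
done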
How can I import an "xxxx.sql" dump from MySQL to a PostgreSQL database?
This question is a little old but a few days ago I was dealing with this situation and found pgloader.io.
This is by far the easiest way of doing it: you install it, and then run a simple Lisp script (script.lisp) with the following 3 lines:
/* content of the script.lisp */
LOAD DATABASE
FROM mysql://dbuser@localhost/dbname
INTO postgresql://dbuser@localhost/dbname;
/*run this in the terminal*/
pgloader script.lisp
And after that your PostgreSQL DB will have all of the information that you had in your MySQL DB.
On a side note, make sure you compile pgloader since at the time of this post, the installer has a bug. (version 3.2.0)
Mac OS X
brew update && brew install pgloader
pgloader mysql://user@host/db_name postgresql://user@host/db_name
Don't expect that to work without editing. Maybe a lot of editing.
mysqldump has a compatibility argument, --compatible=name, where "name" can be "oracle" or "postgresql", but that doesn't guarantee compatibility. I think server settings like ANSI_QUOTES have some effect, too.
You'll get more useful help here if you include the complete command you used to create the dump, along with any error messages you got instead of saying just "Nothing worked for me."
The fastest (and most complete) way I found was to use Kettle. This will also generate the needed tables, convert the indexes and everything else. The mysqldump compatibility argument does not work.
The steps:
Download Pentaho ETL from http://kettle.pentaho.org/ (community version)
Unzip and run Pentaho (spoon.sh/spoon.bat depending on unix/windows)
Create a new job
Create a database connection for the MySQL source
(Tools -> Wizard -> Create database connection)
Create a database connection for the PostgreSQL source (as above)
Run the Copy Tables wizard (Tools -> Wizard -> Copy Tables)
Run the job
You could potentially export to CSV from MySQL and then import CSV into PostgreSQL.
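A minimal sketch of that route, with placeholder table/database names; it assumes the MySQL account has the FILE privilege, secure_file_priv allows writing to /tmp, the target table already exists in Postgres, and psql runs where the file is (copy it over otherwise). NULLs are written by MySQL as \N, hence the NULL option; embedded quotes may still need attention:
# Export from MySQL as CSV (runs on the MySQL server host).
mysql -u user -p mydb -e "SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n'"
# Import the CSV into the existing PostgreSQL table.
psql -U user -d mydb -c "\copy mytable FROM '/tmp/mytable.csv' WITH (FORMAT csv, NULL '\N')"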
For those Googlers who are in 2015+.
I've wasted all day on this and would like to sum things up.
I've tried all the solutions described in this article by Alexandru Cotioras (which is full of despair). Of all the solutions mentioned there, only one worked for me.
— lanyrd/mysql-postgresql-converter # github.com (Python)
But this alone won't do. When you import your newly converted dump file:
# \i ~/Downloads/mysql-postgresql-converter-master/dump.psql
PostgreSQL will complain about the types it doesn't know from MySQL:
psql:/Users/jibiel/Downloads/mysql-postgresql-converter-master/dump.psql:381: ERROR: type "mediumint" does not exist
LINE 2: "group_id" mediumint(8) NOT NULL DEFAULT '0',
So you'll have to fix those types manually as per this table.
In short it is:
tinyint(2) -> smallint
mediumint(7) -> integer
# etc.
You can use regex and any cool editor to get it done.
MacVim + Substitute:
:%s!tinyint(\w\+)!smallint!g
:%s!mediumint(\w\+)!integer!g
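If you'd rather not do this interactively, the same substitutions can be scripted with sed (a rough sketch; extend the expression list for any other MySQL types in your dump, and on macOS/BSD use sed -i ''):
# Rewrite MySQL integer types that PostgreSQL does not recognise.
sed -i -E \
  -e 's/tinyint\([0-9]+\)/smallint/g' \
  -e 's/mediumint\([0-9]+\)/integer/g' \
  dump.psql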
You can use pgloader.
sudo apt-get install pgloader
Using:
pgloader mysql://user:pass@host/database postgresql://user:pass@host/database
Mac/Win
Download the 14-day Navicat trial (I don't understand the $1,300 price tag) - the full enterprise package:
Connect both databases, MySQL and Postgres.
Menu - Tools - Data Transfer
Connect both DBs on this first screen. While still on this screen, go to General / Options and, under Options, check "Continue on error" on the right side.
Note: you probably want to uncheck indexes and keys on the left; you can recreate them easily in Postgres.
At least you'll get your data from MySQL into Postgres!
Hope this helps!
I have this bash script to migrate the data. It doesn't create the tables, because those are created by migration scripts, so I only need to convert the data. I use a list of tables so that data from the migrations and sessions tables is not imported. Here it is, just tested:
#!/bin/sh
# MySQL connection details and the tables whose data should be copied
MUSER="root"
MPASS="mysqlpassword"
MDB="origdb"
MTABLES="car dog cat"
# PostgreSQL connection details
PUSER="postgres"
PDB="destdb"
# Dump data only (no CREATE TABLE), with column names in every INSERT
mysqldump -h 127.0.0.1 -P 6033 -u $MUSER -p$MPASS --default-character-set=utf8 --compatible=postgresql --skip-disable-keys --skip-set-charset --no-create-info --complete-insert --skip-comments --skip-lock-tables $MDB $MTABLES > outputfile.sql
# Drop MySQL-only statements and turn "LOCK TABLES <t> WRITE;" into
# "TRUNCATE <t> RESTART IDENTITY CASCADE;" so each table is emptied before its inserts
sed -i 's/UNLOCK TABLES;//g' outputfile.sql
sed -i 's/WRITE;/RESTART IDENTITY CASCADE;/g' outputfile.sql
sed -i 's/LOCK TABLES/TRUNCATE/g' outputfile.sql
# MySQL zero dates are not valid timestamps in Postgres
sed -i "s/'0000\-00\-00 00\:00\:00'/NULL/g" outputfile.sql
# Tolerate MySQL-style backslash escapes and quoting during the load
sed -i "1i SET standard_conforming_strings = 'off';\n" outputfile.sql
sed -i "1i SET backslash_quote = 'on';\n" outputfile.sql
# Temporarily allow implicit casts to boolean so 0/1 values load, restore afterwards
sed -i "1i update pg_cast set castcontext='a' where casttarget = 'boolean'::regtype;\n" outputfile.sql
echo "\nupdate pg_cast set castcontext='e' where casttarget = 'boolean'::regtype;\n" >> outputfile.sql
# Load the converted dump into Postgres
psql -h localhost -d $PDB -U $PUSER -f outputfile.sql
You will get a lot of warnings that you can safely ignore, such as:
psql:outputfile.sql:82: WARNING: nonstandard use of escape in a string literal
LINE 1: ...,(1714,38,2,0,18,131,0.00,0.00,0.00,0.00,NULL,'{\"prospe...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
With pgloader
Get a recent version of pgloader; the one provided by Debian Jessie (as of 2019-01-27) is 3.1.0 and won't work since pgloader will error with
Can not find file mysql://...
Can not find file postgres://...
Access to MySQL source
First, make sure you can establish a connection to mysqld on the server running MySQL using
telnet theserverwithmysql 3306
If that fails with
Name or service not known
log in to theserverwithmysql and edit the configuration file of mysqld. If you don't know where the config file is, use find / -name mysqld.cnf.
In my case I had to change this line of mysqld.cnf
# By default we only accept connections from localhost
bind-address = 127.0.0.1
to
bind-address = *
Bear in mind that allowing access to your MySQL database from all addresses can pose a security risk, so you will probably want to change that value back after the database migration.
Make the changes to mysqld.cnf effective by restarting mysqld.
Preparing the Postgres target
Assuming you are logged in on the system that runs Postgres, create the database with
createdb databasename
The user for the Postgres database has to have sufficient privileges to create the schema, otherwise you'll run into
permission denied for database databasename
when calling pgloader. I got this error although the user had the right to create databases according to psql > \du.
You can grant them in psql:
GRANT ALL PRIVILEGES ON DATABASE databasename TO otherusername;
Again, this might be privilege overkill and thus a security risk if you leave all those privileges with user otherusername.
Migrate
Finally, the command
pgloader mysql://theusername:thepassword@theserverwithmysql/databasename postgresql://otherusername@localhost/databasename
executed on the machine running Postgres should produce output that ends with a line like this:
Total import time ✓ 877567 158.1 MB 1m11.230s
It is not possible to import an Oracle (binary) dump to PostgreSQL.
If the MySQL dump is in plain SQL format, you will need to edit the file to make the syntax correct for PostgreSQL (e.g. remove the non-standard backtick quoting, remove the engine definitions from the CREATE TABLE statements, adjust the data types, and a lot of other things).
Here is a simple program to create and load all tables from a MySQL database (honey) into PostgreSQL. Type conversion from MySQL is coarse-grained but easily refined. You will have to recreate the indexes manually:
import MySQLdb
from magic import Connect  # private MySQL connection helper
import psycopg2

dbx = Connect()                          # MySQL cursor
DB = psycopg2.connect("dbname='honey'")  # PostgreSQL connection
DC = DB.cursor()

# Collect the table names from the MySQL database
dbx.execute('''show tables from honey''')
tables = [row[0] for row in dbx.fetchall()]

for table in tables:
    # Recreate the table in Postgres with roughly equivalent column types
    dbx.execute('''describe honey.%s''' % table)
    rows = dbx.fetchall()
    DC.execute('drop table if exists %s' % table)
    DB.commit()
    psql = 'create table %s (' % table
    for row in rows:
        name, coltype = row[0], row[1]
        if 'int' in coltype: coltype = 'int8'
        if 'blob' in coltype: coltype = 'bytea'
        if 'datetime' in coltype: coltype = 'timestamptz'
        psql += '%s %s,' % (name, coltype)
    psql = psql.rstrip(',') + ')'
    print(psql)
    try:
        DC.execute(psql)
        DB.commit()
    except Exception:
        DB.rollback()  # keep the connection usable if the create fails

    # Copy the data row by row, committing every 1000 rows
    dbx.execute('''select * from honey.%s''' % table)
    rows = dbx.fetchall()
    n = len(rows); print(n); t = n
    if n == 0: continue  # skip if no data
    cols = len(rows[0])
    placeholders = ', '.join(['%s'] * cols)
    insert = 'insert into %s values(%s)' % (table, placeholders)
    for row in rows:
        DC.execute(insert, row)
        n = n - 1
        if n % 1000 == 1: DB.commit(); print(n, t, t - n)
    DB.commit()
As with most database migrations, there isn't really a cut and dried solution.
These are some ideas to keep in mind when doing a migration:
Data types aren't going to match. Some will, some won't. For example, SQL Server bits (boolean) don't have an equivalent in Oracle.
Primary key sequences will be generated differently in each database.
Foreign keys will be pointing to your new sequences.
Indexes will be different and will probably need tweaking.
Any stored procedures will have to be rewritten.
Schemas. MySQL doesn't use them (at least it didn't when I last used it); PostgreSQL does. Don't put everything in the public schema. It is a bad practice, but most apps (Django comes to mind) that support MySQL and PostgreSQL will try to make you use the public schema.
Data migration. You are going to have to insert everything from the old database into the new one. This means disabling primary and foreign keys, inserting the data, then enabling them. Also, all of your new sequences will have to be reset to the highest id in each table. If not, the next record that is inserted will fail with a primary key violation (see the sketch after this list).
Rewriting your code to work with the new database. It should work but probably won't.
Don't forget the triggers. I use create and update date triggers on most of my tables. Each DB handles them a little differently.
Keep these in mind. The best way is probably to write a conversion utility. Have a happy conversion!
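For the sequence-reset step mentioned above, something along these lines does the job once the data is in; destdb, mytable and id are placeholders, and you would repeat (or generate) the statement per table:
# Point each serial sequence at the current maximum id so new inserts don't collide.
psql -d destdb -c \
  "SELECT setval(pg_get_serial_sequence('mytable', 'id'), COALESCE(MAX(id), 1)) FROM mytable;"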
I had to do this recently to a lot of large .sql files, approximately 7 GB in size. Even Vim had trouble editing those. Your best bet is to import the .sql into MySQL and then export it as a CSV, which can then be imported into Postgres.
But the MySQL export to CSV is horrendously slow, since it runs a select * from yourtable query. If you have a large database/table, I would suggest using some other method. One way is to write a script that reads the SQL inserts line by line and uses string manipulation to reformat them into "Postgres-compliant" INSERT statements, and then executes those statements in Postgres.
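As a very rough sketch of that string-manipulation approach (it only strips backtick quoting and converts MySQL's \' escaping to the SQL-standard ''; it will mangle data that legitimately contains those characters, so treat it as a starting point rather than a finished converter):
# Naively rewrite MySQL quoting so Postgres will accept the INSERT lines,
# then stream them straight into psql.
sed -e 's/`//g' -e "s/\\\\'/''/g" dump.sql | psql -d targetdb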
I could copy tables from MySQL to Postgres using DBCopy Plugin for SQuirreL SQL Client.
This was not from a dump, but between live databases.
Use your xxx.sql file to set up a MySQL database and make use of FromMySqlToPostgreSql. Very easy to use, short configuration, and it works like a charm. It imports your database with the primary keys, foreign keys and indices set on the tables. You can even import data alone if you set the appropriate flag in the config file.
The FromMySqlToPostgreSql migration tool by Anatoly Khaytovich provides an accurate migration of table data, indices, PKs, FKs... It makes extensive use of the PostgreSQL COPY protocol.
See here too: PG Wiki Page
If you are using phpMyAdmin, you can export your data as CSV and then it will be easier to import into Postgres.
Take a dump file of the MySQL database.
Use this tool to convert a local MySQL database to a local PostgreSQL database.
Clone it into a new folder or your root directory:
git clone https://github.com/AnatolyUss/nmig.git
cd nmig
git checkout v5.5.0
Open config/config.json after checkout (e.g. nano config/config.json).
Add the source and target databases, along with the username and password:
"source": {
"host": "localhost",
"port": 3306,
"database": "test_db",
"charset": "utf8mb4",
"supportBigNumbers": true,
"user": "root",
"password": "0123456789"
}
"target": {
"host" : "localhost",
"port" : 5432,
"database" : "test_db",
"charset" : "UTF8",
"user" : "postgres",
"password" : "0123456789"
}
After modifying the config/config.json file, run:
npm install
npm run build
npm start
After these commands finish, you will notice that your MySQL database has been transferred to the PostgreSQL database.
I've got a situation where I need to copy several tables from one SQL Server DB to a separate SQL Server DB. The databases are both on the same instance. The tables I'm copying contain a minimum of 4.5 million rows and are about 40GB upwards in size.
I've used BCP before, but I'm not hugely familiar with it and have been unable to find any documentation about whether or not you can use BCP to copy directly from table to table without writing to a file in between.
Is this possible? If so, how?
EDIT: The reason we're not using a straightforward INSERT is because we have limited space on the log drive on the server, which disappears almost instantly when attempting to INSERT. We did try it, but the query quickly slowed to a snail's pace as the log drive filled up.
From my answer at Table-level backup:
I am using bcp.exe to achieve table-level backups.
To export:
bcp "select * from [MyDatabase].dbo.Customer " queryout "Customer.bcp" -N -S localhost -T -E
To import:
bcp [MyDatabase].dbo.Customer in "Customer.bcp" -N -S localhost -T -E -b 10000
As you can see, you can export based on any query, so you can even do incremental backups this way.
BCP is for dumping to / reading from a file. Use DTS/SSIS to copy from one DB to another.
Here are the BCP docs at MSDN
The SQL Server Import/Export wizard will do the job: just connect twice to the same instance (source and destination databases) and copy one table into the other (empty and indexed). You may want to tell it to ignore the auto-numeric ID key field if one exists. This approach works for me with tables of over 1M records.
I am using MySQL 5.0.51b on Microsoft Windows XP. I am trying to load data from zoneinfo files (generated by the library downloaded from here) into database tables, as described here.
Now I am not able to find where I would get this "mysql_tzinfo_to_sql" program for Windows. I tried executing it in the MySQL command-line client, but with no success.
On Linux you can execute this command directly from the shell.
Any help is appreciated.
You don't need to run mysql_tzinfo_to_sql on Windows.
For Windows just do this:
Download the files. Links here
Move them to your MySQL directory.
Example: C:\ProgramData\MySQL\MySQL Server 5.5\data\mysql
Restart your server.
Now, if you want, you can change your timezone like this: SET time_zone = 'America/Costa_Rica';
Check it with SELECT NOW();
More information here: MySQL
And take a look at this: Answer
The command "mysql_tzinfo_to_sql" doesn't work on Windows.
You have to download the timezone packages, which contain SQL statements, and populate the timezone tables using the "source" command, like this:
mysql> use mysql ;
mysql> source /path/to/file/timezone_posix.sql ;
Check the following links for reference:
Blog: https://discourse.looker.com/t/cannot-connect-time-zone-tables-dont-appear-to-be-loaded-in-mysql/208/6
SQL scripts: http://downloads.mysql.com/general/timezone_2016a_posix_sql.zip , http://downloads.mysql.com/general/timezone_2016a_leaps_sql.zip
None of the 'populate file' methods worked for me with MySQL 8.
A lot of answers contain this link: http://dev.mysql.com/downloads/timezones.html
There are downloadable zip files there that contain SQL files. Just putting them into a directory didn't help.
One thing did help me: I issued a "use mysql;" and executed the content of the downloaded SQL file as a script.
Based on Francisco Corrales Morales' answer.
For MySQL 5.7+ on a Windows 10 machine, my procedure is:
Download the latest POSIX standard time zone script under the 5.7+ section from https://dev.mysql.com/downloads/timezones.html
Extract the file; there should be a single SQL file named timezone_posix.sql.
Run the SQL script; in my case, using the command line below:
bin\mysql.exe --host=localhost --port=3306 --user=USERNAME -p mysql < c:\...\Downloads\timezone_2020d_posix_sql\timezone_posix.sql
Note: make sure you run the time zone script against the mysql database/schema.
For Windows, MySQL supplies an already loaded database for you to download and stick in your data directory: http://dev.mysql.com/downloads/timezones.html
Copied from the user comments on the MySQL docs:
Posted by Jyotsna Channagiri on November 20 2008 6:28pm
Hi,
I thought this information would help somebody who is looking to change the MySQL timezone.
The steps are:
Download the timezone table structure and data from http://dev.mysql.com/downloads/timezones.html
Copy and paste the data into your Mysql/data/mysql folder
Restart your MySQL server.
mysql> SET GLOBAL time_zone = 'America/Toronto';
mysql> SET SESSION time_zone = 'America/Toronto';
Check with SELECT @@global.time_zone, @@session.time_zone; It should give you the time zone you set just before.
Comment:
Yes, but the tables provided by MySQL are outdated (generated by the 2006p version of Olson's timezone library). I need the latest timezone data, hence I downloaded the latest library and generated the binaries. Now I need a way to load these tables into MySQL, but I don't know how to do it on Windows.
Ah, I see. Then you're going to need to do one of two things.
1) Get the tool that does this and compile it (or whatever) on Windows. If you're lucky, it's a Perl script.
2) Fill the database on Linux, then copy it to Windows. This guy (http://it-idiot.einsamsoldat.net/2008/01/moving-mysql-database-from-windows-to-linux-redhat/comment-page-1/2) says it can be done, at least for MyISAM.
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -uadmin -ppassword mysql
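A sketch of option 2 that avoids copying raw table files: load the zoneinfo on the Linux box, dump just the five time zone tables, and replay that dump on Windows (credentials and paths are placeholders):
# On Linux: populate the time zone tables from the system zoneinfo.
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
# Dump only the time zone tables.
mysqldump -u root -p mysql \
  time_zone time_zone_leap_second time_zone_name \
  time_zone_transition time_zone_transition_type > tz.sql
# On Windows: load the dump into the mysql schema.
mysql -u root -p mysql < tz.sql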
I am using XAMPP and PHP 7.4.27 on Windows 10 and had some difficulties getting other solutions to work.
Here are the steps that worked for me.
Download the latest POSIX standard time zone script from MySQL Community Downloads (or the non-POSIX one with leap seconds, if you need the leap seconds included) under the section that states:
Each file contains SQL statements to fill the tables
Extract the file, which should be a single SQL file named timezone_posix.sql.
Open the extracted SQL file in the code editor of your choice and copy its content.
Open your DB administration tool of choice, select the "mysql" database and, under the "SQL" tab, paste the contents of the extracted file.
Note: the step above assumes phpMyAdmin; other administration tools might have a different process.
Click "Go" and follow any prompts after that.
All the needed time_zone tables should now be populated with time zone data.
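To verify that the tables actually got populated, a quick check (any named zone will do; this one is just an example):
# Returns a converted timestamp if the time zone tables are loaded, or NULL if they are still empty.
mysql -u root -p -e "SELECT CONVERT_TZ(NOW(), 'UTC', 'America/Costa_Rica');"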