How to translate a MySQL database dump to a new PG database?

I have a MySQL database with over 40,000 records that I want to import into a new PostgreSQL database. I want to be able to map the old table and column names to new table and column names... how do I do this?
For instance, I want to take this:
Table name: Horribly_Named_Table
=> Horribly_Named_Column: value1
(MySQL)
... and translate it to this:
Table name: better_named_table
=> better_named_column: value1
(PostgreSQL)
I've never done a move like this before, so any help is appreciated!

I recommend using a simple transformation within Pentaho Data Integration: setup is very simple and there is a wizard for loading data from one database to another:
See a similar answer here:
Migrate from Oracle to MySQL

If you are only referring to the difference in upper/lowercase names, then you don't really need to do anything.
Just make sure you are not quoting the table names and they will not be case sensitive.
This_Table_Name is the same as this_table_name, and that is the same as THIS_TABLE_NAME.
But "this_table_name" is something different from "This_Table_Name".

mysqldump has a compatibility mode; check the "ansi" and "postgresql" values of its --compatible option.
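A sketch of that route (database and user names are placeholders; note that MySQL 8.0 dropped every --compatible value except ansi, so this assumes an older mysqldump):

mysqldump --compatible=postgresql -u user -p old_db > dump.sql
psql -U user -d new_db -f dump.sql

The dump usually still needs some hand-editing (types, quoting, AUTO_INCREMENT columns). Once it is loaded, the renaming from the question can be done in PostgreSQL (drop the double quotes if the identifiers were folded to lowercase on import):

alter table "Horribly_Named_Table" rename to better_named_table;
alter table better_named_table rename column "Horribly_Named_Column" to better_named_column;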

Related

How to move a Hive data table to MySQL?

I would like to know how I can move data from Hive to MySQL.
I have seen example on how to move hive data to Amazon DynamoDB but not for a RDBMS like MySQL. Here is the example that I saw with DynamoDB:
CREATE EXTERNAL TABLE tbl1 (name string, location string)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "table",
               "dynamodb.column.mapping" = "name:name,location:location");
I would like to do the same but with MySQL instead. I wonder if I need to code my own StorageHandler? I also do not want to use Sqoop. I want to be able to do my query directly in my HiveQL script.
You'd currently need a JDBC StorageHandler, which has not been created just yet, but you could certainly build your own.
There is currently an issue report for this which you can follow here:
https://issues.apache.org/jira/browse/HIVE-1555
Have you tried using Sqoop? It's a good tool for this kind of task.
There are many options. You can export the Hive data as CSV files and then bulk-insert them into MySQL tables. You can use Sqoop. Or you can use one of the popular ETL tools, like Pentaho and many others.
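A rough sketch of the CSV route (table names and paths are made up; assumes Hive 0.11+ for the ROW FORMAT clause on directory exports, and LOAD DATA LOCAL enabled in MySQL):

-- in Hive: write the table out as comma-separated text files
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM my_hive_table;

Then load the resulting files (typically named like 000000_0) on the MySQL side:

mysql --local-infile=1 -u user -p mydb -e "LOAD DATA LOCAL INFILE '/tmp/hive_export/000000_0' INTO TABLE my_mysql_table FIELDS TERMINATED BY ','"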

MySQL Replace table from another table

I have two active database connections; I need to replace a number of tables on 'connection1' with those from 'connection2'. The structures may or may not be the same (depending on whether we have made changes to the connection1 tables).
I would assume I should do a complete table dump and replace keys where necessary, but I really have no idea how to do this :)
Any help?
Have a look at the Schema and Data sync tools in dbForge Studio for MySQL. They will help you compare two databases on different servers, map tables and fields, and generate and run a synchronization script.
I ended up using the built-in system() command in PHP with mysqldump to first dump (export) the data to a file, then used system() again with mysql to import it into the new table and replace the old one.
Works like a charm :)
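The two commands run by those system() calls would look roughly like this (hosts, credentials, and the table name are placeholders):

mysqldump -h source_host -u user -p'secret' source_db the_table > /tmp/the_table.sql
mysql -h target_host -u user -p'secret' target_db < /tmp/the_table.sql

mysqldump emits DROP TABLE IF EXISTS / CREATE TABLE statements by default, so importing the file replaces the old table wholesale.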

How to accomplish "MySQL cross database reference" with PostgreSQL

We are migrating our product's database from MySQL to PostgreSQL (through Java), so we need to convert the MySQL queries in the Java application into PostgreSQL queries. How do I create a table as databasename.tablename in PostgreSQL?
In MySQL, we can create such a table directly, e.g. create table information.employee.
Here the database name is "information" and the table name is "employee". Is it possible to achieve the same query in PostgreSQL?
I searched Google and it says cross-database references are not possible. Please help me.
I saw that the pg_class table contains the table names of the specific database; likewise, are the database-to-table relationships stored in some other table?
This is normally done using schemas rather than databases, which is more or less like how MySQL organizes it anyway.
Instead of
create database xyz
use
create schema xyz
When you create tables, create them:
create table xyz.myTable
You will need to update your search_path to see them in the psql command-line tool, or if you want to query them without naming the schema explicitly. The default schema is public, so when you create a table without a schema name, it ends up in public. If you modify your search_path as below, the default schema becomes the first in the list: xyz.
set search_path=xyz,public,pg_catalog;
and you must not have spaces in that statement. You can do it globally for a user/role too:
alter role webuser set search_path=xyz,public,pg_catalog;
Also, don't forget that PostgreSQL string comparisons are case sensitive by default (this one catches people out a lot).
If you want the files for each schema to live in different physical locations, you can do that with tablespaces. The PostgreSQL documentation has info on how to do it; it's pretty easy.
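For example (the tablespace name and path are hypothetical; the directory must already exist and be owned by the postgres OS user):

create tablespace xyz_space location '/mnt/fastdisk/pg_xyz';
create schema xyz;
create table xyz.myTable (id int) tablespace xyz_space;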
database in MySQL == schema in PostgreSQL. So you will most probably want to migrate all your mysql dbs into one postgres db. Then you will be able to do "cross-database" queries.
See my answer to this question: Relationship between catalog, schema, user, and database instance

Importing records from PostgreSQL to MySQL

Was wondering if anyone had any insight or recommended tools for exporting the records from a PostgreSQL database and importing them into a MySQL database. I believe the table structure is 100% identical.
Thoughts? Thanks!
The command
pg_dump --data-only --column-inserts <database_name>
will generate SQL-standard-compliant INSERT statements with all column names listed and one VALUES clause per INSERT. This is the most portable way of moving data from PostgreSQL to any other SQL database.
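In practice the transfer can look something like this (database names are placeholders; expect to hand-edit the dump first, e.g. to strip the leading SET statements that MySQL won't accept):

pg_dump --data-only --column-inserts my_pg_db > /tmp/data.sql
mysql -u user -p my_mysql_db < /tmp/data.sql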
Check out SquirrelSQL; it can pump data from one database brand into another via the DBCopy plugin. When the table structures are really identical, it works quite well.
There is a Ruby app called Taps that will do it. I've used it before with great success:
http://adam.heroku.com/past/2009/2/11/taps_for_easy_database_transfers/

Shell file to import SQLite into MySQL

OK, let me start off by saying that I don't have the slightest clue how to start with this. I have an SQLite database. For simplicity, let's just say that the table I want to read is 'data', and that data contains two fields, say (id, name). How could I go about creating a shell script to read the information from the 'data' SQLite table and insert it into a MySQL table with the exact same structure? I realise it would be simpler to just insert the data into MySQL to begin with and cut out the SQLite step altogether, but this is not possible (unfortunately). I really appreciate any help!
http://web.archive.org/web/20121018070614/http://sqlite.phxsoftware.com/forums/p/941/4725.aspx
[corrected dead link. there is a C# script which accomplishes the objective]
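If a pure shell route is acceptable, a minimal sketch (paths, credentials, and database names are assumptions; LOAD DATA LOCAL must be enabled on both the client and the server, and the separator approach assumes the values contain no commas):

# dump the table with the sqlite3 CLI, one comma-separated row per line
sqlite3 -separator ',' /path/to/app.db "select id, name from data;" > /tmp/data.csv
# load it into the identically structured MySQL table
mysql --local-infile=1 -u user -p mydb -e "LOAD DATA LOCAL INFILE '/tmp/data.csv' INTO TABLE data FIELDS TERMINATED BY ',' (id, name)"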