How to import data into an in-memory database?

Are there any ways to import data from databases such as MS SQL or MySQL into in-memory databases like HSQLDB, H2, etc.?

H2 supports a special database URL that initializes the database from a SQL script file:
"jdbc:h2:mem:test;INIT=RUNSCRIPT FROM '~/create.sql'"
HSQLDB and Apache Derby don't support such a feature as far as I know.
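For example, here is a minimal JDBC sketch of that approach (it assumes the H2 driver is on the classpath, that ~/create.sql exists, and that the script creates a table named person):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class H2InitExample {
        public static void main(String[] args) throws Exception {
            // The INIT clause runs create.sql each time a connection is opened
            // to the in-memory database named "test".
            String url = "jdbc:h2:mem:test;INIT=RUNSCRIPT FROM '~/create.sql'";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM person")) {
                if (rs.next()) {
                    System.out.println("rows loaded: " + rs.getLong(1));
                }
            }
        }
    }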

I think you need to:
1. Query the data out of MS SQL.
2. Import the data into the in-memory DB with its API, using either plain SQL statements or DB-specific APIs.
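A minimal JDBC sketch of those two steps (the connection URLs, credentials, and the person table are placeholders for your own schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CopyToMemory {
        public static void main(String[] args) throws Exception {
            try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass");
                 Connection dst = DriverManager.getConnection("jdbc:h2:mem:test")) {
                // Recreate the target table in the in-memory database.
                try (Statement ddl = dst.createStatement()) {
                    ddl.execute("CREATE TABLE person(id INT PRIMARY KEY, name VARCHAR(100))");
                }
                // Step 1: query the data out; step 2: insert it via plain SQL.
                try (Statement sel = src.createStatement();
                     ResultSet rs = sel.executeQuery("SELECT id, name FROM person");
                     PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO person(id, name) VALUES (?, ?)")) {
                    while (rs.next()) {
                        ins.setInt(1, rs.getInt("id"));
                        ins.setString(2, rs.getString("name"));
                        ins.executeUpdate();
                    }
                }
            }
        }
    }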

In Hibernate: adding import.sql to the classpath works great; hbm2ddl checks whether the file exists and executes it (it runs when hibernate.hbm2ddl.auto is set to create or create-drop). The only detail is that every SQL command must be on a single line, otherwise it will fail to execute.

You could dump the data as SQL INSERT statements and then read it back.
You could read into a temporary object (like a struct) and then write it to the in-memory db.
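A small sketch of the first idea, dumping a table as INSERT statements (table and column names are placeholders, and the quoting is deliberately naive - a real export must escape all value types properly):

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DumpInserts {
        public static void main(String[] args) throws Exception {
            try (Connection src = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb", "user", "pass");
                 Statement st = src.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, name FROM person");
                 PrintWriter out = new PrintWriter("dump.sql")) {
                while (rs.next()) {
                    // Doubling single quotes is the minimal escaping for SQL strings.
                    String name = rs.getString("name").replace("'", "''");
                    out.printf("INSERT INTO person(id, name) VALUES (%d, '%s');%n",
                            rs.getInt("id"), name);
                }
            }
        }
    }

The resulting dump.sql can then be replayed against the in-memory database (for example via H2's RUNSCRIPT, as shown above).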

Look at the free "universal database converter" http://eva-3-universal-database-converter-udc.optadat-com.qarchive.org/ -- it does claim to support MySQL, MS-SQL, and HSQLDB, among others.

It really depends on what kind of approach you have in mind.
Is there a tool that could do it automatically, without programming? Maybe.
Do you want to develop it yourself? Then find out whether your favorite language supports both database engines (the regular one and the in-memory one), and if it does, just write a script that does it.
Process everything in chunks (fetch n rows at a time, then insert them; repeat). How big should the chunks be? It's up to you: try different sizes (say 100, 500, 1k, etc.), see which one performs better on your hardware, and fine-tune to the sweet spot; see the sketch below.
If, on the other hand, your favorite language doesn't support both of them, try using something that does.
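A sketch of that chunked copy in JDBC (the chunk size constant, URLs, and the person table are placeholders; the table is assumed to already exist on both sides):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ChunkedCopy {
        static final int CHUNK = 500; // try 100, 500, 1000... and measure

        public static void main(String[] args) throws Exception {
            try (Connection src = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb", "user", "pass");
                 Connection dst = DriverManager.getConnection("jdbc:h2:mem:test")) {
                dst.setAutoCommit(false);
                try (Statement sel = src.createStatement();
                     ResultSet rs = sel.executeQuery("SELECT id, name FROM person");
                     PreparedStatement ins = dst.prepareStatement(
                         "INSERT INTO person(id, name) VALUES (?, ?)")) {
                    int n = 0;
                    while (rs.next()) {
                        ins.setInt(1, rs.getInt(1));
                        ins.setString(2, rs.getString(2));
                        ins.addBatch();
                        if (++n % CHUNK == 0) {
                            ins.executeBatch(); // flush one chunk
                            dst.commit();
                        }
                    }
                    ins.executeBatch();         // flush the remainder
                    dst.commit();
                }
            }
        }
    }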

You can use DbUnit to dump the database to XML files and import it back into another RDBMS.
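A minimal DbUnit sketch of that round trip (the URLs and file name are placeholders, and the target schema is assumed to already exist):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    public class DbUnitRoundTrip {
        public static void main(String[] args) throws Exception {
            // Export: dump every table of the source database to flat XML.
            Connection mysql = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "pass");
            IDatabaseConnection src = new DatabaseConnection(mysql);
            FlatXmlDataSet.write(src.createDataSet(), new FileOutputStream("export.xml"));

            // Import: load the same XML into the in-memory target.
            Connection h2 = DriverManager.getConnection("jdbc:h2:mem:test");
            IDatabaseConnection dst = new DatabaseConnection(h2);
            IDataSet data = new FlatXmlDataSetBuilder().build(new File("export.xml"));
            DatabaseOperation.CLEAN_INSERT.execute(dst, data);
        }
    }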

Latest versions of HSQLDB allow you to open a CSV (comma-separated values) or other delimiter-separated data file as a TEXT TABLE, even with mem: databases; the data can then be copied into other tables.
As others have pointed out, there are also capable and well maintained third party tools for this purpose.
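A sketch of the TEXT TABLE approach (assuming HSQLDB 2.x; the textdb.allow_full_path property and the file path are assumptions about your setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HsqldbTextTable {
        public static void main(String[] args) throws Exception {
            // allow_full_path lets a mem: database attach a file by full path.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hsqldb:mem:test;textdb.allow_full_path=true", "SA", "");
                 Statement st = conn.createStatement()) {
                // A TEXT TABLE is backed directly by the delimited file.
                st.execute("CREATE TEXT TABLE person_src(id INT, name VARCHAR(100))");
                // Comma is the default field separator, so a plain CSV needs no options.
                st.execute("SET TABLE person_src SOURCE '/data/person.csv'");
                // Copy the rows into a regular in-memory table.
                st.execute("CREATE TABLE person(id INT PRIMARY KEY, name VARCHAR(100))");
                st.execute("INSERT INTO person SELECT id, name FROM person_src");
            }
        }
    }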

Related

Is there any open-source or freeware tool to convert an entire Visual FoxPro database into MySQL?

I have tried some online converters, but they only convert around 50 records from one table. Is there any tool, or a proper way, to migrate Visual FoxPro data into MySQL?
There are a couple of utilities on the Leafe.com site that might help.
VFPData2MariaScript: Use this script to create a set of INSERT statements that match the structure of your VFP tables. It works well with the STRU2MYSQL_2.PRG utility, which scripts a CREATE TABLE file. This may work for PostgreSQL as well as MariaDB and MySQL. Note that if your table is really large, you may run into execution limits or file size limits. There is another utility that opens up a connection from VFP and upsizes the data; this one just creates a script of INSERT statements that you run through phpMyAdmin or the like. Author: Kevin Cully

Importing .sql into MS Access using ODBC

I currently have a database in MySQL which I'd like to import into MS Access.
Is it possible to do this while keeping all relationships intact (i.e. without exporting to .csv, or by using ODBC)?
I'm a noob in this area, so any help is greatly appreciated.
Thanks.
You need to solve two different problems:
Creating an empty MS Access database with a structure that matches the MySQL database structure.
Extracting the data from MySQL and loading it into MS Access.
This is not easy because different SQL databases offer different structural features, different datatypes, and so on. The more complex your use of MySQL is, the more likely you are to run into some show-stopper during the conversion (for instance, Access doesn't support triggers at all). Conversely, if you're using MySQL as a simple data store, you may find the conversion fairly easy.
To get an MS Access database with the same structure as your MySQL database, your best bet is to find a database definition / diagramming tool that offers reverse engineering and supports both MySQL and MS Access. Use it to reverse engineer your MySQL database into a database diagram, then change the underlying database to MS Access and use the tool to generate a database.
Check out Dezign For Databases which (on paper, anyway) offers the features you would need to do this.
To pump the data across, there are any number of tools. This kind of operation is generically referred to as ETL (Extract, Transform, Load).
Do you mean SQL Server? A good starting point might be to check out SQL Server Integration Services (SSIS), which can be used for transferring data around like that.
Google will also be helpful, check out the first result:
http://support.microsoft.com/kb/237980
By the way, you said ".sql" in your question: a .SQL file is a script file, which could do anything from creating a database, inserting data, dropping tables, or deleting data to, given the right permissions, calling system procedures that reboot a machine, format a drive, or send an email. Just for reference, .SQL files aren't the storage format used by SQL Server.
While you can script your database's schema into script files via something like SQLyog, you will find that the syntax varies enough from database to database (MySQL to Access, in your case) that you can't directly apply the scripts.
With much effort, a conversion script could be created by editing the generated script (perhaps automated with a program, depending on the resulting script's size). I think you would be better served using ODBC to copy the tables (and data) and then extracting and re-applying the relationships from the generated script by hand. Time-consuming, but also a one-time operation, I would hope.
When both systems are the same database, there are tools that can do the comparison and script generation (TOAD for MySQL and RedGate Compare for Microsoft SQL), but they don't do cross database work (at least not the ones I am aware of).
If you create an ODBC DSN, you can use TransferDatabase to import from your MySQL database. You can do it manually with the GET EXTERNAL DATA command (or whatever it is in A2007/A2010) and see how well it works. It won't get all the data types exactly right, but you could do some massaging and likely get it closer to what will work best.
Is there some reason you can't just link to the MySQL tables and use them directly? That is, why do you need to import into Access at all?
Access: run the query. Just make sure to adapt the SQL code, since every RDBMS has its own syntax (despite SQL being an ANSI standard).

General Questions about MySQL and MySQLite

I am going to be writing to a MySQLite database file, using Perl's DBD::SQLite module, and I am wondering if it is possible for this file to be read by any distribution of MySQL. Is there a better way to create a simple MySQL database (using Perl)?
If it means anything, I'm only going to be using the database to store key-value pairs, with unique ID numbers as the keys. I tried BerkeleyDB, but there is little support for it in Perl and I could not get it to work correctly in the past on certain versions of Windows.
Edit: I am aware that BerkeleyDB is a better fit for this, but when I was writing scripts for it, most of the methods were marked TODO, and I've had mysterious bugs on Windows Server 2003 using the same airtight code that ran for 2 weeks straight on my Win7 machine at home.
MySQL and SQLite are completely separate RDBMS systems. There is no such thing as MySQLite. To the best of my knowledge, MySQL cannot read SQLite databases.
If all you really want is a key-value store, perhaps look at Redis: http://code.google.com/p/redis/
I use Perl's DBI module which I can use to read databases using either MySQL or SQLite. All you need is the correct driver. In fact, if you write your program correctly, the backend database (either SQLite or MySql) is irrelevant. Your program will work with either one.
However, you can't use a SQLite database and then treat it as a MySQL database. They're two different creatures. Your program can be database agnostic, but once you chose a database, you can't switch back and forth. It'd be like opening an Oracle database as a MySQL database.
See this posting on PerlMonks for more info.
BerkeleyDB is well supported in Perl. You have a choice between the older DB_File and the more fully featured BerkeleyDB module.
But there are tons of choices. If you don't want to have to run a separate server process, use DBI and DBD::SQLite or BerkeleyDB or any of the AnyDBM_File modules. For a simple server-based key-value store, there's redis or the older memcached.

Migration from MySQL to PostgreSQL with auto-increments - how?

I'm considering a MySQL to PostgreSQL migration for my web application, but I'm having a really hard time converting my existing MySQL database to PostgreSQL.
I tried :
mysqldump with --compatible=postgresql
migration wizard from EnterpriseDB
Postgresql Data Wizard from EMS
DBConvert from DMSoft
and NONE of the above programs do a good job converting my database!
I saw some Perl and Python scripts for converting mysql to postgresql, but I can't figure out how to use them....(I installed ActivePerl and don't understand what I'm supposed to do next to run that script!)
I use auto-increment fields (as primary keys) all the time, and these are simply ignored... I understand that PostgreSQL handles auto-increments differently (with sequences), but it can't be THAT hard for MIGRATION software to implement that, or is it?
Did anybody have better luck converting a MySQL database with auto-increments as primary keys?
I know this is probably not the answer you are looking for, but: I don't believe in "automated" migration tools.
Take your existing SQL scripts that create your database schema, do a search and replace for the necessary data types (AUTO_INCREMENT maps to SERIAL, which does all the sequence handling automagically for you), remove all the "ENGINE=" stuff, and then run the new script against Postgres; a rough sketch of that search-and-replace step follows below.
Dump the old database into flat files and import them into the target.
I have done this several times with sample databases that were intended for MySQL and it really doesn't take that long.
Probably just as long as trying all the different "automated" tools.
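Here is one way that search-and-replace step might look in code (the regexes are assumptions that cover only the common cases, not a complete converter):

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class MysqlToPgSchema {
        public static void main(String[] args) throws Exception {
            String ddl = new String(Files.readAllBytes(Paths.get("schema-mysql.sql")));
            String converted = ddl
                    // AUTO_INCREMENT integer keys become SERIAL (sequence-backed)
                    .replaceAll("(?i)int(\\(\\d+\\))?\\s+auto_increment", "serial")
                    // strip MySQL-only table options such as ENGINE=InnoDB
                    .replaceAll("(?i)\\s*engine\\s*=\\s*\\w+", "")
                    .replaceAll("(?i)\\s*default\\s+charset\\s*=\\s*\\w+", "")
                    // backticks are not valid identifier quotes in PostgreSQL
                    .replace("`", "\"");
            Files.write(Paths.get("schema-postgres.sql"), converted.getBytes());
        }
    }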
Why not use an ETL tool? You don't have to worry about dumps or stuff like that.
I have migrated to PostgreSQL and to MySQL and have had no problems with the auto-increment fields.
You just need to know the connection credentials, and that's it. I personally use Pentaho (it's open source).
Download Pentaho ETL from http://kettle.pentaho.org/
Unzip and run Pentaho (using .bat file spoon.bat)
Create a new Job:
Create a DB connection for the source database (MySQL) - using the menu: Tools→Wizard→Create Database Connection (F3). Create a DB connection for the destination database (PostgreSQL) - using the technique described above.
Run the Wizard: Tools → Wizard → Copy Tables (Ctrl-F10).
Select the source (left dialog panel) and the destination (right dialog panel). Click Finish.
The Job will be generated - Run the job.
If you need any help let me know.
Even when you're familiar with all the "PostgreSQL gotchas", doing every step by hand may take a lot of time, especially when your db is big.
Try some other scripts/tools.
I know this is an old question, but I just ran into the same problem migrating from MySQL to Postgres. After trying out several migration tools, the very best one I could find, which will migrate your database structure as cleanly as possible, was pgloader https://github.com/dimitri/pgloader/ - it takes care of changing the auto-increments to Postgres sequences with no problem, and it's super fast.

How to synchronize development and production database

Do you know of any applications to synchronize two databases? During development it's sometimes necessary to add one or two table rows, or a new table or column.
Usually I write every SQL statement to a file and, during deployment, I execute those lines on my production database (after backing it up first).
I work with MySQL and PostgreSQL databases.
What is your practice, and which applications help you with that?
You asked for a tool or application answer, but what you really need is a process answer. The underlying theme here is that you should be versioning your database DDL (and DML, when needed) and providing change scripts to be able to update any version of your database to a higher version.
This set of links provided by Jeff Atwood and written by K. Scott Allen explain in detail what this ought to look like - and they do it better than I can possibly write up here: http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
For PostgreSQL you could use Another PostgreSQL Diff Tool (apgdiff). It can diff two SQL dumps very fast (a few seconds on a db with about 300 tables, 50 views and 500 stored procedures), so you can find your changes easily and get a SQL diff which you can execute.
From the APGDiff Page:
Another PostgreSQL Diff Tool is simple PostgreSQL diff tool that is useful for schema upgrades. The tool compares two schema dump files and creates output file that is (after some hand-made modifications) suitable for upgrade of old schema.
Have scripts (under source control, of course) that you only ever add to the bottom of. Combine that with regular restores from your production database to dev and you should be golden. If you are strict about it, this works very well.
Otherwise I know lots of people use the Redgate stuff for SQL Server.
Another vote for RedGate SQL Compare
http://www.red-gate.com/products/SQL_Compare/index.htm
Wouldn't want to live without it!
Edit: Sorry, it seems this is only for SQL Server. Still - if any SQL Server users have the same question I'd definitely recommend this tool.
If you write your SQL statements for your development database (which are, I imagine, series of DDL instructions such as CREATE, ALTER and DROP), why don't you keep track of them by recording them in a table, with a "version" index? You will then be able to:
track your version changes
make a small routine allowing the "automatic" update of your production database, by sending the recorded instructions to it (a minimal sketch of such a routine follows below).
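Here is a minimal JDBC sketch of that routine, assuming a hand-maintained schema_change(version INT, ddl VARCHAR) table present in both databases:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaVersioner {
        // Replays on prod every recorded change that dev has and prod lacks.
        public static void upgrade(Connection dev, Connection prod) throws SQLException {
            int current = 0;
            try (Statement st = prod.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT COALESCE(MAX(version), 0) FROM schema_change")) {
                if (rs.next()) current = rs.getInt(1);
            }
            try (PreparedStatement pending = dev.prepareStatement(
                     "SELECT version, ddl FROM schema_change WHERE version > ? ORDER BY version");
                 PreparedStatement record = prod.prepareStatement(
                     "INSERT INTO schema_change(version, ddl) VALUES (?, ?)")) {
                pending.setInt(1, current);
                try (ResultSet rs = pending.executeQuery();
                     Statement apply = prod.createStatement()) {
                    while (rs.next()) {
                        apply.execute(rs.getString("ddl")); // replay the change
                        record.setInt(1, rs.getInt("version"));
                        record.setString(2, rs.getString("ddl"));
                        record.executeUpdate();             // remember it was applied
                    }
                }
            }
        }
    }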
I really like the EMS tools.
Their tools are available for all popular DBs, and you get the same user experience for every type of DB.
One of the tools is the DB Comparer.
TOAD
has saved many an ass several times in the past. Why do people run SQL with no exit strategy?
The Redgate one is good also.
Siebel (CRM, Sales, etc. management product) has a built-in tool to align the production database with the development one (dev2prod).
Otherwise, you've got to stick with manually executed scripts.
Navicat has a structure synchronisation wizard that handles this.
I solve this by using Hibernate. It can detect and automatically create missing tables, columns, etc. (via the hbm2ddl.auto=update setting).
You could add some automation to your current way of doing things by using dbDeploy or a similar script. This will allow you to keep track of your schema changes and to upgrade/rollback your schema as you see fit.
Here's a straight linux bash script I wrote for syncing Magento databases... but you can easily modify it for other uses :)
http://markshust.com/2011/09/08/syncing-magento-instance-production-development
DBV - "Database version control, made easy!" (PHP)