SQLite migration and administration

I want to migrate my production MySQL database to another RDBMS. Someone suggested that I use SQLite. I have the following questions:
Is there any tool to migrate MySQL to SQLite?
Any GUI tool to manage SQLite databases?
How reliable is it for large production databases?

(I'm not sure about MySQL-to-SQLite migration tools. As always with SQL, there are dialect differences that may have to be taken into account; it really depends on your existing databases.)
MySQL and SQLite are fundamentally different in that MySQL is server-based and intended to be accessed by clients over a connection, whereas SQLite is file-based and intended to be used via an API that accesses the underlying files directly. As such, you don't need to manage a SQLite database the way you would manage MySQL, because SQLite is an embedded database. There are still useful tools for connecting to SQLite databases; one of them is SQLite Manager (it doesn't have to run within Firefox).
This may be an issue for large production databases if you need concurrent access (see this SQLite FAQ).
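To illustrate the embedded, file-based model: opening and querying a SQLite database is just a library call against a file on disk, with no server to administer. A minimal Python sketch using the standard sqlite3 module (the file name and table are made up):

import sqlite3

# Open (or create) the database file directly; there is no server process to connect to.
conn = sqlite3.connect("app_data.db")
conn.execute("CREATE TABLE IF NOT EXISTS settings (name TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)", ("theme", "dark"))
conn.commit()

for name, value in conn.execute("SELECT name, value FROM settings"):
    print(name, value)

conn.close()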

Old question, but I needed to convert a MySQL database myself. I developed a small snippet in Lua to do the basic job of converting the CREATE and INSERT statements. I don't guarantee that it will work in all cases; just report if it doesn't.
And, by the way, I wrote the same script in GNU awk some time ago. The Lua version is about two times faster! As I am a big supporter of gawk, that came as a surprise to me.
Lua version: https://gist.github.com/985257
Gawk version: https://gist.github.com/943776
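If you would rather do a row-level copy than convert dump statements, here is a rough Python sketch of the same job, assuming a DB-API MySQL driver such as PyMySQL plus the standard sqlite3 module (credentials and table names are placeholders, and every column is naively created as TEXT):

import sqlite3
import pymysql  # assumption: any DB-API MySQL driver would work much the same way

mysql_conn = pymysql.connect(host="localhost", user="root", password="xxxx", database="mydb")
sqlite_conn = sqlite3.connect("mydb.sqlite")

tables = ["customers", "orders"]  # hypothetical table names

with mysql_conn.cursor() as cur:
    for table in tables:
        cur.execute(f"SELECT * FROM `{table}`")
        columns = [desc[0] for desc in cur.description]
        placeholders = ", ".join("?" for _ in columns)
        # Simplistic schema: every column becomes TEXT; adjust the types by hand afterwards.
        sqlite_conn.execute(
            f"CREATE TABLE {table} ({', '.join(col + ' TEXT' for col in columns)})"
        )
        sqlite_conn.executemany(
            f"INSERT INTO {table} VALUES ({placeholders})", cur.fetchall()
        )

sqlite_conn.commit()
sqlite_conn.close()
mysql_conn.close()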

Related

What is the difference between mysql when run from the console (e.g. cmd.exe) and mysqlsh?

On Windows I can run
C:\WINDOWS\system32>mysql -u root -pxxxx
or alternatively there is a shell-like program at "MySQL Shell x.y\bin\mysqlsh.exe"
What is the difference, in high-level terms, between these two? When would you use one over the other, and why? I have a lot of experience with sqlplus on Oracle; is mysqlsh like sqlplus, in the sense that it provides much more functionality than the plain console access you get through cmd.exe?
Thanks in advance.
MySQL Shell is a superset of the functionality of the mysql client. It has several features that the old client doesn't have, for example:
You can write statements in Python or JavaScript in addition to SQL statements, so you can write scripts that do any kind of loop or conditional logic you can imagine in Python or JavaScript.
You can connect to multiple MySQL sessions concurrently, for example to connect to multiple servers.
You can output query results in different formats, including JSON.
You can use the "X Protocol" which allows you to query MySQL like a NoSQL server instead of an SQL server.
MySQL Shell is sort of being presented as a successor to the traditional mysql client tool. But it's a relatively new tool and probably has some undiscovered bugs (any new software does).
The old mysql tool is not going away. There are too many apps and scripts that make extensive use of that old client tool, and while the new MySQL Shell can do the same tasks, the usage is different, so for backward compatibility, I assume the old tool will exist forever.
For more information, see: https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-features.html
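For a feel of the scripting side, here is a minimal sketch of what a session in MySQL Shell's Python mode might look like (start mysqlsh and switch to Python mode with \py; the URI and table are placeholders, and the exact API surface varies between Shell versions):

# inside mysqlsh, after switching to Python mode with \py
session = shell.connect("mysql://root@localhost:3306/test")  # classic-protocol URI, placeholder

result = session.run_sql("SELECT id, name FROM customers LIMIT 5")  # hypothetical table
for row in result.fetch_all():
    print(row[0], row[1])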

Delivering a modified database from the local environment to the production one

I'm working with MySQL databases.
To simplify the problem, let's say I have two environments: the local one (development) and the remote one (production).
In the database, I have some tables that contain configuration data.
How can I cleanly automate the delivery from development to production when I modify the database schema and the content of the configuration tables?
At the moment I do it manually, by diffing the local and remote databases. But I find that method messy, and I believe there is an established good practice for this.
This might be helpful if you have multiple environments and multiple developers making schema changes very often, and you are using PHP: https://github.com/davejkiger/mysql-php-migrations
Introduce a "version" parameter for your database. This version should be recorded somewhere in your code and somewhere in your database; your code will work with the database only if the two versions are equal.
Create a wrapper around your MySQL connection. This wrapper should check the versions and, if they are not compatible, start an upgrade (see the sketch after these steps).
An "upgrade" is the process of sequentially applying a list of *.sql files containing the SQL commands that move your database from one state to the next. These can be schema changes or data manipulation commands.
Whenever you change something in the database, do it only by adding a new *.sql file and incrementing the version.
As a result, when you deploy your database from the development environment to production, it will be upgraded automatically, in the same way it was upgraded during development.
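A minimal Python sketch of that idea (the migrations directory layout, the schema_version table and the PyMySQL driver are my own assumptions, not part of any particular framework):

import os
import re
import pymysql  # assumption: any DB-API MySQL driver works the same way

MIGRATIONS_DIR = "migrations"  # hypothetical layout: 001_create_users.sql, 002_add_index.sql, ...

def current_version(conn):
    # One-row bookkeeping table holding the version the database is currently at.
    with conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)")
        cur.execute("SELECT version FROM schema_version")
        row = cur.fetchone()
        if row is None:
            cur.execute("INSERT INTO schema_version VALUES (0)")
            return 0
        return row[0]

def upgrade(conn, code_version):
    db_version = current_version(conn)
    for name in sorted(os.listdir(MIGRATIONS_DIR)):
        number = int(re.match(r"(\d+)_", name).group(1))
        if db_version < number <= code_version:
            with open(os.path.join(MIGRATIONS_DIR, name)) as f, conn.cursor() as cur:
                # Naive split on ';' -- good enough for simple DDL/DML scripts.
                for statement in f.read().split(";"):
                    if statement.strip():
                        cur.execute(statement)
                cur.execute("UPDATE schema_version SET version = %s", (number,))
            conn.commit()

conn = pymysql.connect(host="localhost", user="root", password="xxxx", database="mydb")
upgrade(conn, code_version=12)  # the version the deployed code expects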
I've seen LiquiBase (http://www.liquibase.org/) used a lot in Java environments.
In most of my projects I use SQLAlchemy (a Python tool for working with databases, plus an ORM). If you have a bit more than beginner-level experience with Python, I highly recommend using it.
You can handle this kind of delivery with a little help from it. It is also very useful for migrating your database to another RDBMS (for example MySQL to PostgreSQL or Oracle).
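As a rough sketch of what that looks like with SQLAlchemy 1.4 or later (the connection URLs are placeholders): reflect the schema from the source server, let SQLAlchemy emit the equivalent DDL for the target dialect, then copy the rows.

from sqlalchemy import create_engine, MetaData

# Placeholder connection URLs
src_engine = create_engine("mysql+pymysql://user:passwd@localhost/mydb")
tgt_engine = create_engine("postgresql+psycopg2://user:passwd@localhost/mydb")

# Read the existing schema from the source database...
metadata = MetaData()
metadata.reflect(bind=src_engine)

# ...and let SQLAlchemy emit the equivalent DDL for the target dialect.
metadata.create_all(bind=tgt_engine)

# Copy the rows table by table, in dependency order.
with src_engine.connect() as src, tgt_engine.begin() as tgt:
    for table in metadata.sorted_tables:
        rows = [dict(row._mapping) for row in src.execute(table.select())]
        if rows:
            tgt.execute(table.insert(), rows)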

Migration from SQL Server to MySQL

I need to migrate data from SQL Server 2000 to MySQL. Currently I am using MySQL Workbench. Can anyone tell me how to do this?
If you are looking for a tool, MySQL has one called "MySQL Migration Toolkit" that can be used to migrate the data from SQL Server to MySQL. But the first thing, as mentioned above, is to make a backup. The next thing to check is whether there are any data types that cannot be converted.
Exactly what have you tried? You can quickly migrate data from MSSQL to MySQL if it is in any of the following data file formats:
Paradox (.db)
DBase (.dbf)
Delimited Text (.txt)
Excel (.xls)
XML (.xml)
MS Access Database (.mdb)
ODBC
If it's a one-to-one migration with the exact same database architecture on both the new and the old servers, you might want to try database tools meant to make this an easier process for GUI-based administrators. Just go to download.com and find some software that may assist you in that migration; Navicat is a good one. The most important thing is to always BACK IT UP! BACK IT UP! BACK IT UP! Never do anything without mirroring drives and doing whatever it takes to make sure you don't destroy any data, and so that if you do, you'll have backup copies of it. Also, how fast your machines are will be a sizable factor when it comes to migrating very large databases.
All in all you have many options to choose from, yet the most important thing is to back it up! I can't stress that enough; it might seem like meaningless extra work, especially on humongous database systems, but trust me, it's better to be safe than sorry. Also, I always like rebooting machines prior to making database changes to them, cutting off any connection to the outside world or any process depending on or updating their data. Turning off ODBC will do much of that for you on Windows as well, but as always, better safe than sorry. Many a corruption can be avoided by simply rebooting the machine and having all in-memory data finalized on the active database before backing it up or appending to it.
Check out etlalchemy. It is a free, open-source Python tool capable of migrating between any of the following SQL databases: PostgreSQL, MySQL, Oracle, SQL Server, and SQLite.
To install: pip install etlalchemy
To run:
from etlalchemy import ETLAlchemySource, ETLAlchemyTarget

# Migrate from SQL Server onto MySQL
src = ETLAlchemySource("mssql+pyodbc://user:passwd@DSN_NAME")
tgt = ETLAlchemyTarget("mysql://user:passwd@hostname/dbname",
                       drop_database=True)
tgt.addSource(src)
tgt.migrate()

Remote backup of MySQL database

Our Java server application logs data to a SQL database, which may or may not be on the same machine. Currently we use MS SQL Server, and we're now porting to MySQL. A user configures database backup parameters on our app server, e.g. time of day to run a backup, and the app server executes SQL Server's BACKUP DATABASE command at the appropriate time, via a sproc. It does incremental backups daily and full backups weekly.
MySQL lacks an equivalent feature to tell the database from a client connection to back itself up. Options we're considering are:
Create a UDF to shell out to mysqldump (or copy the database files), which can be called from our app server via a sproc. Essentially we'd be implementing a version of BACKUP DATABASE for MySQL (a rough sketch of this idea appears after this list).
Create a service to run on the MySQL box that can get the backup settings from the app server and run mysqldump (or file copy) locally.
Create a backup sproc to mimic mysqldump, e.g. SHOW CREATE TABLE and SELECT INTO OUTFILE for each table.
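For illustration, triggering the dump over a normal client connection with the sys_exec() UDF might look roughly like this (Python is used here only for brevity, since our app server is Java; credentials, paths and the exact mysqldump options are placeholders):

import mysql.connector  # any client connector would do; the real app server is Java

# Placeholder credentials; sys_exec() comes from the lib_mysqludf_sys UDF mentioned below.
conn = mysql.connector.connect(host="db-host", user="backup_user", password="xxxx")
cur = conn.cursor()

# Ask the server to run mysqldump locally and write the dump to a path on the MySQL box.
cur.execute(
    "SELECT sys_exec('mysqldump --user=backup_user --password=xxxx "
    "--all-databases --result-file=C:/backups/nightly.sql')"
)
exit_code = cur.fetchone()[0]  # sys_exec returns the command's exit status
print("backup exit code:", exit_code)
cur.close()
conn.close()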
Setting up a cron job, Perl script, third-party app or other tricks that would work great in a data center aren't preferred; this is a shrink-wrapped package that needs to be pretty robust and hands-off.
Database sizes can range from roughly 10MB to 10GB.
I'm aware of the binary logs for the incremental piece. I figure the general solution will probably apply to them as well, if we decide to use them.
This is all on Windows 2003 32-bit or 2008R2 64-bit, MySQL 5.1.
The UDF option seems the best to me. The UDF Repository (http://www.mysqludf.org/) has mysqludf_sys, which may be all we need, but I thought I'd ask for opinions since after extensive googling it doesn't seem like others have reached the same conclusion, or maybe our needs are just out of the ordinary. Our app is the only thing in MySQL, so I'm not worried about other users having access to our UDF.
Any solutions I'm overlooking? Any experience with using UDFs in such a way?
Thanks,
Eric
For this and other reasons we decided to co-locate our application with the database, so this problem became moot.

Is there an efficient and free method of migrating a 6GB Interbase DB to MySQL?

That's about it. I can always just dump it to csv and read it in, but I was hoping to avoid that.
Since both Interbase and MySQL have ODBC drivers, how about using your favorite development environment to write an app that opens each table in the IB database and copies it into the MySQL database? There are various languages and IDEs that support data access using ODBC.
This would be nicer than using CSV because your code could copy the schema during the process of copying each table.
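A rough sketch of that approach in Python with pyodbc (the DSN names are placeholders, the target tables are assumed to exist already, schema copying is left out, and for a 6 GB database you would fetch in batches rather than with fetchall()):

import pyodbc

# Placeholder DSNs configured in the ODBC administrator
ib = pyodbc.connect("DSN=InterbaseSource")
my = pyodbc.connect("DSN=MySQLTarget")

ib_cur, my_cur = ib.cursor(), my.cursor()

# Ask the Interbase driver for the list of user tables up front.
tables = [row.table_name for row in ib_cur.tables(tableType="TABLE")]

for table in tables:
    rows = ib_cur.execute(f"SELECT * FROM {table}").fetchall()
    if not rows:
        continue
    placeholders = ", ".join("?" for _ in rows[0])
    # Assumes the matching table has already been created on the MySQL side.
    my_cur.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    my.commit()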
You can use Database Workbench for cross-database development. Its Schema Compare and Migration Tools let you compare testing and deployed databases, and migrate existing databases to different database systems.
PS: I don't know why you want to migrate from Interbase to MySQL, but you could also take a look at Firebird.