I've exported a dump from MySQL v4.0.25 to a .sql file, and since I can't find an installer for 4.0 anymore, the only option left is to use 4.1.
Now I'm getting the common 1064 error, since v4.0 doesn't have utf-8 (only latin-1) and v4.1 gives me a syntax error on parts of the dump.
I'd be okay with editing the files manually, but one of the scripts is 12GB and the other is 5GB, so I can't even find an editor able to open files that large, which leaves me with a real problem for this migration (the files are that big because they are copies of 2 production DBs with over 10 years of use).
How can I fix or bypass this problem? Any chance I can tell the import script to ignore the lines with errors (I don't even know how many there are)?
If it's still possible, dump the data structures as SQL and the table data in CSV format using mysqldump --tab=path. This way, any modifications you need to make will be on the much smaller SQL file, keeping the large data files untouched. Then you could later import the whole thing using the mysqlimport command.
Alternatively, you could always use the mysql --force option for importing your sql file.
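As a rough sketch of both approaches (the database name, paths and credentials are placeholders, so adjust them to your setup):

# dump the structure only - this is the small file you can edit by hand
mysqldump -u root -p --no-data mydb > /tmp/schema.sql
# dump the data as one tab-delimited .txt file per table (the path must be writable by the server)
mysqldump -u root -p --no-create-info --tab=/tmp/dumpdir mydb
# load the fixed-up structure, then bulk-load the data files
mysql -u root -p mydb < /tmp/schema.sql
mysqlimport -u root -p mydb /tmp/dumpdir/*.txt
# or, with --force, keep going past statements that error out
mysql -u root -p --force mydb < big_dump.sql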
More information:
MySQL Reference Manual: mysqldump --tab=path option
MySQL Reference Manual: mysqlimport
MySQL Reference Manual: mysql --force
For manually editing the files:
If you are using Linux as your operating system, then there is a big variety of commands at your disposal: more, less, sed, etc. sed is good for substitutions like the one your question calls for. A nice tutorial can be found at http://www.grymoire.com/Unix/Sed.html
In Windows, I sometimes use PowerShell. I had a similar post on Stack Overflow about "mysqldump without database name" where there is an example of how to replace a string in a dump file.
Related
During development, the way our local WAMP servers get up-to-date data from the test server is that a dump of the database is made, and we load that dump using the source command to run the .sql file.
Recently, at the very end of the import, we have been getting errors about the #old variables which store the original settings, like foreign key constraints, before they are changed (foreign key constraints are turned off so that the import doesn't throw errors when it recreates tables and attempts to create foreign keys while one of the referenced tables has yet to be created). I have worked out that the cause is that the product table is getting more and more data, and at some point the session times out during the import.
I'm wondering what setting I can set (either as part of the SQL query or in the my.ini file) that will stop all timeouts, in effect making a session last forever while we are signed in.
Strategies for importing large MySQL databases
PHPMyAdmin Import
Chances are if you’re reading this, PHPMyAdmin was not an option for your large MySQL database import. Nonetheless it is always worth a try, right? The most common cause of failure for PHPMyAdmin imports is exceeding the import limit. If you’re working locally or have your own server, you can try changing the MySQL ini settings usually found in the my.ini file located in the MySQL install folder. If you’re working with WAMP on Windows, you can access that file using the WAMP control panel under MySQL > my.ini. Remember to restart WAMP so your new settings will be used. Settings you may want to increase here include:
max_allowed_packet
read_buffer_size
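For example, in my.ini under the [mysqld] section those settings might look like this (the values are only illustrative; pick sizes appropriate to your dump):

[mysqld]
max_allowed_packet=64M
read_buffer_size=2M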
Even with enhanced MySQL import settings you may still find that imports time out due to PHP settings. If you have access to php.ini, you can make edits to the maximum execution time and related settings. In WAMP, access the php.ini file under the WAMP control panel at PHP > php.ini. Consider raising the limits on the following settings while trying large MySQL imports (an example follows the list):
max_execution_time
max_input_time
memory_limit
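In php.ini those might look like the following (the values are just examples to raise the ceilings, not recommendations):

max_execution_time = 600
max_input_time = 600
memory_limit = 512M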
Using Big Dump staggered MySQL dump importer
If basic PHPMyAdmin importing does not work, you may want to try the Big Dump script from Ozerov.de for staggered MySQL imports. What this useful script does is run your import in smaller blocks, which is exactly what is often needed to successfully import a large MySQL dump. It is a free download available at http://www.ozerov.de/bigdump/.
The process of using Big Dump is fairly simple: you basically position your SQL import file and the Big Dump script together on the server, set a few configs in the Big Dump script and then run the script. Big Dump handles the rest!
One key point about this otherwise great option is that it will not work at all on MySQL exports that contain extended inserts. So if you have the option to prevent extended inserts, try it; otherwise you will have to use another method for importing your large MySQL file.
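If you control the export, extended inserts can be avoided at dump time with mysqldump's --skip-extended-insert option (the file will be bigger, since each row gets its own INSERT, but Big Dump can then split it). The database and file names below are placeholders:

mysqldump -u root -p --skip-extended-insert mydb > mydb_dump.sql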
Go command line with MySQL console
If you’re running WAMP (and even if you’re not) there is always the option to cut to the chase and import your large MySQL database using the MySQL console. I’m importing a 4GB database this way as I write this post. Which is actually why I have some time to spend writing, because even this method takes time when you have a 4GB SQL file to import!
Some developers (usually me) are intimidated by opening up a black screen and typing cryptic commands into it. But it can be liberating, and when it comes to MySQL databases it is often the best route to take. In WAMP we access the MySQL console from the WAMP control panel at MySQL > MySQL Console. Now let's learn the 2 simple MySQL Console commands you need to import a MySQL database, command-line style:
use `db_name`
Command use followed by the database name will tell the MySQL console which database you want to use. If you have already set up the database to which you are importing, then you start by issuing the use command. Suppose your database is named my_great_database. In this case, issue the following command in the MySQL Console. Note that commands must end with a semi-colon.
mysql> use my_great_database;
mysql> source sql_import_file.sql
Command source followed by the location of a SQL file will import the SQL file to the database you previously specified with the use command. You must provide the path, so if you’re using WAMP on your local server, start by putting the SQL file somewhere easy to get at such as C:\sql\my_import.sql. The full command with this example path would be:
mysql> source C:\sql\my_import.sql;
After you run that command, the SQL file should begin to be imported. Let the queries run and allow the import to complete before closing the MySQL console.
Further documentation for MySQL command line can be found here: http://dev.mysql.com/doc/refman/5.5/en/mysql.html.
Another solution is to use MySQL Workbench.
This solution worked for me:
max_allowed_packet: upped to 8M
read_buffer_size: upped from 256 to 512
I'm using the XAMPP control panel on localhost. After making the changes to the my.ini file in the MySQL config, don't forget to quit XAMPP (or WAMP) and restart it for the changes to take effect.
(Four days of head-banging and I finally got it fixed!)
Symptoms on import were: #2006 MySQL server has gone away. However, only 10 of the 87 tables were being imported.
Consider using MySQL Workbench; it's free and handles very large scripts very well (from the menu choose File -> Open SQL Script - if the file is large, it will ask you if you'd like to run it). It has served me well over the years when working with large SQL dumps.
I need suggestions on how to create a SQLite database from an old MySQL database using Windows 7. I have MySQL's .FRM and .IDB files, and a SQL File (which I believe to be a dump, not a script, unfortunately). I tried to just use sqlite's .read and got a bunch of syntax errors about lock and unlock, which is why I'm guessing it's a dump file. This is a 30 gig database, so recreating by hand isn't really an option.
Is there any way for me to do something like export to a CSV, then import it into SQLite? I tried to use mysql2sqlite with Cygwin to convert it, and got a ./mysql2sqlite.sh: line 2: $'\r': command not found.
Any ideas?
There is a collection of tools on the official SQLite website.
Wiki Link
I backed up my db with mysqldump from phpMyAdmin. Using MySQL 5.0.22. Made no changes to database file. Import fails. Found many instances of extra spaces using notepad, but now cannot find any other such extraneous spaces. Error is 1064.
Any suggestions on how to import file properly?
Thanks.
I encountered problems with MySQL dumps of entire databases that include views. So now I dump the tables and data as one dump, and export the views, stored routines and functions separately. I restore the tables first, then the views etc.
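A sketch of that split using standard mysqldump options (the database and file names are placeholders, and views may still need their own handling, e.g. dumping just the view definitions):

# tables and data only, without triggers
mysqldump -u root -p --skip-triggers mydb > mydb_tables_and_data.sql
# stored routines and triggers only, no table definitions or data
mysqldump -u root -p --routines --triggers --no-create-info --no-data mydb > mydb_routines.sql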
Having come from MS SQL Server and Oracle I would like to know if there are any totally reliable tools out there for MySQL database backups and restores.
You have done several things wrong here
Using PHPMyAdmin for anything critical, especially backups. It is not production-ready, in my experience. Feel free to use it for unimportant read-only work on noncritical servers however.
Editing mysqldump files with notepad (or any other editor). Despite appearances, mysql dump files are not text files and should not be edited with any editor. They contain binary data which is not valid in most character encodings, and therefore can probably not be loaded/saved without introducing errors.
Make a fresh dump using mysqldump, which is the only reliable way of making them, and import that. Do not edit mysql dump files using notepad or any other text tool (this includes the likes of grep, sed etc).
If you need to edit a mysql dump file, then restore it into another (i.e. non-production) database instance, make the necessary changes using SQL commands and re-dump the database. This may be slow but it's reliable.
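For example, the round trip could look roughly like this (the scratch database and file names are made up):

# load the dump into a scratch database
mysql -u root -p -e "CREATE DATABASE scratch_copy"
mysql -u root -p scratch_copy < original_dump.sql
# ...make the necessary changes with ordinary SQL against scratch_copy...
# then take a fresh, clean dump
mysqldump -u root -p scratch_copy > fixed_dump.sql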
Sorry for a noob question regarding MySQL. I downloaded FlightStats to learn about MySQL, but I can't figure out how to register it with my localhost MySQL DB. I know in MS SQL you can simply register any SQL DB using SQL Studio. I tried to Google but came up with no results. Perhaps my search phrase is wrong; I'm searching with "how to register a mysql database", "register a mysql database", etc. How do you register or set up a database from an existing database like FlightStats? I'm using DBVisualizer. Is there a way in DBVis that I'm not aware of to register a database?
Thanks
Edit: sorry for the bad wording. I found this. I have the .myd, .myi and .frm files and I want to restore(?) them into my local MySQL instance. I've looked at all the answers but I'm still confused as to how you restore the database from those 3 files.
A little background first. The FlightStats download page linked to in the original question appears to provide zipped tarballs of the binary table storage files from the MySQL data directory. Given that this is considered a viable means of distribution, and combined with the use of MERGE tables, I would surmise that this tarball contains a bunch of MyISAM data files (.myi, .myd). Jack's edit confirms that this is the situation.
This is an atypical means of distributing a MySQL data set, although not at all uncommon when backing up MyISAM storage, and probably not all that unheard of for moving large data sets around; it likely works out considerably more space-efficient than a corresponding dump file. Of course, in SQL Server land, it's pretty common to attach database files into an instance.
Broadly speaking, you'd recover the database as follows:
Locate the MySQL data directory; typically /var/lib/mysql or similar
Create a new directory with the desired database name e.g. flightdata
Extract the .myi, .myd and other files from the tarball into this directory
Make sure the entire directory is owned by the user MySQL runs as (usually mysql) - use chown -R to make sure you get everything
Open a MySQL console
USE <database-name>
SHOW TABLES
You should see some tables listed. In addition, the downloads page linked includes a couple of SQL scripts, which contain SQL commands that you need to run against your database once it's in place. These will cause the merge definitions and table indexes to be rebuilt. You can pipe these into the command-line client, e.g. mysql -u<username> -p<password> <database-name> < <sql-file>.
It may be a good idea to shut down the MySQL server while you're doing this; use e.g. /etc/init.d/mysql stop or similar, and restart once the files are extracted in place.
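Put together as shell commands on a typical Linux box, this might look roughly like the following (the data directory path, service name and tarball name are assumptions, so adapt them to your system):

sudo /etc/init.d/mysql stop
sudo mkdir /var/lib/mysql/flightdata
sudo tar -xzf flightstats.tar.gz -C /var/lib/mysql/flightdata
sudo chown -R mysql:mysql /var/lib/mysql/flightdata
sudo /etc/init.d/mysql start
mysql -u root -p -e "USE flightdata; SHOW TABLES;"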
There's generally a way to import sql files using a GUI database tool. I'm not familiar with DBVisualizer, but as long as you have a MySQL command line client installed you can do it there as well. It's pretty easy:
Create a blank schema. You can do this in your GUI tool or on the command line client. Just use CREATE DATABASE flightstats;, or whatever name you want.
Use the following command line syntax to import/run an sql file on the new schema: mysql -u <username> -p flightstats < /path/to/file.sql
The -p option prompts for a password. I generally set up the database using step 1 as the root user, then GRANT some permissions on it to a new user id, then use that user id to run the SQL file.
This process is pretty much what a GUI tool will do in the background.
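As a concrete sketch (the user name, password and file path are placeholders; the GRANT ... IDENTIFIED BY form shown here is the pre-MySQL-8.0 one-step way to create the user):

mysql -u root -p -e "CREATE DATABASE flightstats;"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON flightstats.* TO 'flightuser'@'localhost' IDENTIFIED BY 'secret';"
mysql -u flightuser -p flightstats < /path/to/file.sql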
Registering a database? I don't know what that means, but the MySQL GUI tools can help you create a database. Have a look at them, or better yet, download phpMyAdmin.
Google WAMP for Windows.
Google MAMP for Mac.
Google LAMP for Linux.
Any questions?
The title is self explanatory. Is there a way of directly doing such kind of importing?
The .BAK files from SQL server are in Microsoft Tape Format (MTF) ref: http://www.fpns.net/willy/msbackup.htm
The bak file will probably contain the LDF and MDF files that SQL server uses to store the database.
You will need to use SQL server to extract these. SQL Server Express is free and will do the job.
So, install SQL Server Express edition, and open the SQL Server Powershell. There execute sqlcmd -S <COMPUTERNAME>\SQLExpress (whilst logged in as administrator)
then issue the following command.
restore filelistonly from disk='c:\temp\mydbName-2009-09-29-v10.bak';
GO
This will list the contents of the backup - what you need are the first fields, which tell you the logical names - one will be the actual database and the other the log file.
RESTORE DATABASE mydbName FROM disk='c:\temp\mydbName-2009-09-29-v10.bak'
WITH
MOVE 'mydbName' TO 'c:\temp\mydbName_data.mdf',
MOVE 'mydbName_log' TO 'c:\temp\mydbName_data.ldf';
GO
At this point you have extracted the database - then install Microsoft's "SQL Web Data Administrator" together with this export tool, and you will have an SQL script that contains the database.
MySQL has an application to import a DB from Microsoft SQL Server.
Steps:
Open MySql Workbench
Click on "Database Migration" (if it do not appear you have to install it from MySql update)
Follow the Migration Task List using the simple Wizard.
I did not manage to find a way to do it directly.
Instead I imported the bak file into SQL Server 2008 Express, and then used MySQL Migration Toolkit.
Worked like a charm!
The answers in this thread have not been updated in a while, so it's worth saying that in 2020 migrating from MSSQL to MySQL is much easier. An online converter like RebaseData will do the job with one click: you just upload your .bak file from MSSQL and convert it into a .sql format that MySQL can read.
Additional note: the site can handle more than .bak files; it supports many other kinds of database migrations as well.
Although my MySQL background is limited, I don't think you'll have much luck doing that directly. However, you should be able to migrate all of your data by restoring the DB to an MSSQL server, then creating an SSIS or DTS package to send your tables and data to the MySQL server.
hope this helps
I highly doubt it. You might want to use DTS/SSIS to do this, as Levi says. One thing you might want to do is start the process without actually importing the data - just do enough to get the basic table structures together. Then you are going to want to change the resulting table structure, because whatever structure is created automatically will likely be shaky at best.
You might also have to take this a step further and create a staging area that takes in all the data first in string (varchar) form. Then you can create a script that does validation and conversion to get it into the "real" database, because the two databases don't always work well together, especially when dealing with dates.
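A sketch of that staging idea in MySQL (the table and column names are invented for illustration): load everything as strings first, then convert with explicit casts once you can see what the data really looks like.

-- staging table: everything is a string, so nothing can fail on load
CREATE TABLE staging_orders (
  order_id    VARCHAR(50),
  customer    VARCHAR(255),
  order_date  VARCHAR(50)
);
-- validated, converted copy into the real table
INSERT INTO orders (order_id, customer, order_date)
SELECT CAST(order_id AS UNSIGNED),
       customer,
       STR_TO_DATE(order_date, '%m/%d/%Y')
FROM staging_orders
WHERE STR_TO_DATE(order_date, '%m/%d/%Y') IS NOT NULL;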
The method I used included part of Richard Harrison's method:
So, install SQL Server 2008 Express edition.
This requires the download of the Web Platform Installer "wpilauncher_n.exe". Once you have this installed, click on the database selection (you are also required to download the Frameworks and Runtimes).
After installation, go to the Windows command prompt and:
use sqlcmd -S \SQLExpress (whilst logged in as administrator)
then issue the following command:
restore filelistonly from disk='c:\temp\mydbName-2009-09-29-v10.bak';
GO
This will list the contents of the backup - what you need are the first fields, which tell you the logical names - one will be the actual database and the other the log file.
RESTORE DATABASE mydbName FROM disk='c:\temp\mydbName-2009-09-29-v10.bak'
WITH
MOVE 'mydbName' TO 'c:\temp\mydbName_data.mdf',
MOVE 'mydbName_log' TO 'c:\temp\mydbName_data.ldf';
GO
I fired up Web Platform Installer and from the what's new tab I installed SQL Server Management Studio and browsed the db to make sure the data was there...
At that point I tried the tool included with MSSQL, the "SQL Import and Export Wizard", but the resulting CSV dump only included the column names...
So instead I just exported results of queries like "select * from users" from the SQL Server Management Studio
SQL Server databases are very Microsoft proprietary. Two options I can think of are:
Dump the database in CSV, XML or similar format that you'd then load into MySQL.
Set up an ODBC connection to MySQL and then use DTS to transport the data. As Charles Graham has suggested, you may need to build the tables before doing this. But that's as easy as a cut and paste from the SQL Enterprise Manager windows to the corresponding MySQL window.
For those attempting Richard's solution above, here is some additional information that might help navigate common errors:
1) When running restore filelistonly you may get Operating system error 5 (Access is denied). If that's the case, open SQL Server Configuration Manager and change the login for SQLEXPRESS to a user that has local write privileges.
2) #"This will list the contents of the backup - what you need is the first fields that tell you the logical names" - if your file lists more than two headers you will need to also account for what to do with those files in the RESTORE DATABASE command. If you don't indicate what to do with files beyond the database and the log, the system will apparently try to use the attributes listed in the .bak file. Restoring a file from someone else's environment will produce a 'The path has invalid attributes. It needs to be a directory' (as the path in question doesn't exist on your machine).
Simply providing a MOVE statement resolves this problem.
In my case there was a third FTData type file. The MOVE command I added:
MOVE 'mydbName_log' TO 'c:\temp\mydbName_data.ldf',
MOVE 'sysft_...' TO 'c:\temp\other';
In my case I actually had to make a new directory for the third file. Initially I tried to send it to the same folder as the .mdf file, but that produced a 'failed to initialize correctly' error on the third FTData file when I executed the restore.
The .bak file from SQL Server is specific to that database dialect, and not compatible with MySQL.
Try using etlalchemy to migrate your SQL Server database into MySQL. It is an open-sourced tool that I created to facilitate easy migrations between different RDBMS's.
Quick installation and examples are provided here on the github page, and a more detailed explanation of the project's origins can be found here.