I'm trying to create a dump of my RDS data and use it locally.
I've used the command:
mysqldump -h myhostname.rds.amazonaws.com -u my_username -p my_dbname > ~/Downloads/dump.sql
When I try to view this data in a tool like DB Browser for sqlite, I get a prompt saying it's encrypted, asking for a password.
I thought maybe it needed to be converted into SQLite first, so I did the conversion in RazorSQL, but I still get the same issue. Also, when I try to load the DB into Node.js's sqlite module I get:
not able to query Table in SQLite DB Error: SQLITE_NOTADB: file is encrypted or is not a database
I've checked my RDS settings, and it says:
Encryption details:
Encryption enabled: No
So I have no idea what's going on here. Any tips? Does the file extension (.sql, .db etc) make a difference here?
When I try to view this data in a tool like DB Browser for sqlite
Sqlite is an entirely different thing than MySQL. There is very little overlap in tools that can work with both.
You're using a tool that can't be used for the purpose to which you're applying it, so you're getting a confusing error:
file is encrypted or is not a database
In other words, the tool is unable to make sense of the file, so one of two things has happened: the file is encrypted, or it is not a [sqlite] database [at all].
The problem is the latter.
The file is not encrypted. Even if the RDS instance is encrypted, the generated dump file would still not be encrypted, because encryption in RDS is storage-level encryption of the data, at rest, on the disk volume backing the RDS instance. Encryption in RDS is transparent to the user.
The problem is that what you have here is a dump file -- a series of SQL statements that can be used to reconstruct your database on another MySQL server.
Your file is plain text. You can view it with a text editor. What you can't do is use the file as a database -- that's something Sqlite can do, because Sqlite stores the database inside a single, transportable file. MySQL is a different architecture.
You'll need to have the same version (e.g. 5.7.x) of MySQL Server installed locally, and then load this file into it.
shell> mysql [options] < my_dump_file.sql
To reload a dump file written by mysqldump that consists of SQL statements, use it as input to the mysql client.
https://dev.mysql.com/doc/refman/5.7/en/reloading-sql-format-dumps.html
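For example, assuming the dump was written to ~/Downloads/dump.sql as in the question, a minimal local restore might look like this (the database name comes from the question; adjust the credentials for your local server):
shell> mysql -u root -p -e "CREATE DATABASE my_dbname"
shell> mysql -u root -p my_dbname < ~/Downloads/dump.sql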
You can also use query browser tools for MySQL like Toad or Workbench, but a local MySQL Server is required.
Related
Quick help here...
I have these 2 MySQL instances... We are not going to pay for this service anymore, so they will be gone. How can I obtain a backup file that I can keep for the future?
I do not have much experience with MySQL, and all the threads talk about mysqldump, which I don't know if it's valid for this case. I also see the option to take a snapshot, but I want a file I can save (like a .bak).
Thanks in advance!
You have several choices:
You can replicate your MySQL instances to MySQL servers running outside AWS. This is a bit of a pain, but will result in a running instance. http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
You can use the command-line mysqldump --all-databases to generate a (probably very large) .sql file from each database server instance (see: Export and Import all MySQL databases at one time).
You can use the command-line mysqldump to export one database at a time. This is what I would do (see the example at the end of this answer).
You can use a GUI MySQL client -- like HeidiSQL -- in place of the command line to export your databases one at a time. This might be easier.
You don't need to, and should not, export the mysql, information_schema, or performance_schema databases; these contain system information and will already exist on another server.
In order to connect from outside AWS, you'll have to set the AWS security settings (security group rules and public accessibility) appropriately. And you'll have to know the internet address, username, and password (and maybe port) of the MySQL server at AWS.
Then you can download HeidiSQL (if you're on windows) or some appropriate client software, connect to your database instance, and export your databases at your leisure.
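For the one-database-at-a-time option, a minimal sketch might look like this (the endpoint, user, and database name are placeholders for your own RDS values; --single-transaction gives a consistent dump of InnoDB tables without locking):
mysqldump -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p --single-transaction mydatabase > mydatabase.sql
The resulting .sql file is the portable backup you can keep, and it can be loaded into any future MySQL server with the mysql client.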
I own a machine running third-party software. I input data into this software and it stores that data in its own MySQL database. I'd like to query the MySQL database directly, but I don't know the credentials that the application is using.
I have read and write access to all files on the machine, including the files in the MySQL data directory. Theoretically, I should be able to read the data directly from these files (.ibd and .frm files). But practically, I don't know where to start. I'm thinking these data files are somewhat readable, since encrypting them would destroy their index-ability.
Is this feasible? Or would I have to reverse engineer the data file format in order to read it?
Or even better - is there some config file that I can change which would implicitly trust all local connections similar to postgres?
You could read the MySQL files directly, but even if they're not encrypted, the column names might be strange and you could have to spend some time making sense of them.
Another option is to look for config files from that software that might contain the login/password (very, very low probability, but who knows?).
And the best option would be:
make a backup of the MySQL files
in another MySQL installation / computer (so as not to break your software), follow the reset MySQL password guide
Try accessing it via the command line on the local machine:
shell> mysql db_name
(from MySQL documentation)
From here, you can create yourself an account if you need to connect from other client software.
Or have you already tried that?
If you have root access to the machine that MySQL is running on, then you can reset the MySQL root password by following the procedure at: http://www.cyberciti.biz/tips/recover-mysql-root-password.html. Once you've reset the root password, you can then log in to MySQL as the root MySQL user, access any of the databases, and query them. The only caveat to keep in mind is that changing the MySQL root password could potentially prevent your application from accessing the MySQL database. That would be surprising, though, since the application should be designed to connect to the database using a MySQL user account (with limited privileges) other than the root MySQL user.
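As a rough sketch, the classic reset procedure for older MySQL versions looks something like this (commands vary by distribution and MySQL version; newer releases use ALTER USER and the authentication_string column instead of the Password column):
# stop the running server, then start it without privilege checks
sudo /etc/init.d/mysql stop
sudo mysqld_safe --skip-grant-tables &
# connect without a password and reset the root password
mysql -u root
mysql> UPDATE mysql.user SET Password=PASSWORD('newpassword') WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> quit
# restart the server normally
sudo /etc/init.d/mysql restart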
I'm very surprised that it seems impossible to upload more than a few megabytes of data to a MySQL database through phpMyAdmin, whereas I can easily upload an MS Access table of up to 2 gigabytes.
So is there any script in PHP, or anything else, that allows doing so, unlike phpMyAdmin?
phpMyAdmin is based on HTML and PHP. Neither technology was built or intended to handle such amounts of data.
The usual way to go about this would be transferring the file to the remote server - for example using a protocol like (S)FTP, SSH, a Samba share or whatever - and then import it locally using the mysql command:
mysql -u username -p -h localhost databasename < infile.sql
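For example, assuming you have SSH access to the database server, the transfer and import might look like this (the host name and paths are placeholders):
scp infile.sql user@dbserver.example.com:/tmp/
ssh user@dbserver.example.com
mysql -u username -p -h localhost databasename < /tmp/infile.sql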
Another very fast way to exchange data between two servers with the same MySQL version (it doesn't dump and re-import the data but copies the data directories directly) is mysqlhotcopy. It runs on Unix/Linux and NetWare based servers only, though.
No. Use the command line client.
mysql -hdb.example.com -udbuser -p < fingbigquery.sql
Sorry for a noob question regarding MySQL. I downloaded FlightStats to learn about MySQL but I can't figure out how to register it with my localhost MySQL DB. I know in MS SQL you can simply register any SQL DB using SQL studio. I tried to Google it but came up with no results. Perhaps my search phrase is wrong. I'm searching with "how to register a mysql database", "register a mysql database", etc. How do you register or set up a database from an existing database like FlightStats? I'm using DBVisualizer. Is there a way in DBVis that I'm not aware of to register a database?
Thanks
edit: sorry for the bad wording. I found this. I have the .myd, .myi and .frm files and I want to restore(?) them into my local MySQL instance. I've looked at all the answers but I'm still confused as to how you restore the database from those 3 files.
A little background first. The FlightStats download page linked to in the original question appears to provide zipped tarballs of the binary table storage files from the MySQL data directory. Given that this is considered a viable means of distribution, and combined with the use of MERGE tables, I would surmise that this tarball contains a bunch of MyISAM data files (.myi, .myd). Jack's edit confirms that this is the situation.
This is an atypical means of distributing a MySQL data set, although not at all uncommon when backing up MyISAM storage, and probably not all that unheard of for moving large data sets around; it likely works out considerably more space-efficient than a corresponding dump file. Of course, in SQL Server land, it's pretty common to attach database files into an instance.
Broadly speaking, you'd recover the database as follows:
Locate the MySQL data directory; typically /var/lib/mysql or similar
Create a new directory with the desired database name e.g. flightdata
Extract the .myi, .myd and other files from the tarball into this directory
Make sure the entire directory is owned by the user MySQL runs as (usually mysql) - use chown -R to make sure you get everything
Open a MySQL console
USE <database-name>;
SHOW TABLES;
You should see some tables listed. In addition, the downloads page linked includes a couple of SQL scripts, which contain SQL commands that you need to run against your database once it's in place. These will cause the merge definitions and table indexes to be rebuilt. You can pipe these into the command-line client, e.g. mysql -u<username> -p<password> <database-name> < <sql-file>.
It may be a good idea to shut down the MySQL server while you're doing this; use e.g. /etc/init.d/mysql stop or similar, and restart once the files are extracted in place.
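Put together, the whole sequence might look roughly like the following (the data directory path, tarball name, and SQL script name are placeholders; adjust them for your system and for the files in the download):
# stop the server while moving files into place
sudo /etc/init.d/mysql stop
# create the database directory and extract the table files into it
sudo mkdir /var/lib/mysql/flightdata
sudo tar -xzf flightstats.tar.gz -C /var/lib/mysql/flightdata
# make sure MySQL owns everything, then restart
sudo chown -R mysql:mysql /var/lib/mysql/flightdata
sudo /etc/init.d/mysql start
# run the provided SQL scripts to rebuild the merge definitions and indexes
mysql -u root -p flightdata < merge_definitions.sql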
There's generally a way to import sql files using a GUI database tool. I'm not familiar with DBVisualizer, but as long as you have a MySQL command line client installed you can do it there as well. It's pretty easy:
Create a blank schema. You can do this in your GUI tool or on the command line client. Just use CREATE DATABASE flightstats;, or whatever name you want.
Use the following command line syntax to import/run an sql file on the new schema: mysql -u <username> -p flightstats < /path/to/file.sql
The -p option prompts for a password. I generally set up the database using step 1 as the root user, then GRANT some permissions on it to a new user id, then use that user id to run the SQL file.
This process is pretty much what a GUI tool will do in the background.
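A minimal sketch of step 1 plus the grant mentioned above, assuming a hypothetical user named flightuser (the user name and password are placeholders):
mysql> CREATE DATABASE flightstats;
mysql> CREATE USER 'flightuser'@'localhost' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON flightstats.* TO 'flightuser'@'localhost';
mysql> FLUSH PRIVILEGES;
Then, from the shell: mysql -u flightuser -p flightstats < /path/to/file.sql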
Registering a database? I don't know what that means, but the MySQL GUI tools can help you create a database. Have a look at them, or better, download phpMyAdmin.
Google WAMP for Windows.
Google MAMP for Mac.
Google LAMP for Linux.
Any questions?
The title pretty much says it all, but to elaborate: If I build a MySQL database on my local dev machine, populate it with data, and subsequently want to migrate the database to a shared host (in this case, Siteground), how do I do so in a way that keeps structure and data intact?
In this case, I don't have file access to the database server.
Use mysqldump (doc) and dump your database (mysqldump [databasename] for a simple configuration) on your development machine to a dump file (a file containing the SQL statements needed to recover both schema and data). Then import the dump on your shared host using the provided utilities (normally you get phpMyAdmin preinstalled by your host, which can import dumps).
In addition to the response made by theomega (namely, do a dump of your development database and then insert the dump into your production database), be aware that you may need to enable large SQL insert statements if you have a lot of data. I would recommend you first FTP the file to the host, and then do the insert from a file. Each host has their own way of doing it, but if you can connect to the remote server using SSH, there is likely the ability to run the insert using the command line.
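As a rough sketch, the dump-and-import round trip might look like this (user names, the host setup, and the database name are placeholders; your shared host may require the import to go through phpMyAdmin rather than the command line):
# on the development machine
mysqldump -u devuser -p mydatabase > mydatabase.sql
# after uploading the file, on the shared host (if SSH access is available)
mysql -u hostuser -p mydatabase < mydatabase.sql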
Also, in addition to theomega: most tools for MySQL have dump / execute functions for SQL files.
If you're using Navicat, for example, you're just a right-click away:
Right-click on the database you want to export and choose "dump sql file". This will allow you to save the .sql file on your local drive in the folder of your choosing.
Then, right-click on the destination database and choose "execute batch file". Browse to the newly created .sql file and it will execute all the SQL commands from that file in the destination database, namely creating a copy of the exported DB.