mysqldump is dumping undesired system tables - mysql

It is also dumping the correct tables, but they come at the end, after a long run of undesired ones. The dump includes "system" tables such as:
| column_stats |
| columns_priv |
| func |
...
I am using the root user to do the dump, like this:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user]-p [password] --databases "my-db-name" > dump.sql
I haven't been able to find any related info; I've mainly used mysqldump and column_stats as search keywords.

Finally I realised what's wrong here. Your parameter -p followed by a blank space implies that you will type the password at the "Enter password:" prompt, and your [password] is interpreted as a database name. Since there is no database named like your password, everything is dumped. From the documentation:
--password[=password], -p[password]
The password to use when connecting to the server. If you use the
short option form (-p), you cannot have a space between the option and the
password. If you omit the password value following
the --password or -p option on the command line, mysqldump prompts for one.
So, your command should be:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user] -ppassword "my-db-name" > dump.sql
(notice that here is no blank space between -p and your password),
or like this:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user] -p "my-db-name" > dump.sql
(here you type the password at the prompt after pressing Enter).

The tables you mention all belong to the mysql database, which is a system database. It is perfectly acceptable to use mysqldump on that database, but an incomplete backup of it might turn out to cause authentication/authorization/functional issues if you later use that dump to restore the database.
These tables should not appear inside a regular database. If they do exist there, it almost certainly indicates some prior mistake, and you should simply delete them.
If you simply want to perform the dump and don't want to investigate the root problem, it is also possible to tell mysqldump to ignore tables that exist but that you would like to exclude from the dump file. The option syntax is: --ignore-table=db_name.tbl_name. To exclude multiple tables, repeat that argument several times, as shown below.
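For example, to exclude the first two unwanted tables listed above from a dump of the question's database (a sketch; repeat the option once per table to exclude):
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user] -p --ignore-table=my-db-name.column_stats --ignore-table=my-db-name.columns_priv "my-db-name" > dump.sql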

Importing a MySQL Database on Localhost

So I wanted to format my system, and I had a lot of work done on my localhost that involves databases. I followed the normal way of backing up a database by exporting it into an SQL file, but I think I made a mess by backing up everything into one SQL file (I mean the whole localhost was exported to just one SQL file).
The problem now is: when I try to import the backed-up file (localhost.sql), I get "table already exists" errors for
information_schema
performance_schema
and every other table that comes with XAMPP, which has been preventing me from importing the database.
These tables are the phpMyAdmin tables that came with XAMPP. I have been trying to get past this for days.
My question now is: can I extract the individual databases from the same combined SQL file?
To import a database you can do either of the following:
mysql -u username -p database_name < /path/to/database.sql
From within mysql:
mysql> use database_name;
mysql> source database.sql;
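If the target database does not exist yet, create it first from the same prompt (a small sketch reusing the database_name placeholder from above):
mysql> CREATE DATABASE database_name;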
The error is quite self-explanatory. The databases information_schema and performance_schema already exist in the MySQL server instance that you are trying to import into.
Both of these are default databases in MySQL, so it is strange that you would be trying to import them into another MySQL installation. The basic syntax to create a .sql file to import from the command line is:
$ mysqldump -u [username] -p [database name] > sqlfile.sql
Or for multiple databases:
$ mysqldump --databases db1 db2 db3 > sqlfile.sql
Then to import them into another MySQL installation:
$ mysql -u [username] -p [database name] < sqlfile.sql
If the database already exists in MySQL, then you need to do:
$ mysqlimport -u [username] -p [database name] sqlfile.sql
This seems to be the command you want to use; note, however, that mysqlimport is a wrapper around LOAD DATA INFILE and is intended for delimited text data files rather than SQL dumps, and I have never replaced the information_schema or performance_schema databases, so I'm unsure if this will cripple your MySQL installation or not.
So an example would be:
$ mysqldump -uDonglecow -p myDatabase > myDatabase.sql
$ mysql -uDonglecow -p myDatabase < myDatabase.sql
Remember not to provide a password on the command line, as this will be visible in plain text in the command history.
The point the previous responders seem to be missing is that the dump file localhost.sql, when fed into mysql using
% mysql -u [username] -p [databasename] < localhost.sql
generates multiple databases, so specifying a single databasename on the command line is illogical.
I had this problem and my solution was to not specify [databasename] on the command line and instead run:
% mysql -u [username] -p < localhost.sql
which works.
Actually, it doesn't work right away, because previous attempts had already created some structure inside MySQL, and those bits in localhost.sql make mysql complain because they already exist from the first time around, so they can't be created a second time.
The solution to THAT is to manually edit localhost.sql with modifications like
INSERT IGNORE for INSERT (so it doesn't re-insert the same rows, nor complain),
CREATE DATABASE IF NOT EXISTS for CREATE DATABASE,
CREATE TABLE IF NOT EXISTS for CREATE TABLE,
and to delete ALTER TABLE commands entirely if they generate errors, because by then they have already been executed (and perhaps the INSERTs and CREATEs too, for the same reason). You can check the tables with DESCRIBE and SELECT statements to make sure the alterations have actually taken hold, for confidence.
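Those edits can largely be scripted rather than done by hand; here is a rough sed sketch (it assumes each statement keyword starts at the beginning of a line and that ALTER TABLE statements fit on one line, which real dumps don't always guarantee; -i.bak keeps a backup copy):
% sed -i.bak -e 's/^INSERT INTO/INSERT IGNORE INTO/' -e 's/^CREATE DATABASE /CREATE DATABASE IF NOT EXISTS /' -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' -e '/^ALTER TABLE/d' localhost.sql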
My own localhost.sql file was 300 MB, which my favorite editor, emacs, complained about, so I had to pull out pieces using
% head -n 20000 localhost.sql | tail -n 10000 > 2nd_10k_lines.sql
and go through it 10k lines at a time. It wasn't too hard because drupal was responsible for an enormous amount, the vast majority, of junk in there, and I didn't want to keep any of that, so I could carve away enormous chunks easily.
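As an aside, the standard split utility can produce all such 10k-line chunks in a single pass (a sketch; chunk_ is just an arbitrary output prefix):
% split -l 10000 localhost.sql chunk_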
The best way to import a database on localhost takes five simple steps:
Zip the SQL file first to compress the database size.
Go to the terminal.
Create an empty database.
Unzip and import the database in one step: unzip -p /pathoffile/database_file.zip | mysql -u username -p database_name
Enter the password when prompted.

Export mysql schema (data only) except for one table

I have a script to export the schema of my MySQL database, based on which I can generate my migrations. For this process I only require the database schema, not the data itself. This is what I currently use:
mysqldump -u root -p[pass] -h localhost mydb_prod --add-drop-table --no-data > mydb_prod-`date +"%Y-%m-%d-%H:%M:%S"`.sql
The --no-data option does the trick.
However, my migrations history is also kept in a database table, which means that I do want to export the data for my migrations table. I know about the --ignore-table option for explicitly ignoring specific tables; however, this would mean that I would have to explicitly list all of my other tables, which might lead to problems in the future, since we only do migrations every once in a while.
Is there a way to export the schema of the database without table data except one (or multiple) explicitly specified tables?
I hate answering my own questions, but I found a workaround, so I might as well share it.
I basically first export the schema of all tables without the data, and then I export just my single migrations table with its data and append that to the .sql output file:
#!/bin/bash
# Build a timestamped output file name.
now=$(date +%Y-%m-%d-%H:%M:%S)
output_file="mydb_prod-$now.sql"
# 1) Schema only (--no-data) for every table in the database.
mysqldump -u root -p[pass] -h localhost mydb_prod --add-drop-table --no-data > "$output_file"
# 2) Schema plus data for just the migrations table, appended to the same file.
mysqldump -u root -p[pass] -h localhost mydb_prod migration_table --add-drop-table >> "$output_file"
This gives me exactly what I need without having to manually specify every single table explicitly.
You could use a script to edit whichever tables you want out of the dump file itself. I sometimes use sed to rename tables, etc.
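For example, renaming a table throughout a dump might look like this (a sketch with hypothetical file and table names; the backquotes keep the match anchored to identifiers):
sed 's/`old_table`/`new_table`/g' dump.sql > dump-renamed.sql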

How to generate DDL for all tables in a database in MySQL

How can I generate the DDL for all tables in a MySQL database at once? I know that the following statement will output the DDL for a single table, but I want the DDL of all tables at once because I have hundreds of tables in my database.
show create table <database name>.<table name>;
For example:
show create table projectdb.customer_details;
The above query outputs the DDL of the customer_details table.
I am using MySQL with MySQL Workbench on Windows.
You can do it using the mysqldump command line utility:
mysqldump -d -u <username> -p -h <hostname> <dbname>
The -d option means "without data".
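For example, to capture the schema of the question's projectdb in a file (a sketch; substitute your own username and host):
mysqldump -d -u <username> -p -h localhost projectdb > projectdb-schema.sql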
#tftdias above made a comment that is partially correct:
The issue was the space between -p and 'pps' as the password. You can absolutely free-text your password on the command line; you just shouldn't, as a general security rule. On Linux, with a proper configuration, you can add a space (" ") BEFORE the first character of your command line, and that line will not enter the shell history. I would recommend that whenever you clear-text your password (don't do it!), you at least put a space first so that the password is not visible in the history.
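In Bash, that behavior is governed by the HISTCONTROL variable (a sketch; the exact mechanism varies by shell and configuration):
export HISTCONTROL=ignorespace
 mysqldump -u root -p[pass] mydb > dump.sql
The leading space before mysqldump keeps the line, password included, out of the history.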
You must get a list of all the tables in the database and then run this query for each of them.
Check this link for how to get the list of all tables:
http://php.net/manual/en/function.mysql-list-tables.php
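The PHP function behind that link is deprecated; the same list can be pulled straight from information_schema instead (a sketch using the projectdb name from the question), and each name can then be fed to show create table:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'projectdb';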

SQL Query to Remove All Product Information Magento

So I am looking for a way to remove all products and their attributes from Magento. Selecting everything and then removing it from within the admin panel takes too long. Right now I am doing massive bulk imports and tweaking the data until it looks right. How can I do this?
DELETE FROM catalog_product_entity;
This table is linked to by dozens of other tables, but the foreign key "CASCADE" feature built into MySQL's InnoDB engine should automatically delete the corresponding entries in those tables as well.
If you have SSH (or Telnet) access to a server console, I'd recommend dumping the "clean" (no products/only base attributes) database version to a .sql file:
$ mysqldump -h dbhost -u dbuser -p dbname > dump.sql
Then you can test and tweak whatever you want, without any worries. Any time you want to restore your "clean" version, you can do so by simply executing:
$ mysql -h dbhost -u dbuser -p dbname < dump.sql

backing up/copying mysql database

I have a live DB; what I'm looking to do is make a copy of it.
I have access to MySQL via SSH and phpMyAdmin.
Is there a command with which I can copy/back up the DB in a single command/action, without using export/import?
Thanks
mysqldump -u USERNAME -pPASSWORD databaseName > SAVETOFILE.sql
See http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html for the various options available.
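A handy variation compresses the dump as it is written (a sketch; it assumes gzip is installed on the server):
mysqldump -u USERNAME -pPASSWORD databaseName | gzip > SAVETOFILE.sql.gz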
You can do it via phpMyAdmin as well; see http://php.about.com/od/learnmysql/ss/mysql_backup_3.htm:
Login to phpMyAdmin
Click on your database name
Click on the tab labeled EXPORT
Select all the tables you want to back up (usually all of them)
Default settings usually work, just make sure SQL is checked
Check the SAVE FILE AS box
Hit GO
If you want to create a DB that is a copy of the above dump, you need to run the following command (the target database must exist, since the dump above was made without --databases):
mysql -u USERNAME -pPASSWORD databaseName < SAVEDFILE.sql
But I feel you are looking for something like replication. In that case you need to set up a master-slave configuration, where data gets replicated to the slave. See this guide for replication:
http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
OK, so I found a command that takes a dump of one database and then inserts it into another DB, all in a single command:
mysqldump -u username -ppassword live_db | mysql -u username -ppassword backup_db
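Note that backup_db must already exist on the server before the pipe runs; mysqladmin can create it first (a sketch reusing the same credentials):
mysqladmin -u username -ppassword create backup_db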