I have access to a MySQL server with more than 2000 databases on it.
I want to scan all the databases and collect every email address stored in their tables.
Could you please give me a solution to extract the email addresses from all of the databases?
I already have root privileges and phpMyAdmin.
Thank you
If you have access to all tables (i.e. as root), you can dump all tables and grep for email addresses, like this:
mysqldump -u root -p --all-databases | egrep -i "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
The regular expression I used is taken from here:
http://www.regular-expressions.info/email.html
Edit:
The command above prints the whole rows containing an email address, regardless of the column.
If you have dedicated email columns, you can print only the addresses with a small modification:
mysqldump -u root -p --all-databases | perl -pe "s/,/\n/g; s/'//g;" | egrep -i "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b"
This also strips the surrounding quotes.
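If you want only the matched addresses themselves (one per line, de-duplicated) rather than whole rows, grep's -o flag extracts just the matches. A minimal sketch, shown here on a sample line standing in for the mysqldump output:

```shell
# Sample INSERT line standing in for real mysqldump output (illustrative data)
echo "INSERT INTO users VALUES (1,'alice@example.com','Alice');" \
  | grep -Eoi "\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b" \
  | sort -u
# In practice, replace the echo with:
#   mysqldump -u root -p --all-databases
```

With sort -u, each address appears once no matter how many rows contain it.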
Related
I have a gpg-encrypted dump of a MySQL database.
I'm trying to restore it, decrypting in a single command, but it doesn't work, I think because I need to enter two passwords (one for decrypting and one for accessing the database), and that seems to mess things up.
What I do is:
gpg --decrypt dump.sql.gpg | mysql -u user -p db_name
It asks me for the DB password and the gpg password "together", so I can't type both.
Is it possible to have the password-requirement separated?
Thanks
Some parameters are missing:
mute gpg output to the terminal: --quiet --no-tty
batch mode: --batch
--decrypt writes the decrypted output to stdout instead of to a file
remove the spaces between -u and the username and between -p and the password
provide the database you want the data restored into (it may already be named in the dump, though)
Something like:
gpg --quiet --no-tty --batch --decrypt dump.sql.gpg | mysql -uUSER -pPASSWORD [database]
Hope it helps!
It does dump the correct tables, but they come at the end of a bunch of undesired ones.
The dump includes a bunch of "system" tables such as:
| column_stats |
| columns_priv |
| func |
...
I am using the root user to do the dump, like this:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user]-p [password] --databases "my-db-name" > dump.sql
I haven't been able to find any related info, I've mainly used mysqldump and column_stats as keywords.
Finally I realised what's wrong here. Your -p parameter followed by a blank space implies that you will type the password at the "Enter password:" prompt, so your [password] is interpreted as a database name. Since there is no database named like your password, everything is dumped. From the documentation:
--password[=password], -p[password]
The password to use when connecting to the server. If you use the
short option form (-p), you cannot have a space between the option and the
password. If you omit the password value following
the --password or -p option on the command line, mysqldump prompts for one.
So, your command should be:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user] -ppassword "my-db-name" > dump.sql
(notice that there is no blank space between -p and your password),
or like this:
"C:\wamp\bin\mysql\mysql5.6.12\bin\mysqldump.exe" -u [user] -p "my-db-name" > dump.sql
(here you input password from keyboard after pressing Enter).
The tables you mention all belong to the mysql database, which is a system database. It is perfectly acceptable to use mysqldump on that database, but an incomplete backup of it might cause authentication/authorization/functional issues if you later use that dump to restore the database.
These tables should not appear inside a regular database. If they do exist there, it almost certainly indicates some prior mistake, and you should simply delete them.
If you simply want to perform the dump and don't want to investigate the root problem, you can also tell mysqldump to ignore tables that exist but that you would like to exclude from the dump file. The option syntax is --ignore-table=db_name.tbl_name; to exclude multiple tables, repeat the argument several times.
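Typing the repeated --ignore-table arguments by hand gets tedious; a small shell loop can generate them. A sketch using the table names from the question (the database name and the final command are illustrative):

```shell
# Generate one --ignore-table flag per unwanted table (list is illustrative)
ignore_args=""
for t in column_stats columns_priv func; do
  ignore_args="$ignore_args --ignore-table=my-db-name.$t"
done
# Echoed here for inspection; drop the echo to actually run the dump
echo mysqldump -u root -p --databases my-db-name $ignore_args
```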
I have a ton of users on many different MySQL servers of this type:
myuser@localhost
myotheruser@localhost
Now I want to create new users that should have the same passwords as the users above, and access to the same databases, but from a different host, like this:
myuser@127.0.0.1
myotheruser@127.0.0.1
Does anyone have a quick and easy way to do this?
I have tried a different approach to this problem, by running these two commands:
mysqldump --skip-extended-insert mysql user | grep 'localhost' | egrep '^INSERT INTO' | sed 's/localhost/127.0.0.1/g' > add-local-ip-user.sql
mysqldump --skip-extended-insert mysql db | grep 'localhost' | egrep '^INSERT INTO' | sed 's/localhost/127.0.0.1/g' > add-local-ip-db.sql
Then, I just output the content:
cat add-local-ip*
then I open mysql:
mysql mysql
and finally just paste the output from above and run FLUSH PRIVILEGES. Does anyone have any objection to doing it this way? E.g. is this a stupid solution?
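One way to gain confidence before pasting anything into the mysql prompt is to dry-run the grep/sed stage on a sample line (the INSERT text below is illustrative, not a real grants row):

```shell
# Dry-run of the transformation on a fake mysql.user row;
# only INSERT lines mentioning 'localhost' survive, with the host rewritten
echo "INSERT INTO \`user\` VALUES ('localhost','myuser','*HASH');" \
  | grep '^INSERT INTO' \
  | grep 'localhost' \
  | sed 's/localhost/127.0.0.1/g'
# The same INSERT is printed with 'localhost' replaced by '127.0.0.1'
```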
I want to export a list of tables starting with a certain prefix using a wild card.
Ideally I would do:
mysqldump -uroot -p mydb table_prefix_* > backup.sql
Obviously that doesn’t work. But what is the correct way to do this?
If the tables share a prefix, create a user with SELECT and LOCK TABLES permissions on them with GRANT, like this:
GRANT SELECT, LOCK TABLES ON `table\_prefix\_%` . * TO 'backup-user'@'localhost';
Then, instead of running mysqldump as root, run it as backup-user with the option --all-databases.
Since backup-user only has SELECT and LOCK TABLES permissions on these tables, they are the only ones that will end up in the dump.
It's also safer to use a dedicated user like this rather than the root account for everything.
Ok. Here is a solution using a Bash script (I've tested it in Linux).
Create a text file named "dumpTablesWithPrefix.script" and put this in it:
#!/bin/bash
# Args: $1=host $2=user $3=password $4=database $5=table prefix
tableList=$(mysql -h $1 -u $2 -p$3 $4 -e"show tables" | grep "^$5.*" | sed ':a;N;$!ba;s/\n/ /g')
mysqldump -h $1 -u $2 -p$3 $4 $tableList
Save it, and make it executable:
$ chmod +x dumpTablesWithPrefix.script
Now you can run it:
$ ./dumpTablesWithPrefix.script host user pwd database tblPrefix > output.sql
Now let me explain each piece:
The command-line arguments
Bash scripts store command-line arguments in the variables $1, $2, $3 and so on. So the first thing I need to tell you is the order of the arguments you need:
The host (if the mysql server is in your machine, write localhost)
Your user
Your password
The name of your database
The prefix of the tables you want to export
The first line of the script
I'll split this line in three pieces:
mysql -h $1 -u $2 -p$3 $4 -e"show tables"
This piece retrieves the full table list of your database, one table per line.
| grep "^$5.*"
This piece filters the table list, using the prefix you specified.
| sed ':a;N;$!ba;s/\n/ /g'
This final piece (a gem I found here: How can I replace a newline (\n) using sed? ) replaces all the "new line" characters with spaces.
These three pieces, strung together, produce the filtered table list and store it in the tableList variable.
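The sed idiom can be checked in isolation with a couple of sample table names:

```shell
# Two fake table names, one per line, joined into a single space-separated line
printf 'table_prefix_a\ntable_prefix_b\n' | sed ':a;N;$!ba;s/\n/ /g'
# prints: table_prefix_a table_prefix_b
```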
The second line of the script
mysqldump -h $1 -u $2 -p$3 $4 $tableList
This line is simply the MySQL Dump command that does what you need.
Hope this helps
I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysqldump command. All the examples on the internet suggest doing something like this.
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump into another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found on a 200 MB database with 10,000+ records in many of the tables. I used the Linux 'time' command to get actual timings.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing the dump remotely; then you can compress the .sql file and transfer it manually.
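For the compress-and-transfer step mentioned above, piping through gzip avoids writing a large intermediate .sql file. A sketch, shown on sample SQL text since it needs no live server (swap the echo for the real mysqldump command):

```shell
# Compress "dump" output on the fly; the echo stands in for mysqldump
echo "CREATE TABLE t (id INT);" | gzip -c > /tmp/db-backups.sql.gz
# Verify the round trip before transferring the file
gunzip -c /tmp/db-backups.sql.gz
```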