MySQL Optimize Table script

I am in the process of upgrading our AWS Aurora RDS cluster from MySQL 5.7 to MySQL 8.0. During the upgrade of the cluster, the process failed, and AWS recommended we run OPTIMIZE TABLE on a number of DB tables before proceeding again. I was able to generate a list of tables that need to be optimized by running a script from AWS, but there are over 1,200 tables that need it. I am not very experienced with SQL scripting, but I have used PowerShell and Bash scripting many times before.
What is the best way to script this out, so that I can provide a list of tables in a text file or a DB table and then run OPTIMIZE TABLE on each row or line?
I have done similar scripts in Bash or PowerShell as foreach loops, with the tables listed in a text file, but I am not sure how to do a similar process in SQL. Any help is appreciated.
Thanks!

Output a list of tables formatted as OPTIMIZE TABLE statements:
mysql -B -N -e "select concat('optimize table ',table_schema,'.',table_name,';')
from information_schema.tables where table_type='BASE TABLE' AND ..." > myscript.sql
Add any conditions you want where I put ... to filter the tables you want to list, for example: AND TABLE_SCHEMA='myschema'.
Run that result as an SQL script:
mysql -e "source myscript.sql"
(I omitted options such as --host, --user, and --password, or any other options you may need to connect to your instance. I usually put these into ~/.my.cnf anyway.)
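Alternatively, since you mentioned you already have the table list in a text file, a plain Bash loop works too. This is only a minimal sketch: the file name tables.txt (one schema.table per line) is an assumption, and connection options are again left to ~/.my.cnf:
# tables.txt is assumed to hold one schema.table name per line
while read -r tbl; do
  mysql -e "OPTIMIZE TABLE ${tbl};"
done < tables.txt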

Related

CentOS MySQL Batch run SQL

This may have been answered elsewhere, but I can't seem to locate it, so please accept my sincere apologies if this is a duplicate question.
Complete newbie to CentOS command line operation of MySQL.
I'm trying to migrate 200,000,000+ records from an MSSQL database to MySQL, and the Workbench migration tool fails. I've given up trying to sort that out, so I've written a migration package in VB.Net to get the other 700-800 tables migrated directly, and those work great, but I have a few very large tables with around 15,000,000 records or more in each, and my migration method would take several days to complete!
So - brainwave... I have the migration program create "insert into..." SQL statements in a single SQL file, FTP it to my CentOS box, and execute it locally on the CentOS machine.
Works fine, using:
mysql --user=user --password=password
to log in to MySQL, then executing the script as
source mysqlscript.sql
...but I will have a lot of scripts, such as
script1.sql
script2.sql
script3.sql
...
script27.sql
Is there a way within MySQL to batch process all of these SQL scripts, so I can just leave it running without having to set each of the 27 scripts off manually?
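One way to do this (just a sketch, assuming the scripts are independent of each other so lexical glob order is acceptable, and that the target database is called mydatabase) is to drive mysql from the OS shell instead of from inside the client:
# Runs each script in turn; glob order is script1, script10, script11, ..., script2, ...
for f in script*.sql; do
  mysql --user=user --password=password mydatabase < "$f"
done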

MySQL Optimize Statement

I am trying to use the OPTIMIZE statement from the shell of the MySQL system and have tried using
mysqlcheck -o --all-databases;
mysqlcheck -o <databasename>;
However, it does not work and shows me an error.
Is there any other command that could make it work? I am running the script in the XAMPP shell for MySQL and would just like to check the optimization for the table. I know there will be errors; however, I would like to view them.
mysqlcheck is a command to give to the OS via its "shell". It is not a command to use inside the "command-line tool" mysql; there, it would be OPTIMIZE TABLE, as mentioned in a comment.
But... why do you think you want to optimize all the tables? While the name "optimize" is tempting, it is rarely worth bothering with.
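To make the distinction concrete, here is a sketch of both forms (the database name, table name, and credentials are placeholders):
# From the OS shell (not inside the mysql client):
mysqlcheck -o -u root -p mydatabase
# From inside the mysql command-line client:
OPTIMIZE TABLE mydatabase.mytable;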

Backup mysql table data selectively from remote server into SQL file

What I need to do is make an SQL backup file ("generate insert statements") of certain table data. I want to do this monthly.
Sample select statement:
SELECT * FROM table WHERE date >= "01-01-2012" AND date < "01-02-2012"
The catch is that I need to back up from a remote SQL server. It seems I can't use mysqldump then (?)
So, shall I just write a PHP script to generate those statements?
It seems like a bit of a barbaric solution ;)
Sorry, this matter seems pretty basic, but I'm quite confused by it already.
BTW, the tables are partially InnoDB and partially MyISAM.
You ask a range of questions...
If the remote machine is unix, write a little bash script like this:
#!/bin/bash
# Dump only the rows in the date range and compress the output.
/usr/bin/mysqldump -h db_server -u user_name -ppassword --where='date >= "2012-01-01" AND date < "2012-02-01"' database_name table_name | gzip > output.sql.gz
Things to note:
The password is hardcoded in the script. Use a less-privileged user account.
Your date range is also hardcoded, but that's easy enough to make generic.
There's no space between '-p' and the password.
For InnoDB, use --single-transaction. For MyISAM, I guess you're stuck with --lock-tables.
Then run this script in a cron job.
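For example, a crontab entry like this (the path and schedule are placeholders) would run it at 02:00 on the first day of each month:
# m h dom mon dow  command
0 2 1 * * /path/to/backup.sh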
If you have trouble connecting, check your firewall and mysql user permissions.
And my $0.02... writing PHP scripts that run shell commands is really lazy. Write shell scripts for those. You'll sleep better.
Good luck.

mysql client "use database name" taking too long to execute

My database contains a large number of tables (more than 300). When I execute the "use database_name" command in the mysql command-line client, it takes a very long time to execute. Is there any way to make it execute faster?
You can pass the -A argument to the mysql command-line tool to make it not load database metadata when using a database.
That being said, what you're describing is usually a sign that either you have too many tables and/or columns, or your database server is overloaded. Often, it's both. Either one should be fixed.
I know this is a very old post, but I thought I'd write about it, since I also had the same problem in the past and found out the following.
You generally hit this problem when using the CLI to connect to MySQL remotely; it usually doesn't occur on localhost. When you issue the "use" command, mysql reads table metadata in order to load it, and doing that over a remote connection is what can slow down selecting the DB. You could skip DNS resolution, but I don't think that will solve the problem completely.
Hence the "-A" flag has to be passed to the mysql command when connecting remotely, which skips loading metadata when selecting a DB with the "USE" command.
For example:
mysql -A -h HOST -u USER -p

Can BCP copy data directly from table to table?

I've got a situation where I need to copy several tables from one SQL Server DB to a separate SQL Server DB. The databases are both on the same instance. The tables I'm copying contain a minimum of 4.5 million rows and are upwards of 40 GB in size.
I've used BCP before but am not hugely familiar with it, and I have been unable to find any documentation about whether or not you can use BCP to copy directly from table to table without writing to a file in between.
Is this possible? If so, how?
EDIT: The reason we're not using a straightforward INSERT is that we have limited space on the log drive of the server, which disappears almost instantly when attempting the INSERT. We did try it, but the query quickly slowed to a snail's pace as the log drive filled up.
From my answer at Table-level backup:
I am using bcp.exe to achieve table-level backups.
To export:
bcp "select * from [MyDatabase].dbo.Customer" queryout "Customer.bcp" -N -S localhost -T -E
To import:
bcp [MyDatabase].dbo.Customer in "Customer.bcp" -N -S localhost -T -E -b 10000
As you can see, you can export based on any query, so you can even do incremental backups with this.
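For instance, an incremental export might look like this (the ModifiedDate column and the cutoff date are hypothetical, just to illustrate exporting from an arbitrary query):
bcp "select * from [MyDatabase].dbo.Customer where ModifiedDate >= '2012-01-01'" queryout "Customer_incremental.bcp" -N -S localhost -T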
BCP is for dumping to / reading from a file. Use DTS/SSIS to copy from one DB to another.
Here are the BCP docs at MSDN
The SQL Import/Export Wizard will do the job... just connect twice to the same server (source and destination databases) and copy one table onto the other (empty and indexed); you might want to instruct it to ignore the auto-increment identity key field if one exists. This approach works for me with tables of over 1M records.