MySQL slow.log user-specific logging

I use the slow query log on my MySQL server to catch bottlenecks in my scripts, but at the same time I use phpMyAdmin on this server. My scripts and phpMyAdmin have different MySQL user accounts, and now, when I analyze the slow.log file, I see a lot of noise from the phpMyAdmin queries. Is it possible to configure MySQL to log slow queries only from specific users?

If you are using MySQL 5.6, you can use the Performance Schema and look at the different statement summaries there.
There are summaries by account (someuser@somehost), by user name alone (someuser), and by host alone (somehost).
See the following tables:
performance_schema.events_statements_summary_by_account_by_event_name
performance_schema.events_statements_summary_by_user_by_event_name
performance_schema.events_statements_summary_by_host_by_event_name
performance_schema.events_statements_summary_global_by_event_name
http://dev.mysql.com/doc/refman/5.6/en/performance-schema.html
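For example, a quick way to see where a given account spends its time (a sketch: 'script_user' is a placeholder for your scripts' account, and the timer columns are in picoseconds):

SELECT user, event_name, count_star,
       sum_timer_wait / 1e12 AS total_seconds
FROM performance_schema.events_statements_summary_by_user_by_event_name
WHERE user = 'script_user'
ORDER BY sum_timer_wait DESC
LIMIT 10;

This sidesteps the slow log entirely: the phpMyAdmin account's statements stay in their own rows and never clutter your scripts' numbers.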

Related

How to confirm a MySQL-to-MariaDB database migration is OK?

I've recently migrated databases (from an Ubuntu server) to a MariaDB database (on a CentOS 7 server) using mysqldump and then importing with the mysql command. I have this set up in a phpMyAdmin environment and, although the migration appears to have been successful, I've noticed phpMyAdmin is reporting different disk space used and also showing slightly different row numbers for some of the tables.
Is there any way to determine if anything has been 'missed' or any way to confirm the data has all been copied across with the migration?
I've run a mysqlcheck on both servers to check db consistency but I don't think this really confirms the data is the same.
Cheers,
Tim
Probably not a problem.
InnoDB, when using SHOW TABLE STATUS, gives you only an approximation of the number of rows.
The dump and reload rebuilt the data and the indexes. This is very likely to lead to files of different sizes, even if the logical content is identical.
Do you have any clues about discrepancies other than the ones you mentioned?
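If you want to compare the actual data rather than the metadata, one sanity check is to checksum the tables on both servers and compare the output (a sketch: the table names are placeholders, and EXTENDED reads every row, so it can be slow on large tables):

-- run the same statements on both servers and compare the results
CHECKSUM TABLE mydb.customers, mydb.orders EXTENDED;
-- exact row counts, since SHOW TABLE STATUS only estimates them for InnoDB
SELECT COUNT(*) FROM mydb.customers;

Matching checksums and counts are a much stronger signal than the sizes phpMyAdmin reports.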

Efficiently restoring one database to another using AWS RDS

I have a MySQL database called latest, and another database called previous, both running on the same server. Both databases have identical content. Once per day, an application runs that updates latest. Later on, towards the end of the application's execution, a comparison is made between latest and previous for certain data. Differences that are found, if any, will trigger certain actions, e.g. notification emails being sent. After that, a copy of latest is dumped to a file using mysqldump and restored to previous. Both databases are now in sync again and the process repeats the following day.
I would like to migrate the database(s) to AWS RDS. I'm open to using Aurora, but the MySQL engine is fine too. Is there a simpler or more efficient way of performing the restore process so that both databases are in sync using RDS? A way that avoids having to use mysqldump and feeding the result into previous?
I understand that I could create a read replica of an instance running latest to act as previous, but I think that updates the read replica as the source DB is updated (well, asynchronously anyway) which would ruin the possibility of performing a comparison between the two later on.
I don't have any particular problem with using mysqldump for the restore process, but I'm just not sure if I'm missing a trick.
If you don't want a read replica, your mysqldump approach is good, but you could combine it with mysqlimport, as suggested in the MySQL docs:
Copying MySQL Databases to Another Machine
You can also use mysqldump and mysqlimport to transfer the database. For large tables, this is much faster than simply using mysqldump.
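A sketch of what the docs describe, using the database names from the question (note that --tab makes the server write the files to its own filesystem, so it will not work against an RDS endpoint; piping the dump straight into previous is the usual fallback there):

mysqldump --tab=/tmp/dump latest              # one .sql (schema) and one .txt (data) file per table
cat /tmp/dump/*.sql | mysql previous          # recreate the tables
mysqlimport previous /tmp/dump/*.txt          # bulk-load the data
# RDS-friendly alternative: skip the intermediate files entirely
mysqldump --single-transaction latest | mysql previous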

How to recover deleted rows from a MySQL database

This morning I was surprised to find that some data had been deleted from my MySQL database; I am unable to find that data in any table.
Please suggest how to recover data from a MySQL database. Does MySQL keep any log, and if so, where is it located?
Please suggest any query to get all the records back.
I am using MySQL Workbench 5.
If you have binary logs active on your server, then you can use mysqlbinlog.
You can use the following:
mysqlbinlog binary_log_file > query_log.sql
If you don't have binary logging enabled, then the data is probably lost.
The mysqlbinlog documentation has more information on how to convert the binary logs to SQL.
You can check if binary logging is enabled or not by running the following command:
SHOW VARIABLES LIKE 'log_bin';
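If it is enabled, you can narrow the output to the window around the deletion to find the offending statements (a sketch: the datetimes and log file name are placeholders):

mysqlbinlog --start-datetime="2014-01-15 08:00:00" \
            --stop-datetime="2014-01-15 10:00:00" \
            mysql-bin.000123 > window.sql

To actually recover, the usual approach is to restore your last backup and replay the binary log up to, but not including, the DELETE.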
For InnoDB tables, if binary logs are not enabled, there is a recovery tool provided by Percona which may help.
I created a script to automate the steps of the Percona tool; it recovers the deleted rows (if they exist) and produces SQL queries to load the data back into the database.
Please note:
The time between the deletion of the rows and stopping the database is crucial. If pages are reused, you can't recover the data.

Proxy in front of MySQL for redundancy removal

I'm trying to implement a proxy layer in front of a MySQL server that will catch redundant SQL queries and send them only once to the server. In other words, I have many clients (in PHP and Perl, on different web nodes) that talk to MySQL and very often repeat the same SELECT queries. When traffic goes up, MySQL very often goes down.
The question is: are you aware of any open source (or commercial) tool that can help? I tried MySQL Proxy, but it looks like it can't help.
Two suggestions:
MySQL Proxy
This is a front-end proxy from MySQL which does what you want, as far as I know.
vtocc
From the Vitess project, used in YouTube's MySQL environment, this does a similar thing. Query consolidation: the ability to reuse the results of an in-flight query for any subsequent requests that were received while the query was still executing.
You may want to look into HAProxy and how it works.
Here are two additional suggestions:
SUGGESTION #1: Set up a Cluster
If your data is all InnoDB, you should try Percona XtraDB Cluster and use HAProxy in conjunction with it. You can load balance across all servers in the cluster, including the write master.
SUGGESTION #2: Set up a Cluster via MySQL Replication to one or more DB Servers
Use HAProxy to load balance your reads across the read slaves (a configuration sketch follows below).
If you are on a budget and your data is relatively small, set up multiple MySQL instances on one server.
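To illustrate the HAProxy part, a minimal haproxy.cfg sketch that balances reads across two slaves (the addresses and the check user are placeholders; option mysql-check needs a dedicated, password-less user on each slave):

listen mysql-readonly
    bind 0.0.0.0:3307
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server slave1 10.0.0.11:3306 check
    server slave2 10.0.0.12:3306 check

Your applications then send read-only connections to port 3307 and HAProxy spreads them across the slaves.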

MySQL database replication

This is the scenario:
I have a MySQL server with a database, let's call it consolidateddb. This database is a consolidation of several tables from various databases.
I have another MySQL server with the original databases; these databases are production databases and are updated daily.
The company wants to copy each update/insert/delete on each table in the production databases to the corresponding tables in consolidateddb.
Would replication accomplish that? I know that replication is done database to database, but not from tables that belong to different databases into one target database.
I hope my explanation was clear. Thanks.
Edit: Would a recursive copy of all tables in each database to the single slave work? Or is it an ugly solution?
To clear up some things, let's name things according to current MySQL practice. A database is a database server. A schema is a database instance. A database server can have multiple schemas. Tables live within a schema.
Replication will help you if you want to duplicate schemas or tables as they are defined on the master/production server. Replication works by shipping a binary log of all the SQL statements run on the master to the slave, which dutifully runs them as if they had been run sequentially on itself.
You can choose to replicate all data, or you can choose some of the schemas or even just some of the tables.
You can not choose tables from different schemas and have them replicated into one schema, a table belongs to a specific schema.
By the way, an important note: a replication server cannot be a slave to multiple masters. You could mimic this using FEDERATED tables, but that would never copy the data to the consolidation server; it would just present the data from the different servers as if it were on one server.
The bonus of replication is that your consolidation server will more or less have updated data all the time.
You could take the binary logs from each of the masters, parse them with mysqlbinlog and then run that into the consolidated machine.
Something very approximately like:
mysqlbinlog [binary log files] | mysql -h consolidated
you'd need some kind of simple application (I suspect it could be done in bash if you needed) to wrap the logic.
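For illustration, a rough sketch of such a wrapper (the host names and log path are placeholders, and it glosses over the real bookkeeping of which log positions have already been applied):

#!/bin/sh
# replay each production master's binary logs into the consolidated server
for master in prod1 prod2; do
    ssh "$master" 'mysqlbinlog /var/log/mysql/mysql-bin.[0-9]*' |
        mysql -h consolidated
done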
Check out Replicating Different Databases to Different Slaves, see if it helps you in any way.
MySQL statement-based replication (basic replication) works by running on the slave the exact same statements that were run on the master. This includes information about which database the table was in.
I don't think MySQL provides any built-in way to move replication statements between databases (i.e. "insert into db1.table1 ..." -> "insert into db2.table1"). You may be able to trick it by manually altering the replication logs on the fly, but it wouldn't be out-of-the-box MySQL replication.
You might be able to pull it off with MySQL Proxy
You may want to check out the Maatkit toolkit. It's a free download and has a host of tools that specialize in things like archiving tables. I've used it on past projects to duplicate certain data to another DB, etc. You can do it based on time or any number of other factors.
To the best of my knowledge, you can set up replication (MySQL 4+) and, in the my.cnf file, have the slave either only process certain tables or have the master log only certain tables; either way will solve your problem.
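For example, slave-side filtering in my.cnf might look like this (a sketch: the schema and table names are placeholders):

[mysqld]
replicate-do-table = salesdb.orders
replicate-do-table = salesdb.customers
# or filter on the master instead, logging only one schema:
# binlog-do-db = salesdb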
Here is a guide to some techniques:
http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
I have had very few problems with the replication set-up; all my problems came when trying to sync DBs, especially after a reboot, etc.