Sqoop export from HDFS to MySQL db - mysql

I'm trying to export data from HDFS to a MySQL database. I found various solutions, but none of them worked; I even tried removing the WINDOWS-1251 characters from the file.
As a brief summary: I'm using VirtualBox with the Hortonworks sandbox image for these operations.
My Hive table in the default database:
CREATE EXTERNAL TABLE `airqualitydata`(
`sensor_id` VARCHAR(100),
`sensor_type` VARCHAR(100),
`location` VARCHAR(100),
`lat` VARCHAR(100),
`lon` VARCHAR(100),
`timestamp` timestamp,
`p1` VARCHAR(100),
`durp1` VARCHAR(100),
`ratiop1` VARCHAR(100),
`p2` VARCHAR(100),
`durp2` VARCHAR(100),
`ratiop2` VARCHAR(100))
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\073'
LOCATION 'hdfs://sandbox-hdp.hortonworks.com:8020/hadoop/airqualitydata'
TBLPROPERTIES ("skip.header.line.count"="1");
The file sits in the /hadoop/airqualitydata HDFS directory (WINDOWS-1251 characters removed, just to be sure).
Note that this data shows up fine when querying SELECT * FROM airqualitydata in Hive.
sensor_id;sensor_type;location;lat;lon;timestamp;P1;durP1;ratioP1;P2;durP2;ratioP2
9710;SDS011;4894;43.226;27.934;2021-09-09T00:00:12;70;;;20;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:02:41;83;;;0.93;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:05:14;0.80;;;0.73;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:07:42;0.50;;;0.50;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:10:10;57;;;0.80;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:12:39;0.40;;;0.40;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:15:07;0.70;;;0.70;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:17:35;2;;;0.47;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:20:04;90;;;0.63;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:22:34;0.57;;;0.57;;
9710;SDS011;4894;43.226;27.934;2021-09-09T00:25:01;0.73;;;0.60;;
MySQL DB & table:
CREATE DATABASE airquality CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE TABLE `airqualitydata`(
`sensor_id` VARCHAR(100),
`sensor_type` VARCHAR(100),
`location` VARCHAR(100),
`lat` VARCHAR(100),
`lon` VARCHAR(100),
`timestamp` timestamp,
`p1` VARCHAR(100),
`durp1` VARCHAR(100),
`ratiop1` VARCHAR(100),
`p2` VARCHAR(100),
`durp2` VARCHAR(100),
`ratiop2` VARCHAR(100)
);
Sqoop CLI call:
sqoop export --connect "jdbc:mysql://localhost:3306/airquality?useUnicode=true&characterEncoding=WINDOWS-1251" --username root --password hortonworks1 --export-dir hdfs://sandbox-hdp.hortonworks.com:8020/hadoop/airqualitydata --table airqualitydata --input-fields-terminated-by "\073" --input-lines-terminated-by "\n" -m 1
I tried removing ?useUnicode=true&characterEncoding=WINDOWS-1251 from the connection string, with no success.
I also cannot access the log from the URL given in the terminal, so all I got for the failure is this:
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
21/09/12 04:04:40 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
21/09/12 04:04:40 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
21/09/12 04:04:40 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/09/12 04:04:40 INFO tool.CodeGenTool: Beginning code generation
Sun Sep 12 04:04:40 UTC 2021 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
21/09/12 04:04:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `airqualitydata` AS t LIMIT 1
21/09/12 04:04:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `airqualitydata` AS t LIMIT 1
21/09/12 04:04:40 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.5.0-292/hadoop-mapreduce
Note: /tmp/sqoop-raj_ops/compile/41fba9933b913b974b70403656a13287/airqualitydata.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
21/09/12 04:04:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-raj_ops/compile/41fba9933b913b974b70403656a13287/airqualitydata.jar
21/09/12 04:04:42 INFO mapreduce.ExportJobBase: Beginning export of airqualitydata
21/09/12 04:04:43 INFO client.RMProxy: Connecting to ResourceManager at sandbox-hdp.hortonworks.com/172.18.0.2:8032
21/09/12 04:04:43 INFO client.AHSProxy: Connecting to Application History server at sandbox-hdp.hortonworks.com/172.18.0.2:10200
21/09/12 04:04:50 INFO input.FileInputFormat: Total input paths to process : 1
21/09/12 04:04:50 INFO input.FileInputFormat: Total input paths to process : 1
21/09/12 04:04:50 INFO mapreduce.JobSubmitter: number of splits:1
21/09/12 04:04:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1631399426919_0028
21/09/12 04:04:51 INFO impl.YarnClientImpl: Submitted application application_1631399426919_0028
21/09/12 04:04:51 INFO mapreduce.Job: The url to track the job: http://sandbox-hdp.hortonworks.com:8088/proxy/application_1631399426919_0028/
21/09/12 04:04:51 INFO mapreduce.Job: Running job: job_1631399426919_0028
21/09/12 04:04:59 INFO mapreduce.Job: Job job_1631399426919_0028 running in uber mode : false
21/09/12 04:04:59 INFO mapreduce.Job: map 0% reduce 0%
21/09/12 04:05:03 INFO mapreduce.Job: map 100% reduce 0%
21/09/12 04:05:04 INFO mapreduce.Job: Job job_1631399426919_0028 failed with state FAILED due to: Task failed task_1631399426919_0028_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
21/09/12 04:05:04 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=1
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2840
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=2840
Total vcore-milliseconds taken by all map tasks=2840
Total megabyte-milliseconds taken by all map tasks=710000
21/09/12 04:05:04 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
21/09/12 04:05:04 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 21.2627 seconds (0 bytes/sec)
21/09/12 04:05:04 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
21/09/12 04:05:04 INFO mapreduce.ExportJobBase: Exported 0 records.
21/09/12 04:05:04 ERROR mapreduce.ExportJobBase: Export job failed!
21/09/12 04:05:04 ERROR tool.ExportTool: Error during export: Export job failed!
Any directions would be helpful, thanks!
EDIT #1:
As per the comments above, using:
sqoop export --connect jdbc:mysql://localhost:3306/airquality --table airqualitydata --username root --password hortonworks1 --hcatalog-database default --hcatalog-table airqualitydata --verbose
or, generically (for anyone reproducing this):
sqoop export --connect jdbc:mysql://<host:port>/<mysql db> --table <mysql table> --username <mysql_user> --password <mysqlpass> --hcatalog-database <hive_db> --hcatalog-table <hive_table> --verbose
I got it to put the data into MySQL. However, it is inserting the header row as well. Also, when run twice (I believed it would overwrite the data), the data ends up in the table twice.
+-----------+-------------+----------+--------+--------+---------------------+------+-------+---------+------+-------+---------+
| sensor_id | sensor_type | location | lat | lon | timestamp | p1 | durp1 | ratiop1 | p2 | durp2 | ratiop2 |
+-----------+-------------+----------+--------+--------+---------------------+------+-------+---------+------+-------+---------+
| sensor_id | sensor_type | location | lat | lon | 2021-09-12 05:55:49 | P1 | durP1 | ratioP1 | P2 | durP2 | ratioP2 |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 70 | | | 20 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 83 | | | 0.93 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.80 | | | 0.73 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.50 | | | 0.50 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 57 | | | 0.80 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.40 | | | 0.40 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.70 | | | 0.70 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 2 | | | 0.47 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 90 | | | 0.63 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.57 | | | 0.57 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:55:49 | 0.73 | | | 0.60 | | |
| sensor_id | sensor_type | location | lat | lon | 2021-09-12 05:58:02 | P1 | durP1 | ratioP1 | P2 | durP2 | ratioP2 |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 70 | | | 20 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 83 | | | 0.93 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.80 | | | 0.73 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.50 | | | 0.50 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 57 | | | 0.80 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.40 | | | 0.40 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.70 | | | 0.70 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 2 | | | 0.47 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 90 | | | 0.63 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.57 | | | 0.57 | | |
| 9710 | SDS011 | 4894 | 43.226 | 27.934 | 2021-09-12 05:58:02 | 0.73 | | | 0.60 | | |
+-----------+-------------+----------+--------+--------+---------------------+------+-------+---------+------+-------+---------+
The data in Hive is okay (no header row in there). What might cause this?
I also got an exception, though the job completed overall; is this important?
21/09/12 05:57:41 INFO mapreduce.Job: Running job: job_1631399426919_0035
21/09/12 05:57:55 INFO mapreduce.Job: Job job_1631399426919_0035 running in uber mode : false
21/09/12 05:57:55 INFO mapreduce.Job: map 0% reduce 0%
21/09/12 05:58:03 INFO mapreduce.Job: map 100% reduce 0%
21/09/12 05:58:05 INFO mapreduce.Job: Job job_1631399426919_0035 completed successfully
21/09/12 05:58:06 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=345759
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2597
HDFS: Number of bytes written=0
HDFS: Number of read operations=2
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=4966
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=4966
Total vcore-milliseconds taken by all map tasks=4966
Total megabyte-milliseconds taken by all map tasks=1241500
Map-Reduce Framework
Map input records=12
Map output records=12
Input split bytes=1800
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=211
CPU time spent (ms)=3490
Physical memory (bytes) snapshot=217477120
Virtual memory (bytes) snapshot=1972985856
Total committed heap usage (bytes)=51380224
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
21/09/12 05:58:06 INFO mapreduce.ExportJobBase: Transferred 2.5361 KB in 62.3328 seconds (41.6635 bytes/sec)
21/09/12 05:58:06 INFO mapreduce.ExportJobBase: Exported 12 records.
21/09/12 05:58:06 INFO mapreduce.ExportJobBase: Publishing HCatalog export job data to Listeners
21/09/12 05:58:06 WARN mapreduce.PublishJobData: Unable to publish export data to publisher org.apache.atlas.sqoop.hook.SqoopHook
java.lang.ClassNotFoundException: org.apache.atlas.sqoop.hook.SqoopHook
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:46)
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:457)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
21/09/12 05:58:06 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@4232c52b

Solution to your first problem:
use --hcatalog-database mydb --hcatalog-table airquality and remove the --export-dir parameter.
Sqoop export cannot replace data. Please issue a sqoop eval statement to truncate the main table before loading it:
sqoop eval --connect conn_parameters --username xx --password yy --query "truncate table mytab;"
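For this exact setup, that call would look something like the following (a sketch, reusing the connection details from the question):
sqoop eval --connect jdbc:mysql://localhost:3306/airquality --username root --password hortonworks1 --query "TRUNCATE TABLE airqualitydata"
Running this before each export keeps the table from accumulating duplicate rows.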
You can also use an update statement to update the table; see the Sqoop user guide: https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html
Now, for your header issue: I think the original table may have a header row. I am not sure about the data in the original table. Check whether the source table is properly defined in Hive.
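One possible explanation, in case the Hive definition turns out to be fine: skip.header.line.count is applied by Hive at query time, and HCatalog-based readers (which Sqoop uses here) may not honor it, so the header line would come straight from the underlying file even though Hive queries hide it. A hedged workaround is to export from a staging table whose files physically contain no header, for example (a sketch; airqualitydata_noheader is a hypothetical name):
CREATE TABLE airqualitydata_noheader AS SELECT * FROM airqualitydata;
Since the CTAS runs through Hive, the header row is skipped when the staging table is written, and you can point --hcatalog-table at airqualitydata_noheader instead.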

Related

Etcd: how to check that each node can see each other

I have 3 etcd nodes on VMs (not k8s).
We had a problem where the nodes were alive but couldn't see each other, with a "connection timeout" error during the health check. Yet every single node had "alive" status, and Zabbix with the "Etcd by HTTP" template didn't generate any alerts.
Is there any way to check node visibility and to monitor it using Zabbix?
Depending on the version you run, here's an example of how to do this with 3.5.2.
Command
ETCDCTL_API=3 ./bin/etcdctl endpoint status --cluster -w table --endpoints="member1.etcd:2384,member2.etcd:2384,member3.etcd:2384"
Output:
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| http://member1.etcd:2384 | 17ef476d9d7fec5f | 3.5.2 | 1.5 MB | false | false | 7 | 20033 | 20033 | |
| http://member2.etcd:2384 | 31e0ca30ec3c9d94 | 3.5.2 | 1.5 MB | false | false | 7 | 20033 | 20033 | |
| http://member3.etcd:2384 | 721948abbb0522bd | 3.5.2 | 1.5 MB | false | false | 7 | 20033 | 20033 | |
+--------------------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
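For the visibility question specifically, endpoint health makes an actual request to each member, so a timeout between nodes surfaces as an unhealthy endpoint; Zabbix can wrap this in an external check. A minimal sketch, assuming the same members as above:
ETCDCTL_API=3 ./bin/etcdctl endpoint health -w table --endpoints="member1.etcd:2384,member2.etcd:2384,member3.etcd:2384"
A non-zero exit code (or an unhealthy row in the table) is what you would alert on.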

mysqlbinlog doesn't work in Google Cloud SQL MySql

I have a MySQL instance on Google Cloud SQL and have enabled binary logs. I can list the log files, as shown below.
mysql> SHOW BINARY LOGS;
+------------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+------------------+-----------+-----------+
| mysql-bin.000001 | 1375216 | No |
| mysql-bin.000002 | 7336055 | No |
+------------------+-----------+-----------+
I am also able to view the events in the log file.
mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000001' limit 5;
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysql-bin.000001 | 4 | Format_desc | 883641454 | 124 | Server ver: 8.0.18-google, Binlog ver: 4 |
| mysql-bin.000001 | 124 | Previous_gtids | 883641454 | 155 | |
| mysql-bin.000001 | 155 | Gtid | 883641454 | 234 | SET @@SESSION.GTID_NEXT= 'd635d876-06de-11eb-b2ab-42010a9d0043:1' |
| mysql-bin.000001 | 234 | Query | 883641454 | 309 | BEGIN |
| mysql-bin.000001 | 309 | Table_map | 883641454 | 367 | table_id: 81 (mysql.heartbeat) |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
But it gives me an error when I use the mysqlbinlog command.
mysql> mysqlbinlog mysqld-bin.000001;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqlbinlog mysqld-bin.000001' at line 1
I don't understand what is going wrong here. Please help.
mysqlbinlog is a command-line utility, not a SQL statement, which is why running it inside the mysql client fails with the 1064 syntax error; it has to be run from a shell.
On Cloud Shell, install:
sudo apt-get install mysql-server
Install the Cloud SQL Proxy client on your local machine.
Then run:
mysqlbinlog -R --protocol TCP --host localhost --user root --password --port 3306 mysqld-bin.000001
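For completeness, starting the proxy on your local machine typically looks something like this (a sketch; the instance connection name is a placeholder you take from the Cloud SQL console):
./cloud_sql_proxy -instances=<PROJECT:REGION:INSTANCE>=tcp:3306
With the proxy listening on localhost:3306, the mysqlbinlog call above reads the remote binlog (-R) over TCP through the tunnel.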

xtrabackup does not write gtid information in xtrabackup_binlog_info

Software versions:
xtrabackup 8.0.12
percona-xtradb-cluster-server 8.0.18-9
I am running xtrabackup with these options:
--defaults-file=/etc/mysql/my.cnf --backup --user=backup
--password=**** --parallel=4 --no-timestamp --target-dir=/my-backup-dir
Some of server options:
binlog_format | ROW
gtid_mode | ON_PERMISSIVE
enforce_gtid_consistency | ON
The xtrabackup_binlog_info file has only the binlog file name and position:
mysql-bin.000159 251
No GTID, so I cannot set up GTID-based replication when restoring a slave from this backup.
What should I do to make xtrabackup include this information?
UPDATE:
Check if GTIDs are enabled:
show global variables like '%gtid%';
| Variable_name | Value |
| binlog_gtid_simple_recovery | ON |
| enforce_gtid_consistency | ON |
| gtid_executed | c0e3de06-a2a6-11ea-913c-c7b046cf5782:1-3399594,
de211648-2642-ee18-628d-dc48283b005c:1-3697598:3877279-10141440 |
| gtid_executed_compression_period | 1000 |
| gtid_mode | ON_PERMISSIVE |
| gtid_owned | |
| gtid_purged | c0e3de06-a2a6-11ea-913c-c7b046cf5782:1-2661056,
de211648-2642-ee18-628d-dc48283b005c:1-3697598:3877279-10141440 |
| session_track_gtids | OFF |
show master status;
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
| mysql-bin.000166 | 15285372 | | | c0e3de06-a2a6-11ea-913c-c7b046cf5782:1-3358798,
de211648-2642-ee18-628d-dc48283b005c:1-3697598:3877279-10141440 |
(pavel.selivanov@localhost) [kassa_prod]> show binlog events in 'mysql-bin.000166' limit 7;
| Log_name | Pos | Event_type | Server_id | End_log_pos | Info
| mysql-bin.000166 | 4 | Format_desc | 4315 | 124 | Server ver: 8.0.18-9, Binlog ver: 4 |
| mysql-bin.000166 | 124 | Previous_gtids | 4315 | 251 | c0e3de06-a2a6-11ea-913c-c7b046cf5782:1-3347159,
de211648-2642-ee18-628d-dc48283b005c:3697598:3877279-10141440 |
| mysql-bin.000166 | 251 | Gtid | 4315 | 330 | SET @@SESSION.GTID_NEXT= 'c0e3de06-a2a6-11ea-913c-c7b046cf5782:3347160' |
| mysql-bin.000166 | 330 | Query | 4315 | 411 | BEGIN |
| mysql-bin.000166 | 411 | Table_map | 4315 | 499 | table_id: 150 (db.table) |
| mysql-bin.000166 | 499 | Update_rows | 4315 | 3475 | table_id: 150 flags: STMT_END_F |
| mysql-bin.000166 | 3475 | Xid | 4315 | 3506 | COMMIT /* xid=9611331 */
From the manual:
ON_PERMISSIVE: New transactions are GTID transactions. Replicated transactions can be either anonymous or GTID transactions.
Check with show global variables like 'gt%'; or in your binary logs whether you actually have GTID transactions.
You don't actually have anything special to do to make xtrabackup include GTIDs in the xtrabackup_binlog_info file: How to create a new (or repair a broken) GTID-based slave
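If the checks show GTIDs are present but gtid_mode is still ON_PERMISSIVE, one thing worth trying (a sketch, and only a guess at the cause: xtrabackup may record GTIDs only when GTID mode is fully on) is to finish the transition, which is a permitted one-step change from ON_PERMISSIVE:
SET GLOBAL gtid_mode = ON;
Remember to persist the setting in my.cnf as well, since SET GLOBAL does not survive a restart.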

RDS High CPU utilization

I am facing a high CPU utilization issue. Can too many concurrent CREATE TEMPORARY TABLE statements cause high CPU utilization?
Is there any query through which we can capture the queries causing high CPU utilization?
Variables we set:
tmp_table_size = 1G
max_heap_table_size = 1G
innodb_buffer_pool_size = 145 G
innodb_buffer_pool_instance = 8
innodb_page_cleaner = 8
Status variables:
mysql> show global status like '%tmp%';
+-------------------------+-----------+
| Variable_name | Value |
+-------------------------+-----------+
| Created_tmp_disk_tables | 60844516 |
| Created_tmp_files | 135751 |
| Created_tmp_tables | 107643364 |
+-------------------------+-----------+
mysql> show global status like '%innodb_buffer%';
+---------------------------------------+--------------------------------------------------+
| Variable_name | Value |
+---------------------------------------+--------------------------------------------------+
| Innodb_buffer_pool_dump_status | Dumping of buffer pool not started |
| Innodb_buffer_pool_load_status | Buffer pool(s) load completed at 170917 19:11:45 |
| Innodb_buffer_pool_resize_status | |
| Innodb_buffer_pool_pages_data | 8935464 |
| Innodb_buffer_pool_bytes_data | 146398642176 |
| Innodb_buffer_pool_pages_dirty | 18824 |
| Innodb_buffer_pool_bytes_dirty | 308412416 |
| Innodb_buffer_pool_pages_flushed | 122454921 |
| Innodb_buffer_pool_pages_free | 188279 |
| Innodb_buffer_pool_pages_misc | 377817 |
| Innodb_buffer_pool_pages_total | 9501560 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 585245 |
| Innodb_buffer_pool_read_ahead_evicted | 14383 |
| Innodb_buffer_pool_read_requests | 304878851665 |
| Innodb_buffer_pool_reads | 10537188 |
| Innodb_buffer_pool_wait_free | 0 |
| Innodb_buffer_pool_write_requests | 14749510186 |
+---------------------------------------+--------------------------------------------------+
Step 1 -
show processlist
Check whether any process is locking a table; if yes, change that table to MyISAM.
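A hedged refinement of that step: rather than eyeballing the process list, filter it for long-running statements (a sketch; the 10-second threshold is arbitrary):
SELECT id, user, host, db, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 10
ORDER BY time DESC;
The info column shows the statement text, which is usually enough to spot the CPU hog.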
Step 2 -
Check RAM and your DB size.
Step 3 -
EXPLAIN complex queries and check whether a filesort happens or a large number of rows gets scanned; remove it either by flattening the table or by keeping to no more than 4 subqueries.
Step 4 -
Use joins efficiently.
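To the second question (capturing the expensive queries), one commonly used option is the performance_schema digest summary; a minimal sketch, assuming performance_schema is enabled (it is by default on recent MySQL and RDS):
SELECT digest_text,
       count_star AS exec_count,
       sum_timer_wait/1000000000000 AS total_exec_seconds,
       sum_created_tmp_disk_tables AS tmp_disk_tables
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 10;
Timers are in picoseconds, hence the division; sum_created_tmp_disk_tables also flags statements that spill temporary tables to disk, which matches your Created_tmp_disk_tables counter.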

How to filter mysql audit log by user account

My issue is that even though I disabled audit logging for the root user, it is still logging for that user. Can anyone please help? Here is what I did, step by step.
[Step 1] Check the audit log variables.
mysql> SHOW VARIABLES LIKE 'audit_log%';
+-----------------------------+--------------+
| Variable_name | Value |
+-----------------------------+--------------+
| audit_log_buffer_size | 1048576 |
| audit_log_connection_policy | ALL |
| audit_log_current_session | ON |
| audit_log_exclude_accounts | |
| audit_log_file | audit.log |
| audit_log_flush | OFF |
| audit_log_format | OLD |
| audit_log_include_accounts | |
| audit_log_policy | ALL |
| audit_log_rotate_on_size | 0 |
| audit_log_statement_policy | ALL |
| audit_log_strategy | ASYNCHRONOUS |
+-----------------------------+--------------+
12 rows in set (0.00 sec)
[Step 2]
The following statements disable audit logging for the root account.
-- set audit_log_include_accounts to NULL
SET GLOBAL audit_log_include_accounts = NULL;
SET GLOBAL audit_log_exclude_accounts = 'root@%';
Note: I used root@% instead of root@localhost because this database server can be accessed from other IP addresses.
[Step 3] I ran the statement SELECT * FROM SSVR_AUDIT_LOG from a remote PC.
[Step 4] I checked the audit log on the DB server.
<AUDIT_RECORD TIMESTAMP="2016-04-22T03:49:11 UTC" RECORD_ID="593_2016-04-22T01:28:17" NAME="Query" CONNECTION_ID="6" STATUS="0" STATUS_CODE="0" USER="root[root] # [162.16.22.48]" OS_LOGIN="" HOST="" IP="162.16.22.48" COMMAND_CLASS="show_create_table" SQLTEXT="SHOW CREATE TABLE `SSVR_AUDIT_LOG`"/>
<AUDIT_RECORD TIMESTAMP="2016-04-22T03:49:12 UTC" RECORD_ID="594_2016-04-22T01:28:17" NAME="Query" CONNECTION_ID="7" STATUS="0" STATUS_CODE="0" USER="root[root] # [162.16.22.48]" OS_LOGIN="" HOST="" IP="162.16.22.48" COMMAND_CLASS="select" SQLTEXT="SELECT * FROM `SSVR_AUDIT_LOG` LIMIT 0, 1000"/>
<AUDIT_RECORD TIMESTAMP="2016-04-22T03:49:12 UTC" RECORD_ID="595_2016-04-22T01:28:17" NAME="Query" CONNECTION_ID="7" STATUS="0" STATUS_CODE="0" USER="root[root] # [162.16.22.48]" OS_LOGIN="" HOST="" IP="162.16.22.48" COMMAND_CLASS="show_fields" SQLTEXT="SHOW COLUMNS FROM `tldssvr`.`SSVR_AUDIT_LOG`"/>
<AUDIT_RECORD TIMESTAMP="2016-04-22T03:49:13 UTC" RECORD_ID="596_2016-04-22T01:28:17" NAME="Quit" CONNECTION_ID="7" STATUS="0" STATUS_CODE="0" USER="root" OS_LOGIN="" HOST="" IP="162.16.22.48" COMMAND_CLASS="connect"/>
I found the answer to my question. Here is the correct answer. When you face an issue like this, you can follow the steps below.
Audit Log Filtering by Account
List all audit_log configuration items:
> mysql -u root -p
> SHOW VARIABLES LIKE 'audit_log%';
+-----------------------------+--------------+
| Variable_name | Value |
+-----------------------------+--------------+
| audit_log_buffer_size | 1048576 |
| audit_log_connection_policy | ALL |
| audit_log_current_session | OFF |
| audit_log_exclude_accounts | |
| audit_log_file | audit.log |
| audit_log_flush | OFF |
| audit_log_format | OLD |
| audit_log_include_accounts | |
| audit_log_policy | ALL |
| audit_log_rotate_on_size | 0 |
| audit_log_statement_policy | ALL |
| audit_log_strategy | ASYNCHRONOUS |
+-----------------------------+--------------+
Add the remote application server's hostname and IP address on the database server:
> cat /etc/hosts
> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
162.16.22.48 App_PC
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Disable audit logging for the application database user (root) on both the localhost and remote host accounts:
> mysql -u root -p
> SET GLOBAL audit_log_include_accounts = NULL;
> SET GLOBAL audit_log_exclude_accounts = 'root@localhost,root@App_PC';
List all audit log configuration items and check the audit_log_exclude_accounts value:
> SHOW VARIABLES LIKE 'audit_log%';
+-----------------------------+----------------------------+
| Variable_name | Value |
+-----------------------------+----------------------------+
| audit_log_buffer_size | 1048576 |
| audit_log_connection_policy | ALL |
| audit_log_current_session | OFF |
| audit_log_exclude_accounts | root@localhost,root@App_PC |
| audit_log_file | audit.log |
| audit_log_flush | OFF |
| audit_log_format | OLD |
| audit_log_include_accounts | |
| audit_log_policy | ALL |
| audit_log_rotate_on_size | 0 |
| audit_log_statement_policy | ALL |
| audit_log_strategy | ASYNCHRONOUS |
+-----------------------------+----------------------------+
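One caveat worth adding: SET GLOBAL does not survive a server restart, so to make the exclusion permanent you would also set it in the server configuration file (a sketch, using the same account list):
[mysqld]
audit_log_exclude_accounts='root@localhost,root@App_PC'
After a restart, SHOW VARIABLES LIKE 'audit_log%'; should show the same exclusion without re-issuing the SET GLOBAL.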