php pdo construct mysql call extremely slow on drupal 7 site using nginx and php-fpm - mysql

I have a Drupal website that calls PDO::__construct to open a MySQL connection, at first to a remote server and then to a local server, to rule out network latency as a sanity check.
With the local database I still get an extremely slow page load, and xhprof reports the following:
Function Name    | Calls | Calls% | Incl. Wall Time (ms) | IWall% | Excl. Wall Time (ms) | EWall%
PDO::__construct |     6 |   0.0% |          120,084,724 |  91.6% |          120,084,724 |  91.6%
The php version is 5.4 on debian wheezy. The website is on a nginx and php5-fpm stack. The MySQL version is 5.5.
The tables are MyISAM but were originally InnoDB and had the same issue.
Does anyone know what could be causing this delay in the connection?
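One way to narrow this down (a hedged sketch, not taken from the thread): split connection setup into DNS resolution and raw TCP connect. If both are fast while PDO::__construct is still slow, the time is going into the MySQL handshake itself (for example, reverse-DNS lookups or authentication on the server side). The host and port values here are placeholders.

```python
import socket
import time

def profile_connect(host, port=3306):
    """Time DNS resolution and TCP connect separately, to see which
    stage eats the time before MySQL auth even starts."""
    start = time.perf_counter()
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    dns_time = time.perf_counter() - start

    family, socktype, proto, _, addr = infos[0]
    start = time.perf_counter()
    with socket.socket(family, socktype, proto) as s:
        s.settimeout(10.0)
        s.connect(addr)
    tcp_time = time.perf_counter() - start
    return dns_time, tcp_time
```

If dns_time dominates, look at the resolver configuration; if both are tiny, look at the MySQL server (e.g. skip-name-resolve) rather than the network.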

Related

UnknownHostException when querying a Hive metastore through Apache Drill

I have successfully connected a remote Hive metastore to Apache Drill. I can list the databases on the remote HDFS and see the table structures, but querying a table gives this error:
Error: SYSTEM ERROR: UnknownHostException: remotename
Here is my Apache Hive storage plugin configuration:
{
  "type": "hive",
  "enabled": true,
  "configProps": {
    "hive.metastore.uris": "thrift://myremoteIP:PortofThrift",
    "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
    "fs.default.name": "hdfs://IP address of remote:port of hdfs from /",
    "hive.metastore.sasl.enabled": "false"
  }
}
Here is a query that succeeds:
jdbc:drill:zk=local> describe data_mcsc_mcsc_bill_info;
and its result:
+--------------------+--------------------+--------------+
| COLUMN_NAME        | DATA_TYPE          | IS_NULLABLE  |
+--------------------+--------------------+--------------+
| tran_dt            | CHARACTER VARYING  | YES          |
| tran_tm            | CHARACTER VARYING  | YES          |
| bill_id            | CHARACTER VARYING  | YES          |
| policy_number      | CHARACTER VARYING  | YES          |
| policy_start_date  | CHARACTER VARYING  | YES          |
| policy_end_date    | CHARACTER VARYING  | YES          |
More details will be required to provide a complete answer to your question. I can provide some debugging tips here.
Verify that the machines running your namenode and metastore are accessible from the machine you are running Drill on. You can use the telnet command to verify that a socket can be opened. If this fails you have a firewall / connectivity issue.
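That telnet-style check can also be scripted; a minimal sketch in Python (the hosts and ports are whatever your metastore and namenode actually use):

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Telnet-style check: return True if a TCP socket can be opened
    to host:port, False on refusal, timeout, or resolution failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hosts/ports): check the Thrift metastore
# and the HDFS namenode from the machine running Drill.
# can_reach("myremoteIP", 9083)
# can_reach("myremoteIP", 8020)
```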
Validate that Drill can talk to your HDFS cluster by putting a csv file on HDFS and adding a storage plugin configuration for your HDFS cluster to Drill. Validate that you can query the file from Drill. If this fails you may have not specified the correct port for your HDFS namenode or there may be some other HDFS related issue.
If these debugging tips are insufficient, please subscribe to the Apache Drill dev and user lists; instructions are at http://drill.apache.org/mailinglists/. You can then send your question to the user list, where the Drill team can provide more interactive help. Please include the following information in your message:
Your version of Drill.
Whether you are running a Drill cluster or a single standalone node.
The version of Hive.
The Distribution of HDFS you are using (e.g. Big Top, Hortonworks, Cloudera).

SHOW PROCESSLIST in MySQL command: sleep

When I run SHOW PROCESSLIST in MySQL database, I get this output:
mysql> show full processlist;
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
| Id     | User | Host      | db     | Command | Time  | State | Info                  |
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
| 411665 | root | localhost | somedb | Sleep   | 11388 |       | NULL                  |
| 412109 | root | localhost | somedb | Query   | 0     | NULL  | show full processlist |
+--------+------+-----------+--------+---------+-------+-------+-----------------------+
I would like to understand the "Sleep" value under Command. What does it mean? Why has it been running for so long with Info showing NULL? It makes the database slow, and when I kill the process everything works normally again. Please help me.
It's not a query waiting for connection; it's a connection pointer waiting for the timeout to terminate.
It doesn't have an impact on performance. The only thing it's using is a few bytes as every connection does.
The real worst case: it occupies one connection from your pool. If you connected multiple times via the console client and just closed the client without closing the connections, you could use up all your connections and have to wait for the timeout before being able to connect again... but this is highly unlikely :-)
See MySQL Processlist filled with "Sleep" entries leading to "Too many Connections"? and https://dba.stackexchange.com/questions/1558/how-long-is-too-long-for-mysql-connections-to-sleep for more information.
"Sleep" state connections are most often created by code that maintains persistent connections to the database.
This could include either connection pools created by application frameworks, or client-side database administration tools.
As mentioned above in the comments, there is really no reason to worry about these connections... unless of course you have no idea where the connection is coming from.
(CAVEAT: If you had a long list of these kinds of connections, there might be a danger of running out of simultaneous connections.)
I found this answer at https://dba.stackexchange.com/questions/1558. In short, running the following (or setting it in my.cnf) resolves the timeout issue.
SET GLOBAL interactive_timeout = 180;
SET GLOBAL wait_timeout = 180;
This ends connections that remain in the Sleep state for more than 3 minutes (or whatever value you define).
"Sleep" means the thread is doing nothing. The Time is so large because another thread ran a query but never disconnected from the server; the default is wait_timeout=28800, so you can set a smaller value, e.g. 10.
You can also kill the thread.
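To illustrate that last suggestion, a sketch (a hypothetical helper operating on plain tuples, not MySQL client code) that picks the Sleep threads worth killing out of SHOW PROCESSLIST rows and emits the corresponding KILL statements:

```python
def stale_sleepers(processlist, max_sleep=180):
    """Return KILL statements for Sleep threads idle > max_sleep seconds.

    `processlist` is an iterable of (id, command, time_in_seconds)
    tuples, e.g. pulled from SHOW FULL PROCESSLIST by whatever
    client library you use.
    """
    return ["KILL {};".format(pid)
            for pid, command, idle in processlist
            if command == "Sleep" and idle > max_sleep]
```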

MySQL connection using ODBC (5.1) with SSL

We've got a client application that connects to our online MySQL database (5.1.44-community-log) through an ODBC connector (the server is a managed* dedicated webserver). This works very well. However, I can't get it to work using SSL. This is what I've done so far:
1. MySQL server
I've had the server manager* set up MySQL with SSL, as 'proven' by:
mysql> SHOW VARIABLES LIKE '%ssl%';
which gives this response:
+---------------+---------------------------------+
| Variable_name | Value |
+---------------+---------------------------------+
| have_openssl | YES |
| have_ssl | YES |
| ssl_ca | /***/mysql-cert/ca-cert.pem |
| ssl_capath | |
| ssl_cert | /***/mysql-cert/server-cert.pem |
| ssl_cipher | |
| ssl_key | /***/mysql-cert/server-key.pem |
+---------------+---------------------------------+
Question: is the server configured right? I'm guessing it is...
2. Certificates
I've purchased real certificates (via my server manager). These are in the directory shown above. I've also downloaded the client-cert.pem, client-key.pem and ca-cert.pem from that directory.
3. MySQL user with REQUIRE [SSL|X509]
I've created a new user and then granted it access from any location (for testing) with SSL:
GRANT USAGE ON *.* TO 'somevaliduser'@'%' IDENTIFIED BY PASSWORD 'somevalidpass' REQUIRE X509
4. ODBC Client
I've (just downloaded and) installed : mysql-connector-odbc-5.1.8-winx64.msi (64-bit) as my machine is a 64-bit Windows 7 machine (so that's not what's wrong).
And I've created a User DSN configured like this (no options set on the tabs). It connects to the server successfully using a valid user that doesn't require SSL, though without using (or requesting) SSL.
So the connection can be established; now to try it with SSL.
It is configured like this, following what I've read on MySQL.com, so I'm not 100% sure the options are right.
As you can see, it results in an error, HY000. Turning on tracing (within the ODBC configuration) also shows this error.
Can anyone give me a hint on how to make this work? Even if you know about just a part of the solution?
I solved the problem. Because I tried several things at a time I don't know what did the trick:
I had the server manager re-create the certificates: I had bought some, but found out those couldn't be used to SSL-encrypt the connection, so for now I'm using OpenSSL certificates. I had them re-create the certificates with "4) Create your client .... server. They must be unique." (as mentioned here) in mind.
I guess the checkbox 'Verify SSL Certificate' only applies when you buy a certificate and a third-party service should check the validity of the certificate. Uncheck that box!
Only fill out the fields:
'SSL Key' (c:\path_to\client-key.pem)
'SSL Certificate' (c:\path_to\client-cert.pem)
'SSL Certificate Authority' (c:\path_to\ca-cert.pem)
Please note:
The port is still the same (for me).
The logs, as Michal Niklas proposed, didn't show any useful information.
I've toggled on 'Use compression' which is said to improve performance.
I am using Ubuntu 12.04 LTS with MySQL Ver 5.5.22-0ubuntu1 for debian-linux-gnu on x86_64 (Ubuntu) and OpenSSL 1.0.1 (14 Mar 2012).
I created the certificates following the tutorial on
http://www.thomas-krenn.com/de/wiki/MySQL_Verbindungen_mit_SSL_verschl%C3%BCsseln
(The tutorial is in German, but this is not important here).
When trying to connect with
mysql -u root -p --ssl-ca=/etc/mysql/ca-cert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem --protocol=tcp
I always got the error message "SSL connection error: protocol version mismatch".
This lead me to the site
http://bugs.mysql.com/bug.php?id=64870
which confirms (for me) that there is a bug.
To make a long story short: in the end I created all the certificates on my Mac OS X Lion machine, copied them to the server and the client, and it worked immediately!
Once I got the Linux side working, Windows worked immediately too!
As mentioned above, you just have to set client-key, client-cert and ca-cert!
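Those three files are the standard client-TLS trio. As an illustration (a sketch using Python's stdlib ssl module, with placeholder file paths; not ODBC code), the same files build a verifying client context:

```python
import ssl

def make_client_tls_context(ca_cert, client_cert, client_key):
    """Build a TLS client context from the same three files the
    ODBC dialog asks for: CA certificate, client certificate,
    and client private key (all paths are placeholders)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile=ca_cert)
    ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx
```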

MySQL show processlist lists many processes in Sleep with Info = NULL?

I'm running a stress test against my web app, which connects to a MySQL server, and I'm monitoring MySQL's SHOW PROCESSLIST.
When the load is high (heavy swap I/O) I get many processes like this:
| 97535 | db | localhost | userA | Sleep | 515 |  | NULL |
| 97536 | db | localhost | userA | Sleep | 516 |  | NULL |
| 97786 | db | localhost | userA | Sleep | 343 |  | NULL |
| 97889 | db | localhost | userA | Sleep | 310 |  | NULL |
But I can't understand why they are still there and not killed. This eventually leads to my app using up all of max_connections and no longer processing incoming requests...
Any idea what those processes are and what they are doing there :) ?
Those are idle connections being held by a client. You should make sure that whatever client library you are using (JDBC, ...) is configured to not keep unused connections open so long, or that your # clients * max # of connections isn't too big.
My guess is that you are using persistent connections, e.g. pconnect in php:
[..] when connecting, the function would first try to find a (persistent) link that's already open with the same host, username and password. If one is found, an identifier for it will be returned instead of opening a new connection
and
[..] the connection to the SQL server will not be closed when the execution of the script ends. Instead, the link will remain open for future use
I had a similar situation, and was using Codeigniter with pconnect turned on. After turning it to off (see how) every connection was closed down properly after use, and my MySQL processlist was empty.
Performance: The above does not argue about performance, but simply tries to explain why you might see a lot of Sleeping connections in MySQL. It might not be negative, with regard to performance, to have the connections stay active.
More info at: http://www.mysqlperformanceblog.com/2006/11/12/are-php-persistent-connections-evil/
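The fix boils down to closing connections deterministically instead of leaving them for the server's timeout. A minimal sketch of that pattern, using Python's stdlib sqlite3 as a stand-in for a MySQL driver (the shape is the same regardless of driver):

```python
import sqlite3
from contextlib import closing

def fetch_one(db_path, query):
    """Open, use, and close the connection in one scope, so no
    'Sleep' thread lingers server-side after the request finishes."""
    with closing(sqlite3.connect(db_path)) as conn:
        with conn:  # commits or rolls back the implicit transaction
            return conn.execute(query).fetchone()
```

With persistent connections (pconnect) the close step never happens, which is exactly why the sleeping threads pile up.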

Tracking down MySQL connection leaks

I have an application server (Jetty 6 on a Linux box) hosting 15 individual applications (individual WARs). Every 3 or 4 days I get an alert from Nagios about the number of open TCP connections. Upon inspection, I see that the vast majority of these connections are to the MySQL server.
netstat -ntu | grep TIME_WAIT
shows 10,000+ connections on the MySQL server from the application server (notice the state is TIME_WAIT). If I restart Jetty, the connections drop to almost zero.
Some interesting values from a show status:
mysql> show status;
+--------------------------+-----------+
| Variable_name | Value |
+--------------------------+-----------+
| Aborted_clients | 244 |
| Aborted_connects | 695853860 |
| Connections | 697203154 |
| Max_used_connections | 77 |
+--------------------------+-----------+
A "show processlist" doesn't show anything out of the ordinary (which is what I would expect since most of the connections are idle - remember the TIME_WAIT state from above).
I have a TEST env for this server but it never has any issues. It obviously doesn't get much traffic and the application server is constantly getting restarted so debugging there isn't much help. I guess I could dig into each individual app and write a load test which would hit the database code, but this would take a lot of time / hassle.
Any ideas how I could track down the application that is grabbing all these connections and never letting go?
The answer seems to be adding the following entries in my.cnf under [mysqld]:
wait_timeout=60
interactive_timeout=60
I found it here (all the way at the bottom): http://community.livejournal.com/mysql/82879.html
The default wait time before killing a stale connection is 28800 seconds.
To verify:
mysql> show variables like 'wait_%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 60 |
+---------------+-------+
EDIT: I forgot to mention, I also added the following to my /etc/sysctl.conf:
net.ipv4.tcp_fin_timeout = 15
This is supposed to help lower the threshold the OS waits before reusing connection resources.
EDIT 2: /etc/init.d/mysql reload won't really reload your my.cnf (see the link above).
Possibly the connection pool(s) are misconfigured to hold on to too many connections and they're holding on to too many idle processes.
Aside from that, all I can think of is that some piece of code is holding onto a result set, but that seems less likely. To check whether a slow query is timing out, you can also enable MySQL's slow query log in the config file; it will then log all queries that take longer than X seconds (the default is 10).
Well, one thing that comes to mind (although I'm not an expert on this) is to increase the logging on MySQL and hunt down all the connect/close messages. If that doesn't work, you can write a tiny proxy to sit between the actual MySQL server and your suite of applications that does the extra logging; then you'll know who is connecting and leaving.
SHOW PROCESSLIST shows the user, host and database for each thread. Unless all of your 15 apps are using the same combination, then you should be able to differentiate using this information.
I had the same problem, with 30,000+ TIME_WAIT connections on my client server. I fixed it by adding the following in /etc/sysctl.conf:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then:
/sbin/sysctl -p
After 2 or 3 minutes, TIME_WAIT connections went from 30,000 to 7,000.
/proc/sys/net/ipv4/tcp_fin_timeout was 60 on RHEL 7; tcp_tw_reuse and tcp_tw_recycle were changed to 1 and performance improved.
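To watch the TIME_WAIT count react to changes like these, the netstat output can be tallied by state; a sketch that parses `netstat -nt`-style lines (the field positions assumed here match Linux netstat's TCP output):

```python
from collections import Counter

def tcp_state_counts(netstat_lines):
    """Tally TCP connection states (TIME_WAIT, ESTABLISHED, ...)
    from `netstat -nt`-style output lines; non-TCP lines and
    headers are skipped."""
    states = Counter()
    for line in netstat_lines:
        parts = line.split()
        if len(parts) >= 6 and parts[0].startswith("tcp"):
            states[parts[5]] += 1
    return states
```

Run it before and after `sysctl -p` to confirm the TIME_WAIT backlog is actually shrinking.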