I have a JSP page. It used to load quickly, but it became very slow (15 seconds to load) after the Systems team upgraded the OS on the Solaris 10 server.
I checked all the queries on that page and every one runs fine. In fact, they are all very simple queries, and there are only about 300 rows in each related table.
The only unusual thing is that the page made about 60 database connections. After I noticed the slowness, I managed to cut the connections by 30, which brought the load time down to 6 seconds. But that is still very slow! And what's worse, I can't reduce the connections any further without restructuring half of the application.
Another JSP page (in a different application) used to work well but has become slow too. It makes only 1 connection, but the page is very time-sensitive, so I can see it getting slower.
Can anyone tell me how to configure MySQL and/or Tomcat to decrease the MySQL connection time?
Are you saying "connection" when you mean "query"? Or are you making a new database connection for each query? You should not do that. In a Tomcat app server, you should use connection pooling, which will reduce the overhead of connections a lot.
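For example, in Tomcat you can declare a pooled DataSource in the application's context.xml. This is only a sketch, assuming Tomcat's built-in DBCP pool (Tomcat 6/7 attribute names) and placeholder names like jdbc/MyDB and mydb:

    <Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://127.0.0.1:3306/mydb"
              username="appuser" password="secret"
              maxActive="20" maxIdle="10" maxWait="10000"/>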
Another common issue is that the MySQL Server tries to resolve your client's hostname from its IP address. This is called DNS hostname resolution, and it can be the source of a lot of overhead when making a new connection. You can configure your MySQL Server to skip DNS hostname resolution without much downside.
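For example, in my.cnf (assuming you can edit the server config; note that once this is set, the host part of your GRANTs must use IP addresses rather than hostnames):

    [mysqld]
    skip-name-resolve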
Why are you making more than one database connection per page load? That should not be the general case. Make one connection and use it for all the rendering of that page.
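A rough sketch of that pattern, reusing the pooled DataSource declared above (the JNDI name and table name are placeholders, not anything from your app):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PageDao {
        // One pooled connection serves every query this page needs.
        public static int countRows() throws Exception {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/MyDB"); // placeholder JNDI name
            Connection conn = ds.getConnection();        // borrowed from the pool
            try {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT COUNT(*) FROM some_table"); // placeholder table
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                conn.close(); // returns the connection to the pool, not a real disconnect
            }
        }
    }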
How are you referencing the server in your connection string? Are you using "localhost" or something similar? Try replacing it with "127.0.0.1", for instance, which will skip the name resolution.
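With Connector/J the URL would then look something like this (the database name is a placeholder):

    jdbc:mysql://127.0.0.1:3306/mydb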
In the image below, current activity shows 99 connections.
How exactly is that counted?
RDS is accessed through Node.js web services and a PHP website. Every time I perform some operation I close the connection. But after closing, the count doesn't decrease; it just keeps increasing. Eventually I got the "too many connections" error message once the count reached 608. After a restart it worked again, but I have never seen the count go down.
So what is the best way to handle it?
Below is the image showing the output when I run SHOW FULL PROCESSLIST;
PHP-based web pages that use a MySQL connection generally exit as soon as they're done rendering page content, so the connection gets closed whether you explicitly call a mysqli or PDO close method or not.
The same is not true of Node services, which run for a long time and can therefore easily leak resources. It's probable that you're opening connections, but not closing them, in your Node service, which would produce the sort of behavior you're seeing here. (This is an easy mistake to make, especially for those of us whose background is largely in more ephemeral PHP scripts.)
One good way to identify the problem is to connect to the MySQL instance via Workbench or the console monitor and issue SHOW FULL PROCESSLIST; to get a list of currently active connections, their originating hosts, and the queries (if any) they are executing. This may help you narrow down the source of the leaking connections, so that you can identify the code at fault and repair it.
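If the raw process list is too noisy, a per-client summary can help. This sketch uses the information_schema view of the same data and strips the client port from the host column:

    SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client, COUNT(*) AS conns
    FROM information_schema.processlist
    GROUP BY user, client
    ORDER BY conns DESC;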
Sorry for the newb factor, but I was reading about "Too many connections" in MySQL.
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
How are "simultaneous client connections" quantified in MySQL?
For example, if 20 million people are on Gmail (let's say, just for the sake of example, that it uses MySQL with only 1 table to store everything) and all of those people simultaneously click on an email to open it, does that mean there are 20 million simultaneous connections, or just one connection, since all the users are connecting to the same table?
EDIT: I'm trying to understand what the term 'client' means. Is a 'client' someone who is using the application, or is a 'client' the part of the application (e.g. a PHP script) that connects to the database?
When a visitor goes to your website and the server-side script connects to the database, that is 1 connection. You can make as many queries as necessary during that connection, to any number of tables/databases, and on termination of the script the connection ends. If 31 people request a page (and hence a DB connection) at the same time and your limit is 30, the 31st person will get an error.
You can upgrade the server hardware so MySQL can efficiently handle loads of connections, or spread the load across multiple database servers. It is also possible to have your server-side scripting environment maintain a persistent connection to MySQL, in which case all scripts make queries through that single connection. This will probably have adverse effects on the correct queuing of queries and their order to maintain usable speeds under high load, and it ultimately doesn't solve the CPU/memory/disk bottlenecks of handling large numbers of queries.
In the case of a webmail application, the query to check for new messages runs so fast (in the milliseconds) that hitting server limits isn't likely unless it's on a large scale.
Google's applications scale on a level previously unheard of. Check out the docs on MapReduce, GoogleFS, etc. It's awesome.
In answer to your edit - anything that connects directly to MySQL is considered a client in this case. Each PHP script that connects to MySQL is a client, as is the MySQL console on the command line, or anything else.
Hope that helps
The connections mentioned are server connections. Every client has one or more. For example, if your PHP script connects to MySQL, there may be several web requests being handled at a time, and thus several connections to the DB.
Sometimes you can run out of them, because they are not closed properly once they are no longer needed.
And I think Gmail is stored in a rather different way than one large MySQL DB :]
I'm making my first foray into MySQL and I have a question about how to handle the connection(s) my application has.
What I am doing now is opening a connection and keeping it alive until I terminate my program. I do a mysql_ping() every now and then and the connection is started with MYSQL_OPT_RECONNECT.
The other option (that I can think of) would be to open a new connection before doing anything that requires database access and to close it after I'm done.
What are the pros and cons of these two approaches?
What are the "side effects" of a long connection?
What is the most used method of handling this?
Cheers ;)
Some extra details
At this point I am keeping the connection alive, and I ping it every now and again to check its status and reconnect if needed.
In spite of this, when there is some consistent concurrency with queries happening in quick succession, I get a "Server has gone away" message and after a while the connection is re-established.
I'm left wondering if this is a side effect of a prolonged connection or if this is just a case of bad mysql server configuration.
Any ideas?
In general, quite a lot of overhead is incurred when opening a connection. Depending on how often you expect this to happen it might be OK, but if you are writing any kind of application that executes more than just a very few commands per program run, I would recommend a connection pool (for server-type apps), or at least a single connection (or very few) kept open by your standalone app for some time and reused for multiple transactions.
That way you have better control over how many connections get opened at the application level, even before the database server gets involved. This is a service an application server offers you, but it can also be rolled by hand rather easily if you want to keep things smaller.
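As a rough illustration, a minimal hand-rolled pool can be as small as the sketch below (all names are made up, and a battle-tested pool is still preferable in production):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class SimplePool {
        private final BlockingQueue<Connection> idle;

        // Opens all connections up front, capping the total at 'size'.
        public SimplePool(String url, String user, String pass, int size) throws SQLException {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                idle.add(DriverManager.getConnection(url, user, pass));
            }
        }

        // Blocks until a connection is free, so demand peaks queue up
        // instead of exhausting the database server.
        public Connection borrow() throws InterruptedException {
            return idle.take();
        }

        // Callers must return the connection in a finally block, or it leaks.
        public void giveBack(Connection c) {
            idle.add(c);
        }
    }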
Apart from performance reasons a pool is also a good idea to be prepared for peaks in demand. When a lot of requests come in and each of them tries to open a separate connection to the database - or as you suggested even more (per transaction) - you are quickly going to run out of resources. Keep in mind that every connection consumes memory inside MySQL!
Also, you want to make sure you connect as a non-root user, because if you don't, you might find yourself locked out. MySQL reserves at least one connection for an administrative user (I believe it is tied to the MySQL SUPER privilege) for problem fixing, but if your app connects with that privilege, all connections, including the reserved one, may already be used up when you try to put out the fire manually.
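For example (the user name, host pattern, and schema are placeholders):

    CREATE USER 'app'@'10.0.%' IDENTIFIED BY 'secret';
    GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app'@'10.0.%';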
Unless you are worried about having too many connections open (i.e. over 1,000), you should leave the connection open. There is overhead in connecting/reconnecting that will only slow things down. If you know you are going to need the connection to stay open for a while, run this query instead of pinging periodically:
SET SESSION wait_timeout=#
Where # is the number of seconds to leave an idle connection open.
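For example, to allow an hour of idle time (the value is just an illustration; pick whatever fits your workload):

    SET SESSION wait_timeout = 3600;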
What kind of application are you writing? If it's a web script: keep it open. If it's an executable, pool your connections (if necessary; most of the time a singleton will do).
I have read every possible answer to this question and searched Google for the correct answer, but I am rather a novice and don't seem to get a clear understanding.
A lot of what I've read has to do with web servers, but I don't have a web server, just an intranet database.
I have a MySQL database on a Windows server at work.
Many users will be accessing this database constantly to perform simple queries and write new records back to it.
The read/write load will not be that heavy (chances are 50-100 users will do so at exactly the same time, even if 1000s could be connected).
The GUI will be via Excel forms and/or Access.
What I need to know is the maximum number of active connections I can have at any given time to the database.
I know I can change the number in MySQL Admin, but I really need to know what will actually work...
I don't want to allow 1000 users if the system will really only handle 100 correctly (beyond that point, although connected, performance would be too slow, for example).
Any ideas or first-hand experiences will be appreciated.
This depends mainly on your server hardware (RAM, CPU, networking) and on the server load from other processes if the machine is not dedicated to the database. I don't think you will get an absolute answer; the best way is testing.
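To see the current ceiling and raise it while you test (the value here is arbitrary; put it in my.cnf as well so it survives a restart):

    SHOW VARIABLES LIKE 'max_connections';
    SET GLOBAL max_connections = 500;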
I think something like 1000 should work OK, as long as you use a 64-bit MySQL server. With 32 bits, too many connections may create virtual memory pressure: each connection has its own thread, and every thread needs a stack, so the stack memory reduces the possible size of the buffer pool and other buffers.
MySQL generally does not slow down if you have many idle connections; however, special commands that enumerate every connection, e.g. SHOW PROCESSLIST or KILL, will be somewhat slower.
If an idle connection stays idle for too long (idle time exceeds the wait_timeout parameter), it is dropped by the server. If that could happen in your scenario, you might want to increase wait_timeout (its default value is 8 hours).
Right now we are dealing with a bit of a conundrum in my corporate environment where we are being blamed for a server crash, but I'm not 100% sure we are the culprit. Here's the server environment: We have a primary Coldfusion and its MSSQL database. We then also have a secondary database (MySQL) hosted on a different cloud which is used for miscellaneous tasks. The main reason the system is structured this way is because the primary server is operated by our Content Management System thus we are not allowed to modify it, add tables, or any operations like that, so we use the alternate database for that. By design there are no mission critical items on it, and pages are built in a way such that if the alternate DB returns no rows, the pages will continue to render properly.
Basically, I am being told that when the alternate MySQL server goes down, or stops accepting connections, that it is taking the entire primary cloud with it, including 5 other sites hosted on it. I do not have access to the primary Coldfusion or database logs, because the CMS provider will not give them to me. Thus I can only judge based on the validity of the explanation they are giving me.
The explanation for this behavior coming from our CMS provider is that when ColdFusion queries a database it creates a thread, and that if the DB doesn't respond the threads continue to stack up. Eventually the processor is maxed out, and the server goes down. Is that an accurate explanation of how ColdFusion operates? If so, is there any way to prevent it, possibly with shorter DB timeouts and the like? Or is the entire explanation posed by our CMS provider a red herring, and something else is really causing the crashes?
Any guidance would be greatly appreciated.
Question answered - Documents found
http://kb2.adobe.com/cps/180/tn_18061.html
http://www.adobe.com/devnet/server_archive/articles/cf_timeouts_and_unresponsive_requests.html
Setting request timeouts globally does not time out internal processes waiting on external resources (cfquery/cfhttp, etc.). The only way to time those out is by manually setting the timeout attribute. Not setting it could result in thread overload and a crashed server, as was occurring with us.
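In practice that means something like the following on every query against the flaky datasource (the datasource, query, and variable names are placeholders, and as noted further down, the timeout attribute may not work with the MySQL ODBC driver):

    <cftry>
        <cfquery name="altData" datasource="altdb" timeout="10">
            SELECT id, title FROM misc_items
        </cfquery>
        <cfcatch type="database">
            <!--- alternate DB is down or slow; render the page without these rows --->
            <cfset altData = queryNew("id,title")>
        </cfcatch>
    </cftry>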
http://kb2.adobe.com/cps/180/tn_18061.html
From reading bullet point 3 and depending on your traffic, your CMS guy might be right.
Also from the link above:
If the database is down and unresponsive, how many times will ColdFusion Server try to reconnect to the database? Will it eventually restart the ColdFusion Server?
If the database is down or the network link to the database goes down when a query request occurs, the connection will time out (you can customize the timeout period with the timeout attribute in the cfquery tag) and return an error to the user. Please note that the ability to set the timeout for the connection depends on which driver you are using. You can trap this error and handle it programmatically with the cftry/cfcatch tags.
The catch here is that the timeout attribute on cfquery tags is not compatible with the MySQL ODBC driver. I could not find what the default timeout is; let's say 5 minutes. If you get more than one request in those 5 minutes, it does appear that the connections will start to pile up.