Can someone explain, or tell me if some kind of 'setting' in CodeIgniter will solve this?
System: Amazon AWS EC2 (AWS Linux / CentOS 6.9)
Database: Amazon AWS RDS (MySQL-compatible, 5.6.34)
Framework: CodeIgniter 3.0.6
PHP: 5.6
Issue: When running my website, I get random logouts. For example, I start my computer and within about 5 minutes the session is cleared and I find myself logged out; the amount of time varies, and the random logouts continue for the rest of the day.
This does NOT happen when the CodeIgniter session driver is set to files.
Setting it to database causes this (logouts happen within 24 hours).
My pages poll via AJAX constantly, hitting the CodeIgniter back end all the time. I read that there is an AJAX race condition that can cause this, BUT after reading the CI code I've noticed it's related to this statement:
```php
if ($this->_db->query("SELECT GET_LOCK('".$arg."', 300) AS ci_session_lock")->row()->ci_session_lock)
```
It appears that this 'locks' during an AJAX race condition, and that causes CI to drop my session cookie info (and of course logs me out via a `$this->fail()` call). We are using RDS, so I suspect that GET_LOCK on RDS behaves slightly differently than GET_LOCK on a stock MySQL server.
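For what it's worth, the blocking itself is easy to reproduce outside CI. Here's a minimal sketch (host, credentials, and the lock name are placeholders, not values from my setup) of two connections fighting over one named lock, which is essentially what two concurrent AJAX requests do under the database session driver:

```php
<?php
// Two separate connections stand in for two concurrent AJAX requests.
// Host, credentials, and the lock name below are placeholders.
$a = new PDO('mysql:host=my-rds-endpoint;dbname=app', 'user', 'pass');
$b = new PDO('mysql:host=my-rds-endpoint;dbname=app', 'user', 'pass');

// Request A acquires the named session lock (GET_LOCK returns 1).
var_dump($a->query("SELECT GET_LOCK('ci_session:abc123', 300) AS l")
           ->fetchColumn());

// Request B now blocks on the same lock. With a short timeout you can
// watch it fail: GET_LOCK returns 0, which is the condition that sends
// the session handler down its failure path.
var_dump($b->query("SELECT GET_LOCK('ci_session:abc123', 5) AS l")
           ->fetchColumn()); // "0" after roughly 5 seconds
```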
Anyone have thoughts or ideas? And yes, I tried a ton of combinations of sess_expiration and sess_time_to_update, and the only way to fix it is to go back to files.
As I expect my system to run on multiple servers in the future, file-based sessions may not be desirable (if you know CodeIgniter you know why; it's too complicated to explain here).
Can anyone give some suggestions / answers on why RDS has an issue with GET_LOCK?
Related
I have a MySQL DB (ClearDB) serving my backend application hosted on Heroku. That said, I have very limited ways to actually see the logs (apparently no access to them at all), so I can't even tell what these queries are. I know for sure nobody is using the system right now, and my Heroku logs show nothing being executed by the backend that would trigger any query.
What I can see from MySQL Workbench, looking at Status and Variables, is that the values of Queries and Questions increase by the hundreds every second when I refresh, which seems really odd to me. The value of Threads_connected is always between 120 and 140, although Threads_running is usually lower than 5.
"Selects Per Second" keeps jumping between 200 and 400.
I am mostly a developer, without much in the way of DBA skills. Are these values normal? Why are they constantly increasing even when there's no traffic? If they're not normal, what means can I use to investigate what is actually running there, given that ClearDB does not give me access to the logs?
'show processlist' can only raise my suspicion that something seems off, but how do I proceed from there?
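For reference, this is the kind of polling I can do myself without log access (credentials are placeholders; the performance_schema part assumes ClearDB has it enabled, which I'm not sure about):

```php
<?php
// A sketch of what can be polled when the provider hides the server
// logs. Host and credentials are placeholders. Requires MySQL 5.6+.
$db = new PDO('mysql:host=cleardb-host;dbname=mydb', 'user', 'pass');

// Snapshot of every connected client and what it is running right now.
foreach ($db->query('SHOW FULL PROCESSLIST') as $row) {
    printf("%6d %-12s %-24s %s\n",
           $row['Id'], $row['User'], $row['Host'], $row['Info']);
}

// Which statement shapes have run most often since the counters reset
// (only works if performance_schema is enabled on the instance).
$top = $db->query(
    "SELECT DIGEST_TEXT, COUNT_STAR
       FROM performance_schema.events_statements_summary_by_digest
      ORDER BY COUNT_STAR DESC
      LIMIT 10");
foreach ($top as $row) {
    printf("%10d  %s\n", $row['COUNT_STAR'], $row['DIGEST_TEXT']);
}
```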
I created an application that works perfectly on my computer, but when I uploaded it to begin server tests it became very slow, especially after a couple of uses (the first minutes work fine). It even becomes unresponsive: as I move through a TreeTable, a form should be updated from the database, but that stops working after a while.
I'm using an Amazon EC2 Linux server and a MySQL database. I checked whether the database connections were what failed, but I'm using no more than 7 out of the 150 max connections to the database.
Is this a common problem?
Any ideas on how to solve this?
Thanks!!!
Note: This is a copy of a Vaadin forum thread: https://vaadin.com/forum#!/thread/4816326 ... I hope it's not against the forum rules to do this.
It sounds like you may have a memory leak somewhere in your application that your computer is able to sustain but your server is not. I would suggest running some load testing on another machine and seeing which actions cause it to spin out.
You can have a look at this SO answer to see how to do that:
https://stackoverflow.com/a/46227692/460802
This is not a typical question, but I'm out of ideas and don't know where else to go. If there's a better place to ask this, just point me there in the comments. Thanks.
Situation
We have a web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching.
The application has a very unusual usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to take, and sends this information to an (external) notification server, which pushes these notifications to them. After the users get the notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.
Problem
In the last few weeks, usage of the application has really started to grow. In the last few days we have encountered very high load, and a doubling of application response times, during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests; it just gets slower and slower, and often takes 20 minutes to recover, until the same thing starts again at the full hour.
We have extensive monitoring in place (New Relic, collectd), but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in:
Can you help me figure out what's wrong and maybe how to fix it?
Additional information
The server is a 16-core Intel Xeon (8 cores with hyperthreading, I think) with 12 GB of RAM, running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11.
If any configuration information would help analyze the problem, just comment and I will add it.
Graphs (screenshots, not reproduced here):
info: phpinfo(), APC status, memcache status
collectd: processes, CPU, Apache, load, MySQL, vmem, disk
New Relic: application performance, server overview, processes, network, disks
(Sorry the graphs are gifs and not the same time period, but I think the most important info is in there)
The problem is almost certainly MySQL-based. If you look at the final graph, mysql/mysql_threads, you can see the number of threads hits 200 (which I assume is your setting for max_connections) at 20:00. Once max_connections has been hit, things do tend to take a while to recover.
Using mtop to monitor MySQL just before the hour will really help you figure out what is going on, but if you cannot install it you could just use SHOW PROCESSLIST;. You will need to establish your connection to MySQL before the problem hits. You will probably see lots of processes queued with only one process currently executing. That will be the most likely culprit.
Having identified the query causing the problems, you can attack your code. Without understanding how your application actually works, my best guess would be that using an explicit transaction around the problem query (or queries) will probably solve the problem.
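As a rough sketch only (the table and query here are invented for illustration, not taken from your application), an explicit transaction around the hourly batch looks like this with PDO:

```php
<?php
// Hypothetical hourly batch: mark pending notifications as sent inside
// one explicit transaction, so the batch pays a single commit instead
// of one per row. Connection details and schema are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare(
        'UPDATE notifications SET sent_at = NOW() WHERE user_id = ?');
    foreach ($pendingUserIds as $id) {
        $stmt->execute([$id]);
    }
    $pdo->commit();   // one commit for the whole batch
} catch (Exception $e) {
    $pdo->rollBack(); // leave the table untouched on any failure
    throw $e;
}
```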
Good luck!
Sorry for the newb factor, but I was reading about "Too many connections" in MySQL.
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
How are "simultaneous client connections" quantified in mysql?
For example if 20 million people are on gmail (let's say they use mysql with only 1 table to store everything just for sake of example) and all those people simultaneously all click on an email to open up, does that mean there are 20 million simultaneous connections or just one connection since all the users are connecting to the same table?
EDIT: I'm trying to understand what the term 'client' means. Is a 'client' someone who is using the application, or is a 'client' the part of the application (ex. php script) that is connecting to the database?
When a visitor goes to your website and the server-side script connects to the database it is 1 connection - you can make as many queries as necessary during that connection to any number of tables/databases - and on termination of the script the connection ends. If 31 people request a page (and hence a db connection) and your limit is 30, then the 31st person will get an error.
You can upgrade the server hardware so MySQL can efficiently handle loads of connections, or spread the load across multiple database servers. It is also possible to have your server-side scripting environment maintain a persistent connection to MySQL, in which case all scripts make queries through that single connection. This can have adverse effects on the correct queuing and ordering of queries under high load, and it ultimately doesn't solve the CPU/memory/disk bottlenecks of handling large numbers of queries.
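For illustration, with PDO the persistent-connection option mentioned above is a single constructor flag (host and credentials are placeholders):

```php
<?php
// Persistent connection sketch: the underlying connection is kept open
// and reused by the same PHP worker across requests, instead of being
// opened and torn down on every page view.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_PERSISTENT => true,
]);
// With this, Threads_connected roughly tracks the number of PHP
// workers rather than the number of page views.
```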
In the case of a webmail application, the query to check for new messages runs so fast (in the milliseconds) that hitting server limits isn't likely unless it's on a large scale.
Google's applications scale on a level previously unheard of. Check out the docs on MapReduce, GoogleFS, etc. It's awesome.
In answer to your edit: anything that connects directly to MySQL is considered a client in this case. Each PHP script that connects to MySQL is a client, as is the MySQL console on the command line, or anything else.
Hope that helps.
The connections mentioned are server connections; every client has one or more. For example, if your PHP script connects to MySQL, there may be several web requests being served at a time, and thus several connections to the DB.
Sometimes you can run out of them because they are not closed properly once they are no longer needed.
And I think Gmail is stored in a different way than one large MySQL DB :]
Right now we are dealing with a bit of a conundrum in my corporate environment: we are being blamed for a server crash, but I'm not 100% sure we are the culprit. Here's the server environment: we have a primary ColdFusion server and its MSSQL database. We also have a secondary database (MySQL) hosted on a different cloud, which is used for miscellaneous tasks. The main reason the system is structured this way is that the primary server is operated by our Content Management System, so we are not allowed to modify it, add tables, or perform any operations like that; we use the alternate database for those. By design there are no mission-critical items on it, and pages are built in such a way that if the alternate DB returns no rows, the pages will still render properly.
Basically, I am being told that when the alternate MySQL server goes down, or stops accepting connections, it takes the entire primary cloud down with it, including 5 other sites hosted there. I do not have access to the primary ColdFusion or database logs, because the CMS provider will not give them to me, so I can only judge based on the validity of the explanation they are giving me.
The explanation for this behavior coming from our CMS provider is that when ColdFusion queries a database it creates a thread, and that if the DB doesn't respond the threads continue to stack up. Eventually the processor is maxed out and the server goes down. Is that an accurate explanation of how ColdFusion operates? If so, is there any way to prevent it, possibly with shorter DB timeouts and the like? Or is the entire explanation posed by our CMS provider a red herring, with something else really causing the crashes?
Any guidance would be greatly appreciated.
Question answered - Documents found
http://kb2.adobe.com/cps/180/tn_18061.html
http://www.adobe.com/devnet/server_archive/articles/cf_timeouts_and_unresponsive_requests.html
Setting the request timeout globally does not time out internal processes waiting on external resources (cfquery/cfhttp, etc.). The only way to time those out is to set the timeout attribute manually. Not setting it can result in thread overload and a crashed server, as was happening to us.
http://kb2.adobe.com/cps/180/tn_18061.html
From reading bullet point 3 and depending on your traffic, your CMS guy might be right.
Also from the link above:
If the database is down and unresponsive, how many times will ColdFusion Server try to reconnect to the database? Will it eventually restart the ColdFusion Server?
If the database is down or the network link to the database goes down when a query request occurs, the connection will timeout (you can customize the timeout period with the timeout attribute in the cfquery tag) and return an error to the user. Please note that the ability to set the timeout for the connection depends on which driver you are using. You can trap this error and handle it programmatically with the cftry/cfcatch tags.
The catch here is that the timeout attribute on the cfquery tag is not compatible with the MySQL ODBC driver. I could not find what the default timeout is; let's say 5 minutes. If you get more than one request in those 5 minutes, it does appear that the connections will start to pile up.
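For illustration only (the datasource and query are invented names, and per the article the timeout may not be honored by every driver), the per-query timeout plus cftry/cfcatch trapping described above would look something like this in CFML:

```cfml
<!--- Hypothetical sketch: "alternateDB" and the query are placeholder
      names, not from the actual site. Whether timeout is honored
      depends on the driver; the MySQL ODBC driver ignores it. --->
<cftry>
    <cfquery name="altData" datasource="alternateDB" timeout="10">
        SELECT headline FROM misc_content
    </cfquery>
    <cfcatch type="database">
        <!--- The alternate DB is down or slow: fall back to an empty
              result so the page still renders, instead of letting
              request threads pile up waiting on a dead connection. --->
        <cfset altData = queryNew("headline")>
    </cfcatch>
</cftry>
```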