Here's a short back story for context:
- I had a site hosted on a Solaris machine, with a script that generated a report (it pulled data from MySQL and generally took ~60 seconds). Everything worked fine.
- I migrated that site and DB to Ubuntu machines. Now the script times out after 30 seconds.
Steps I've already taken:
Increased max_execution_time to 3600 (I know it's high as hell, but that's what it was set to on the old Solaris box)
Set max_input_time = -1
Set memory_limit = 64MB
I've checked through phpinfo() that these changes are being accepted and show as the current configuration.
As hard as I pray, as crossed as I can get my fingers, and as many times as I run the script over and over again... I still get the same result: after 30 seconds it seemingly times out and tosses a 500 error back at me.
Thoughts? Thanks!
Double-check your configuration. If this is being run as a CLI script, then some distros have taken to separating config files into separate directories so that CLI and web/mod_php get different configuration values. You can run php -i from the command line to see what the CLI environment is configured to do.
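For example, here is a minimal sketch you could drop into a file (the name check_timeout_config.php is just a placeholder) and run both through the browser and with php check_timeout_config.php on the command line, to see which ini file and which values are actually in effect in each environment:

<?php
// Minimal sketch: compare the web and CLI PHP configuration.
// Ubuntu/Debian keep separate php.ini files for apache2/fpm and cli,
// so the effective values can differ even when phpinfo() in the browser looks right.
echo 'SAPI:               ' . php_sapi_name() . PHP_EOL;
echo 'Loaded php.ini:     ' . php_ini_loaded_file() . PHP_EOL;
echo 'Extra ini files:    ' . (php_ini_scanned_files() ?: 'none') . PHP_EOL;
echo 'max_execution_time: ' . ini_get('max_execution_time') . PHP_EOL;
echo 'max_input_time:     ' . ini_get('max_input_time') . PHP_EOL;
echo 'memory_limit:       ' . ini_get('memory_limit') . PHP_EOL;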
First you need to be sure whether MySQL or PHP is restricting the time. If it's PHP, set_time_limit(0) does the job.
A MySQL abort would probably lead to an unexpected result; PHP is most likely what's responsible for your 500 error.
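A minimal sketch of that approach, assuming the report is a plain PHP script; put this at the very top, before any queries run:

<?php
// Sketch only: lift PHP's execution time limit for this request.
set_time_limit(0);                    // 0 = no limit
ini_set('max_execution_time', '0');   // belt and braces; same effect

// ... the existing report code (MySQL queries etc.) follows here ...

If the request still dies at exactly 30 seconds with a 500 after this, the limit is being enforced somewhere other than PHP's max_execution_time.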
Related
For an ongoing project I need to install a local server using XAMPP. A colleague exported the MySQL DB from his nginx environment (I suppose it uses the same MySQL as my Apache setup), but the size is an insane 11GB+. In order to prepare, I made the necessary php.ini modifications listed here, added myisam_sort_buffer_size=16384M to my.ini, and also followed a step-by-step tutorial from here. I have 8GB of DDR4 RAM and an 8th-generation i3, so this should not be a problem. The SQL import was running from 0:30 to 14:30 when I noticed that it had simply stopped.
Unfortunately the shell import command seems to stop at line 13149 of the 18061-line file. I see no error messages, and I do not see the imported database in phpMyAdmin. I see the flashing underscore, but no more SQL commands are executed.
I am wondering if there is a solution to this. I want to ensure that the roughly 14 hours of processing does not go to waste, so my question is:
If I terminate the ongoing but seemingly frozen CMD, can I continue importing the remaining 4912 lines from a separate SQL file?
A 16GB sort buffer size is absurdly high. I suspect you mean max_allowed_packet, and even for that you probably mean 16M rather than 16G.
If you are certain which line is stuck, then yes, you can resume from there, including the line that failed.
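For illustration, a minimal PHP sketch for carving the remaining lines into a separate file; the file names and the line number are assumptions based on the question, so adjust them to your setup:

<?php
// Sketch: write everything from line 13149 onwards into a new dump file.
$source   = 'dump.sql';        // assumed name of the original export
$target   = 'dump_rest.sql';   // file to import afterwards
$resumeAt = 13149;             // first line that still needs importing

$in  = fopen($source, 'rb');
$out = fopen($target, 'wb');
$lineNo = 0;
while (($line = fgets($in)) !== false) {
    $lineNo++;
    if ($lineNo >= $resumeAt) {
        fwrite($out, $line);
    }
}
fclose($in);
fclose($out);
echo "Wrote lines {$resumeAt} onwards of {$source} to {$target}\n";

Then import dump_rest.sql the same way as the original file, e.g. mysql -u root -p database_name < dump_rest.sql (the database name here is a placeholder).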
My development system has suddenly been afflicted with this weird problem where every single SQL script takes exactly 31 seconds to execute on my Classic ASP site's connection to a MySQL (MariaDB) database.
Whether I connect to a local copy of the DB running on my system or to my live DB hosted at a web host, it's all the same.
Everything from a simple
adoconn.Execute("SELECT * FROM users;")
or even
adoconn.Execute("SET sql_mode''")
would take 31 seconds to execute. Each!
I can safely rule out any problems with the DB as connecting to it and running scripts from DBeaver shows no problems at all. The results come back instantly.
I can also rule out network problems, as the local DB and the hosted DB give the same results, and I have used Wireshark to confirm that the MySQL packets are being responded to almost immediately by the hosted DB.
Debug-stepping through my ASP code, everything runs fine right up until the .Execute(), at which point it takes 31 seconds, regardless of how complex the script is.
The strangest thing is, this problem came out of the blue; my system was powered down, disconnected, and untouched over the weekend. No updates, installations, or changes were made to the system. On Friday I was doing my dev work perfectly fine, but on Monday morning when I powered it back up, the DB connections were stuffed.
I've already tried configuring MySQL to use the "skip-name-resolve" and "bind-address = ::" settings.
I have tried rebuilding my IIS websites and reinstalling IIS itself.
I've also reinstalled the MySQL ODBC drivers on my system, to no avail.
What is going on here?
As it turns out, the cause of this whole issue was the McAfee software that came pre-installed on my Dell laptop.
And no, I did disable the firewall and antivirus, mind you.
Those were the first steps I took, and I triple-checked them routinely during my testing. Both McAfee's firewall and auto-protection were fully disabled.
But apparently McAfee ignores those settings and was screwing up my DB connections over ODBC.
This problem finally only came to an end when I fully uninstalled this McAfee malware. There's no other way to describe it.
Let this post be a warning to anyone else naively believing this malware to be anything else.
We've just moved to a new server; both run Ubuntu 14.04 LTS, and the only real difference is that the old server ran MySQL 5.5 while the new one has MySQL 5.6. Both servers are cloud machines hosted by DigitalOcean, and both operate on default my.cnf settings; not much has been tweaked.
Another important difference is that the new server has double the RAM and CPU power.
Still - while the old server averaged a 0.6-second response time for an API call we use to monitor server health, the new one comes in at 1.6-1.8 seconds. Yes, the queries contain heavy joins, but that's not my point - the codebase is exactly the same, and the machine itself is supposed to be stronger. The new server also shows peaks of CPU usage a few times every hour, which never happened with MySQL 5.5.
Does this make any sense? For me, not so much, but I'm no MySQL guru.
I ran MySQL Tuner, but I'm unsure if there's anything relevant within:
mysqltuner output for OLD server:
http://pastebin.com/cqSSssW0
mysqltuner output for NEW server:
http://pastebin.com/uk3g1KZa
The only thing that has been tweaked in my.cnf is that it should log slow queries.
Any idea why this could happen? MySQL 5.6 clearly runs faster in the benchmarks I saw online. Any help is very much appreciated.
I am presently using an application on the IBM Bluemix platform that requires a MySQL database.
I decided to use a MySQL database (experimental support), supporting a max of 10 concurrent connections.
The problem is that if I restart my app 10 times (through cf restart, or using the dashboard), it becomes impossible to run, and the logs clearly say I am using the maximum number (10) of connections.
The problem, then, is that either the connections are not closed when the app is stopped, or when it is (re)started it does not reuse the already existing MySQL connection.
At this point, I am not sure about what to do. Can anyone help?
EDIT
Versions: I have used loopback-connector-mysql 2.2.0 and loopback-datasource-juggler#2.41.0
I have found a solution.
After contacting support: the timeout before a connection is closed is 28800 seconds (that is, 8 hours), and they won't change it. However, I managed to get around this problem by changing the application, in files such as datasource.js, where I set "connectionLimit" to 3 instead of 9. Switching from the experimental MySQL service to ClearDB MySQL is also a valid option.
This is not the exact answer you are looking for, but it is a workaround.
You can set up a timeout in the MySQL configuration so that MySQL closes a connection if it has been idle for some time.
Please refer to these documents:
https://dev.mysql.com/doc/refman/5.0/en/gone-away.html
https://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout
You will probably need to set something like:
wait_timeout = 120 # 2 minutes
interactive_timeout = 120 # 2 minutes
Following a recent upgrade to Windows 10, my XAMPP didn't seem to want to work (neither Apache nor MySQL would start), so I upgraded that too, to XAMPP for Windows 5.6.12. There were a few port issues initially (due to new services in Windows 10, I think), but once those were fixed I had both Apache and MySQL running.
However, now the PHP pages that I am working on, which do a great deal of reading from and writing back to a MySQL database, run unbelievably slowly. A page that used to take a minute or two before any upgrade now takes about 30 minutes. I can see that writing to the database is very slow, and the hard disk is always sitting at around 90 to 100%. I have tried many suggested changes: stopping various services, changing the page size, etc., but it still runs very slowly. I have checked the event log, but there is nothing that stands out as an issue.
I am not sure whether it is the upgrade to Windows 10 or to XAMPP that has done this, and I have run out of ideas. I realise this may sound a bit vague, and I am happy to post logs etc., but I am not sure whether there is a simple reason for this, or something simple for me to check.