Unresponsive MySQL database - mysql

I've set up a MySQL server on an Ubuntu machine.
When running a heavy query (the table has over 1M rows), the database becomes unresponsive.
When I try to connect, it returns:
Can't connect to MySQL server on 'ip address' (110)
To make it responsive again, I need to kill the process of the heavy query (visible under "show processlist").
Is there a way to prevent this? I used to host the same database on a Rackspace MySQL server; even though it had the same amount of memory, it was more efficient and never became unresponsive when running similar queries.
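For reference, a minimal sketch of how such a query is usually located and killed from a second session, plus two server-side settings that can stop one statement from taking the whole server down (the id 1234 is a placeholder; max_execution_time exists only in MySQL 5.7.8+ and applies to SELECT statements):

    -- From a second connection, find the runaway statement and its id
    SHOW FULL PROCESSLIST;

    -- Kill just the statement (the connection survives) ...
    KILL QUERY 1234;
    -- ... or drop the whole connection
    KILL 1234;

    -- MySQL 5.7.8+ only: cap SELECT run time server-wide, in milliseconds
    SET GLOBAL max_execution_time = 30000;

    -- Log statements slower than 5 seconds for later analysis
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 5;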

Related

Node mysql query takes a great amount of time (during its first execution) but only on my local machine

I'm using the official mysql Node.js module in a Node application.
Connecting to the database happens instantly. When I execute a query, though, it takes almost ten to twenty seconds to get a response. The next time the same query (identical SQL) is executed, it is nearly instantaneous.
This only happens on my local machine (talking to the same MySQL database), not on my production server (where the MySQL database is located).
I had to increase the timeout values on my local machine so the queries would not time out.
Edit: to clarify, I am not running a local DB. I'm talking to the production DB when running from my local machine.
What could be happening here?

Why does the MySQL server shut down after getting some hits?

My website attracts more than 40 thousand unique visitors daily. When it gets 50 simultaneous visitors, the MySQL server sometimes shuts itself down and gives the error: "mysql cannot connect to database server".
The website is on a VPS with 2 GB of RAM.
Is this due to low RAM or to the low-end specs of the VPS?
When I restart the MySQL server, the problem is resolved.
The website uses WordPress as its CMS.
If others facing this issue don't know how to restart the MySQL server, here is how: log in to your WHM and search for MySQL; you'll see a Restart MySQL button; click it. http://www.saderesim.com/SxueS30
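If the failures line up with connection spikes rather than memory pressure, one generic thing worth checking (an assumption, not something stated in the post above) is whether MySQL is simply hitting its connection limit:

    -- Maximum simultaneous connections the server allows
    SHOW VARIABLES LIKE 'max_connections';

    -- Peak number of connections seen since the last restart
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';

    -- MySQL 5.6+: connection attempts refused because the limit was hit
    SHOW GLOBAL STATUS LIKE 'Connection_errors_max_connections';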

AWS RDS Failover - IIS Web Application

We are using an Amazon RDS Multi-AZ MySQL database, and our web application runs on IIS 8.5 using .NET. Whenever there is a failover of the MySQL database, we have to manually "restart" the connection so it picks up the new IP address of the MySQL server. I'm sure it may eventually reset on its own. Does anyone know the best approach to this issue? I was thinking about using "ConnectionLifeTime=...", but I'm not sure what effect it would have on performance, or, if the site were in the middle of a long query that ran past the time frame set in ConnectionLifeTime, whether it would drop/reset the connection in the middle of the query or wait for the query to finish.

How to determine why a query is taking longer on a production server than on a development machine?

I have a query which only takes 0.004s on my development machine (Windows 7 running WampServer on an HDD) but takes 0.057s on my production server (CentOS 6.5 running on an SSD) -- a difference of 14x. Both MySQL versions are the same.
The explain results are identical on both servers, as are the databases (I exported the database from my production server and imported it into my development machine). I also ran optimize table on both servers, and tried putting in SQL_NO_CACHE, but that didn't make a difference on either one.
Navicat shows the profile timings under the Profile tab for both the production and development servers.
The execution times for the queries are consistent on both servers.
The database versions are the same, the content is the same, and the explain results are the same. Is there any way to determine why the query is taking 14x longer on my production server?
EDIT: In an attempt to determine if the MySQL server is under load, I found the Process List area in Navicat and can see that there are only a few processes, all of which are for "Sleep" commands. So I don't think the production server is under any load.
The production server seems to be slower on every parameter listed. There could be many factors involved, so you should check each one (a sketch of the MySQL-side checks follows this list):
- First of all, check whether there is any other load on the production server. Is the server doing something else in the meantime? Use the Linux command top to see running processes and check whether any of them is using a lot of computing power, and use the MySQL command SHOW STATUS to get information about the MySQL server's state (memory, open tables, current connections, etc.).
- Check the hardware: nowadays some desktop PCs are more powerful than cheap virtual servers (CPU, RAM frequency and access times, ...).
- MySQL could be using different settings in the two environments.
- Make sure you have the same indexes on both databases.
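A rough sketch of those MySQL-side checks, run on both servers for comparison (my_table and the SELECT are placeholders for the real table and query):

    -- Current activity and overall server counters
    SHOW FULL PROCESSLIST;
    SHOW GLOBAL STATUS;

    -- Compare configuration between the two environments
    SHOW VARIABLES;

    -- Confirm both copies of the table carry the same indexes
    SHOW INDEX FROM my_table;

    -- Profile the slow query to see where the time goes
    SET profiling = 1;
    SELECT SQL_NO_CACHE * FROM my_table WHERE id = 1;
    SHOW PROFILES;
    SHOW PROFILE FOR QUERY 1;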

Two mysql servers using same database

I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because:
- Users at this location can connect via the LAN (fast)
- Users working remotely can connect to a synced remote server (fast)
Is this possible? From what I understand, replication does not work this way. What is replication used for, then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database, and conflicts could be an issue.
The good news, though, is that the most time-consuming operations are queries, not updates.
I also found that there is a driver that handles master-slave connections:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
Has anyone tried this before? My one concern: if I send an update to the master and then run a query expecting to see that update on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both reads and writes?
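For what it's worth, the lag that concern is about can be measured directly on the slave (a generic check, not specific to this setup):

    -- Run on the slave: Seconds_Behind_Master is the current replication lag;
    -- 0 means it has applied everything received, NULL means replication is stopped
    SHOW SLAVE STATUS\G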
What you're asking about, I believe, is called multi-master replication, in which both servers act as replication masters to each other. Changes on either server are replicated back to the other as soon as possible. MySQL can be configured to do this; however, I'm not sure how the difference in connection speed would affect your performance and data integrity. A rough configuration sketch follows the link below.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
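A rough sketch of what the circular setup involves on each of the two servers, assuming the usual two-master layout (the host, replication account, and binlog coordinates are placeholders, and a unique server-id plus log-bin must already be set in each server's my.cnf):

    -- Keep auto-increment keys from colliding: server A uses offset 1, server B uses offset 2
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset = 1;

    -- Point this server at the other one (placeholder credentials and coordinates)
    CHANGE MASTER TO
        MASTER_HOST = 'other-server',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS = 4;
    START SLAVE;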