I have two servers (one MySQL DB server, one client server) with a direct Ethernet link. The "remote" access used to be lightning fast (negligible query latency), whether using the IP address or the DNS name.
To fine-tune performance, I modified the [mysqld] section of my.cnf (Red Hat, /etc/my.cnf) on the DB server, changing the key-buffer and InnoDB-buffer related sizes. Since the test results were not good enough, I changed my.cnf back to its default state.
However, since then it has become extremely slow to establish a connection to the DB server from the remote machine (local access seems fine). Any idea what the reason might be?
PS:
Once connected, remote queries run as fast as before; it is only slow to establish the connection.
The common DNS explanation does not seem to apply here, since it cannot explain (a) why the connection through DNS was fast before; (b) why [mysqld] changes to the key/InnoDB buffer sizes would affect DNS, even with my.cnf changed back to its defaults; or (c) why connection establishment became slow through both IP and DNS after the change/change-back of my.cnf.
UPDATE:
After hours of struggling, I restarted the DB server, and now my.cnf seems to function as expected...
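For anyone hitting a similar symptom, a quick way to confirm the delay is in connection establishment rather than query execution is to time the two phases separately. A minimal sketch in Python (host, credentials, and database name are placeholders):

import time
import MySQLdb

HOST = "192.168.0.2"  # placeholder: the DB server's address on the direct link

start = time.monotonic()
conn = MySQLdb.connect(host=HOST, user="app", passwd="secret", db="test")
print("connect took %.3f s" % (time.monotonic() - start))

start = time.monotonic()
cur = conn.cursor()
cur.execute("SELECT 1")
cur.fetchall()
print("query took %.3f s" % (time.monotonic() - start))
conn.close()

If the first number is large while the second stays small, the handshake itself is the bottleneck, which matches the symptom described above.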
The default port for MySQL connections is 3306, but can we set two different ports for it? Maybe ports 30 and 3306, so that we can have connections at localhost:30 and localhost:3306, assuming both ports are free. I am trying to do this with XAMPP on Windows 10.
It is not recommended by MySQL, for good reasons.
Warning
Normally, you should never have two servers that update data in the same databases. This may lead to unpleasant surprises if your operating system does not support fault-free system locking. If (despite this warning) you run multiple servers using the same data directory and they have logging enabled, you must use the appropriate options to specify log file names that are unique to each server. Otherwise, the servers try to log to the same files.
Even when the preceding precautions are observed, this kind of setup works only with MyISAM and MERGE tables, and not with any of the other storage engines. Also, this warning against sharing a data directory among servers always applies in an NFS environment. Permitting multiple MySQL servers to access a common data directory over NFS is a very bad idea. The primary problem is that NFS is the speed bottleneck. It is not meant for such use. Another risk with NFS is that you must devise a way to ensure that two or more servers do not interfere with each other. Usually NFS file locking is handled by the lockd daemon, but at the moment there is no platform that performs locking 100% reliably in every situation.
https://dev.mysql.com/doc/refman/8.0/en/multiple-data-directories.html
See the link above; there you will also find what you have to do.
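If you really do want two ports, the supported approach is two separate server instances, each with its own data directory and log names, exactly as the quoted warning requires. A minimal sketch of what the two configuration groups might look like (the paths, group names, and XAMPP-style layout are assumptions):

[mysqld1]
port    = 3306
datadir = C:/xampp/mysql/data1
log-bin = mysql1-bin

[mysqld2]
port    = 30
datadir = C:/xampp/mysql/data2
log-bin = mysql2-bin

Each group is then started as its own mysqld process (for example via mysqld_multi on Linux, or as two Windows services); the key point from the warning above is that the data directories and log file names must never be shared.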
I am currently connecting my EC2 server to RDS via the following:
import MySQLdb
from django.conf import settings

# Open a TCP connection to RDS using the Django database settings
self.conn = MySQLdb.connect(
    host=settings.DATABASES['default']['HOST'],
    port=3306,
    user=settings.DATABASES['default']['USER'],
    passwd=settings.DATABASES['default']['PASSWORD'],
    db=settings.DATABASES['default']['NAME'])
This connects via TCP and is much, much slower for me than connecting to MySQL locally on my own machine through a socket. How would I connect an EC2 instance to an RDS database via a socket connection so it is much faster than using TCP/IP for long-running scripts? (The difference for me is that an update script takes ten hours instead of one.)
Short answer: You can't.
Aside: all connections to MySQL on a Linux server use "sockets," of course, whether they are Internet (TCP) sockets, or IPC/Unix Domain sockets. But in this question, as in common MySQL parlance, "socket" refers to an IPC socket connection, using a special file, such as /tmp/mysql.sock, though the specific path to the socket file varies by Linux distribution.
A Unix domain socket or IPC socket (inter-process communication socket) is a data communications endpoint for exchanging data between processes executing within the same host operating system.
https://en.m.wikipedia.org/wiki/Unix_domain_socket
So, you can't use the MySQL "socket" connection mechanism, because the RDS server is not on the same machine. The same holds true, of course, any time the MySQL server is on a different machine.
On a local machine, the performance difference between an IPC socket connection and a TCP socket connection (from/to the same machine) is negligible. There is no disagreement that TCP connections have more overhead than IPC, simply because of the TCP/IP wrapper and checksums, the three-way handshake, etc. But again, these are tiny fractions of a millisecond that will be entirely lost on the casual observer.
To conclude that TCP connections are "slower" than IPC connections, and particularly by a factor of 10, is not correct. The quotes around "slower" reflect my conclusion that you have not yet defined "slower" with sufficient precision: slow to connect? Slow to transfer large amounts of data (a bandwidth/throughput issue)? Slow to return from each query?
Take note of the Fallacies of Distributed Computing, particularly this one:
Latency is zero.
I suspect your primary performance issue is going to be found in the fact that your code is not optimal for non-zero latency. The latency between systems in EC2 (including RDS) within a region should be under 1 millisecond, but that's still many hundreds of times the round-trip latency on a local machine (which is not technically zero but could easily be just a handful of microseconds).
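For example, a row-at-a-time loop pays that round-trip latency once per statement, while a batched statement pays it once per batch. A hedged sketch using the MySQLdb connection from the question, where rows is a list of tuples and the scores table is made up:

cur = self.conn.cursor()

# Round-trip-bound: one network round trip per row -- this is what hurts
# when each round trip costs ~1 ms instead of a few microseconds
for name, score in rows:
    cur.execute("INSERT INTO scores (name, score) VALUES (%s, %s)",
                (name, score))

# Latency-friendly: MySQLdb rewrites this into a single multi-row INSERT,
# so many rows share one round trip
cur.executemany("INSERT INTO scores (name, score) VALUES (%s, %s)", rows)
self.conn.commit()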
Testing your code locally, using a TCP connection (using the host 127.0.0.1 and port 3306) instead of the IPC socket should illustrate whether there's really a significant difference or whether the problem is somewhere else... possibly inefficient use of the connections, or unnecessarily repeated disconnect/reconnect, though it's difficult to speculate further without a clearer understanding of what you mean by "slow."
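A minimal sketch of that local test, assuming default credentials and the common /tmp/mysql.sock path (which, as noted above, varies by distribution):

import time
import MySQLdb

def timed_connect(**kwargs):
    # Measure only connection establishment, then close immediately
    start = time.monotonic()
    MySQLdb.connect(user="test", passwd="secret", db="test", **kwargs).close()
    return time.monotonic() - start

# IPC (Unix domain) socket connection
print("socket: %.6f s" % timed_connect(unix_socket="/tmp/mysql.sock"))

# TCP connection to the same local server
print("tcp:    %.6f s" % timed_connect(host="127.0.0.1", port=3306))

If the two numbers are close, as they usually are, the tenfold slowdown is coming from the number of round trips the script makes, not from TCP itself.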
I'm trying to set up replication for my database, which is powered by MySQL 5.6.
The master uses RDS and the slave is built on an EC2 instance, so the MASTER_HOST is a pretty long hostname (62 characters).
When I use the CHANGE MASTER TO command to specify MASTER_HOST and start the slave, SHOW SLAVE STATUS gives me a connection error; it looks like the hostname overflowed and only part of the string (61 characters) was saved (to master.info as well).
I tried another, shorter hostname, and it succeeded.
I have checked the documentation, but nothing about a MASTER_HOST hostname length limit is mentioned.
Is this a bug? Or have I done anything wrong?
Thanks in advance.
There is a 60-character limit for the master host on the MySQL side. But luckily you can create another Canonical Name (CNAME) record that references the original RDS hostname. RFC 1034 indicates that a CNAME chain shouldn't break things.
So you get the chain: your (sub)domain CNAME -> RDS CNAME -> RDS IP.
Make sure nscd, pdnsd, or an alternative local DNS caching service is running to avoid overly frequent DNS lookups.
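A sketch of the workaround, using a made-up short name that you would publish as a CNAME record pointing at the long RDS endpoint (user, password, and binlog coordinates are placeholders):

CHANGE MASTER TO
    MASTER_HOST = 'db-master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin-changelog.000001',
    MASTER_LOG_POS = 4;
START SLAVE;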
Can com.mysql.jdbc.ReplicationConnection and com.mysql.jdbc.ReplicationDriver be used directly to split a read-only connection to one of the replication data sources when using a master-slaves topology?
As the javadoc of Connector/J states, the ReplicationConnection is a
Connection that opens two connections, one to a replication master, and
another to one or more slaves, and decides to use master when the connection
is not read-only, and use slave(s) when the connection is read-only.
So I wonder whether it really works as expected, since otherwise we cannot benefit from the master-slaves topology, which is supposed to relieve the master node of the burden of many read-only connections.
When I looked inside the source code, I found that the connections to the master and the slave are both established in the constructor, which means each read-only operation not only connects to the slave but also holds an idle connection to the master. In short, it doesn't relieve the master's burden.
So, is it correct to use ReplicationConnection this way? Or is it meant for a different scenario?
Why do you say that it doesn't relieve the master's burden? It's true that it opens a connection, but it will not issue any queries against the master while the connection is in read-only mode (Connection.setReadOnly(true)).
So if your application switches the connection in and out of read-only mode for all calls, it will in fact relieve the master of all those reads while still doing all the writes there.
You can make sure it does the right thing by turning on the general log (i.e., the query log) on both machines and seeing where each query goes.
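The general log can be toggled at runtime, so no restart is needed for this check; for example, on both the master and the slave:

SET GLOBAL general_log_file = '/tmp/mysql-general.log';  -- example path
SET GLOBAL general_log = 'ON';
-- run the application's read-only and read-write calls, then:
SET GLOBAL general_log = 'OFF';

Reads should show up only in the slave's log, while writes appear in the master's.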
I have a database that uses the InnoDB engine for all its tables, running on Windows Server 2008 R2 64-bit in a VM farm. My organization's policies require every server to have a replica that can take over in case of disaster at the primary (a DRP). To achieve that, I tried to activate the log-bin (to deploy a slave server). I checked CPU and memory usage and everything seemed normal, but the log file wasn't created; the DB also started rejecting lots of connections and the app began acting oddly. As soon as I deactivated the log-bin, everything went back to normal (immediately).
For the next time I try to activate the log:
What can I do to ensure the log-bin is activated, besides uncommenting the log-bin= line?
What database parameters can be monitored to see what is wrong or needs tuning?
It seems I have a very busy database. I still don't know exactly why, but these settings made the log-bin work:
# enable the binary log (basename left unspecified)
log-bin=
# use row-based binary logging
binlog-format = ROW
# lower the isolation level to reduce locking
transaction-isolation = READ-UNCOMMITTED
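After restarting with those settings, you can confirm the binary log is actually active, for example:

SHOW VARIABLES LIKE 'log_bin';  -- should report ON
SHOW MASTER STATUS;             -- current binlog file name and position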