One of our customers has a MySQL back end as part of their solution.
They have configured it to have a common master database and a specific slave database per client (they have 10+ slaves). They are using MySQL proxy for this.
They are facing some performance issues including database inserts/updates being queued and taking quite some time to write to the slave databases.
Can you suggest how this can be improved? Are there tools that can be used to help identify where the problems are? Does this seem like a standard approach to you (a common master with client-specific slaves controlled via MySQL Proxy)?
Any advice would be appreciated.
Thanks,
Andy
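As a first diagnostic step (a sketch, not part of the original question; the exact setup will vary), the replication lag and thread status can be inspected directly on each slave:
SHOW SLAVE STATUS\G               -- \G is the mysql client's vertical-output terminator
-- Key fields: Seconds_Behind_Master (replication lag),
-- Slave_IO_Running / Slave_SQL_Running, and Last_Error if a thread has stopped.
SHOW FULL PROCESSLIST;            -- long-running writes on the slave show up here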
I had the same behavior, but in my case the cause was the following:
One of my updates was failing with an error, and MySQL Proxy (specifically rw-splitter.lua) treated this situation as if the connection could be reused by another client, so it returned the connection to the pool. That means that when the client received the error and tried to roll back the transaction, the ROLLBACK was sent over another (or a brand new) connection from the pool and had no effect. Meanwhile, the connection whose UPDATE had failed inside the transaction kept holding its locks until the transaction was rolled back by timeout (which, in my case and with the mysql-proxy default, is 28800 seconds), so quite a long time.
The problem was resolved with the patch below.
Patch:
Find the following block in rw-splitter.lua:
is_in_transaction = flags.in_trans
local have_last_insert_id = (res.insert_id and (res.insert_id > 0))
if not is_in_transaction and
not is_in_select_calc_found_rows and
not have_last_insert_id then
and change it to
if res.query_status == proxy.MYSQLD_PACKET_ERR and is_in_transaction then
if is_debug then
print ("(read_query_result) ERROR happened while transaction staying on the same backend")
end
return
end
is_in_transaction = flags.in_trans
local have_last_insert_id = (res.insert_id and (res.insert_id > 0))
if not is_in_transaction and
not is_in_select_calc_found_rows and
not have_last_insert_id then
I have a MySQL database that I am running very simple queries against as part of a webapp. I have received reports from users, starting today, that they got an error saying that their account doesn't exist, and when they log in again, it does (this happened to only a few people, and only once to each, so it is clearly rare). Based on my backend code, this error can only occur if the same query returns 0 rows the first time and 1 row the second. My query is basically SELECT * FROM users WHERE username="...". How is this possible? My suspicion is that the hard disk is having I/O failures, but I am unsure because I would not expect MySQL to fail silently in this case. That said, I don't know what else it could be.
This could be a bug in your MySQL client (though I'm not sure how your code is structured, so it could also just be a bad query). However, let's assume that your query has been working fine up until now with no prior issues, so we'll rule out bad code.
With that in mind, I'm assuming it's either a bug in your MySQL client or that your maximum connection count is being reached (I had this issue with my previous host, Hostinger).
If the issue is the MySQL client/optimizer bug, you can switch the suspect optimization off on a per-session basis by running this:
SET SESSION optimizer_switch="index_merge_intersection=off";
or in your my.cnf you can set it globally
[mysqld]
optimizer_switch=index_merge_intersection=off
As for max connections, you can either increase your max_connections value (if your host allows it), or add logic to close the MySQL connection after each query execution.
$mysqli->close();
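If the connection limit is the suspect, a quick way to check and raise it from a MySQL session might look like the following sketch (the value 500 is only an illustrative assumption, and SET GLOBAL requires the SUPER privilege):
SHOW VARIABLES LIKE 'max_connections';    -- current limit
SHOW STATUS LIKE 'Max_used_connections';  -- high-water mark since the server started
SET GLOBAL max_connections = 500;         -- lost on restart unless also set in my.cnf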
I have a problem with a FEDERATED table in MySQL. I have one server (MySQL version 5.0.51a) that serves only to store client data and nothing more. The logic database lives on another server (version 5.1.56), which sometimes needs to work with the data on the first server. So the second server has one FEDERATED table that connects to the first server.
It has worked without any problems, but recently I started getting strange errors with this setup. Some kinds of queries on the second server can no longer be executed correctly.
For example, SELECT * FROM table doesn't work. It hangs for exactly 3 minutes and then gives:
Error Code: 1159 Got timeout reading communication packets
OK, I checked the table on the first server and it's fine. Then I tried some other queries against the FEDERATED table and they work...
For example, a query like SELECT * FROM table WHERE id=x returns a result. I thought the problem might be the size of the result set, so I tried a query with a dummy WHERE clause, SELECT * FROM table WHERE id > 0, and it also works...
Finally I found a "solution" that helped for only two days: on the first server I made a copy of the table, and on the second server I declared a new FEDERATED table with a new connection string pointing to this copy. It worked, but after two days the same problem appeared with the copied table.
I've already talked with both server providers; they see no problems, everything seems to work on their side, and each says the other hosting provider is the cause.
I've checked all of the MySQL variables and there is no timeout parameter set to 3 minutes or anything similar. So how can I deal with this kind of problem? It seems to be something automatic on the network or database side, but I don't know how to track down the cause.
Do you have any ideas?
You may try checking the MTU size settings for the network interfaces on both servers.
This warning is logged when idle threads are killed by wait_timeout.
Normally, the way to avoid threads getting killed by wait_timeout is to call mysql_close() in scripts when the connection is no longer needed. Unfortunately that doesn't work for queries made through federated tables because the query and the connection are not on the same server.
For example, when a query is executed on server A of a federated table (pointing to data on server B), it creates a connection on server B. Then when you run mysql_close() on server A it obviously can not close the connection that was created on server B.
Eventually the connection gets killed by MySQL after the number of seconds specified in "wait_timeout" has passed (the default is 8 hours). This generates the warning in your MySQL error log: "Got timeout reading communication packets".
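As a rough mitigation (an assumption on top of the answer above, not something it states), wait_timeout can be lowered on the data server so that idle federated connections are reaped much sooner than 8 hours; something like:
-- On the server the FEDERATED table points to:
SHOW VARIABLES LIKE 'wait_timeout';   -- default is 28800 seconds (8 hours)
SET GLOBAL wait_timeout = 300;        -- reap idle connections after 5 minutes
-- Persist it in my.cnf under [mysqld]: wait_timeout=300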
I have 6 scripts/tasks. Each one of them starts a MySQL transaction, then does its job, which means SELECT/UPDATE/INSERT/DELETE against a MySQL database, then rolls back.
So if the database is at a given state S and I launch one task, then when the task terminates, the database is back at state S.
When I launch the scripts sequentially, everything works fine:
DB at state S
task 1
DB at state S
task 2
DB at state S
...
...
task 6
DB at state S
But I'd like to speed up the process with multi-threading, launching the scripts in parallel.
DB at state S
6 tasks at the same time
DB at state S
Some tasks randomly fail, and I sometimes get this error:
SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
I don't understand; I thought transactions were meant for exactly that. Is there something I'm missing? Any experience, advice, or clue is welcome.
The MySQL configuration is:
innodb_lock_wait_timeout = 500
transaction-isolation = SERIALIZABLE
and I add AUTOCOMMIT = 0 at the beginning of each session.
PS: The database was built and used under the REPEATABLE READ isolation level which I changed afterwards.
You can prevent deadlocks by ensuring that every transaction/process does a SELECT ... FOR UPDATE on all required data/tables with the same ORDER BY in all cases, and accesses the tables themselves in the same order (with at least the REPEATABLE READ isolation level in MySQL).
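A minimal sketch of that idea, assuming two hypothetical tables accounts and orders that every task touches: each transaction locks the rows it needs in the same table order and the same row order before doing any work.
START TRANSACTION;
-- Lock rows in a fixed, agreed order: accounts first, then orders, each by primary key,
-- so concurrent tasks queue up behind each other instead of deadlocking.
SELECT * FROM accounts WHERE id IN (3, 7)   ORDER BY id FOR UPDATE;
SELECT * FROM orders   WHERE id IN (12, 40) ORDER BY id FOR UPDATE;
-- ... the task's SELECT/UPDATE/INSERT/DELETE work goes here ...
ROLLBACK;  -- the tasks in the question roll back to return the database to state S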
Apart from that, isolation levels and transactions are not meant to handle deadlocks; it is the other way around: they are the reason deadlocks exist. If you encounter a deadlock, there is a good chance that without the transaction you would have ended up with an inconsistent dataset (which could be much more serious; if not, you might not need transactions at all).
Is there any way to configure SQL Server 2008 to kill a transaction if it has neither been canceled nor committed for some time? (Say the power, the network connection, or whatever gets cut to the computer that has it open.)
Either so that it happens automatically according to some defined rule set, or by writing and calling a command-line application that queries SQL Server for active transactions plus the time they have been running... and then instructs SQL Server to close down those that are "frozen".
To quote Gail Shaw from here:
SQL Server does not time queries out, the connecting application (in this case query analyser) does.
Whichever tech you're using to connect (ADO, etc.) will probably have a connection timeout and an execution timeout property that you can change in your calling code. Defaults are usually 30 seconds.
You could potentially wrap something like this in a loop that kills each offending spid:
select datediff(second, last_batch, getdate()) as secs_running, *
from sys.sysprocesses
where hostname != ''
and open_tran = 1
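For illustration, one possible shape of that loop in T-SQL (a sketch, not from the quoted answer; the 600-second cutoff is an arbitrary assumption):
DECLARE @spid int, @sql varchar(20);
DECLARE offenders CURSOR LOCAL FAST_FORWARD FOR
    SELECT spid
    FROM sys.sysprocesses
    WHERE hostname != ''
      AND open_tran = 1
      AND datediff(second, last_batch, getdate()) > 600;  -- "frozen" for more than 10 minutes
OPEN offenders;
FETCH NEXT FROM offenders INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'KILL ' + CAST(@spid AS varchar(10));  -- KILL does not accept a variable directly
    EXEC (@sql);                                      -- terminates the offending session
    FETCH NEXT FROM offenders INTO @spid;
END;
CLOSE offenders;
DEALLOCATE offenders;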
There would probably be many opinions on how best to decide which processes are "safe" to kill, and I would certainly be a little worried about automatically doing such a thing based on an arbitrary timespan. I'm also not sure that any data changes made in the process are guaranteed to be rolled back.
We are getting exceptions like this:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector#5b7a7896 -- APPARENT DEADLOCK!!! Complete Status:
Managed Threads: 3
Active Threads: 3
Active Tasks:
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#55bc5e2a (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#41ca435f (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2)
com.mchange.v2.c3p0.stmt.GooGooStatementCache$1StatementCloseTask#460d33b7 (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0)
Pending Tasks:
when load testing our application on MSSQL 2008 R2 (jTDS or the official MS JDBC driver, it doesn't matter). We never get this exception when running the same tests against PostgreSQL or MySQL.
We don't just want to increase the number of helper threads for c3p0 (which solves the problem, but for how long?). We want to know what the actual problem is, since it works with other DBMSs.
The application behaves like this:
Send X requests
Wait for a while -> DEADLOCK
Send X requests
Wait for a while -> DEADLOCK
Does anyone know or has an idea why we have this behavior with MSSQL?
Thanks, Adrian
(Btw. BoneCP works without any problem too.)
SQL Server has a much more restrictive locking strategy compared to PostgreSQL or InnoDB.
In particular, in the default configuration it will block SELECTs on rows (or even tables?) that are being updated from a different connection/transaction.
You should make sure that you are not selecting the same rows in one session that are being updated from another.
If you can't change the sequence of your code, you might get away with using "dirty reads" in SQL Server.
If I remember correctly, this is accomplished by adding WITH NOLOCK to the SELECT statements (but I'm not entirely sure).
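For example (a sketch; the table name is hypothetical, and the usual spelling of the hint is WITH (NOLOCK)):
SELECT *
FROM orders WITH (NOLOCK)   -- read uncommitted rows instead of waiting on writers' locks
WHERE customer_id = 42;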
Edit
A different possibility (if you are on SQL Server 2005 or later) would be to use the new "snapshot isolation" to avoid blocking selects.
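A minimal sketch of enabling it (the database name MyAppDb is a placeholder; switching READ_COMMITTED_SNAPSHOT on requires the database to have no other active connections at that moment):
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;   -- allows SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;    -- makes ordinary READ COMMITTED reads use row versions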