How to perform distributed transactions as a MySQL proxy

I am developing a distributed database middleware that is intended to act as a proxy for multiple MySQL instances. When it comes to transactions that span multiple MySQL instances, I find it difficult to make them commit or roll back as a whole. Here is the case:
Say there are 2 MySQL instances proxied by my middleware. On the application side, when I want to perform a "prepare-commit" action on both instances, I first send the "prepare" request to the middleware, which forwards it to the 2 MySQL instances. Then I execute some SQL through the middleware. Finally, when I send the "commit" request to the middleware, it forwards that request to the 2 MySQL instances as well. Here is what confuses me:
If the "commit" request sent to the first MySQL instance executes successfully while the "commit" request sent to the second instance somehow fails, then, as far as I know, a transaction that has been committed cannot be rolled back, so the 2 MySQL instances are left in an inconsistent state.
I am wondering how to deal with this problem; any help will be appreciated.

If one instance has committed and the other has failed, the coordinator should log the transaction's changes and return success to the user. Once the failed instance recovers, it retrieves the log from the coordinator and re-applies it to restore consistency. Just as the binlog acts as the coordinator's log inside MySQL, your middleware has to take on the same responsibility.
MySQL external XA is another example of a distributed transaction, one in which the coordinator is the client.
Spanner chose 2PC + Paxos to reduce the impact of a single node failing.
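To make the two-phase-commit idea concrete, here is a minimal sketch of a coordinator driving MySQL's external XA statements from Python with Connector/Python. The host names, credentials, table and the durable decision log are hypothetical placeholders, not something from the question:

import mysql.connector

XID = "trx-001"  # the same XA transaction id is used on both instances

def connect(host):
    # Hypothetical connection details for the two proxied MySQL instances.
    return mysql.connector.connect(host=host, user="app",
                                   password="secret", database="test")

nodes = [connect("mysql-a"), connect("mysql-b")]
try:
    try:
        # Run the application's SQL inside an XA branch on each node.
        for conn in nodes:
            cur = conn.cursor()
            cur.execute("XA START '%s'" % XID)
            cur.execute("UPDATE account SET balance = balance - 10 WHERE id = 1")
            cur.execute("XA END '%s'" % XID)
        # Phase 1: prepare. A node that answers PREPARE successfully guarantees
        # it can still commit this branch even across a crash and restart.
        for conn in nodes:
            conn.cursor().execute("XA PREPARE '%s'" % XID)
    except mysql.connector.Error:
        # No commit decision exists yet, so rolling everything back is safe.
        for conn in nodes:
            for stmt in ("XA END '%s'" % XID, "XA ROLLBACK '%s'" % XID):
                try:
                    conn.cursor().execute(stmt)
                except mysql.connector.Error:
                    pass  # the branch may already be past (or before) this state
        raise

    # Decision point: durably log "commit" *before* phase 2, so a node that
    # fails below can be re-driven after it recovers. This log plays the role
    # the binlog plays for MySQL's internal XA.
    # write_decision_log(XID, "commit")   # hypothetical durable log

    # Phase 2: commit on every node. A node that fails here is never rolled
    # back; the coordinator keeps retrying XA COMMIT (XA RECOVER on the
    # recovered node will still show the prepared branch) until it succeeds.
    for conn in nodes:
        try:
            conn.cursor().execute("XA COMMIT '%s'" % XID)
        except mysql.connector.Error:
            pass  # retry later, driven by the decision log
finally:
    for conn in nodes:
        conn.close()

The key point is that once every node has answered PREPARE, "commit" becomes a logged decision that the coordinator enforces by retrying, rather than an action that can half-fail and be abandoned.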

Related

Rollback required to catch all exceptions

I am working on a MySQL database and my question is twofold, with both parts related to each other:
Do we require a rollback if an error occurs during a transaction? Changes are only committed if execution reaches the last statement (which would be the COMMIT statement); if it does not, doesn't the transaction roll back automatically?
If an explicit rollback is required, I was wondering whether there is any new feature or workaround for catching 'ALL' exceptions, similar to SQL Server, where you try/catch errors and roll back if any error occurs.
My main concern is power outages, which are common where I live. If the electricity to the server is cut (the server being nothing but a database installed on a PC shared through an ad-hoc / Wi-Fi connection, not a dedicated server) in the middle of a transaction, then useful statements, such as logging inside the transaction, might not execute.
If a transaction is not committed, and any of the following happen, it will be rolled back:
Power loss on your PC
Reboot on your PC
MySQL Server process crashes or is killed
Client application exits without committing the transaction. The MySQL Server will detect that the client is gone and abort that client's session. This rolls back a transaction if one was open. This also happens if the network connection between client and server is dropped.
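For the second part (catching 'ALL' errors), MySQL stored programs offer DECLARE EXIT HANDLER FOR SQLEXCEPTION as the closest analog to SQL Server's TRY/CATCH, and from application code the usual pattern is a catch-all around the transaction. A minimal sketch in Python with Connector/Python, where the table names and credentials are made up for illustration:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="shop")
cur = conn.cursor()
try:
    conn.start_transaction()
    cur.execute("UPDATE stock SET qty = qty - 1 WHERE item_id = 42")
    cur.execute("INSERT INTO audit_log (item_id, action) VALUES (42, 'sale')")
    conn.commit()      # changes become durable only at this point
except mysql.connector.Error:
    conn.rollback()    # any error before COMMIT: undo the whole transaction
    raise
finally:
    conn.close()

If the power is cut before commit() returns, InnoDB rolls the open transaction back during crash recovery, which is exactly the behaviour described in the list above; the explicit rollback only matters for errors your application survives.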

MySQL Router sends requests to a down slave node for a second

I have implemented an InnoDB cluster using MySQL Router (version 2.1.4) for HA.
This is my mysqlrouter.conf file:
[DEFAULT]
user=mysqlrouter
logging_folder=
runtime_folder=/tmp/mysqlrouter/run
data_folder=/tmp/mysqlrouter/data
keyring_path=/tmp/mysqlrouter/data/keyring
master_key_path=/tmp/mysqlrouter/mysqlrouter.key
[logger]
level = DEBUG
[metadata_cache:magentoCluster]
router_id=49
bootstrap_server_addresses=mysql://ic-1:3306,mysql://ic-2:3306,mysql://ic-3:3306
user=mysql_router49_sqxivre03wzz
metadata_cluster=magentoCluster
ttl=1
[routing:magentoCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://magentoCluster/default?role=PRIMARY
mode=read-write
protocol=classic
[routing:magentoCluster_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://magentoCluster/default?role=ALL
mode=read-only
protocol=classic
[routing:magentoCluster_default_x_rw]
bind_address=0.0.0.0
bind_port=64460
destinations=metadata-cache://magentoCluster/default?role=PRIMARY
mode=read-write
protocol=x
[routing:magentoCluster_default_x_ro]
bind_address=0.0.0.0
bind_port=64470
destinations=metadata-cache://magentoCluster/default?role=ALL
mode=read-only
protocol=x
MySQL Router splits the read requests across the slave nodes. If I take slave 1 down, the router needs some seconds to learn that slave 1 is down, so during that window requests are still sent to the down slave node and those requests fail. Any suggestion on how to handle this failure?
The client should always check for errors. This is a necessity for any system, because network errors, power outages, etc., can occur in any configuration.
When the client discovers a connection failure (failure to connect / dropped connection), it should start over by reconnecting and replaying the transaction it is in the middle of.
For transaction integrity, the client must be involved in the process; recovery cannot be provided by any proxy.
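As a rough illustration of "reconnect and replay", here is a hypothetical sketch that goes through the router's read-write port from the configuration above (6446); the credentials, SQL and retry policy are placeholders:

import time
import mysql.connector

def run_transaction(statements, attempts=3):
    # Replay the whole transaction if the connection (or the node behind the
    # router) fails before COMMIT has succeeded.
    for _ in range(attempts):
        try:
            conn = mysql.connector.connect(host="router-host", port=6446,
                                           user="app", password="secret",
                                           database="magento")
            try:
                cur = conn.cursor()
                conn.start_transaction()
                for sql in statements:
                    cur.execute(sql)
                conn.commit()
                return
            finally:
                conn.close()
        except mysql.connector.Error:
            # Connection refused or dropped mid-transaction: nothing was
            # committed, so it is safe to reconnect and replay from the start.
            time.sleep(1)
    raise RuntimeError("transaction failed after %d attempts" % attempts)

run_transaction(["UPDATE t SET c = c + 1 WHERE id = 1"])

Read-only traffic would do the same against port 6447. Note that replaying is only safe when the failure clearly happened before a successful COMMIT; the next question below covers the awkward case where the COMMIT succeeded but its acknowledgement was lost.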

MySQL connection lost after successful query

This question is theoretical. I've no real use case; I'm just trying to understand the MySQL behaviour.
Suppose I send a query (or a transaction) to the server (using transactional tables, of course), and the query or transaction executes fine, but the connection is lost before the client (e.g., mysql, or an app connecting to a remote server through the C interface or any other framework like QtSQL) receives the server's answer. So the server knows the transaction finished properly, but the client doesn't, because the answer never arrived.
What happens in this case? Does the server roll back the transaction even though it knows the transaction finished successfully? Is there any option to control the behaviour in these scenarios?

Couchbase/Membase: Moxi proxy downstream timeout SERVER_ERROR

I have a live Couchbase cluster on two Amazon EC2 instances (version 1.8.0) and about 5 application servers each running PHP with moxi clients on them. Once in a while, Moxi will return a SERVER_ERROR when attempting to access data. This happens about once every few minutes on average. The cluster processes about 500 operations per second.
After inspecting the moxi logs (with -vvv enabled), I notice the following at around the time I get a SERVER_ERROR:
2013-07-16 03:07:22: (cproxy.c.2680) downstream_timeout
2013-07-16 03:07:22: (cproxy.c.1911) 56: could not forward upstream to downstream
2013-07-16 03:07:22: (cproxy.c.2004) 56: upstream_error: SERVER_ERROR proxy downstream timeout
I tried increasing the downstream timeout in the moxi configs from 5000 to 25000, but that doesn't help at all. The errors still happen just as frequently.
Can someone suggest any ideas for me to discover the cause of the problem? Or if there's some likely culprit?
SERVER_ERROR proxy downstream timeout
In this error response, moxi reached a timeout while waiting for a
downstream server to respond to a request. That is, moxi did not see
any explicit errors such as a connection going down, but the response
is just taking too long. The downstream connection will also be closed
by moxi rather than being put back into a
connection pool. The default downstream_timeout configuration is 5000
(milliseconds).
Pretty straightforward error, but it can be caused by a few possible things.
Try getting the output of "stats proxy" from moxi:
echo stats proxy | nc HOST 11211
Obviously you have already figured out that you are concerned with these stats:
STAT 11211:default:pstd_stats:tot_downstream_timeout xxxx
STAT 11211:default:pstd_stats:tot_wait_queue_timeout nnnnn
Your downstream_timeout, as you've said, should appear as 5000,
but also check out:
STAT 11211:default:pstd_stats:tot_downstream_conn_queue_timeout 0
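If you want to watch those counters over time instead of grabbing them once, a small hypothetical script like this polls "stats proxy" the same way the nc one-liner does (HOST is a placeholder, as above; the stat names are the ones listed):

import socket
import time

WATCHED = ("tot_downstream_timeout", "tot_wait_queue_timeout",
           "tot_downstream_conn_queue_timeout")

def stats_proxy(host="HOST", port=11211):
    # Send the memcached-style "stats proxy" command to moxi and return the
    # raw STAT lines; the response is terminated by an END line.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"stats proxy\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode().splitlines()

while True:
    for line in stats_proxy():
        if any(name in line for name in WATCHED):
            print(line)
    time.sleep(10)   # see whether the timeout counters keep climbing

Broadly, a climbing tot_downstream_timeout points at slow responses from the Couchbase servers, while a climbing tot_wait_queue_timeout points at requests piling up inside moxi itself.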
The following, from
http://www.couchbase.com/docs/moxi-manual-1.8/moxi-dataflow.html
is pretty much a perfect walk-through of the way moxi operates:
To understand some of the configurable command-line flags in moxi
(concurrency, downstream_max, downstream_conn_max, downstream_timeout,
wait_queue_timeout, etc), it can be helpful to follow a request
through moxi...
The normal flow of data for moxi is as follows:
A client connects
A client creates a connection (an upstream conn) to moxi.
moxi's -c command-line parameter ultimately controls the limits on
the maximum number of connections.
In this -c parameter, moxi inherits the same behavior as memcached,
and will stop accept()'ing client connections until
existing connections are closed. When the count of existing
connections drops below the -c defined level, moxi will accept() more
client connections.
The client makes a request, which goes on the wait queue
Next, the client makes a request — such as simple single-key
command (like set, add, append, or a single-key get).
At this point, moxi places the upstream conn onto the tail of a wait
queue. moxi's wait_queue_timeout parameter controls how long an
upstream conn should stay on the wait queue before moxi times it out
and responds to the client with a SERVER_ERROR response.
The concurrency parameter
Next, there's a configurable max limit to how many upstream conn
requests moxi will process concurrently off the head of the wait
queue. This configurable limit is called concurrency. (This formerly
used to be known, perhaps confusingly, as downstream_max. For
backwards compatibility, concurrency and downstream_max configuration
flags are treated as synonyms.)
The concurrency configuration is per-thread and per-bucket. That
is, the moxi process-level concurrency is actually concurrency X
num-worker-threads X num-buckets.
The default concurrency configuration value is 1024. This means moxi
will concurrently process 1024 upstream connection requests from
the head of the wait queue. (There are more queues in moxi, however,
before moxi actually forwards a request. This is discussed in later
sections.)
Taking the concurrency value of 1024 as an example, if you have 4
worker threads (the default, controlled by moxi's -t parameter) and 1
bucket (what most folks start out with, such as the "default" bucket),
you'll have a limit of 1024 x 4 x 1 or 4096 concurrently processed
client requests in that single moxi process.
The rationale behind the concurrency increase to 1024 for moxi's
configuration (it used to be much lower) is due to the evolving design
of moxi. Originally, moxi only had the wait queue as its only internal
queue. As more, later-stage queues were added during moxi's history,
we found that getting requests off the wait queue sooner and onto the
later stage queues was a better approach. We'll discuss these
later-stage queues below.
Next, let's discuss how client requests are matched to downstream connections.
Key hashing
The concurrently processed client requests (taken from the head
of the wait queue) now need to be matched up with downstream connections
to the Couchbase server. If the client's request comes with a key
(like a SET, DELETE, ADD, INCR, single-key GET), the request's key is
hashed to find the right downstream server "host:port:bucket" info.
For example, something like — "memcache1:11211:default". If the
client's request was a broadcast-style command (like FLUSH_ALL, or a
multi-key GET), moxi knows the downstream connections that it needs to
acquire.
The downstream conn pool
Next, there's a lookup using those host:port:bucket identifiers into
a downstream conn pool in order to acquire or reserve the
appropriate downstream conns. There's a downstream conn pool per
thread. Each downstream conn pool is just a hashmap keyed by
host:port:bucket with hash values of a linked-list of available
downstream conns. The max length of any downstream conn linked list is
controlled by moxi's downstream_conn_max configuration parameter.
The downstream_conn_max parameter
By default the downstream_conn_max value is 4. A value of 0 means no limit.
So, if you've set downstream_conn_max of 4, have 4 worker threads,
and have 1 bucket, you should see moxi create a maximum of 4
X 4 X 1 or 16 connections to any Couchbase server.
Connecting to a downstream server
If there isn't a downstream conn available, and the
downstream_conn_max wasn't reached, moxi creates a downstream conn as
needed by doing a connect() and, if required, a SASL auth.
The connect_timeout and auth_timeout parameters
The connect() and SASL auth have their own configurable timeout
parameters, called connect_timeout and auth_timeout, and these
are in milliseconds. The default value for connect_timeout is 400
milliseconds, and the auth_timeout default is 100 milliseconds.
The downstream conn queue
If downstream_conn_max is reached, then the request must wait until a
downstream conn becomes available; the request therefore is
placed on a per-thread, per-host:port:bucket queue, which is called a
downstream conn queue. As downstream conns are released back into the
downstream conn pool, they will be assigned to any requests that are
waiting on the downstream conn queue.
The downstream_conn_queue_timeout parameter
There is another configurable timeout, downstream_conn_queue_timeout,
that defines how long a request should
stay on the downstream conn queue in milliseconds before timing out.
By default, the downstream_conn_queue_timeout is 200 milliseconds. A
value of 0 indicates no timeout.
A downstream connection is reserved
Finally, at this point, downstream conn's are matched up for the
client's request. If you've configured moxi to track timing histogram
statistics, moxi will now get the official start time of the request.
moxi now starts asynchronously sending request message bytes to the
downstream conn and asynchronously awaits responses.
To turn on timing histogram statistics, use the "time_stats=1"
configuration flag. By default, time_stats is 0 or off.
The downstream_timeout parameter
Next, if you've configured a downstream_timeout, moxi starts a timer
for the request where moxi can limit the time it will spend
processing a request at this point. If the timer fires, moxi will
return a "SERVER_ERROR proxy downstream timeout" back to the client.
The downstream_timeout default value is 5000 milliseconds. If moxi sees
this time elapse, it will close any downstream connections that
were assigned to the request. Due to this simple behavior of closing
downstream connections on timeout, having a very short
downstream_timeout is not recommended. This will help avoid repetitive
connection creation, timeout, closing and reconnecting. On an
overloaded cluster, you may want to increase downstream_timeout so
that moxi does not constantly attempt to time out downstream
connections on an already overloaded cluster, or by creating even more
new connections when servers are already trying to process requests on
old, closed connections. If you see your servers greatly spiking, you
should consider making this adjustment.
Responses are received
When all responses are received from the downstream servers for a request (or the
downstream conn had an error), moxi asynchronously
sends those responses to the client's upstream conn. If you've
configured moxi to track timing histogram statistics, moxi now tracks
the official end time of the request. The downstream conn is now
released back to the per-thread downstream conn pool, and another
waiting client request (if any) is taken off the downstream conn queue
and assigned to use that downstream conn.
Backoff/Blacklisting
At step 6, there's a case where a connect() attempt might fail. Moxi
can be configured to count up the number of connect() failures for a
downstream server, and will also track the time of the last failing
connect() attempt.
With the connect() failure counting, moxi can be configured to
blacklist a server if too many connect() failures are seen, which is
defined by the connect_max_errors configuration parameter. When more
than connect_max_errors number of connect() failures are seen, moxi
can be configured to temporarily stop making connect() attempts to
that server (or backoff) for a configured amount of time. The backoff
time is defined via the connect_retry_interval configuration, in
milliseconds.
The default for connect_max_errors is 5 and the connect_retry_interval
is 30000 milliseconds, that is, 30 seconds.
If you use the connect_max_errors parameter, it should be set greater
than the downstream_conn_max configuration parameter.
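The backoff rule described above boils down to a few lines; here is a toy illustration in Python (not moxi's actual code), using the defaults just mentioned:

import time

class Backoff:
    # Skip a server after more than connect_max_errors consecutive connect()
    # failures, for connect_retry_interval milliseconds.
    def __init__(self, connect_max_errors=5, connect_retry_interval=30000):
        self.max_errors = connect_max_errors
        self.retry_interval = connect_retry_interval / 1000.0
        self.failures = 0
        self.last_failure = 0.0

    def allowed(self):
        if self.failures <= self.max_errors:
            return True                      # not blacklisted yet
        return time.time() - self.last_failure >= self.retry_interval

    def record_failure(self):
        self.failures += 1
        self.last_failure = time.time()

    def record_success(self):
        self.failures = 0                    # a good connect clears the count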

Purging a SQL Server Service Broker Queue

I have a Service Broker queue set up on a SQL Server 2008 R2 server, and I'm trying to empty the queue. I'm running the script specified here, which has always worked in the past, but now the queue is not being emptied even though the script runs.
Is there a better way to empty the queue?
A better way to empty the queue is to always end the conversation at both ends (both services) after processing all messages. In that case the messages will be removed from the queue.
Could it be possible that some other transaction is processing a message at the time you run the script?
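As a hedged sketch of "process everything, then end the conversation", something along these lines drains one end of the queue from Python via pyodbc; the queue name, connection string and error handling are placeholders, and the other service still needs to end its side of each conversation for a clean teardown:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;UID=myuser;PWD=secret", autocommit=False)
cur = conn.cursor()

while True:
    # Pop one message from the queue inside the current transaction.
    cur.execute("RECEIVE TOP (1) conversation_handle FROM dbo.TargetQueue")
    row = cur.fetchone()
    if row is None:
        conn.commit()
        break
    # ... process the message here ...
    # Ending the conversation is what lets Service Broker remove its remaining
    # messages; END CONVERSATION ... WITH CLEANUP is the heavier, forced variant.
    cur.execute("DECLARE @h uniqueidentifier = ?; END CONVERSATION @h;", row[0])
    conn.commit()

conn.close()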