Does a timed-out HTTP POST result in SQL rollback? - mysql

From a question I asked previously (Preventing duplicate INSERTS MySQL) another question arose:
If a client delivers a POST request to a server, the server handles the POST, inserts into SQL and what not, then sends a reply (or at least HTTP 200 OK), but that reply is not received by the client... does the SQL statement then "not count" or does it auto-rollback or something?
This is very fundamental to using INSERT in POSTs for rows that cannot be uniquely identified by the client making the POST.

Generally speaking, no, the server will not roll back the SQL insert. The request was handled on the server side, and then completed.
Think of the server handling it like entering a function. Once the function's job is complete, it returns whatever it needs to (in this case, the response to the client), and then it is done. The same thing happens here: the server does not wait to see whether or not the client has received the response, as its function has already been completed.
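To make that concrete, here is a minimal sketch of such a handler, assuming a Node/Express server and the "mysql" npm package (both assumptions; the question does not name a stack). The INSERT is committed when the query callback runs, regardless of whether the 200 ever reaches the client:

const express = require('express');
const mysql = require('mysql');

const app = express();
const db = mysql.createConnection({ /* connection settings */ });

app.post('/items', express.json(), (req, res) => {
  // the insert runs and is committed (autocommit) on the server side
  db.query('INSERT INTO items (payload) VALUES (?)', [req.body.payload], (err, result) => {
    if (err) return res.status(500).end();
    // If the client times out and never receives this 200,
    // nothing is rolled back on the server.
    res.status(200).json({ id: result.insertId });
  });
});

app.listen(3000);

The usual remedy (the topic of the linked question) is to make the insert idempotent, e.g. with a client-generated unique key, so that a retried POST cannot create a second row.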


MySQL trigger notifies a client

I have an Android frontend.
The Android client makes a request to my NodeJS backend server and waits for a reply.
The NodeJS server reads a value from a MySQL database record (without sending it back to the client) and waits for that value to change (another Android client changes it with a different request within 20 seconds); when that happens, the NodeJS server replies to the client with the new value.
Now, my approach was to create a MySQL trigger that notifies the NodeJS server when there is an update in that table, but I don't know how to do it.
I thought of two easier ways using busy waiting, just to give you an idea:
the client sends a request every 100ms and the server replies with the SELECT of that value; when the client gets a different reply, it means the value changed;
the client sends a request and the server makes a SELECT query every 100ms until it gets a different value, then it replies to the client with that value.
Both are brute-force approaches, and I would like to avoid them for obvious reasons. Any idea?
Thank you.
Welcome to StackOverflow. Your question is very broad and I don't think I can give you a very detailed answer here. However, I think I can give you some hints and ideas that may help you along the road.
MySQL has no internal way of running external commands as a trigger action. To my knowledge there exists a workaround in the form of an external plugin (UDF) that allows MySQL to do what you want. See Invoking a PHP script from a MySQL trigger and https://patternbuffer.wordpress.com/2012/09/14/triggering-shell-script-from-mysql/
However, I think going this route is a sign of using the wrong architecture or wrong design patterns for what you want to achieve.
First idea that pops into my mind is this: would it not be possible to introduce some sort of messaging from the second nodejs request (the one that changes the DB) to the first one (the one that needs an update when the DB value changes)? That way the first nodejs "process" only needs to query the DB upon real changes, when it receives a message. See the sketch below.
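To illustrate, here is a minimal in-process sketch of that messaging idea using Node's built-in EventEmitter. Note the assumption that both requests are handled by the same Node process; across several processes or machines you need an external broker such as the Redis option below. All names are illustrative:

const EventEmitter = require('events');
const changes = new EventEmitter();

// request 1: wait (up to 20s) for the value of the record to change
function waitForValue(recordId, timeoutMs, callback) {
  const onChange = (newValue) => { clearTimeout(timer); callback(null, newValue); };
  const timer = setTimeout(() => {
    changes.removeListener('record:' + recordId, onChange);
    callback(new Error('timeout'));
  }, timeoutMs);
  changes.once('record:' + recordId, onChange);
}

// request 2: after its UPDATE succeeds, notify the waiting request
function notifyChange(recordId, newValue) {
  changes.emit('record:' + recordId, newValue);
}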
Another question would be, if you actually need to use mysql, or if some other datastore might be better suited. Redis comes to my mind, since with redis you could implement the messaging to the nodejs at the same time...
In general, polling is not always the wrong choice, especially in high-load environments where you expect each poll to collect some data. Polling makes it impossible to overload the processing capacity of the data-retrieving side, since that process controls the maximum throughput. With pushing, you hand that control to the pushing side, and if there are many such pushing sides, control is hard to achieve.
If I were you I would look into Redis and learn how elegantly its publish/subscribe mechanism can be used as a messaging system in your context. See https://redis.io/topics/pubsub
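For example, a rough sketch of both sides with the "redis" npm package (v4 API; the channel naming and the 20-second timeout are assumptions taken from the question):

const { createClient } = require('redis');

// the request that waits: block (up to 20s) until a new value is published
async function waitForChange(recordId) {
  const sub = createClient();
  await sub.connect();
  return new Promise((resolve) => {
    const timer = setTimeout(() => { sub.quit(); resolve(null); }, 20000);
    sub.subscribe('record:' + recordId, (message) => {
      clearTimeout(timer);
      sub.quit();
      resolve(message); // the new value published by the other request
    });
  });
}

// the request that updates: after its UPDATE succeeds, publish the new value
async function publishChange(recordId, newValue) {
  const pub = createClient();
  await pub.connect();
  await pub.publish('record:' + recordId, String(newValue));
  await pub.quit();
}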

postfix: timing of client responses in a milter and in after-queue processing?

I'm currently using postfix-2.11.3, and I am doing a lot of message processing through a milter. This processing takes place before the client is notified that the message is accepted, and it sometimes involves enough work that it delays the client's receipt of the initial SMTP 250 2.0.0 Ok: queued as xxxxxxxxxxx message.
During large email blasts to my server, this milter processing can cause a backlog, and in some cases, the client connections time out while waiting for that initial 250 ... message.
My question is this: if I rewrite my milter as a postfix after-queue filter with no before-queue processing, will clients indeed get the initial 250 messages right away, with perhaps subsequent SMTP messages coming later? Or will the 250 message still be deferred until after postfix completes the after-queue filtering?
And is it possible for an initial 250 message to be received by the client with a subsequent 4xx or 5xx message received and processed later by that same client, in case the after-queue filter decides to subsequently reject the message?
I know I could test this by writing an after-queue filter. However, my email server is busy, and I don't have a test server available, and so I'd like to know in advance whether an after-queue filter can behave in this manner.
Thank you for any wisdom you could share about this.
I managed to set up a postfix instance on a test machine, and I was able to install a dummy after-queue filter. This allowed me to figure out the answer to my question. It turns out that postfix indeed sends the 250 2.0.0 Ok: queued as xxxxxxxxxxx message before the after-queue filter completes.
This means that I can indeed move my slower milter processing to the after-queue filter in order to give senders a quicker SMTP response. Note that once the 250 has been sent, the SMTP session is over, so if the after-queue filter later decides to reject the message, the sender cannot receive a 4xx or 5xx in that same session; the rejection can only come back as a bounce message.

How to deal with multiple requests to the server from a client

My messenger app sends a request to the server for group creation; the server processes the request (making a database entry for the group) and sends back a response. Sometimes, due to a weak connection, the response is not received within the expected time, and as a result the client sends the request for the same group again.
The fault in this case is that the server processes both of these requests and makes two entries (or more, in case of more retries) in the database, with different group_ids for the same group.
How can I avoid multiple entries in the database and keep it consistent?
Because of the multiple entries, when the client reinstalls the app, if there are three entries for a group in the database, all three will be loaded into the app.
One solution I thought of is to check whether a group with the given name already exists, but this is not an acceptable solution, since a client can create more than one group with the same name.
Note:
I'm using MySQL Enterprise Edition for storing entries on the server.
You can think of group creation as working the same way as groups are created in the WhatsApp messenger.
The packet id is unique for such repeated JSON requests sent to the server. Use it as a filter and discard duplicate packet ids, the same as is done with message packets and other requests. A sketch follows below.
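One way to enforce that filter at the database level is a UNIQUE index on the packet id, so a retried request can never produce a second row. A minimal sketch, assuming the "mysql" npm package and a groups table with a packet_id column (table and column names are illustrative):

const mysql = require('mysql');
const db = mysql.createConnection({ /* connection settings */ });

// assumes: ALTER TABLE groups ADD UNIQUE KEY uq_groups_packet (packet_id);
function createGroup(packetId, groupName, callback) {
  // INSERT IGNORE turns a duplicate packet id into a no-op instead of a second row
  db.query('INSERT IGNORE INTO groups (packet_id, name) VALUES (?, ?)',
    [packetId, groupName],
    (err) => {
      if (err) return callback(err);
      // both the original request and any retry resolve to the same group_id
      db.query('SELECT group_id FROM groups WHERE packet_id = ?', [packetId],
        (err2, rows) => callback(err2, rows && rows[0]));
    });
}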

Read after write consistency with mysql and multiple concurrent connections

I'm trying to understand whether it is possible to achieve the following:
I have multiple instances of an application server running behind a round-robin load balancer. The client expects GET-after-POST/PUT semantics; in particular, the client will make a POST request, wait for the response, and immediately make a GET request expecting the response to reflect the change made by the POST request, e.g.:
> Request: POST /some/endpoint
< Response: 201 CREATED
< Location: /some/endpoint/123
> Request: GET /some/endpoint/123
< Response must not be 404 Not Found
It is not guaranteed that both requests are handled by the same application server. Each application server has a pool of connections to the DB. Each request will commit a transaction before responding to the client.
Thus, on one connection the database will see an INSERT statement followed by a COMMIT. On another connection, it will see a SELECT statement. Temporally, the SELECT will be strictly after the COMMIT, though perhaps only by a tiny delay on the order of milliseconds.
The application server I have in mind uses Java, Spring, and Hibernate. The database is MySQL 5.7.11 managed by Amazon RDS in a multiple availability zone setup.
I'm trying to understand whether this behavior can be achieved and how so. There is a similar question, but the answer suggesting to lock the table does not seem right for an application that must handle concurrent requests.
Under ordinary circumstances, you will not have any issue with this sequence of requests, since your MySQL will have committed the changes to the database by the time the 201 response has been sent back. Therefore, any subsequent statements will see the created / updated record.
What could be the extraordinary circumstances under which the subsequent select will not find the updated / inserted record?
Another process commits an update or delete statement that changes or removes the given record. There is not much you can do about this, since it is part of normal operation. If you do not want such a thing to happen, you have to implement application-level locking of the data.
The subsequent GET request is routed not only to a different application server, but to one that uses (or is forced to use) a different database instance, which does not have the most up-to-date state of that record. I would only envisage this happening if there is a severe failure at the application or database server level, or if routing of the request goes really bad (e.g. routed to a data center in a different geographical location). These should not happen too frequently.
If you're using MyISAM tables, you might be seeing the effects of 'concurrent inserts' (see 8.11.3 in the MySQL manual). You can avoid them either by setting the concurrent_insert system variable to 0, or by using the HIGH_PRIORITY keyword on the INSERT.
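Both workarounds are one-liners; a sketch using the "mysql" npm package (the connection settings and table name are illustrative):

const mysql = require('mysql');
const db = mysql.createConnection({ /* connection settings */ });

db.query('SET GLOBAL concurrent_insert = 0'); // server-wide; needs a privileged account
// or, per statement:
db.query("INSERT HIGH_PRIORITY INTO some_endpoint (payload) VALUES ('...')");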

How to Kill MySQL Query Thread issued with $.getJSON Method

I have a query that is sent to the MySQL server using the jQuery $.getJSON method. While the server is processing this query, I want to issue a new query that supersedes the previous one and kills the old thread.
I have tried using the following method from this post, as shown below:
var request = $.getJSON(....);
request.abort();
However, it only aborts the request at the browser level. What I need is to send a kill command to the server, so that it does not keep working on a query whose request has already been aborted.
The only way to do that would be to send a new request that explicitly commands the server to abort the other request.
But for one, I don't think this is possible from PHP at all. Secondly, if you could, it would mean that you have to tell the client about the MySQL thread, so it can later tell the server which one to kill. It will, however, be hard to make PHP return this information if it is actually waiting for that query. It is not only the MySQL query that hangs, but the PHP process too.
MySQLi to the rescue.
If you use MySQLi, you can call mysqli_kill, which accepts a process id. This is the thread id that you get when connecting to MySQL. Call mysqli_thread_id to get this id.
Storing the thread id.
If you store this id in the session, you may be able to get that id on your next request and kill it. But I'm afraid the session may not be saved by the previous request (since it is still running), so the thread id may not be stored yet.
If this is indeed the case, you can make the first request store the thread id in memcache or in another table (a memory table will do). Use the session id as a key. Then, in your kill request, you can use the session id to find the thread id and kill the other request.
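Since the backend here is not necessarily PHP, the same store-and-kill flow can be sketched in Node with the "mysql" npm package, where connection.threadId plays the role of mysqli_thread_id and a KILL QUERY statement plays the role of mysqli_kill. The in-memory map stands in for memcache or a MEMORY table; all names are illustrative:

const mysql = require('mysql');
const runningThreads = new Map(); // sessionId -> MySQL thread id

function runSlowQuery(sessionId, sql, params, callback) {
  const conn = mysql.createConnection({ /* connection settings */ });
  conn.connect((err) => {
    if (err) return callback(err);
    runningThreads.set(sessionId, conn.threadId); // record the thread id before querying
    conn.query(sql, params, (err2, rows) => {
      runningThreads.delete(sessionId);
      conn.end();
      callback(err2, rows);
    });
  });
}

function killQuery(sessionId, callback) {
  const threadId = runningThreads.get(sessionId);
  if (!threadId) return callback(null, false); // nothing running for this session
  const admin = mysql.createConnection({ /* connection settings */ });
  admin.query('KILL QUERY ?', [threadId], (err) => { // aborts the statement, keeps the connection
    admin.end();
    callback(err, !err);
  });
}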
Not for the first request
This will only pose a problem if it is the very first request that hangs, because in that case you will not have a session yet.
(I'm assuming PHP here; it might be another server process too. Either way, it's not JavaScript that's directly connecting to MySQL.)