I have a query that is sent to a MySQL server using the jQuery $.getJSON method. While the server is processing this query, I want to issue a new query that supersedes the previous one and kills the old thread.
I have tried using the following method from this post, as shown below:
var request = $.getJSON(....);
request.abort();
However, that only aborts the request at the browser level. What I need is to send a kill command to the server, so that it does not keep running a query whose result has already been abandoned.
The only way to do that would be to send a new request that commands the server to explicitly abort the other request.
For one, I don't think this is possible from PHP at all. Secondly, even if it were, the client would need to know the MySQL thread id so it could later tell the server which one to kill. It will, however, be hard to make PHP return that information while it is still waiting for the query: it is not only the MySQL query that hangs, but the PHP process too.
MySQLi to the rescue.
If you use MySQLi, you can call mysqli_kill, which accepts a process id. This is the thread id that you get when connecting to MySQL; call mysqli_thread_id to get it.
Storing the thread id.
If you store this id in the session, you may be able to get that id on your next request and kill it. But I'm afraid the session may not be saved by the previous request (since it is still running), so the thread id may not be stored yet.
If this is indeed the case, you can make the first request store the thread id in memcache or in another table (a memory table will do). Use the session id as a key. Then, in your kill request, you can use the session id to find the thread id and kill the other request.
Not for the first request
This will only pose a problem if it is the very first request that hangs, because in that case you will not have a session yet.
(I'm assuming PHP, might be another server process too. Anyway, it's not JavaScript that's directly connecting to MySQL).
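On the client side, the whole flow could look something like this. This is only a rough sketch: slow_query.php and kill_query.php are made-up endpoint names, renderResults is a placeholder callback, and the kill script is assumed to look up the stored thread id for the current session and pass it to mysqli_kill.

function renderResults(data) { /* update the page */ }

var request = $.getJSON('slow_query.php', renderResults);   // original query

function supersede() {
    request.abort();                        // stop waiting in the browser
    // ask the server to kill the MySQL thread of the aborted request,
    // then issue the replacement query
    $.post('kill_query.php').always(function () {
        request = $.getJSON('slow_query.php', renderResults);
    });
}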
I have an Android frontend.
The Android client makes a request to my NodeJS backend server and waits for a reply.
The NodeJS server reads a value from a MySQL database record (without sending it back to the client) and waits for that value to change (another Android client changes it with a different request within 20 seconds); when that happens, the NodeJS server replies to the client with the new value.
Now, my approach was to create a MySQL trigger that notifies the NodeJS server when there is an update in that table, but I don't know how to do it.
I can think of two easier ways using busy waiting, just to give you an idea:
the client sends a request every 100ms and the server replies with the result of a SELECT on that value; when the client gets a different reply, it knows the value has changed;
the client sends a single request and the server runs a SELECT query every 100ms until it gets a different value, then replies to the client with that value.
Both are brute-force approaches, and I would rather not use them for obvious reasons. Any ideas?
Thank you.
Welcome to StackOverflow. Your question is very broad and I don't think I can give you a very detailed answer here. However, I think I can give you some hints and ideas that may help you along the road.
MySQL has no internal way of running external commands as a trigger action. To my knowledge there exists a workaround in the form of an external plugin (UDF) that allows MySQL to do what you want. See Invoking a PHP script from a MySQL trigger and https://patternbuffer.wordpress.com/2012/09/14/triggering-shell-script-from-mysql/
However, I think going this route is a sign of using the wrong architecture or wrong design patterns for what you want to achieve.
First idea that pops into my mind is this: would it not be possible to introduce some sort of messaging from the second nodejs request (the one that changes the DB) to the first one (the one that needs an update when the DB value changes)? That way the first nodejs "process" only needs to query the DB upon real changes, when it receives a message.
Another question would be whether you actually need to use mysql, or if some other datastore might be better suited. Redis comes to my mind, since with redis you could implement the messaging to nodejs at the same time...
In general, polling is not always the wrong choice, especially in high-load environments where you expect each poll to collect some data. Polling makes it impossible to overload the processing capacity of the data-retrieving side, since that process controls the maximum throughput. With pushing, you hand that control to the pushing side, and if there are many pushing sides, control is hard to achieve.
If I was you I would look into redis and learn how elegantly its publish/subscribe mechanism can be used as messaging system in your context. See https://redis.io/topics/pubsub
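To make the redis idea a bit more concrete, here is a rough Node.js sketch of the pattern. It assumes Express, the node-redis v3 callback API and a hypothetical updateValueInMysql helper; the channel name, port and the 20 second timeout are made up.

const express = require('express');
const redis = require('redis');

const app = express();
app.use(express.json());
const publisher = redis.createClient();

// Hypothetical helper that performs the UPDATE on the MySQL record
function updateValueInMysql(value, done) { /* ... */ done(); }

// The request that changes the value: write to MySQL, then publish
app.post('/value', (req, res) => {
  updateValueInMysql(req.body.value, () => {
    publisher.publish('value-changed', String(req.body.value));
    res.sendStatus(204);
  });
});

// The request that waits for the change: subscribe and answer on message
app.get('/value/next', (req, res) => {
  const subscriber = redis.createClient();      // one subscriber per waiter
  const timer = setTimeout(() => {              // give up after 20 seconds
    subscriber.quit();
    res.sendStatus(204);
  }, 20000);

  subscriber.on('message', (channel, message) => {
    clearTimeout(timer);
    subscriber.quit();
    res.json({ value: message });
  });
  subscriber.subscribe('value-changed');
});

app.listen(3000);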
Suppose I receive a big csv file with lots of data in it, and the loopback server must parse all of this data after the file is uploaded and run some processes on it (e.g. create user accounts and do some other registrations related to each account, or just create a database entry for each row in the file). The file may have anywhere from 10,000 to 3,000,000 entries (I'm using MySQL, by the way; maybe there is a better option for that too). It takes a lot of time to process all that, so is there a "neat" way to handle it?

Right now, after I get the file, I return the response to the user in my remote method with callback(null,{message:'got file, server still working'}); and continue processing in the background (in the same remote method; I just don't call the callback again when the work is done, because I already did). Then I run a 500ms timer interval in the front-end that requests the processing status from a different endpoint (I save the progress percentage in a field of a database row, which that endpoint reads).

Is this the way to do it, or is there a better option? I already run the MySQL queries in groups of 10,000 per commit, and I disable foreign key checking too (I'm using the mysql connector and executing queries directly). Thanks in advance :)
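For what it's worth, what is described above is the usual "accept, process in the background, poll for progress" pattern. A stripped-down sketch of it might look like the following; MyModel, parseCsv, insertBatch, saveProgress and getProgress are placeholders, and the LoopBack remoteMethod registration boilerplate is omitted.

// The upload method answers immediately and keeps working in the background
MyModel.importCsv = function (fileId, callback) {
  callback(null, { message: 'got file, server still working' }); // reply now
  processFileInBackground(fileId);         // no second callback later
};

async function processFileInBackground(fileId) {
  const rows = await parseCsv(fileId);                  // placeholder helper
  for (let i = 0; i < rows.length; i += 10000) {
    await insertBatch(rows.slice(i, i + 10000));        // one commit per 10k rows
    await saveProgress(fileId, Math.floor((i / rows.length) * 100));
  }
  await saveProgress(fileId, 100);                      // done
}

// The status method that the front-end polls every 500ms
MyModel.importStatus = function (fileId, callback) {
  getProgress(fileId, (err, percent) => callback(err, { progress: percent }));
};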
I'm trying to understand whether it is possible to achieve the following:
I have multiple instances of an application server running behind a round-robin load balancer. The client expects GET after POST/PUT semantics, in particular the client will make a POST request, wait for the response and immediately make a GET request expecting the response to reflect the change made by the POST request, e.g:
> Request: POST /some/endpoint
< Response: 201 CREATED
< Location: /some/endpoint/123
> Request: GET /some/endpoint/123
< Response must not be 404 Not Found
It is not guaranteed that both requests are handled by the same application server. Each application server has a pool of connections to the DB. Each request will commit a transaction before responding to the client.
Thus the database will, on one connection, see an INSERT statement followed by a COMMIT. On another connection, it will see a SELECT statement. Temporally, the SELECT will come strictly after the COMMIT, though possibly with only a tiny delay on the order of milliseconds.
The application server I have in mind uses Java, Spring, and Hibernate. The database is MySQL 5.7.11 managed by Amazon RDS in a multiple availability zone setup.
I'm trying to understand whether this behavior can be achieved and how so. There is a similar question, but the answer suggesting to lock the table does not seem right for an application that must handle concurrent requests.
Under ordinary circumstances, you will not have any issue with this sequence of requests, since your MySQL will have committed the changes to the database by the time the 201 response has been sent back. Therefore, any subsequent statements will see the created / updated record.
What could be the extraordinary circumstances under which the subsequent select will not find the updated / inserted record?
Another process commits an update or delete statement that changes or removes the given record. There is not much you can do about this, since it is part of normal operation. If you do not want such a thing to happen, you have to implement application-level locking of the data.
The subsequent GET request is routed not only to a different application server, but to one that uses (or is forced to use) a different database instance, which does not have the most up-to-date state of that record. I would only expect this to happen if there is a severe failure at the application or database server level, or if the routing of the request goes really bad (e.g. routed to a data center at a different geographical location). These things should not happen too frequently.
If you're using MyISAM tables, you might be seeing the effects of 'concurrent inserts' (see 8.11.3 in the mysql manual). You can avoid them by either setting the concurrent_insert system variable to 0, or by using the HIGH_PRIORITY keyword on the INSERT.
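For illustration, the two options look like this when issued from application code; the mysql2 package here is just an example client, and both statements are plain MySQL syntax.

const mysql = require('mysql2');
const connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'app' });

// Option 1: disable concurrent inserts server-wide (needs the SUPER privilege)
connection.query("SET GLOBAL concurrent_insert = 0");

// Option 2: HIGH_PRIORITY disables concurrent inserts for this one statement
connection.query(
  "INSERT HIGH_PRIORITY INTO some_table (some_column) VALUES (?)",
  ['some value']
);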
Hi, I am confused about SQL server sessions. What does a session actually mean? Does it keep track of the client like an httpSession? I have read some documents on the query life cycle, but none of them talk about the session. Most of them say that after the query is received by the server it gets parsed, a syntax tree is built, then an execution plan, then the query is executed and the result set is dispatched to the client who issued the query. In this whole story, where does the session on an SQL server (such as MySQL) fit in, and what does it actually do? Or is there no session concept on MySQL (or any SQL server)? Am I imagining it wrong?
A session in this context usually just refers to a single client connection.
The client connects to the DB server and authenticates; this is the start of the session.
When the client disconnects (gracefully or not) the session ends.
This is relevant for things like temporary tables or transactions: Un-committed transactions will be rolled back by the DBMS and all temporary tables created through this connection (=session) are discarded when the client disconnects, i.e. when the session ends.
Note that a client does not necessarily actively end a session or connection. The client may crash, or the network connection may break, or the server may shut down &c. Any of this implicitly terminates the session.
Problems may arise when a (client) application uses a connection pool keeping connections (and sessions) open and handing them out transparently to different application components. When not handled correctly, errors may occur because a given session may already be 'spoiled' by a previous operation. If, for example, one routine on the client creates a temporary table named 'X' and fails to explicitly drop it afterwards, the next routine that 'inherits' this session may encounter an error when trying to create another temporary table of that name, because it already exists in this specific session; which couldn't be the case if the connection/session was freshly created.
"Session" is mainly a generic term. You connect to a server (MySQL, Oracle, FTP, IRC... whatever), you do your stuff and finally disconnect when you're done. That has been a session.
HTTP is a particular case. It's a stateless protocol: if you spend an hour reading a web site, you don't remain connected for a whole hour. You make a quick connection, fetch an item at a time (an HTML document, a style sheet, a picture...) and close the connection. (Internals are actually more complex but that's the general idea.) When you ask for a second page, the server doesn't know who you are: that makes it impossible to keep track of your whole browsing session at protocol level. Thus HTTP sessions were invented: they're a way to emulate physical sessions.
The MySQL session starts when you open a connection to the server. A connection ID is assigned which can be read via the SELECT CONNECTION_ID() statement. The session is terminated when the connection is closed or, in case of persistent connections, after a certain timeout or when the server shuts down.
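You can see this from application code: every new connection is a new session with its own id. A quick sketch, using the Node mysql2 package purely as an example (error handling omitted):

const mysql = require('mysql2');
const a = mysql.createConnection({ host: 'localhost', user: 'root', database: 'test' });
const b = mysql.createConnection({ host: 'localhost', user: 'root', database: 'test' });

// Two connections = two sessions = two different ids
a.query('SELECT CONNECTION_ID() AS id', (err, rows) => console.log('session A:', rows[0].id));
b.query('SELECT CONNECTION_ID() AS id', (err, rows) => console.log('session B:', rows[0].id));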
I have a desktop application that runs on a network and every instance connects to the same database.
So, in this situation, how can I implement a mutex that works across all running instances that are connected to the same database?
In other words, I don't want two or more instances to run the same function at the same time. If one is already running the function, the other instances shouldn't have access to it.
PS: A database transaction won't solve it, because the function I want to mutex doesn't use the database. I've mentioned the database just because it can be used to exchange information across the running instances.
PS2: The function takes about ~30 minutes to complete, so if a second instance tries to run the same function I would like to display a nice message that it can't be performed right now because computer 'X' is already running that function.
PS3: The function has to be processed on the client machine, so I can't use stored procedures.
I think you're looking for a database transaction. A transaction will isolate your changes from all other clients.
Update:
You mentioned that the function doesn't currently write to the database. If you want to mutex this function, there will have to be some central location to store the current mutex holder. The database can work for this -- just add a new table that includes the computername of the current holder. Check that table before starting your function.
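A rough sketch of that idea from application code, with all table, column and helper names made up; it is shown with the Node mysql2 client, but the same two statements work from any language:

const mysql = require('mysql2/promise');

// Whoever manages to flip is_running from 'N' to 'Y' owns the mutex
async function tryAcquire(conn, functionName, computerName) {
  const [result] = await conn.execute(
    "UPDATE function_table SET is_running = 'Y', holder = ? " +
    "WHERE function_name = ? AND is_running = 'N'",
    [computerName, functionName]
  );
  return result.affectedRows === 1;       // true for exactly one caller
}

async function release(conn, functionName) {
  await conn.execute(
    "UPDATE function_table SET is_running = 'N', holder = NULL WHERE function_name = ?",
    [functionName]
  );
}

// usage: if (await tryAcquire(conn, 'foo', hostname)) { ...run the function...; await release(conn, 'foo'); }
// otherwise read the holder column and show "computer X is already running this"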
I think your question may be confused, though. Mutexes should be about protecting resources. If your function is not accessing the database, then what shared resource are you protecting?
Put the code inside a transaction, either in the app or (better) inside a stored procedure, and call the stored procedure.
The transaction mechanism will isolate the code between the callers.
Conversely, consider a message queue. As mentioned, the DB should manage all of this for you, either through transactions or serialized access to tables (à la MyISAM).
In the past I have done the following:
Create a table that basically has two fields, function_name and is_running
I don't know what RDBMS you are using, but most have a way to lock individual records for update. Here is some pseudocode based on Oracle:
-- start a transaction (implicit in Oracle; use START TRANSACTION in MySQL)
SELECT is_running FROM function_table WHERE function_name = 'foo' FOR UPDATE;
-- Check here to see if it is running; if not, you can set is_running to 'Y'
UPDATE function_table SET is_running = 'Y' WHERE function_name = 'foo';
COMMIT;
Now I don't have the Oracle PL/SQL docs with me, but you get the idea. The FOR UPDATE clause locks the record after the read until the commit, so other processes will block on that SELECT statement until the current process commits.
You can use Terracotta to implement such functionality, if you've got a Java stack.
Even if your function does not currently use the database, you could still solve the problem with a specific table for the purpose of synchronizing this function. The specifics would depend on your DB and how it handles isolation levels and locking. For example, with SQL Server you would set the transaction isolation to repeatable read, read a value from your locking row and update it inside a transaction. Don't commit the transaction until your function is done. You can also use explicit table locks in a transaction on most databases which might be simpler. This is probably the simplest solution given you are already using a database.
If you do not want to rely on the database for whatever reason, you could write a simple service that accepts TCP connections from your clients. Each client would request permission to run and report back when done. The server would ensure that only one client gets permission to run at a time. Dead clients would eventually drop the TCP connection and be detected, as long as you have the correct keep-alive settings.
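A bare-bones sketch of such a coordinator, written here in Node.js with a made-up line-based protocol (ACQUIRE / RELEASE / GRANTED / BUSY), just to show the shape of it:

const net = require('net');
let holder = null;                        // socket that currently owns the lock

net.createServer((socket) => {
  socket.setKeepAlive(true, 10000);       // so dead clients are eventually noticed

  socket.on('data', (data) => {
    const command = data.toString().trim();
    if (command === 'ACQUIRE') {
      if (holder === null) {
        holder = socket;
        socket.write('GRANTED\n');
      } else {
        socket.write('BUSY\n');           // caller shows "computer X is busy"
      }
    } else if (command === 'RELEASE' && socket === holder) {
      holder = null;
      socket.write('RELEASED\n');
    }
  });

  // a crashed client drops its connection, which frees the lock
  socket.on('close', () => { if (socket === holder) holder = null; });
}).listen(5000);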
The message queue solution suggested by Xepoch would also work. You could use something like MSMQ or Java Message Queue and have a single message that acts as a run token. All your clients would request the message and then repost it when done. You risk a deadlock if a client dies before reposting, so you would need to devise some logic to detect this, and it might get complicated.