Will IIS ever terminate the thread if a POST gets canceled by the browser?

Environment:
Windows Server 2003 - IIS 6.x
ASP.NET 3.5 (C#)
IE 7,8,9
FF (whatever the latest 10 versions are)
User Scenario:
User enters search criteria against large data-set. After initiating the request, they are navigated to a results page, where they wait until the data is loaded and can then refine the data.
Technical Scenario:
After the user sends search criteria (via an ajax call), the UI calls a back-end service. The back-end service queries transactional system(s) and puts the resulting data into a db "cache" - a denormalized table set up for further refining of the data (i.e. sorting, filtering). The UI waits until the data is cached, and then, upon getting notified that the process is done, navigates to a results page. The results page then makes a call to get the data from the denormalized table.
Problem:
The search is relatively slow (15-25 seconds) for large queries that end up having to query many systems based on the criteria entered. It is relatively fast for other queries (<4 seconds).
Technical Constraints:
We cannot entirely re-architect this search/results system. There are way too many complexities in how the UI and the back-end are tied together. The page is required (because of constraints that cannot be solved on StackOverflow) to transition after the search criteria are submitted.
We also cannot ask the organization to denormalize the data prior to searching, because the data has to be real-time; i.e. if a user makes a change in other systems, the data has to show up correctly if they do a search afterwards.
Process that I want to follow:
I want to cheat a little. I want to issue the "Cache" request via an async HttpHandler in a fire-and-forget model.
After issuing the query, I want to transition the page to the resulting page.
On the transition page, I want to poll the "Cache" table to see if the data has been inserted into it yet.
The reason I want to do this transition right away is that the resulting page is expensive in itself (even without getting the data) - still 2 seconds of load time before it even gets to calling the service that fetches the data from the cache.
Question:
Will the ASP.NET thread that is called via the async handler reliably continue processing even if I navigate away from the page using a javascript redirect?
Technical Boundaries 2:
Yes, I know... This search process does not sound efficient. There is nothing I can do about that right now. I am trying to do whatever I can to get it to perform a little better while we continue researching how we are going to re-architect it.
If your answer is to: "Throw it away and start over", please do not answer. That is not acceptable.

Yes.
There is the property Response.IsClientConnected, which is used to know whether the client is still connected during a long-running process. The reason this property exists is that a process will continue running even if the client becomes disconnected; a premature disconnect must be detected manually via this property and the process manually shut down if you want it stopped. By default, a running process is not discontinued on client disconnect.
Reference to this property: http://msdn.microsoft.com/en-us/library/system.web.httpresponse.isclientconnected.aspx
Update:
FYI, this is a very bad property to rely on these days. I strongly encourage an approach that lets you complete the request quickly and record the long-running task in a database or queue (RabbitMQ or something like that), which in turn uses socket.io or similar to update the web page or app once the task is completed.

How about not doing the async operation on an ASP.NET thread at all? Let the ASP.NET code call a service to queue the data search, return to the browser with a token from the service, and then redirect to the result page that awaits the completed result. The result page will poll the service using that token.
That way, you won't have to worry about whether or not ASP.NET will somehow learn that the browser has moved to a different page.
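A minimal client-side sketch of the polling half of that approach, written in TypeScript; the /search/status and /search/results endpoints and their response shapes are assumptions for illustration, not an existing API:

    // Poll the queuing service with the token until the cached data is ready,
    // then fetch the actual results. Endpoint names are hypothetical.
    async function waitForResults(token: string, intervalMs = 2000): Promise<unknown> {
      for (;;) {
        const res = await fetch(`/search/status?token=${encodeURIComponent(token)}`);
        const { done } = (await res.json()) as { done: boolean };
        if (done) {
          // The cache table has been populated; now load the refined data set.
          const data = await fetch(`/search/results?token=${encodeURIComponent(token)}`);
          return data.json();
        }
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
    }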

Another option is to use Threading (System.Threading).
When the user sends the search criteria, the server begins processing the page request, creates a new Thread responsible for executing the search, and finishes the response, getting back to the browser and redirecting to the results page while the thread continues to execute in the background on the server.
The results page would keep checking with the server whether the query execution has finished, as the started Thread shares its progress information. When it does finish, the results are returned on the next ajax call made by the results page.
Using WebSockets could also be considered, in the sense that the web server itself could tell the browser when it is done processing the query execution, since WebSockets offer full-duplex communication channels.
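A rough client-side sketch of that WebSocket variant, again in TypeScript; the endpoint URL, token, and message shape are assumptions for illustration:

    // The results page opens a socket and waits for the server to push a
    // "search finished" message instead of polling the cache table.
    const socket = new WebSocket("wss://example.com/search-progress?token=abc123");
    socket.onmessage = (event) => {
      const msg = JSON.parse(event.data) as { done: boolean };
      if (msg.done) {
        socket.close();
        // Now make the normal ajax call to load the data from the cache table.
      }
    };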

Related

MySQL trigger notifies a client

I have an Android frontend.
The Android client makes a request to my NodeJS backend server and waits for a reply.
The NodeJS server reads a value in a MySQL database record (without sending it back to the client) and waits for that value to change (another Android client changes it with a different request within 20 seconds); when that happens, the NodeJS server replies to the client with the new value.
Now, my approach was to create a MySQL trigger and when there is an update in that table it notifies the NodeJS server, but I don't know how to do it.
I thought of two easier ways with busy waiting, to give you an idea:
the client sends a request every 100ms and the server replies with the SELECT of that value; when the client gets a different reply, it means that the value changed;
the client sends a request and the server makes a SELECT query every 100ms until it gets a different value, then it replies to the client with that value.
Both are brute-force approaches, and I would like to avoid them for obvious reasons. Any ideas?
Thank you.
Welcome to StackOverflow. Your question is very broad and I don't think I can give you a very detailed answer here. However, I think I can give you some hints and ideas that may help you along the road.
MySQL has no internal way of running external commands as a trigger action. To my knowledge there exists a workaround in the form of an external plugin (UDF) that allows MySQL to do what you want. See Invoking a PHP script from a MySQL trigger and https://patternbuffer.wordpress.com/2012/09/14/triggering-shell-script-from-mysql/
However, I think going this route is a sign of using the wrong architecture or wrong design patterns for what you want to achieve.
The first idea that pops into my mind is this: would it not be possible to introduce some sort of messaging from the second nodejs request (the one that changes the DB) to the first one (the one that needs an update when the DB value changes)? That way the first nodejs "process" only needs to query the DB upon real changes, when it receives a message.
Another question would be whether you actually need to use MySQL, or whether some other datastore might be better suited. Redis comes to mind, since with Redis you could implement the messaging to nodejs at the same time...
In general, polling is not always the wrong choice, especially in high-load environments where you expect each poll to collect some data. Polling makes it impossible to overload the processing capacity of the data-retrieving side, since that process controls the maximum throughput. With pushing, you give that control to the pushing side, and if there are many such pushing sides, control is hard to achieve.
If I were you, I would look into Redis and learn how elegantly its publish/subscribe mechanism can be used as a messaging system in your context. See https://redis.io/topics/pubsub
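A minimal Node.js sketch (in TypeScript) of that pub/sub idea using the ioredis client; the channel name and message payload are made up for illustration:

    import Redis from "ioredis";

    // Subscriber side: the request handler that is waiting for the value to change.
    const sub = new Redis();
    sub.subscribe("value-changed"); // hypothetical channel name
    sub.on("message", (_channel: string, message: string) => {
      const { newValue } = JSON.parse(message);
      // Reply to the waiting Android client with newValue here.
    });

    // Publisher side: the request handler that updates the MySQL row.
    const pub = new Redis();
    async function updateValue(newValue: number): Promise<void> {
      // ...perform the UPDATE against MySQL first, then notify any waiters:
      await pub.publish("value-changed", JSON.stringify({ newValue }));
    }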

Why does the read-after-write method not produce consistent results?

I'm watching this video about Akka.net, and the speaker says read-after-write does not produce consistent results because the order of events is not predictable at the network level. The architecture the presenter is speaking about in this video is as follows:
One Load Balancer
Multiple web servers. Load balancer determines which server to hit.
One database server (SQL Server).
I'm confused as to why consistent results are not achieved with a single database. If a lock is taken before data is written, wouldn't that give you back consistent results?
So I'm going to guess you're talking about the scenario Aaron describes about 10 minutes into this video. Here's the scenario:
User is clicking things on a site and we're firing off asynchronous requests to record the clicks.
The non-obvious part of the scenario he's describing is that we're not waiting for the previous requests to finish before sending more requests to capture a user's clicks (imagine a single-page app where clicks don't cause a full refresh of the page from the server). We want to capture all the clicks.
We have some logic on the server that says, "If the user clicks these 3 things in a row, do some cool reaction..."
We check our condition on the web server ("Has the user clicked these 3 things in a row?") by writing the click event we just got to our DB, then reading to see if they've generated the stream of 3 things clicked to do our cool reaction.
Here's the problem: each request to record a click could be going to a different web server and we're not waiting on the previous one to finish before we send more requests to record clicks. So we have no guarantee that the request to write the first event has completed before we write the second, or the third, etc.
For example, the first request could be delayed (or even fail!) because of a faulty network, so the second request could reach our SQL Server first! And as such, when it goes to read the stream of events that have happened, it might not be aware that a request was sent (but hasn't completed) to record that the first event happened.
I think the point he's trying to make is that in the face of multiple clients (in this example, web servers) writing to a database concurrently, you can't count on, "I sent that first so it will be recorded first". This holds true whether you're using DataStax Enterprise, Cassandra, SQL Server, Oracle, or whatever. Hope that helps!

Sync data on App_Closing event

I have a few clumps of data that need to be synced. The app is a calendar in which dates are stored, along with some other information. So on app exit I need to sync all the dates to the server. The dates and other info are converted to JSON format and sent.
I have used HttpWebRequest for getting the responses from the server, and hence there is a series of callbacks. The function SyncHistory is called in Application_Closing.
What happens is that I can see execution moving into SyncHistory, but once the app is closed, it does not go on to call the other functions.
I need the app to sync data before it stops. I have tried the await keyword; sometimes it calls the functions, but other times it does not.
Where should the code ideally be put? I don't want to sync data every time the user enters data. Are there any other common exit points which run even after the app is closed?
This isn't a great idea - you only have a maximum of 10s to complete Application_Closing before the phone OS will shut down your app forcibly. Once your app is closed (or shut down forcibly) none of your code will run.
The nature of mobile phone networking and cellular networks is that you can't rely on having sent all your data to a server in 10s. You'll have to think of an alternative strategy if you want this to be reliable.
And you haven't even considered the Application_Deactivated scenario, where you get even less time to complete.

UI Autocomplete : Make multiple ajax requests or load all data at once for a list of locations in a city?

I have a text box in my application which allows a user to select a location with the help of UI autocomplete. There are around 10,000 valid locations out of which the user must select one. There are two implementations for the autocomplete functionality:
Fetch the list of locations when the page loads for the first time and iterate over the array in JavaScript to find matching items on every keystroke
Make ajax requests on every keystroke, as searching in MySQL (the db being used) is much faster?
Performance wise, which one is better?
An initial test shows that loading the data at once is the better approach from a performance point of view. However, this test was done on an MBP, where JavaScript processing is quite fast. I'm not sure whether this technique is the better one for machines with low processing power like lower-end Android phones, old systems, etc.
Your question revolves around which is quicker, processing over 10,000 rows in the browser, or sending a request to a remote server to return the smaller result set. An interesting problem that depends on context and environment at runtime. Sending to the remote server incurs network delay mostly, with small amounts of server overhead.
So you have two variables in the performance equation, processing speed of the client and network latency. There is also a third variable, volume of data, but this is constant at 10k in your question.
If both client browser and network are fast, use whatever you prefer.
If the network is faster, use the remote server approach, although be careful not to overload the server with thousands of little requests.
If the client is faster, probably use the local approach. (see below)
If both are slow, then you probably need to choose either, or spend lots of time and effort optimizing this.
Both being slow can easily happen: my phone browser on 3G falls into this category, network latency for a random Ajax request is around 200 ms, and it performs poorly for some JavaScript too.
As user-perceived performance is all that really matters, you could preload the first N values for each letter as variables in the initial page load, then use these for the first-keystroke results; this buys you a few ms.
If you go with the server approach, you can always send the requested result AND a few values for each possible next keystroke. This overlaps what users see and makes it appear snappier on slow networks. E.g.
Client --> request 'ch'
Server responds with a few results for each potential next letter
'cha' = ...
'chb' = ...
Etc
This of course requires some specialized javascript to alternate between Ajax requests and using cached results from previous requests to prefill the selection.
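A rough TypeScript sketch of that client-side logic; the /locations endpoint and the response shape (current results plus a prefetch map keyed by possible next prefixes) are assumptions:

    // Use prefetched results for the next keystroke when available; only fall
    // back to an ajax request on a cache miss.
    type SuggestResponse = { results: string[]; prefetch: Record<string, string[]> };

    const cache = new Map<string, string[]>();

    async function suggestions(prefix: string): Promise<string[]> {
      const cached = cache.get(prefix);
      if (cached) return cached; // served without a round trip

      const res = await fetch(`/locations?q=${encodeURIComponent(prefix)}`);
      const body = (await res.json()) as SuggestResponse;

      // Store the speculative results the server sent for each potential next letter.
      for (const [nextPrefix, values] of Object.entries(body.prefetch)) {
        cache.set(nextPrefix, values);
      }
      cache.set(prefix, body.results);
      return body.results;
    }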
If you are going with the local client searching through all 10k records, then make sure the server returns the records in sorted order. If your autocomplete scanning is able to use 'starting with' selection rather than 'contains' (e.g. typing RO will match Rotorua but not Paeroa) then you can greatly reduce processing time by using http://en.wikipedia.org/wiki/Binary_search_algorithm techniques, and I'm sure there are lots of SO answers in this area.
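As a sketch of the 'starting with' case, a binary search over the sorted list (assumed here to be sorted in lowercase order) might look like this in TypeScript:

    // Find the first index whose value is >= the prefix, then collect the
    // consecutive entries that actually start with it.
    function prefixMatches(sorted: string[], prefix: string, limit = 10): string[] {
      const p = prefix.toLowerCase();
      let lo = 0;
      let hi = sorted.length;
      while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (sorted[mid].toLowerCase() < p) lo = mid + 1;
        else hi = mid;
      }
      const out: string[] = [];
      for (let i = lo; i < sorted.length && out.length < limit; i++) {
        if (!sorted[i].toLowerCase().startsWith(p)) break;
        out.push(sorted[i]);
      }
      return out;
    }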
If there is no advantage to querying the backend every time, don't do it.
What could be an advantage of querying the backend all the time? If the amount of data returned by the initial call is too heavy (bandwidth, JavaScript processing time to prepare it, overall time), the partial request on every keystroke could be the smarter option.

Implementing a live voting system

I'm looking at implementing a live voting system on my website. The website provides a live stream, and I'd like to be able to prompt viewers to select an answer during a vote initiated by the caster. I can understand how to store the data in a mySQL database, and how to process the answers. However:
How would I initially start the vote on the client-side and display it? Should a script be running every few seconds on the page, checking another page to see if a question is available for the user?
Are there any existing examples of a real-time polling system such as what I'm looking at implementing?
You would have to query the server for a new question every few seconds.
The alternative is to hold the connection open until the server sends more data or it times out, which just reduces (but does not eliminate) the server hits. I think it is called "long polling". http://en.wikipedia.org/wiki/Push_technology
You will have to originate the connection from the client-side. The simplest solution is to have the page make an AJAX request every second or so. Web pages don't have to return immediately (they can take 30 seconds or more before responding without the connection timing out). This, opening one connection which doesn't respond until it has something to say, is "long-polling".
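A rough TypeScript sketch of such a long-polling loop on the client; the /poll-question endpoint (held open by the server until a question is available or it times out) is an assumption:

    async function longPollQuestions(onQuestion: (q: unknown) => void): Promise<void> {
      for (;;) {
        try {
          const res = await fetch("/poll-question");
          if (res.status === 200) {
            onQuestion(await res.json()); // a new question arrived: show the vote UI
          }
          // 204 / server timeout: nothing new, just loop and reconnect.
        } catch {
          // Network hiccup: back off briefly before retrying.
          await new Promise((resolve) => setTimeout(resolve, 2000));
        }
      }
    }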
You could use setTimeout in JavaScript to make AJAX requests every few seconds to check whether there are new questions.
Yes, long polling might be better, but I'm sure it's a bit more complex. So if you are up to the job, go ahead and use it!
Here's a bit more info on the topic:
http://www.webdevelopmentbits.com/avoiding-long-polling
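A simple sketch of that setTimeout approach in TypeScript; the /current-question endpoint, which returns the active question or null when no vote is running, is an assumption:

    function pollForQuestion(render: (question: unknown) => void, intervalMs = 3000): void {
      const check = async (): Promise<void> => {
        const res = await fetch("/current-question");
        const question: unknown = await res.json();
        if (question !== null) {
          render(question); // hand the question to the page's vote UI
        }
        setTimeout(check, intervalMs); // schedule the next check regardless
      };
      void check();
    }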