I'm using redux + react-redux + react-native
There are times (when typing on a text field for example) that multiple requests are sent to the server. How can I handle this case, so that only the last one is processed on the client?
I've read that fetch has no way to reject or cancel its promise, so I'm not sure whether there is a way to tell stale promises apart, or a flow or middleware to handle this correctly in redux, like keeping track of the order of all requests.
I'm currently using redux-saga or redux-observable. redux-observable is built on RxJS internally, and both libraries can cancel work started by previous actions: with takeLatest, only the last action gets completely executed.
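For example, here is a minimal redux-saga sketch of that idea. The searchApi helper and the SEARCH_* action types are made up for illustration, not taken from the question:

import { call, put, takeLatest } from 'redux-saga/effects';

// Hypothetical API helper: runs the search request and parses the JSON response.
const searchApi = query =>
  fetch('/api/search?q=' + encodeURIComponent(query)).then(res => res.json());

function* handleSearch(action) {
  try {
    const results = yield call(searchApi, action.query);
    yield put({ type: 'SEARCH_SUCCEEDED', results });
  } catch (error) {
    yield put({ type: 'SEARCH_FAILED', error });
  }
}

export default function* rootSaga() {
  // takeLatest cancels a still-running handleSearch whenever a new
  // SEARCH_REQUESTED arrives, so only the last request "wins".
  yield takeLatest('SEARCH_REQUESTED', handleSearch);
}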
You should have:
an action that changes the state of the value of the text field
another action that makes the request on submit (or on change, using debounce)
Here is an example: https://topheman.github.io/react-es6-redux/ (try the dev version, you'll have access to redux-devtools & sourcemaps)
Useful resource if you don't know about debounce: https://davidwalsh.name/javascript-debounce-function
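As a rough sketch of the debounced onChange idea (using lodash's debounce for brevity; the action types are made up, and with react-redux you would normally get dispatch from connect instead of using the store directly):

import { debounce } from 'lodash';
import { store } from './store'; // wherever your configured store lives

// Only fire the search action once the user has stopped typing for 300 ms.
const debouncedSearch = debounce(text => {
  store.dispatch({ type: 'SEARCH_REQUESTED', query: text });
}, 300);

const onChangeText = text => {
  store.dispatch({ type: 'QUERY_CHANGED', text }); // update the field state immediately
  debouncedSearch(text);                           // request only after a pause in typing
};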
In Angular 2+, I have a component A which calls service A, where I make some changes and call service B (HTTP calls). The data is simply passed back to service A, but now I need to subscribe in service A to see the data and also subscribe in component A to display the data there.
Why do I need to subscribe in two places? That means the HTTP call is made twice (which is not good at all).
What is the best way to fetch and store data in service A by subscribing, do all the manipulation there, and simply send that object back to component A for display? I even tried setting a variable inside the subscribe block in service A, but when I try to log that variable outside the subscribe block, it is undefined.
Thanks for the help.
While searching for the answer, I found one way (or call it a workaround): use the async/await feature in Angular with HttpClient.
It basically waits at that line of execution until you get a result (success or error), and only then proceeds to the next line.
For example:
async myFunction() {
  this.myResult = await this.httpClient.get(this.url).toPromise();
  console.log('No issues, it will wait till myResult is populated.');
}
Explanation:
Adding async in front of the function tells it that execution needs to wait, and at the desired place (usually the HTTP service call, since I need to wait for the result) we put await. Execution then pauses until the response comes back, and afterwards you can simply return the variable.
I have a Redux app with React Router (based on https://github.com/este/este).
Inside one Route, there may be more than one AJAX call (fired by redux-promise-middleware & redux-thunk). When the page changes (via react-router) I wish to reject all remaining _SUCCESS or _FAILED callback actions fired by the previous route.
What is the best way to do this?
I'd suggest that you make the data you fetch page-aware, meaning that in the action where the fetch is started, you add a page-context. When the reducer gets the data, it can either save it for that page-context or throw it away if the location no longer matches the browser's current location (meaning the user has navigated away). If you keep the data for the different pages/contexts, you also get the bonus of having it ready if the user returns (if that is something you'd want).
Example: you are on URL "/pageX". You start fetching data, and the action makes sure the page-context is remembered for when the SUCCESS action is dispatched. When the reducer handles the action, it stores the data in store.context["/pageX"].data (or similar). Note: this is also where you could throw the data away (reject it) in case the current location is no longer the one it was requested for.
The UI should only know how to ask for and use data from the context that matches its location.
You might also want to consider tracking the browser location in the state of the app...
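A rough sketch of that idea, assuming redux-thunk; the action type, the pageContext field and the store shape are made up for illustration:

// Thunk: remember which page started the fetch.
const fetchItems = pageContext => dispatch =>
  fetch('/api/items')
    .then(res => res.json())
    .then(data => dispatch({ type: 'FETCH_ITEMS_SUCCESS', pageContext, data }));

// Reducer: store the data under the page-context it was requested for,
// or throw it away if the user has already navigated somewhere else.
const contextReducer = (state = {}, action) => {
  switch (action.type) {
    case 'FETCH_ITEMS_SUCCESS':
      if (action.pageContext !== window.location.pathname) {
        return state; // stale response for a page we've left, reject it
      }
      return { ...state, [action.pageContext]: { data: action.data } };
    default:
      return state;
  }
};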
How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json">
  </platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the results of the request the way the browser would if the service worker were not handling the request.
var HoodieAPIFetchHandler = function(request, values, options) {
  return fetch(request);
};
What doesn't seem to be working correctly: after user 1 has signed in and then signed out, and user 2 signs in, I can see in Chrome Dev Tools' Network tab that Hoodie regularly continues to make requests to BOTH users' API endpoints, like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
What you're seeing in the Network tab of the controlled page is expected—the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged—they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways that you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see if it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
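As a hypothetical sketch of that kind of cache maintenance in custom-fetch-handler.js (this assumes sw-toolbox's global toolbox object is available to scripts pulled in via <platinum-sw-import-script>, and the sign-out handler itself is made up, not part of the question):

// Hypothetical extra handler for the app's sign-out request: clear any cached
// copy of the per-user API endpoint so the next user never sees stale data.
var SignOutFetchHandler = function(request, values, options) {
  return fetch(request).then(function(response) {
    // sw-toolbox's uncache() returns a promise; we don't need to wait on it here.
    toolbox.uncache('/_api/');
    return response;
  });
};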
Objective: to block the UI until the Ajax validation call returns, with some dialog or message.
The problem: how, in a Spine/MVC way, am I supposed to append and then remove the HTML content on top of the current view?
Half-baked solution: inside the controller ->
bind the model's ajaxSuccess callback to remove the message HTML, and append the "loading" message when saving the model object.
Any ideas?
Thanks.
Quick answer: you should try to avoid it altogether. It's annoying for the user and against the core philosophy of spine.js.
http://spinejs.com/docs/introduction :
Core values:
[...]
Asynchronous interfaces - Too many JavaScript applications & frameworks don't take full advantage of the power of client-side rendering. End-users don't care if background requests to the server are pending, and don't want to see loading messages and spinners. Users want unblocked interfaces, and instant interaction. To enable this, Spine stores and renders everything client-side, communicating with the server asynchronously.
I understand that sometimes blocking just can't be avoided. In those cases I would follow this pattern:
In the controller:
Add the blocking overlay HTML
Call the model method that is asynchronous but needs blocking
Wait for the model to emit an event that signals that the action is finished, e.g. validationDone
In the model:
Write asynchronous method as usual
In both success and error handlers, emit the validationDone event
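A rough sketch of that pattern in plain JavaScript (the Account model, the /validate endpoint and the overlay markup are all made up; it assumes Spine's usual bind/unbind/trigger event API on model instances and jQuery for the Ajax call and DOM work):

var Account = Spine.Model.sub();
Account.configure("Account", "email");

Account.include({
  // Asynchronous validation; signals completion via the validationDone event.
  validateRemotely: function() {
    var self = this;
    $.ajax({
      url: "/validate",
      data: { email: this.email },
      success: function() { self.trigger("validationDone", true); },
      error: function() { self.trigger("validationDone", false); }
    });
  }
});

var Accounts = Spine.Controller.sub({
  submit: function(account) {
    // 1. Add the blocking overlay.
    var overlay = $('<div class="blocking-overlay">Validating...</div>').appendTo("body");
    // 2. Wait for the model to signal that the work is finished.
    var done = function(valid) {
      account.unbind("validationDone", done);
      overlay.remove(); // unblock the UI whether validation passed or failed
    };
    account.bind("validationDone", done);
    // 3. Kick off the asynchronous method.
    account.validateRemotely();
  }
});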
I want to create some kind of AJAX script or call that will continuously check a MySQL database for any new messages that have arrived. When there is a new message in the database, the AJAX script should invoke some kind of alert box or message box.
I'm not quite an AJAX expert (yet, anyway) and have Googled around to find a solution, but I'm having a hard time figuring out where to begin. I imagine it is the same kind of method an AJAX chat uses to see if any new chat message has been sent.
I've also tried searching for AJAX (XMLHttpRequest) calls in a continuous, infinite loop, but I still haven't got a solution.
I hope there is someone who can help me with such an AJAX script or maybe nudge me in the right direction.
Thanks
Sincerely
Mestika
Step 1 - You need a server-side page that you can call that checks to see if something new has arrived.
Step 2 - You could adapt the sequential AJAX request script from here (it uses jQuery to simplify the AJAX requests):
http://www.stevefenton.co.uk/Content/Blog/Date/201004/Blog/AJAX-and-Race-Conditions/
Currently, this script is for queuing a list of sequential AJAX requests, but you could use it to continually check by changing it like this...
var InfiniteAjaxRequest = function (uri) {
  $.ajax({
    url: uri,
    success: function (data) {
      // do something with "data"
      if (data.length > 0) {
        alert(data); // Do something sensible with it!
      }
      InfiniteAjaxRequest(uri);
    },
    error: function (xhr, ajaxOptions, thrownError) {
      alert(thrownError);
    }
  });
};

InfiniteAjaxRequest("CheckForUpdate.php");
What are the benefits of using this script?
Well, rather than checking every "x" seconds, it will only check once the previous request has been received, so it chains the requests. You could add in a delay to throttle this constant requesting, which I would highly recommend you do - otherwise you will be hitting your site with way too much traffic. You would add that delay AFTER you've done something with the response, but BEFORE you call back into "InfiniteAjaxRequest".
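For instance, a throttled variant of the function above might look like this; the 5-second delay is an arbitrary choice, not something from the original script:

var ThrottledAjaxRequest = function (uri, delay) {
  $.ajax({
    url: uri,
    success: function (data) {
      if (data.length > 0) {
        alert(data); // Do something sensible with it!
      }
      // The delay goes here: after handling the response, before the next request.
      setTimeout(function () {
        ThrottledAjaxRequest(uri, delay);
      }, delay);
    },
    error: function (xhr, ajaxOptions, thrownError) {
      alert(thrownError);
    }
  });
};

ThrottledAjaxRequest("CheckForUpdate.php", 5000);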
Here's your nudge:
Get one of the available JavaScript frameworks (jQuery seems to be the most common, but there are others)
flip through the documentation on the AJAX methods it provides, and choose a method that seems appropriate for your task
build a request to your site that fetches the info and reacts on the response (shows a message box or updates some part of your page), wrap that in a function
make sure request errors do not go unnoticed by implementing an error handler
check out setInterval() to call that function you've just made repeatedly
final step: make sure the interval is stopped in case of an error condition (or even provide an on/off button for the user) so the server is not hammered needlessly
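Put together, that might look roughly like this (jQuery, a made-up check-for-messages.php endpoint, and a 10-second interval picked arbitrarily):

function checkForMessages() {
  $.ajax({
    url: "check-for-messages.php",
    dataType: "json",
    success: function (messages) {
      if (messages.length > 0) {
        alert("You have " + messages.length + " new message(s).");
      }
    },
    error: function () {
      clearInterval(pollTimer); // stop polling so the server isn't hammered needlessly
      alert("Polling stopped because of an error.");
    }
  });
}

var pollTimer = setInterval(checkForMessages, 10000);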
There is a technique called Comet whereby your client-side script opens an HTTP request which remains open for a long time. The server can then push data into the response as events happen. It's a technique for delivering push notifications.
The Wikipedia link has more information on real-world implementations.
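A bare-bones long-polling client could look like this (jQuery again for brevity; the /messages/poll URL is made up, and the server side would hold the request open until it actually has something to send):

function longPoll() {
  $.ajax({
    url: "/messages/poll",
    dataType: "json",
    timeout: 60000, // the request may legitimately stay open for up to a minute
    success: function (message) {
      alert(message.text); // Do something sensible with the pushed data
      longPoll();          // reconnect immediately and wait for the next push
    },
    error: function () {
      setTimeout(longPoll, 5000); // back off briefly (e.g. after a timeout), then reconnect
    }
  });
}

longPoll();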
Instead of polling the server with AJAX calls, you could also use push technology (Comet).
This way you can push the results to the client(s) as soon as the server is done with its work.
There are many frameworks available, like:
a jQuery plugin
CometD
Atmosphere (if you're on Java)