React Native fetch vs XMLHttpRequest performance (JSON)

I'm trying to figure out why parsing a large JSON response (4-5 MB) takes about 10 times longer when using axios (which uses XMLHttpRequest) than when using plain fetch and calling .json() on the result. Even worse, with XMLHttpRequest the whole UI becomes unresponsive, while with fetch there may be a brief block while the JSON is parsed, but the UI stays responsive through pretty much the entire download.
I can't find any documentation about the internals of fetch, and outdated blog posts say it just uses XMLHttpRequest internally. If that were true, both methods should have similar performance.
Note: this difference was seen on both Android and iOS.

What I can find is that fetch does its JSON parsing at a lower level than axios does. With axios the parsing happens later in the request cycle, but in the react-native package the JSON is parsed straight after the response is received.
The extra layer of XMLHttpRequest returning the data to axios in string form, which axios only then starts parsing, is most likely where the performance goes.
The react-native version of fetch is also just a polyfill, so the polyfill itself isn't the cause. The difference in performance is the way fetch parses the data straight off the XMLHttpRequest response.
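A minimal sketch of the two code paths (hypothetical URL; timings will vary by device):

// Hypothetical endpoint, for illustration only.
const url = "https://example.com/big.json";

// axios-style path: XMLHttpRequest hands the whole body back as a string,
// and JSON.parse then runs as one long blocking call on the JS thread.
function viaXhr(): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("GET", url);
    xhr.onload = () => resolve(JSON.parse(xhr.responseText));
    xhr.onerror = () => reject(new Error("network error"));
    xhr.send();
  });
}

// fetch path: parsing happens inside response.json().
async function viaFetch(): Promise<unknown> {
  const response = await fetch(url);
  return response.json();
}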

From the Mozilla API documentation:
...an easy, logical way to fetch resources asynchronously across the network.
This kind of functionality was previously achieved using XMLHttpRequest. Fetch provides a better alternative that can be easily used by other technologies...
Fetch isn't just a wrapper around XMLHttpRequest, which explains the performance difference between the two alternatives.

Related

Iterating/Paging REST API with BizTalk Send Port - Best Practice?

There is a REST API I need to pull from, and each response has a 100-record limit (300k+ total records). I'm trying to think through the best practice for paging/iterating via BizTalk adapter(s). FYI, this is complicated by the fact that I have 10 or so endpoints I need to page through, and I have to use a custom pipeline for each (to convert to XML and specify a namespace, since a JSON response is the only type available).
The main question for me is how to manage the paging and the multiple endpoints efficiently. The page number and offset are in the JSON response, so my first guess is that I'd have to build an orchestration that analyzes the response and creates the next request from it. I know there are a lot of ways to do this, so I'm curious what best practice dictates and which way would be the most efficient.
Can I somehow get away with NOT using an orchestration?
Can/should I make use of a dynamic send port?
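Whatever BizTalk artifact ends up driving it, the underlying loop is the same: request a page, read the paging fields out of the response, build the next request, and stop when everything has been pulled. A minimal sketch of that pattern in TypeScript (endpoint and field names are hypothetical, since the actual response shape isn't shown):

// Hypothetical paging fields; the real API's response shape may differ.
interface Page {
  records: unknown[];
  total: number;
}

// Pull every record from one endpoint, 100 at a time.
async function pullAll(endpoint: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;
  for (;;) {
    const res = await fetch(`${endpoint}?limit=100&offset=${offset}`);
    const page: Page = await res.json();
    all.push(...page.records);
    offset += page.records.length;
    if (page.records.length === 0 || offset >= page.total) break;
  }
  return all;
}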

Sending continuous data over HTTP with Go

I am currently working on a web service in Go that essentially takes a request and sends back JSON, rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders that would get replaced later by real values that correspond to the names. The whole idea behind that is the website would connect to the service, get back JSON almost immediately to populate a table, then wait until the actual data to fill in came back from the service.
This is where I encounter an issue, potentially because I am newish to Go and don't understand its vast libraries completely. The previous method I used to send JSON back through the HTTP request was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually ping the service, which is costly now and will be disastrous in the future.
So, I am seeking some industry knowledge on my issue. Can HTTP connections be continuous like that, where data is sent piecewise through the same HTTP request? Is that even a smart feature computationally and security-wise, or are there better ways to do what I am proposing? Finally, does Go even support a feature like that, and how would I handle it asynchronously for performance?
For the record, my website is using React.js.
I would use WebSockets (over HTTPS) to achieve this effect, rather than a long-persisting TCP connection, or even in addition to one. See the golang.org/x/net/websocket package from the Go developers or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit for usage details. You might use padding and smaller subunits to allow interrupting and restarting a submission, or a kind of diff protocol to rewrite previously submitted JSON. I found WebSockets pretty stable even across small connection breakdowns.
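On the website side (the question mentions React.js), the browser's built-in WebSocket API is enough to take the placeholder names immediately and fill in real values as the server finishes computing them. A minimal sketch, with a hypothetical URL and message shape:

// Hypothetical URL and message format; adjust to the actual service.
type Update =
  | { kind: "placeholders"; names: string[] }
  | { kind: "value"; name: string; value: number };

const socket = new WebSocket("wss://example.com/updates");

socket.onmessage = (event: MessageEvent) => {
  const update: Update = JSON.parse(event.data);
  if (update.kind === "placeholders") {
    console.log("populate table with", update.names); // placeholders arrive first
  } else {
    console.log(update.name, "=", update.value); // real value, possibly 10+ s later
  }
};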
Go does have keep-alive ability via net.TCPConn's SetKeepAlive; the github.com/felixge/tcpkeepalive package wraps it with finer control:
// import "github.com/felixge/tcpkeepalive" and "time"; conn is a net.Conn
kaConn, err := tcpkeepalive.EnableKeepAlive(conn)
if err != nil {
    // handle the error
}
kaConn.SetKeepAliveIdle(30 * time.Second)    // idle time before probing starts
kaConn.SetKeepAliveCount(4)                  // failed probes before dropping
kaConn.SetKeepAliveInterval(5 * time.Second) // time between probes
Code from felixge
You can expose a REST API as a web service and send the data as JSON, so you can continuously send data over a communication channel.
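If you stay with plain HTTP instead of WebSockets, a single request can still deliver data piecewise: the server flushes partial output (for example, one JSON document per line) and the client reads the body as a stream. A sketch of the reading side using the fetch streaming API (hypothetical URL; assumes the server sends newline-delimited JSON):

// Read one long-lived response and handle each JSON line as it arrives.
async function consumeStream(url: string): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop()!; // keep any partial trailing line
    for (const line of lines) {
      if (line.trim()) console.log(JSON.parse(line)); // one complete update
    }
  }
}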

Is it OK to just use POST method and JSON format for a REST-like API in Scala/Play

We decided to use the POST method and JSON format for all of our internal APIs, which makes everything simpler. But then we realized that this is not truly RESTful. Moreover, it seems that GET requests are more lightweight than POSTs under high load.
We have a problem regarding GET methods: we have to bind our criteria object to the HTTP request (query string), which forces us to build a Form object for each criteria model. As you know, building the Form object has to be done manually; there is no automation available like what we have for JSON formatters (Macro Inception).
Another issue is that we have to decide whether to use route parameters or the query string.
I think it's simpler to use a single HTTP method and make all API calls uniform. Does it make sense?
POST is the method to be used for any operation that isn't standardized by the HTTP protocol, and simple retrieval is standardized in the GET method. So, using POST for simple retrieval isn't RESTful. More than that, it seems like you want to use POST so you can treat querystring parameters in the same way as the POST payload, but REST URIs are atomic identifiers, including the querystring. Your application shouldn't rely on URI semantics, and extracting bits of information that serve any purpose other than identification also doesn't make much sense in REST.
Frankly, from what you describe your API is so far from being considered truly RESTful that this shouldn't be a concern at all. Do whatever is more consistent with your tools and works better for your application. REST isn't for everyone, and worrying about designing an API that's truly RESTful when that isn't a requirement for your application is more likely to lead to bad design choices.
There's absolutely nothing wrong with using POST like you're describing. In fact, GET requests should not alter the state of the server but instead should only be used for retrieval. In other words, if you're sending data to the server to, for instance, create an entity, using GET would be technically incorrect.
There's nothing you're describing that sounds "not RESTful." POST can definitely be part of a RESTful architecture.
That said, the HTTP method you use should correspond to the action it will perform. For example, if you're retrieving an entity by ID, you should use GET whereas if you're updating an entity by ID, you should use POST or PUT. This gives developers using the API a hint as to the side effects and intended usage of the various API methods.
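As a concrete illustration of matching the method to the action (hypothetical routes and payloads, shown as client calls):

// Retrieval: GET, criteria in the query string, no body.
const found = await fetch("/api/users?active=true&page=2");

// Creation: POST, content in a JSON body.
const created = await fetch("/api/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Jane" }),
});

// Update by ID: PUT (or POST), again with a JSON body.
const updated = await fetch("/api/users/42", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Jane Q." }),
});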

twisted - transfer data using json

I need to transfer data (objects) between a client and a server, and Twisted seems like a good way to accomplish this. I've been doing a lot of searching but still haven't found any example that explains the basic principle, so any simple code would help.
Thanks!
EDIT
Both client and server are written in Python.
The data may be large, so I need fast, reliable transmission (I've taken a look at producers; are they a good fit?).
Flask is great, but I am using another framework, so the whole networking thing relies on Twisted.
It's hard to tell if your question is more about JSON, Python, or Twisted, but here's an overview; more can follow once the specifics are known. Perhaps you could add some more info to your question so we can offer more assistance :-)
Re JSON: JSON is just a string with a defined structure. If you are working in Python and have an object to send as JSON, you need to convert the object to a JSON string:
import json
json_string = json.dumps(objectName)   # objectName must be JSON-serializable
objectName2 = json.loads(json_string)  # parse it back on the receiving side
If your client is JavaScript, then instead of json.dumps you might use JSON.stringify(objectName).
If you intend to use JavaScript for clients, then some of the frameworks like jQuery make it very easy.
Python's json.dumps has a lot of optional arguments, most of which you won't need. You can see the options at https://docs.python.org/2/library/json.html
Python is Python; I assume you know how to create and populate objects. Will your client be Python, JavaScript, or something else? From a JavaScript client to a Python server you would most likely use Ajax to send requests and get responses.
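For the JavaScript-client case, a minimal sketch of that round trip (hypothetical URL; plain fetch shown, though jQuery's $.ajax works the same way):

// Send an object to the Python server as JSON and parse the JSON reply.
async function sendObject(obj: object): Promise<unknown> {
  const response = await fetch("https://example.com/api", { // hypothetical URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(obj), // JavaScript's counterpart to json.dumps
  });
  return response.json();      // counterpart to json.loads on the reply
}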
Twisted allows you to easily create a server that listens on a given port; when data arrives, an event fires that supplies the data received. You can then do whatever you need to with the data. Just be careful about doing blocking things like database inserts, since the server may miss data or otherwise misbehave if you block its event loop. Twisted can be difficult to learn initially, but it is a very powerful and reliable system that is well proven. One alternative to consider, particularly if your clients are not Python, is Node.js. In my opinion, Node is a little easier to grasp initially, and there are thousands of add-on modules that let you do almost anything you'd want. I use both Twisted and Node for different things.
Neither Node.js nor Twisted is software you can use to just quickly spin up a server or client without some study and experimentation. Using Twisted or Node.js properly and confidently, with all their features and goodness, requires a bit of research and work on your part.
There are excellent frameworks like Flask that can be used to build a server that reacts to a number of different Ajax calls from a client: a single server can respond to several different kinds of requests instead of needing a server for each Ajax type.
This is a small library that serializes an object with all its children to JSON and also parses it back to a fully working object:
https://github.com/Toubs/PyJSONSerialization/

What is the most efficient way to send OData payloads over the wire? "Dense JSON?"

I'm designing a distributed application that will consist of a variety of REST services. Lately I've been going back and forth about whether to implement my REST services using the ASP.NET MVC 4 Web API or OData. Web API seems like it may someday be what I need, but right now it's only half-baked. Specifically, it only has a partial implementation of OData-style URI querying and doesn't do hypermedia out of the box.
So this forces me to take another long hard look at OData. I really like the URI querying capability and structural hypermedia for lazy loading; I think I will use these features a lot in my application. However, the Atom Pub specification appears to be grossly inefficient.
I recently read a post about an efficient format for OData which mentions "dense JSON" but such a thing does not appear to actually exist. Is this true? And even if there's no such thing as dense JSON, regular JSON is still much more efficient than Atom Pub, correct?
Is there any situation where I would want to use Atom Pub over JSON?
There should be very little difference between ATOM and JSON on the semantic level with OData. Also most OData servers (WCF Data Services for sure) support both, so it's a choice of the client which one to use. As the blog post from Pablo mentions, to get the best payload size you should enable HTTP compression. It works great on both ATOM and JSON.
Reading JSON tends to be faster (XML parsing is kind of expensive), but that's if you're concerned with CPU consumption on the client. If I remember correctly, last time I saw the numbers, the compressed payload size for ATOM and JSON is not that different.
AtomPub is usually easier to consume in a client that has good XML or Atom libraries available and no JSON support, and vice versa. Other than that, there should not be much of a difference.
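Since the format is the client's choice, content negotiation is all it takes to prefer JSON; a sketch (hypothetical service URL; OData services generally honor the Accept header, and most HTTP stacks negotiate gzip compression on their own):

// Ask the OData service for JSON instead of AtomPub via the Accept header.
const response = await fetch("https://example.com/odata/Products", { // hypothetical URL
  headers: { Accept: "application/json" }, // "application/atom+xml" for AtomPub
});
const products = await response.json();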