I don't know whether I am asking a valid question.
My problem is that I want to send a large amount of data from one application to another. I am using JSONP to pass the data to a handler file, which stores it in a database. Because the data is large, I am dividing it into chunks and sending the packets in a loop; the more packets there are, the longer it takes to transfer the complete data, which ultimately causes a performance problem. (FYI, my web server is a bit slow.)
Is there any way I can compress my data and send it all in one go rather than sending it in packets?
OR
Is there any other way I can pass this large data from one application to another?
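For reference, this is roughly the kind of single compressed request I have in mind (a hypothetical sketch only; it assumes the receiving handler accepts a cross-origin POST and gunzips the request body itself, and the handler URL is made up):

```typescript
// Hypothetical sketch: compress the whole payload once and send a single POST,
// instead of looping over JSONP chunks. Assumes the pako library is available
// and that the handler endpoint accepts cross-origin POSTs and decompresses
// gzip-encoded bodies on its side.
import { gzip } from "pako";

async function sendCompressed(bigPayload: object): Promise<void> {
  const json = JSON.stringify(bigPayload);                 // serialize once
  const compressed = gzip(new TextEncoder().encode(json)); // Uint8Array, usually much smaller

  const response = await fetch("https://other-app.example.com/handler.php", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip", // the handler must gunzip the body itself
    },
    body: compressed,
  });

  if (!response.ok) {
    throw new Error(`Handler rejected the upload: ${response.status}`);
  }
}
```

Would something along these lines work, or is there a better option?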
Need this ASAP
Thanks in advance.
I am facing the problem of parsing large JSON results from a REST endpoint (Elasticsearch).
Setting aside the fact that the design of the system has its flaws, I am wondering whether there is another way to do the parsing.
The REST response contains 10k objects in a JSON array. I have tried the native JSON mapper of Elasticsearch as well as Jsoniter. Both lack performance and slow the application down; the request duration rises to 10-15 seconds.
I will push for a change to the interface, but the big result list will remain for the next six months.
Could anyone give me advice on how to speed up performance with Elasticsearch?
Profile everything.
Is Elasticsearch slow in generating the response?
If you perform the query with curl, redirect the output to a file, and time it, what fraction of your app's total time does that account for?
Are you running it locally? You might be dropping packets/being throttled by low bandwidth over the network.
Is the performance hit purely in decoding the response?
How long does it take to decode the same blob of JSON using Jsoniter once loaded into memory from a static file?
Have you considered chunking your query?
What about spinning it off as a separate process and immediately returning to the event loop?
There are lots of options and not enough detail in your question to be able to give solid advice.
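For instance, here is a quick way to separate the transfer time from the decode time (a rough sketch; it assumes Elasticsearch is reachable at localhost:9200 and uses a made-up index name, and JSON.parse is only a stand-in for whatever mapper you actually use, Jsoniter included):

```typescript
// Rough profiling sketch: time the raw transfer separately from the JSON decode.
// Assumes Elasticsearch runs at localhost:9200 and "my-index" exists; substitute
// your real endpoint and query.
async function profileSearch(): Promise<void> {
  const url = "http://localhost:9200/my-index/_search?size=10000";

  const t0 = Date.now();
  const response = await fetch(url);
  const body = await response.text(); // network + Elasticsearch query time
  const t1 = Date.now();

  const parsed = JSON.parse(body);    // pure decode time
  const t2 = Date.now();

  console.log(`transfer + query: ${t1 - t0} ms`);
  console.log(`decode only:      ${t2 - t1} ms`);
  console.log(`hits returned:    ${parsed.hits?.hits?.length ?? 0}`);
}

profileSearch().catch(console.error);
```

Once you know which of the two numbers dominates, you know whether to attack the query/transfer side or the decoding side.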
I'm building a system which requires an Arduino board to send data to the server.
The requirements/constraints of the app are:
The server must receive the data and store it in a MySQL database.
A web application is used to graph and plot historical data.
Data consumption is critical.
Web application must also be able to plot data in real time.
So far the system is working fine; however, optimization is required.
The currently adopted steps are:
Accumulate data in Arduino board for 10 seconds.
Send the data to the server using POST with data containing an XML string representing the 10 records.
The server parses the received XML and stores the values in the database.
This approach is good for historical data, but not for realtime monitoring.
My question is: Is there a difference between:
Accumulating the data and sending it as a single XML batch, and
Sending the data each second?
In terms of data consumption, is sending a POST request each second too much?
Thanks
EDIT: Can anybody provide a mathematical formula benchmarking the two approaches in terms of data consumption?
For your data consumption question, you need to figure out how much each POST costs you given your cell phone plan. I don't know if there is a mathematical formula, but you could easily test it and work it out.
However, using 3G (or even WiFi, for that matter), power consumption will be an issue, especially if your circuit runs on a battery; each POST bursts at around 1.5 amps, which is too much for sending data every second.
But again, why would you send data every second?
Real time doesn't mean sending data every second; it means being at least as fast as the system.
For example, if you are sending temperatures, temperature doesn't change from 0° to 100° in one second. So all those POSTs will be a waste of power and data.
You need to know how fast the parameters change in your system and adapt your POST accordingly.
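If you still want a rough formula, the dominant term is the fixed per-request overhead (HTTP headers plus connection traffic), which you would have to measure for your own setup. Here is a back-of-the-envelope sketch, with placeholder numbers only:

```typescript
// Back-of-the-envelope data-consumption comparison (placeholder numbers!).
// overheadPerPost: HTTP headers + connection overhead per request (measure yours).
// bytesPerRecord:  payload size of one record in the request body.
const overheadPerPost = 300; // bytes, assumed; varies a lot with headers/TLS
const bytesPerRecord = 50;   // bytes, assumed

// Option A: one POST per second over a 10-second window (10 requests, 1 record each)
const perSecond = 10 * (overheadPerPost + bytesPerRecord); // 3500 bytes

// Option B: one batched POST every 10 seconds (1 request, 10 records)
const batched = overheadPerPost + 10 * bytesPerRecord;     // 800 bytes

console.log({ perSecond, batched, saved: perSecond - batched });
```

Whatever the exact numbers turn out to be, batching N records into one POST saves roughly (N - 1) times the per-request overhead compared with one POST per record.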
I have a .csv file with 400 million lines of data. If I were to convert it into a data API that returns JSON, would there be any limitations when consumers call the API with GET? Would it return the full content of the data, and would it take long for the API to produce a response when called?
If you expose this as a GET API call, you might run into the following issues:
You might hit a maximum size limit, i.e., the maximum amount of data you can transfer over a GET request; this will depend on your server and the client's device (you can refer to this answer for details).
Latency will depend on the physical locations of your server and the clients; you can potentially reduce it by caching your information if your data does not change frequently (see the sketch below).
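To illustrate the caching point, a minimal sketch (assuming a Node/Express server, an arbitrary 10-minute TTL, and a hypothetical loadRows() standing in for however you read the converted CSV data):

```typescript
// Minimal caching sketch: load the converted data at most once per TTL window
// and serve the cached copy to every GET in between.
import express from "express";

const app = express();

let cache: { data: unknown; loadedAt: number } | null = null;
const TTL_MS = 10 * 60 * 1000; // refresh at most every 10 minutes (arbitrary)

// Hypothetical loader standing in for your real data source.
async function loadRows(): Promise<unknown[]> {
  return []; // replace with however you read the converted CSV data
}

app.get("/data", async (_req, res) => {
  if (!cache || Date.now() - cache.loadedAt > TTL_MS) {
    cache = { data: await loadRows(), loadedAt: Date.now() };
  }
  res.set("Cache-Control", "public, max-age=600"); // let clients/proxies cache too
  res.json(cache.data);
});

app.listen(3000);
```

With 400 million rows you would still want to filter or page the response rather than return everything at once, but the caching idea stays the same.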
Hope that helps
I have been working with Angular for some time now. My question is simple: I have a database with multiple tables. There is a clients table and around 7 or 8 other tables that contain information about that client that I need. None of the data from these tables is terribly large. In order to reduce HTTP calls, my thought was to load all of the tables and store the data from each in an object held in a factory.
So once a particular client is selected, the HTTP requests are made for each table and the results are stored inside a factory. Then, when a user needs to access a table, its data is already in memory because the HTTP call was completed at the outset. When the data changes, it can quickly save the table data and reload it.
Most of the data is financial containing information about the income and asset categories of the client.
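Roughly, the preloading step I have in mind looks like this (just a sketch of the idea; the endpoint names are made up, and I have written it against the newer Angular HttpClient/RxJS API, though the same shape would apply with $http and a factory):

```typescript
// Sketch of the preloading idea: fetch all of a client's tables up front and keep
// them in the service so later reads come from memory. Endpoint names are
// hypothetical placeholders for my real API.
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { forkJoin, Observable } from "rxjs";
import { shareReplay } from "rxjs/operators";

@Injectable({ providedIn: "root" })
export class ClientDataService {
  private cache = new Map<string, Observable<any>>();

  constructor(private http: HttpClient) {}

  loadClient(clientId: string): Observable<any> {
    if (!this.cache.has(clientId)) {
      const all$ = forkJoin({
        client: this.http.get(`/api/clients/${clientId}`),
        income: this.http.get(`/api/clients/${clientId}/income`),
        assets: this.http.get(`/api/clients/${clientId}/assets`),
        // ...the other tables follow the same pattern
      }).pipe(shareReplay(1)); // replay the cached result to later subscribers
      this.cache.set(clientId, all$);
    }
    return this.cache.get(clientId)!;
  }

  // Called after a save so the next access pulls fresh data.
  invalidate(clientId: string): void {
    this.cache.delete(clientId);
  }
}
```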
The question is: is this wise? Am I missing something?
Thanks in advance
Your use of the term factory is inappropriate as a factory is a creational pattern. What you are describing is a facade. It is reasonable for a facade to aggregate data for a client and present it in a unified manner.
So, a remote client requests some data. The server-side facade makes the many requests on behalf of the client and composes the single response.
You have mentioned caching the data. If you choose to do so, you will need to consider how to manage the cached data for staleness, how much memory you will need, and so on.
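For example, a minimal sketch of such a facade (hypothetical endpoint and table names, assuming a Node/Express backend in front of the database):

```typescript
// Minimal facade sketch: one endpoint aggregates the per-table queries so the
// client makes a single HTTP call per client record. The fetch helpers are
// placeholders for your real data access.
import express from "express";

const app = express();

// Hypothetical data-access helpers; replace with your real queries.
async function getClient(id: string) { return { id }; }
async function getIncome(id: string) { return []; }
async function getAssets(id: string) { return []; }

app.get("/api/clients/:id/summary", async (req, res) => {
  const id = req.params.id;
  // Run the individual lookups in parallel and compose a single response.
  const [client, income, assets] = await Promise.all([
    getClient(id),
    getIncome(id),
    getAssets(id),
  ]);
  res.json({ client, income, assets });
});

app.listen(3000);
```

The client then makes one request per client record instead of seven or eight, and the aggregation logic lives in one place on the server.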
Are JSON responses ever incomplete because of server errors, or are they designed to fail loudly? Are there any special concerns for transferring very large sets of data over JSON, and can they be mitigated? I'm open to any suggestions.
Transferring JSON over HTTP is no different than transferring any bytes over HTTP.
Yes, server errors can result in incomplete transfers. Imagine turning your server off halfway through a transfer. This is true of any network transfer. Your client will fail loudly if there is such an error: you might get a connection timeout or an error status code. Either way, you will know about it.
There is no practical limit to the amount of data you can transfer as JSON over HTTP. I have transferred 1 GB+ of JSON data in a single HTTP request. When making a large transfer, you want to be sure to use a streaming API on the server side. That is to say, write to the output stream of the HTTP response while reading the data from your DB, rather than reading all of your data from the DB into RAM, encoding it to JSON, and then writing it to the output. This way your client can start processing the response immediately, and your server won't run out of memory.
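For example, a rough sketch of the streaming idea (Node here; the async generator is only a stand-in for whatever cursor or iterator your database driver provides):

```typescript
// Streaming sketch: write the JSON array to the response as rows arrive,
// instead of materialising everything in RAM first. rowsFromDb() stands in
// for a real database cursor.
import { createServer } from "http";

async function* rowsFromDb(): AsyncGenerator<object> {
  for (let i = 0; i < 1_000_000; i++) {
    yield { id: i, value: `row-${i}` }; // pretend each row comes from the DB cursor
  }
}

createServer(async (_req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.write("[");
  let first = true;
  for await (const row of rowsFromDb()) {
    if (!first) res.write(",");
    res.write(JSON.stringify(row)); // each row is encoded and flushed incrementally
    first = false;
  }
  res.end("]");
}).listen(8080);
```

The same idea applies in any server stack: stream rows from the cursor straight into the response instead of building the whole JSON document in memory.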