How to do POST upload chunking (vs. download chunking) - HTML

So, in Play Framework, I can stream any response back: when I get a JSON request, I can use HTTP chunking and stream the response back for some really, really large responses. I was wondering if the same can be done on POST calls. If a client has a very large POST body, can they stream the request to me? Is this possible with HTML?
That said, if I can't do that, I need an API that curl or some other non-browser client can use to upload a file (the JSON request, a CSV, etc.). How do I create such an API?
I should note that I canNOT receive the whole request at once or I will run out of memory. I need to receive it in pieces and, as the pieces arrive, write each one to the backend datastore.
Also, what would be the curl syntax to make sure it streams the file rather than sending it in one huge request that would break the server? How do I force the client to stream the file in?
thanks,
Dean

You can get full control over HTTP request processing by using an EssentialAction. An EssentialAction processes the request body and returns a Result.
Normal Play Actions are a special case of EssentialAction. Actions process request bodies and return Results too, but they always perform their processing in two steps: they first parse the request body, then pass the parsed value to a function to get a Result. For you, having a separate parsing step is a problem because it means that the parsed value needs to be stored in memory.
If you use an EssentialAction then you can avoid storing a parsed value in memory, because you can just process the request body as it arrives.
What you need to do is add a method to your controller that returns an EssentialAction.
The signature of EssentialAction is:
trait EssentialAction extends (RequestHeader) ⇒ Iteratee[Array[Byte], SimpleResult]
The EssentialAction needs to accept the request header and then return an iteratee to process the request body. The iteratee will incrementally process the request body as it arrives, so you can use it to put each piece into your data store as soon as that piece arrives. When you're done processing all the pieces you can return a Result.
More info here: http://www.playframework.com/documentation/2.2.x/HttpApi.
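Here is a rough sketch of what that could look like. The controller object, the byte-count accumulator, and writeChunkToDatastore are hypothetical; EssentialAction and Iteratee.foldM are from Play 2.2's API, but check them against your Play version:
package controllers

import play.api.mvc._
import play.api.libs.iteratee._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object Uploads extends Controller {

  // Hypothetical datastore write; replace with your real backend call.
  private def writeChunkToDatastore(chunk: Array[Byte]): Future[Unit] =
    Future.successful(())

  // EssentialAction hands us the raw body as it arrives, chunk by chunk.
  def streamingUpload = EssentialAction { requestHeader =>
    // foldM consumes one Array[Byte] chunk at a time: each chunk is written to
    // the datastore before the next one is pulled, so the full body is never
    // held in memory. The accumulator just counts bytes for the final Result.
    Iteratee.foldM[Array[Byte], Long](0L) { (bytesSoFar, chunk) =>
      writeChunkToDatastore(chunk).map(_ => bytesSoFar + chunk.length)
    }.map { totalBytes =>
      Ok("stored " + totalBytes + " bytes")
    }
  }
}
On the curl side, the server does not strictly need the client to use chunked transfer encoding: the iteratee sees the body incrementally either way, because Play reads it off the socket piece by piece. So even curl --data-binary @big.json against this endpoint should not exhaust the server's memory, although curl itself may buffer the file on the client; if you want the request sent with Transfer-Encoding: chunked, curl's -T (upload-file) mode is the usual starting point, but verify the exact flags for your curl version.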

Related

Send request as JSON or as String - which uses fewer resources?

Actually, the question itself is not important.
For example, there is data to send in a POST request as:
{data: {"asd":"asdasd", "asd1":"asd1asd1"}}
Does anything change if I send it as a string, like:
{data: '{"asd":"asdasd", "asd1":"asd1asd1"}'}
or, if it is possible, as a string only:
'{data: {"asd":"asdasd", "asd1":"asd1asd1"}}'
So the question is: which of these will use fewer resources, or is it the same even with large data?
The longer your string is, the more bandwidth it will use. Which format you need to send your data in depends only on the receiver.
The only difference between your examples above is whether you send the data as text or as JSON. In the end it is sent as a string either way, since the body of an HTTP POST request is always transmitted as a string, which means the bandwidth does not depend on how you represent your object. The receiver then creates a JSON object out of it upon receiving.
Since you probably already have the data as a JavaScript object (or an object in whichever language your client uses) where you send the request from, just use the JSON object whenever possible. The library you use should take care of translating it to a string in a performant way.
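A tiny sketch that illustrates the point, written with play-json/Scala to match the rest of this thread (the payload is just the example from the question):
import play.api.libs.json.Json

object PayloadSize extends App {
  // The same data sent as a nested JSON object...
  val asJson = Json.obj("data" -> Json.obj("asd" -> "asdasd", "asd1" -> "asd1asd1"))
  // ...and as a pre-stringified JSON value wrapped in an outer object.
  val asEscapedString = Json.obj("data" -> Json.stringify(Json.obj("asd" -> "asdasd", "asd1" -> "asd1asd1")))

  // Both end up as plain strings in the HTTP body. The pre-stringified variant
  // is actually a bit longer because the inner quotes have to be escaped.
  println(Json.stringify(asJson).length)          // 43
  println(Json.stringify(asEscapedString).length) // 53
}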

how does google-cloud-function generate function-execution-id?

A Cloud Function triggered by an HTTP request has a corresponding function-execution-id for each calling request (in the request and response headers). It is used for tracing and for viewing the log of a specific request in Stackdriver Logging. In my case, it is a string of 12 characters. When I make repeated HTTP requests to a Cloud Function and look at the function-execution-id, I get the results below:
j8dorcyxyrwb
j8do4wolg4i3
j8do8bxu260m
j8do2xhqmr3s
j8dozkdlrjzp
j8doitxtpt29
j8dow25ri4on
On each line, the first 4 characters are the same ("j8do") but the rest are different, so I wonder what the structure of the function-execution-id is.
How is it generated?
The execution id is opaque, meaning that it doesn't contain any useful data; it is just a unique ID. How it was generated should not matter to you, the consumer. From examination, it looks like it might be some time-based value similar to a UUIDv1, but any code you write that consumes these IDs should make no assumptions about how they were generated.

Send Multiple Parameters in REST GET call to resource

What's the best way to send multiple parameters in a REST GET call to a resource? Normally we can make a GET call with path and/or query parameters; however, the number of characters in a URL is limited, so are there any suggestions or best practices on how to achieve this?
This could be achieved via POST by sending the query in the request body as JSON and using a JSON converter on the resource end, but I don't think POST is the right approach for querying or getting data from a resource.
I searched the existing questions on this but didn't find a proper answer.
Thanks in advance.
You can only send a limited amount of data with GET, and the data is visible in the URL, which makes it vulnerable. With POST the data is a lot safer than with GET, and you can send a large number of request parameters in the body.
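For illustration only, here is a rough sketch of the POST variant in Play/Scala (the stack of the main question in this thread); the endpoint and field names are made up:
import play.api.mvc._
import play.api.libs.json._

object Search extends Controller {

  // The client POSTs its search criteria as a JSON body instead of packing
  // them into the query string, so URL length is no longer a constraint.
  def search = Action(parse.json) { request =>
    val criteria = request.body
    val name = (criteria \ "name").asOpt[String].getOrElse("")
    val tags = (criteria \ "tags").asOpt[Seq[String]].getOrElse(Nil)
    Ok(Json.obj("matchedName" -> name, "tagCount" -> tags.size))
  }
}
Whether POST is appropriate for a read-only query is the judgment call the question raises; the sketch only shows that the body-parsing side of it is straightforward.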

Perl : Json : Can I perform a GET within my POST processing?

In theory... is it possible to
parse a POST payload in which a URL is supplied,
then use the supplied URL to perform a GET to acquire data,
use that data to query a DB, and
then send my response to the POST?
Or will my original POST request pick up the GET response and kill everything?
I can't seem to find an example or reference to this being done anywhere...
Thanks!
Yes, it's possible, and not just in theory. I've done much the same thing myself. Just use LWP (or your preferred HTTP client library) to perform the GET as usual. Whether you're doing it within a POST handler or not makes no difference.
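The question is about Perl/LWP, but the flow itself is generic. As a rough sketch, here is the same idea in Play/Scala (the stack used elsewhere in this thread), where WS plays the role of LWP and queryDb is a made-up stand-in for the real database call:
import play.api.mvc._
import play.api.libs.ws.WS
import play.api.libs.json._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object Lookup extends Controller {

  // Hypothetical DB lookup; replace with your real data access layer.
  private def queryDb(key: String): Future[JsValue] =
    Future.successful(Json.obj("result" -> key))

  // POST handler: read a URL out of the JSON payload, GET it, query the DB
  // with what came back, and only then answer the original POST.
  def lookup = Action.async(parse.json) { request =>
    val url = (request.body \ "url").as[String]
    WS.url(url).get().flatMap { upstream =>              // the nested GET
      queryDb(upstream.body.trim).map(row => Ok(row))    // respond to the POST
    }
  }
}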

Square's Retrofit response parsing logic: streaming?

Could you please explain Square's Retrofit response parsing logic?
I'm interested in the case where we have to receive and parse a big JSON response (>100 KB): will Retrofit wait until all the content has been received from the server and only then parse it, or will it start parsing immediately while the data is still streaming in?
My goal is to speed up response processing.
Are there any options available to configure this?
As soon as the HTTP client parses the headers, the InputStream is handed back to Retrofit, which then hands it directly to the Converter. This means that as the underlying converter mechanism (say, Gson) pulls bytes, they are being read (and potentially blocking) directly from the network.
Note: this is only true if logging is off (as it should be in production / release builds). When logging is turned on beyond the HEADERS level, the response body has to be read in its entirety into a byte[] in order to both log it and hand the data to the converter.