Trying to understand HttpWebRequest.GetResponseStream

I am trying to understand how GetResponseStream works.
When I call GetResponseStream() on the response, does it download all the data first and then return the Stream object?

GetResponseStream returns a stream from which you can read the response. Internally, it can work any way it wants to: it can read the entire response from the server before you call GetResponseStream, it can do so when you call GetResponseStream, or it can do so later, when you read from the stream. No guarantees are made about how and when it reads from the server.
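To make that concrete, here is a minimal sketch that reads the response in chunks (the URL is a placeholder); each Read call may itself pull more bytes off the network:

using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        // Placeholder URL; swap in a real endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/large-file");
        using (var response = (HttpWebResponse)request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        {
            var buffer = new byte[8192];
            int read;
            // Nothing guarantees the full body was downloaded before this
            // loop starts; each Read may trigger further network I/O.
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                Console.WriteLine("Read " + read + " bytes");
            }
        }
    }
}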

Related

Utf8JsonReader from a Stream

I'm trying to read a sequence of JSON objects from a network stream. This involves finding complete JSON objects and returning them one by one. As soon as a complete JSON object has been received, I need to have it. Anything that follows it belongs to the next object and must only be used once that next object is complete.
I would have thought that the Utf8JsonReader class could do that, but apparently it cannot accept a Stream of any kind. It even seems that this capability was deliberately left out.
Now I'm wondering whether it's possible at all to use .NET's shiny new JSON parser to read from a stream when I don't know when data arrives or how much of it. Do I need to split the JSON object messages manually, or can the existing reader just stop when it has something and continue when the next thing is available? I mean, if it can do that on a predefined array of bytes, it could surely also do it with some waiting in between until more data is available. That just doesn't seem to be exposed in the public API. Or did I miss something?
The JsonDocument.ParseAsync(Stream) method cannot be used because it would read the stream to the end. That doesn't make sense for a network stream that stays open for a long time and only delivers some data from time to time.
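For what it's worth, the low-level reader does support a resumable pattern on partial buffers via JsonReaderState and the isFinalBlock flag. A minimal sketch, with hard-coded chunks standing in for data arriving on the network:

using System;
using System.Text;
using System.Text.Json;

class Resumable
{
    static void Main()
    {
        // Two chunks standing in for data arriving on a network stream.
        byte[] chunk1 = Encoding.UTF8.GetBytes("{\"name\":\"ab");
        byte[] chunk2 = Encoding.UTF8.GetBytes("c\"}");

        // isFinalBlock: false tells the reader more data may follow, so
        // Read() returns false instead of throwing when it runs out.
        var state = new JsonReaderState();
        var reader = new Utf8JsonReader(chunk1, isFinalBlock: false, state);
        while (reader.Read())
            Console.WriteLine(reader.TokenType);

        // Keep the unconsumed tail plus the reader state for the retry.
        int consumed = (int)reader.BytesConsumed;
        state = reader.CurrentState;
        byte[] next = new byte[chunk1.Length - consumed + chunk2.Length];
        Array.Copy(chunk1, consumed, next, 0, chunk1.Length - consumed);
        Array.Copy(chunk2, 0, next, chunk1.Length - consumed, chunk2.Length);

        // Resume exactly where the first pass stopped.
        reader = new Utf8JsonReader(next, isFinalBlock: true, state);
        while (reader.Read())
            Console.WriteLine(reader.TokenType);
    }
}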

Perl: JSON: Can I perform a GET within my POST processing?

In theory... is it possible to
parse a POST payload in which a URL is supplied,
then use the supplied URL to perform a GET to acquire data,
use that data to query a DB, and
then send my response to the POST?
Or will my original POST request pick up the GET response and kill everything?
I can't seem to find an example or reference to this being done anywhere...
Thanks!
Yes, it's possible, and not just in theory. I've done much the same thing myself. Just use LWP (or your preferred HTTP client library) to perform the GET as usual. Whether you're doing it within a POST handler or not makes no difference.
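A minimal sketch of the flow with CGI.pm and LWP::UserAgent (the payload shape and the DB step are assumptions):

use strict;
use warnings;
use CGI;
use JSON;
use LWP::UserAgent;

# POST handler: the client sends {"url":"..."} as the request body.
# CGI.pm exposes the raw body of a non-form POST as the POSTDATA param.
my $q    = CGI->new;
my $data = decode_json( $q->param('POSTDATA') );

# Perform the nested GET with LWP, exactly as you would anywhere else.
my $ua  = LWP::UserAgent->new( timeout => 10 );
my $res = $ua->get( $data->{url} );
die 'GET failed: ' . $res->status_line unless $res->is_success;

my $fetched = decode_json( $res->decoded_content );
# ... query the database with $fetched here (omitted) ...

# Finally, answer the original POST; the GET above does not interfere.
print $q->header('application/json');
print encode_json( { status => 'ok' } );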

How to change content of Post Body in JMeter HTTP Request

Please forgive my ignorance, as I'm a JMeter noob. My web service accepts JSON objects, so I was able to write a rudimentary test where I create an HTTP Request with a JSON object in the "Post Body" portion of the HTTP request.
Anyway, what I want to do is have the HTTP Request choose a different JSON object from a CSV file or some other input mechanism so that I can randomize the types of queries being run during the load test. Is there a way to do this? The closest I've found is using variables (section 4.11 in the user manual), but I have a feeling that's not how variables are meant to be used.
A second way I've theorized (although I haven't tried it yet, since I think the method above is easier) is to create an HTTP Request Defaults element with a bunch of HTTP Requests containing different JSON objects, and then use a Random Controller to randomly go through my multiple HTTP Requests on each pass.
If there's a third way, I'm all ears to learn how to use this tool. I'll continue to read and possibly experiment with plan B above. Thanks in advance for any help you can give me.
UPDATE: I tried the second way and it seems to work. I had three different HTTP Requests, and the number of times each request gets hit varies from run to run. I still invite answers from the community, since I'd like to see what the pros do for issues similar to mine.
You have partially answered your question yourself by saying "csv file or". Here are the specifics.
Use a CSV Data Set Config element in your test plan to read data from a CSV file. In your Post Body, reference the variables read from the CSV.
Here is a screencast showing how to use the CSV Data Set Config.
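A sketch of how that fits together (the file name and variable name are made up). Put one JSON payload per line, and change the delimiter to a pipe so the commas inside the JSON are not treated as column separators:

payloads.csv:
{"query":"foo","limit":10}
{"query":"bar","limit":25}

CSV Data Set Config:
    Filename: payloads.csv
    Variable Names: jsonPayload
    Delimiter: |
    Recycle on EOF?: True

HTTP Request > Post Body:
    ${jsonPayload}

Each thread (or iteration) then picks up the next line, so successive requests carry different payloads.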

Appcelerator: Cache JSON output for a short time

I am developing an iOS app that uses a single-context architecture. I make frequent calls to my API (PHP), and I want to "cache" the output for as long as the session is active. Right now I am saving the output to a variable that is defined in app.js.
var contacts = {
    contactsData: null
};
So I do this to save the output. Is it really a good idea? Will it slow things down?
contacts.contactsData = output;
Thankful for all input!
It depends on how big the JSON is, in megabytes. If the device has enough RAM, keeping it in a variable is the best way. Also, be sure you save the decoded JSON, not just the raw request response, so you don't decode it every time.
If the JSON data is too big, you must think about some kind of local storage. If the JSON is always the same (no need to sync every time), save it locally.
If you need to update it often, you can load the most urgently needed part with one limited request (API support needed) and the rest of the data with a second background request.
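A minimal sketch of that pattern in Titanium (the URL and function wiring are placeholders): parse the response once, keep the decoded object, and serve it from memory for the rest of the session.

// app.js: session-scoped cache for the decoded API output.
var contacts = { contactsData: null };

function getContacts(callback) {
    // Serve from the in-memory cache when it is already populated.
    if (contacts.contactsData !== null) {
        callback(contacts.contactsData);
        return;
    }
    var client = Ti.Network.createHTTPClient({
        onload: function () {
            // Store the decoded JSON, not the raw string, so it is
            // never parsed a second time.
            contacts.contactsData = JSON.parse(this.responseText);
            callback(contacts.contactsData);
        },
        onerror: function (e) {
            Ti.API.error('API call failed: ' + e.error);
        }
    });
    client.open('GET', 'https://example.com/api/contacts'); // placeholder URL
    client.send();
}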

Stream objects from MongoDB cursor into nodejs HTTP response

NOTE: I don't believe this question is a duplicate of this similar question because it is more specific.
I'm attempting to retrieve multiple objects from Mongo with the nodejs-mongodb-driver and write the objects to an HTTP response as JSON. The objects should be in the form of an array, but I don't want to call toArray() on the cursor because of the memory overhead, and I try to avoid large JSON.stringify calls whenever possible.
var response = ... // an http response
collection.find().stream(JSON.stringify).pipe(response); // causes a malformed JSON string
The output in the browser appears as follows.
{"obj", "obj"}{"obj", "obj"} // clearly malformed
Is there an efficient way to do this?
I will explain the code you wrote so that you understand why it returns malformed JSON and why you probably need toArray() or the JSONStream library from the answer you posted.
First, collection.find() returns a Cursor object. At that point, no data has been read. The .stream(JSON.stringify) call then returns a readable Stream with JSON.stringify as its transformation function. Still no data has been read.
The .pipe(response) call then reads the entire Stream to its end and calls JSON.stringify for every object. Note that it really does call it for every single object separately, and therefore does not create an array. Instead you get your malformed JSON, object after object.
Now, the answer in the question you posted as a possible duplicate (Stream from a mongodb cursor to Express response in node.js) would work for you, but it requires an additional library, JSONStream, which properly handles the cursor stream for JSON output. I don't know if that really reduces the overhead, though, but you could try it.
Without an additional library, you will have to use toArray(), or hand-roll the array framing yourself, as sketched below.
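Here is that hand-rolled framing, assuming the classic driver's cursor stream API (error handling kept minimal): write the array brackets yourself and stringify each document on its own.

var first = true;
var stream = collection.find().stream();

response.setHeader('Content-Type', 'application/json');
response.write('[');

stream.on('data', function (doc) {
    // One small JSON.stringify per document; no toArray() buffering.
    response.write((first ? '' : ',') + JSON.stringify(doc));
    first = false;
});
stream.on('end', function () {
    response.write(']');
    response.end();
});
stream.on('error', function (err) {
    // Headers have already been sent, so just terminate the response.
    response.end();
});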