I am trying to retrieve the complete JSON response in VuGen. I am new to writing scripts in VuGen. I am using the web HTTP/HTML protocol and just wrote a simple script that calls a REST service with POST.
Action()
{
    web_rest("POST: http://losthost:8181/DBConnector/restServices/cass...",
        "URL=http://losthost:8181/DBConnector/restServices/oep_catalog_v1",
        "Method=POST",
        "EncType=raw",
        "Snapshot=t868726.inf",
        HEADERS,
        "Name=filter", "Value=upc=123456789", ENDHEADER,
        "Name=env", "Value=qa", ENDHEADER,
        LAST);

    return 0;
}
I don't know what to do next. I searched the internet for a command to pull the response value. I found web_reg_save_param, but it only pulls one value. I need the complete response saved in a file or a string.
Please help.
VuGen provides several APIs to extract response data.
For example, you can do boundary-based correlation with empty left and right boundaries. The sample below saves the web_rest response (the body of donuts.js) in the parameter CorrelationParameter3.
web_reg_save_param_ex(
    "ParamName=CorrelationParameter3",
    "LB=",
    "RB=",
    SEARCH_FILTERS,
    "Scope=Body",
    LAST);
web_rest("GET: donuts.js",
    "URL=http://adobe.github.io/Spry/data/json/donuts.js",
    "Method=GET",
    "Snapshot=t769333.inf",
    LAST);
This process of locating, extracting and replacing dynamic values is called “correlation”.
You can read more about correlations in the "LoadRunner correlations kept simple" blog post.
Your manager owes you training and a mentor for a period of time if you are asked to perform in this capacity.
I would like to retrieve some historical stock prices via a REST API from the following site:
https://www.boerse-frankfurt.de/zertifikat/de0007873291-open-end-zertifikat-auf-dow-jones-industrial-average
The response is a JSON.
Basically, the query works as follows: an OPTIONS call is sent without parameters, followed by a GET request with header parameters.
Both calls are sent to the following address:
https://api.boerse-frankfurt.de/v1/data/quote_history_derivatives?isin=DE0007873291&mic=XSC&from=2021-11-12T07%3A00%3A00.000Z&to=2021-11-12T21%3A00%3A00.000Z&offset=0&limit=25
The following two parameters are included in the header:
Client-Date: 2021-11-16T23:02:29.529Z
X-Client-TraceId: d2d6911d81ebbbff7a7549555a2c26d6
And now my question: how do you get the X-Client-TraceId? It looks like a UUID, but it doesn't seem to be one. The value changes with every page view in the browser. But you can't just enter any value.
Many greetings,
Trebor
Since this question was asked, someone has written a blog post about this exact topic. The algorithm detailed there still seems to be in use (as of 2022-03-12).
An excerpt of the relevant parts:
Client-Date
This is the current time, converted to a string with Javascript’s toISOString() function.
[...]
X-Client-TraceId
[...]
salt is a fixed string, in this case w4icATTGtnjAZMbkL3kJwxMfEAKDa3MN. Apparently it appears in the source code as-is, so it must be constant.
X-Client-TraceId is the md5 of time + url + salt.
Note: time is the string sent in the Client-Date header.
The blog post has some additional information around the process of reverse engineering this algorithm and the X-Security header.
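To make the recipe concrete, here is a minimal Python sketch of md5(time + url + salt). The salt is the fixed string quoted above; whether the full URL or only the path-and-query part goes into the hash is an assumption to verify against the blog post.

import hashlib
from datetime import datetime, timezone

# Fixed salt quoted in the blog post excerpt above.
SALT = "w4icATTGtnjAZMbkL3kJwxMfEAKDa3MN"
# Assumption: the path-and-query part of the request URL is what gets hashed.
URL = ("/v1/data/quote_history_derivatives?isin=DE0007873291&mic=XSC"
       "&from=2021-11-12T07%3A00%3A00.000Z&to=2021-11-12T21%3A00%3A00.000Z"
       "&offset=0&limit=25")

# Client-Date: current UTC time formatted like JavaScript's toISOString(),
# i.e. millisecond precision with a trailing 'Z'.
now = datetime.now(timezone.utc)
client_date = now.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now.microsecond // 1000:03d}Z"

# X-Client-TraceId: md5 of time + url + salt, hex-encoded.
trace_id = hashlib.md5((client_date + URL + SALT).encode("utf-8")).hexdigest()

print("Client-Date:", client_date)
print("X-Client-TraceId:", trace_id)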
I am using the REST API of Google Fit. I want to list sessions with the fitness.users.sessions.list method. This gives me a few dozen results.
Now I would like to get more results, so I set pageToken to the value I got from the previous response. But the new results do not contain any data points, just yet another pageToken:
{
  "session": [
  ],
  "deletedSession": [
  ],
  "nextPageToken": "1541027616563"
}
The same happens when I use the pagination function of the Google Python API Client: I iterate on results but never get any new data.
request = self.service.users().sessions().list(userId='me')
while request is not None:
    response = request.execute()
    for ds in response['session']:
        yield ds
    request = self.service.users().sessions().list_next(request, response)
I am sure there is much(!) more session data in Google Fit for my account. Am I missing something regarding pagination?
Thanks
I think that the description of the pageToken parameter is actually rather confusing in the documentation (this answer was written prior to the documentation being updated).
The continuation token, which is used to page through large result sets. To get the next page of results, set this parameter to the value of nextPageToken from the previous response.
This is conflating two concepts: continuation, and paging. There isn't actually any paging in the implementation of Users.sessions.
Sessions are indexed by their modification timestamp. There are two (or three, depending on how you count) ways to interact with the API:
Pass a start and/or end time. Omitted start and end times are taken to be the start and end of time respectively. In this case, you will get back all sessions falling between those times.
Pass neither start nor end times. In this case, you will receive all sessions between some time in the past and now. That time is:
pageToken, if provided
Otherwise, it's 7 days ago (this doesn't actually appear in the documentation, but it is the behavior)
In any of these cases, you receive a nextPageToken back that is just after the most recent session in the results. As such, nextPageToken is really a continuation token: it says you have been told about all sessions modified up to that point, and passing the token back will tell you about anything modified between nextPageToken and the current time.
As such, if you issue a request that fetches all sessions for the last 7 days (no start/end time, no page token) and get a nextPageToken, you will only get something back in a request using that nextPageToken if any sessions have been modified in between the first and second requests.
So, if you're making these requests in quick succession, it is expected that you won't see anything in the second response.
As for the startTime you were passing in your comment, its rejection is a bug: RFC 3339 defines fractional seconds as optional.
I'll see about getting that fixed; but in the interim, just make sure you pass a fractional number of seconds (even if it is just .0, e.g. 2018-10-18T00:00:00.0+00:00).
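For example, a minimal sketch (assuming service is the fitness client built with googleapiclient.discovery.build, and the 30-day range is arbitrary) that builds RFC 3339 timestamps with an explicit fractional part:

from datetime import datetime, timezone, timedelta

def rfc3339_with_fraction(dt):
    # e.g. 2018-10-18T00:00:00.000000+00:00 -- includes a fractional part
    return dt.isoformat(timespec="microseconds")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

request = service.users().sessions().list(
    userId="me",
    startTime=rfc3339_with_fraction(start),
    endTime=rfc3339_with_fraction(end),
)
response = request.execute()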
It may be because the format of the URL you're using is different from the example in the documentation.
You are using:
startTime=2018-10-18T00:00:00+00:00
whereas the one in the documentation has it as:
startTime=2014-04-01T00:00:00.00Z
The documentation also states that both the startTime and endTime query parameters are required.
I am trying to ingest data from a 3rd party API into a Dataflow pipeline. Since the 3rd party doesn't make webhooks available, I wrote a custom script that constantly polls their endpoint for more data.
The data is refreshed every 15 minutes, but since I don't want to miss any datapoints and I want to consume new data as soon as it is available, my "crawler" runs every minute. It's easy to see that Pub/Sub will then receive about 15 copies of each datapoint from the source.
My first attempt to identify and discard those repeated messages was to add a custom attribute to each PubSub message (eventid), created from a hash of its [ID + updated_time] at source.
const attributes = {
    eventid: Buffer.from(`${item.lastupdate}|${item.segmentid}`).toString('base64'),
    timestamp: item.timestamp.toString()
};

const dataBuffer = Buffer.from(JSON.stringify(item));
publisher.publish(dataBuffer, attributes);
Then I configured Dataflow with withIdAttribute() (the new name for idLabel(), based on Record IDs).
PCollection<String> input = p
    .apply("ReadFromPubSub", PubsubIO
        .readStrings()
        .fromTopic(String.format("projects/%s/topics/%s", options.getProject(), options.getIncomingDataTopic()))
        .withTimestampAttribute("timestamp")
        .withIdAttribute("eventid"))
    .apply("OutputToBigQuery", ...)
With that implementation, I was expecting that when the script sends the same datapoint a second time, the repeated eventid would be the same and the message would be discarded. But for some reason, I still see duplicates in the output dataset.
Some questions:
Is there a clever way to ingest the data into Dataflow from that 3rd-party API if they don't provide webhooks?
Any ideas on why Dataflow is not discarding the messages in this situation?
I know about the 10-minute restriction for deduplication in Dataflow, but I see duplicated data even on the 2nd insertion (after 2 minutes).
Any help will be greatly appreciated!
I think you are on the right track; instead of the hash, I recommend using timestamps. A better way to do this is by using windows. Review this document, which filters data that is outside of the window.
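As an illustration only, here is a windowed de-duplication sketch in the Beam Python SDK (your pipeline is Java, but the shape is the same; the topic name is a placeholder): key each message by its eventid attribute, window the stream, and keep one payload per key per window.

import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.options.pipeline_options import PipelineOptions

TOPIC = "projects/my-project/topics/incoming-data"  # placeholder

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC, with_attributes=True)
        # Key every message by the deterministic eventid set by the publisher.
        | "KeyByEventId" >> beam.Map(lambda msg: (msg.attributes["eventid"], msg.data))
        # Collect duplicates of the same eventid arriving within the same window.
        | "FixedWindows" >> beam.WindowInto(window.FixedWindows(15 * 60))
        | "GroupByEventId" >> beam.GroupByKey()
        # Keep only one payload per eventid per window.
        | "TakeFirst" >> beam.MapTuple(lambda eventid, payloads: next(iter(payloads)))
        | "Print" >> beam.Map(print)
    )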
Regarding the additional duplicate data: if you are using pull subscriptions and the acknowledgement deadline is reached before the data is processed, the message will be resent, per at-least-once delivery. In this case, change the acknowledgement deadline; the default is 10 seconds.
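For reference, a sketch of raising the deadline with the google-cloud-pubsub Python client (assuming a 2.x client; project and subscription names are placeholders):

from google.cloud import pubsub_v1
from google.protobuf import field_mask_pb2

project_id = "my-project"          # placeholder
subscription_id = "incoming-sub"   # placeholder

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# Only ack_deadline_seconds is updated, per the field mask below.
subscription = pubsub_v1.types.Subscription(
    name=subscription_path,
    ack_deadline_seconds=60,  # raise from the 10-second default
)
update_mask = field_mask_pb2.FieldMask(paths=["ack_deadline_seconds"])

with subscriber:
    updated = subscriber.update_subscription(
        request={"subscription": subscription, "update_mask": update_mask}
    )
    print("New ack deadline:", updated.ack_deadline_seconds)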
I have a collection of IDs of RESTful resources (all the same type of resource), the number of which can be indefinitely large. I want to make a REST call to get the names of these resources. Something like this:
Send:
['005fc983-fe41-43b5-8555-d9a2310719cd', '4c6e6898-e519-4bac-b03e-e8873d3fa3f0',...]
Receive:
['Resource A', 'Resource B',...]
What is the best way to retrieve the names of these resources RESTfully?
Here are the ideas I have had and the problems I see with each approach:
The naive approach is to iterate through all IDs in my collection and do a 'GET /resource/:id' for each ID. This would be prohibitively slow and resource intensive because of the large number of HTTP calls I would have to make.
The next approach I thought of is to pass the IDs as parameters to a single GET call. The problem here is that most servers have a limit on the URL length, which would be quickly exceeded.
Next, I thought that putting the IDs in the body of a GET would work, but according to Roy Fielding, data in the GET body should not affect the results of a REST call: HTTP GET with request body
I could use a POST request and put the data in the POST body, but POST is intended for creating and modifying resources, which is not what I'm doing. Maybe I should ignore the intent of the verb and use it anyway?
I could split the request into multiple GET requests to avoid exceeding the max URL length. The problem here is that I have to combine the results after all calls have returned, which is potentially slow.
I could create a collection resource within my main resource by posting my list of IDs to 'POST /resource/collection', then use a 'GET /resource/collection/:id' call to retrieve the results. This actually works, but then I have to do a 'DELETE /resource/collection/:id' to clean up. It takes multiple calls, requires cleanup, and seems a bit clunky overall, so it's okay, but not ideal.
Is there a better way to do this?
Your last approach is RESTful and the one I recommend. I'd do this:
Step 1:
Request:
POST /resource/collection
Content-Type: application/json
{
    "ids": [
        "005fc983-fe41-43b5-8555-d9a2310719cd",
        "4c6e6898-e519-4bac-b03e-e8873d3fa3f0"
    ]
}
Response:
201 Created
Location: /resource/collection/89AB8902-FDF1-11E4-ADDF-CD4FB664A5DC
Step 2:
Request:
GET /resource/collection/89AB8902-FDF1-11E4-ADDF-CD4FB664A5DC
Response:
200 OK
Content-Type: application/json
{
    "resources": [ ... ]
}
but then I have to do a 'DELETE /resource/collection/:id' to clean up.
No, that is not necessary. The server could implement a job that removes all collections older than a specific timestamp. It is not the client who has to do this.
If a client later accesses the collection again, the server would respond with
410 Gone
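For illustration, a minimal client-side sketch of this two-step flow using Python's requests library (the host name is a placeholder):

import requests

BASE = "https://api.example.com"  # placeholder host

ids = [
    "005fc983-fe41-43b5-8555-d9a2310719cd",
    "4c6e6898-e519-4bac-b03e-e8873d3fa3f0",
]

# Step 1: create the temporary collection resource from the list of IDs.
create = requests.post(f"{BASE}/resource/collection", json={"ids": ids})
create.raise_for_status()                      # expect 201 Created
collection_url = create.headers["Location"]    # relative URL, as in the example above

# Step 2: read the collection back. A later 410 Gone means the server has
# already expired it and the client should simply re-create it.
result = requests.get(f"{BASE}{collection_url}")
result.raise_for_status()
resources = result.json()["resources"]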
I have a UITableView representing a list of cities (100 cities).
For each city I want to call a specific remote (URL) JSON endpoint to get that city's weather information and populate the response data into the city's cell in the UITableView.
When I run the application, I want to see my table as fast as possible, so I don't want to wait for all the JSON responses. I want that information fetched asynchronously (when a specific JSON is loaded, set its information on the corresponding city cell in the UITableView).
Note: It is important for me to call separate remote JSON files.
Which technique is best for this task?
I would start with the following approach:
Create a data structure to hold city information, including:
path to your data service,
service call "state" (idle, waiting, completed, error),
weather information (from JSON returned by service call)
When you first show the table, you will want to:
initialize your array (of the aforementioned data structure),
initiate each service call asynchronously,
set each row (city) state to waiting.
You will also probably want to return a custom UITableViewCell with the city name (if you already have it) and a spinning activity indicator. This will be your best option for a fast load time (not waiting for services to complete), and it gives some visual indication that the data is loading.
Each service call should use the ViewController as its delegate; you will need a key field so that when the services return, they can identify which row/city they are associated with.
As each service completes and calls the delegate, it will send the data to the ViewController, which (in turn) will update the array and initiate a UITableView update.
The UITableView update is, in my opinion, the most difficult part. Typically cells are drawn or updated when they become visible; the table pre-fetches all visible cells' geometry and then queries the actual contents when it's ready to draw each cell; as a result, your strategy for updating cells will depend on how your table is used.
If your cell geometry changes, you will most likely need to redraw your entire table; I shudder to think about what 50 simultaneous UITableView redraws will do for your app, so you might need to set a time-threshold to "chunk" updates and handle drawing more intelligently.
[theTableView reloadData] will cause the entire table to be re-queried and redrawn.
If your cell geometry does not change, you can try to be more surgical and update only the visible cells (the non-visible ones aren't an issue, since their data will be queried when they become visible).
[theTableView visibleCells] returns an array of visible cells; when your service call returns, you could update the data and then search the array to see if the cell in question is visible; if it is, you will probably need to send the specific UITableViewCell a setNeedsDisplay message.
There is a good explanation of setNeedsDisplay, setNeedsLayout, and 'reloadData' at http://iosdevelopertips.com/cocoa/understanding-reload-repaint-and-re-layout-for-uitableview.html.
There is a relevant SO question at How to refresh UITableViewCell?
Lastly, you will probably want to implement some updating logic in the service delegate error routine, just so you don't create endlessly spinning activity indicators.
I do this now while searching multiple servers. I use Core Data, but you can use an NSMutableArray to accumulate your JSON responses.
Every time you finish receiving data from one of your servers (for example, when connectionDidFinishLoading executes), take the JSON data object and add it to an NSMutableArray (let's call it weatherResults) using the addObject method. You may want to convert the JSON to an NSDictionary before adding it to the mutable array weatherResults.
Assuming your dataSource delegate methods refer to what is in the weatherResults NSMutableArray (for example, getting the number of rows from the size of the array using [weatherResults count]) you can do the following:
After inserting the object into the array, you can simply call reloadData in the dataSource controller. You will see the table update as each new JSON result arrives. The results should append to the bottom of the table as they come in. If you want to sort the NSMutableArray each time a JSON result arrives, you can do that too.
I do this and the time it takes to resort and reload the table is insignificant on my iPad. If you do not resort, it should be even faster.
By the way, in this explanation, I assume that the JSON response contains all of the information that you need to fill in your table cell. That may not be the case. If it's not, you will have to correlate the response with other information you have, such as a list of cities that your program is presenting.