I am trying to retrieve the properties using this method: GET :urn/metadata/:guid/properties
This is something that we have running and that works daily in our workflows, but I think this is an especially large model.
For this particular model we are getting the following response:
413 Request Entity Too Large
{"Diagnostic": "Please use the 'forceget' parameter to force querying the data."}
Can anyone advise me how to apply the forceget parameter to this call, as I can't see any mention of it in the API docs?
forceget (string): To force get the large resource even if it exceeded the expected maximum length. Possible values: true, false. The implicit value is false.
Without forceget=true, the maximum data size is 2,097,152 bytes (2 MB), gzip-compressed.
If you add forceget, there is no size limit; however, if generating the gzip file takes longer than 2 hours, the request times out, is retried later, and eventually gives up and reports an error.
More details in this blog post: https://forge.autodesk.com/blog/faster-get-hierarchy-api-and-how-solve-error-413
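For example, the call itself only needs the extra query parameter appended. Here is a minimal sketch using Java 11's java.net.http client, assuming the Model Derivative properties endpoint; the urn, guid, and token values are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ForceGetExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own URN, metadata GUID, and access token.
        String urn = "YOUR_URN";
        String guid = "YOUR_GUID";
        String token = "YOUR_ACCESS_TOKEN";

        // forceget=true asks the service to build the response even when it
        // exceeds the 2 MB (gzip-compressed) limit that triggers the 413.
        String url = "https://developer.api.autodesk.com/modelderivative/v2/designdata/"
                + urn + "/metadata/" + guid + "/properties?forceget=true";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}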
@augusto-goncalves Do you know the maximum number of parameters that can be retrieved through a request with the 'forceget' param?
... and what is the maximum number of properties a model may have such that all properties can still be retrieved from that endpoint WITHOUT using the 'forceget' parameter?
We used to use the Guava cache and we want to change to Caffeine.
We want to set for each entity its own "expiration time", something like put(K key, V value, long expiration_time).
I saw the three functions above (expireAfterCreate, expireAfterUpdate, expireAfterRead) and I wonder what exactly they do; if you could explain the meaning and operation of each of them, that would be great.
For example, should the return value of expireAfterCreate be the duration we want for this entity from its creation until its expiration, or something else?
I'm also wondering why we have the parameter "currentTime" in both expireAfterRead and expireAfterUpdate if we don't use it in the function?
When we used the Guava cache we used expireAfterAccess; what is the replacement for it in Caffeine?
My last question is: how can I set a default expiration for entities without a unique expiration time?
Thank you,
May
When we used the Guava cache we used expireAfterAccess; what is the replacement for it in Caffeine?
We mirror the Guava API, so this is also available on the cache builder.
My last question is: how can I set a default expiration for entities without a unique expiration time?
Use expireAfterAccess, expireAfterWrite, or return a constant duration with expireAfter(Expiry).
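For example, a minimal sketch of those builder-level defaults (the 10-minute durations are arbitrary placeholders):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class DefaultExpirationExample {
    public static void main(String[] args) {
        // One fixed policy for every entry, mirroring the Guava builder options.
        Cache<String, String> byAccess = Caffeine.newBuilder()
                .expireAfterAccess(10, TimeUnit.MINUTES)   // timer resets on reads and writes
                .build();

        Cache<String, String> byWrite = Caffeine.newBuilder()
                .expireAfterWrite(10, TimeUnit.MINUTES)    // timer resets on writes only
                .build();

        byAccess.put("k", "v");
        byWrite.put("k", "v");
    }
}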
I saw the three functions above (expireAfterCreate, expireAfterUpdate, expireAfterRead) and I wonder what exactly they do; if you could explain the meaning and operation of each of them, that would be great.
Expiry is a callback interface through which a single expiration timestamp is maintained per entry. The method invoked corresponds to the operation performed on the cache entry (created, updated, read). An update or read that should have no effect on the expiration can return currentDuration as a no-op.
For example, should the return value of expireAfterCreate be the duration we want for this entity from its creation until its expiration, or something else?
Yes. However, if expireAfterUpdate returns a custom value (something other than currentDuration), then that overrides the prior expiration duration.
I'm also wondering why we have the parameter "currentTime" in both expireAfterRead and expireAfterUpdate if we don't use it in the function?
This can most often be ignored, but it is provided in case it is useful. It is the current nanosecond timestamp from the Ticker (not wall-clock time).
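Putting the three callbacks together, here is a sketch of an Expiry implementation; the Entity type and its ttlNanos field are hypothetical stand-ins for your own value type:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;
import java.util.concurrent.TimeUnit;

public class PerEntryExpiryExample {
    // Hypothetical value type that knows its own time-to-live.
    static final class Entity {
        final long ttlNanos;
        Entity(long ttlNanos) { this.ttlNanos = ttlNanos; }
    }

    public static void main(String[] args) {
        Cache<String, Entity> cache = Caffeine.newBuilder()
                .expireAfter(new Expiry<String, Entity>() {
                    @Override
                    public long expireAfterCreate(String key, Entity value, long currentTime) {
                        // Duration (in nanoseconds) from creation until expiration.
                        return value.ttlNanos;
                    }

                    @Override
                    public long expireAfterUpdate(String key, Entity value,
                                                  long currentTime, long currentDuration) {
                        // Returning a new duration overrides the remaining one;
                        // returning currentDuration would leave the expiration unchanged.
                        return value.ttlNanos;
                    }

                    @Override
                    public long expireAfterRead(String key, Entity value,
                                                long currentTime, long currentDuration) {
                        // No change on reads, i.e. no expireAfterAccess-style refresh.
                        return currentDuration;
                    }
                })
                .build();

        cache.put("a", new Entity(TimeUnit.MINUTES.toNanos(5)));
    }
}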
We want to set for each entity its own "expiration time", something like put(K key, V value, long expiration_time).
The Expiry callback is required and is generally the recommended approach, because ideally entries are loaded through the cache (e.g. a LoadingCache) to avoid stampedes. A stampede is when multiple threads look up the same entry, miss, load it, and overwrite each other putting it in. That wastes work, compared to having only one thread perform the load while the others wait for the result.
That said, this method is available under Cache.policy().expiresVariably(). Those configuration-specific methods are stashed in that area to offer more power when deemed necessary.
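A self-contained sketch of that explicit form; the constant 10-minute default Expiry and the 30-second put are placeholder values:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;
import java.util.concurrent.TimeUnit;

public class VarExpirationExample {
    public static void main(String[] args) {
        // policy().expiresVariably() is only present when the cache was built
        // with expireAfter(Expiry); this Expiry just supplies a constant default.
        Cache<String, String> cache = Caffeine.newBuilder()
                .expireAfter(new Expiry<String, String>() {
                    @Override
                    public long expireAfterCreate(String key, String value, long currentTime) {
                        return TimeUnit.MINUTES.toNanos(10);   // default time-to-live
                    }

                    @Override
                    public long expireAfterUpdate(String key, String value,
                                                  long currentTime, long currentDuration) {
                        return currentDuration;
                    }

                    @Override
                    public long expireAfterRead(String key, String value,
                                                long currentTime, long currentDuration) {
                        return currentDuration;
                    }
                })
                .build();

        // Explicit per-entry put with its own duration, bypassing the Expiry callbacks.
        cache.policy().expiresVariably().ifPresent(varExpiration ->
                varExpiration.put("key", "value", 30, TimeUnit.SECONDS));
    }
}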
Thank you,
You're very welcome.
Is there a way to determine the maximum value of an argument in a requests.post call to a website if we don't know the amount of data in the website's dataset? I'm trying to execute the following code to get specific information on all daycares from this website, but I don't know the value of the last argument (length). Currently I'm assuming this value is 20, but it is subject to change from time to time. How do I keep it open-ended so I don't have to guess the maximum value for length? Code as follows:
import requests

# 'data' is the request payload for the inline view (defined earlier in my script).
data_requested = requests.post(
    "https://data.nj.gov/views/INLINE/rows.json?"
    "accessType=WEBSITE&method=getByIds&asHashes=true&start=0&length=20",
    json=data)
njcc_data = data_requested.json()
Notice that this has nothing to do with requests.post - the range of values length can take is determined by the creator of that API and is an unknown quantity both to you and to requests.
You can try to reason about what possible values it could take: is it the length of a person? If so, it's probably not going to be more than 250 cm.
You can also use trial and error and see how high you can make it before the API endpoint gives back an error, but I guess this is what you were trying to avoid.
If length is the number of items returned (the length of the returned JSON array), then you could just try setting it to a high number like 1000 and see if you can get away with it.
I'm using this API request:
https://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=51.540951897949|-0.051086739997922&format=json&gslimit=50&continue=
which delivers 50 results. I want to use the 'continue' parameter to get the next page of results. According to the documentation I should get a continue field back in the results, but I don't get any such field, so I can't get the next page.
Does anyone have any suggestions?
Dave, as @svick says, it seems list=geosearch (which is part of extension:GeoData) does not support continuation; indeed, it actually returns a "batchcomplete" element to indicate there are no more results (see it in human-readable form).
I think you should either just get the maximum number of results (500 for users, 5000 for bots on Wikipedia), or if that's not satisfactory for your use case (which is?), pipe in at task T78703.
(Or, if you believe it to be a separate issue, report a new bug.)
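For example, the same query with the limit raised; a sketch using Java 11's HttpClient, with the coordinates from the question and 500 as the per-user maximum (the pipe in gscoord is percent-encoded as %7C):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GeoSearchExample {
    public static void main(String[] args) throws Exception {
        // Same geosearch query as before, but with gslimit raised to 500.
        String url = "https://en.wikipedia.org/w/api.php?action=query&list=geosearch"
                + "&gsradius=10000&gscoord=51.540951897949%7C-0.051086739997922"
                + "&gslimit=500&format=json";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}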
I see that a value in a view has exceeded the 1 MB size limit (it is 1.7 MB) and is therefore not getting emitted in views. I have tried changing the value of max_kv_size_per_doc in default.ini (and then restarted Couchbase), but the value is still not getting emitted.
Could someone please suggest a workaround?
Yes: don't emit whole documents in views; it's not a recommended practice. In fact, by emitting docs in views you're creating a copy of each doc in your storage (== bad): 100 docs plus a view that returns them take up the space of 200 docs.
Instead, emit the keys of the docs and retrieve the docs after you get the results from the view, or just emit the part of the doc that you need (if it's smallish).
Edit: I'm guessing you haven't tried the "include_docs" option? It should attach the complete doc to your results without creating a duplicate.
Please help me with the following issue:
I have a simple JMeter test where variables are stored in a CSV file. There is only one request in the test:
GET .../api/${page}, where ${page} is a variable from the CSV
Everything goes well with thread properties of, for example, 10 threads x 30 loop count.
If I increase either parameter, for example to 10x40 or 15x30, I receive at least one error, and it looks like this is a JMeter issue:
one request (at random) isn't able to take its variable from the CSV and I get an error:
- .../api/page returns a 404 error
So the question is: is there any limit on JMeter's connection to the CSV file?
Thanks in advance.
A key point to focus on is the way your application manages the case when two different users request the same page.
There are a few checks that I would recommend:
be sure that the "Recycle on EOF" property is true
be sure that you have more lines in the CSV than the number of threads you are firing
use a "View Results Tree" listener to investigate the kind of error you are getting
Let us know