Remove all cache from Azure APIM

I am looking for a way to programmatically expire/delete the internal cache in Azure APIM. I understand that with custom cache policies you can remove a cache entry based on its key. However, I don't need this; I am simply using the following policies to cache the entire response.
<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" />
<cache-store duration="86400" />
I don't see any way to remove this cache other than updating the policy and invoking the operation again so that a new cache entry is created.
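For reference, the key-based removal mentioned above (the approach this question rules out) looks roughly like the following; the key expression is hypothetical and would have to match whatever key a corresponding cache-store-value policy used:
<!-- Hypothetical: removes a single entry stored with cache-store-value under the same key -->
<cache-remove-value key="@("profile-" + context.Variables["enduserid"])" />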

Related

How do I get the OWASP ZAP Ajax scan to run in a GitHub workflow?

I have an OWASP ZAP workflow that runs, and I am trying to add the Ajax scan by adding "-j", thus:
uses: zaproxy/action-full-scan@v0.2.0
with:
  target: "https://example.com/"
  # use the Ajax spider in addition to the traditional one
  cmd_options: "-j"
This runs, but I assumed there would be an additional report created (there isn't) and the only mention of ajax in the logs is:
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$2
(file:/root/.ZAP/plugin/spiderAjax-release-23.3.0.zap) to method
java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
Looks like a permission issue but I do not know how to get around it, or even if it is the problem.
That's just a warning - you can ignore it. We are planning to fix that in due course.
ZAP will not generate an additional report if you use the Ajax Spider, but it is likely to include more info if the Ajax Spider finds more URLs.

Disable Search Parameters

I have an instance of Microsoft FHIR Server, and I would like to disable some of the search parameters. Can I do this by updating the SearchParameter resource and setting its "status" to "retired", or do I need to add the parameter's URL to the unsupported-search-parameters list? The goal is to reduce the number of search values indexed when our application does not use those search parameters.
P.S. It would be nice if the solution allowed re-activating the search parameter if needed (and performing $reindex).
Thanks!
There is currently no API level support to do this with the built-in FHIR parameters.
The values in unsupported-search-parameters are loaded into the database, and after that they are tracked there. This is because over time the server may support new parameters that can't be turned on immediately, as doing so would leave the indexes inconsistent.
In the Cosmos collection the status can be "Enabled", "Supported", "Disabled" or "Deleted". If a parameter is Supported, it will not be available for search but will continue to be indexed.
When Disabled, the server will recheck for support; I believe that when set to Deleted it will no longer index the data.
To re-enable a parameter, it can be set back to Supported.

Is there any way to retry 'BackendConnectionFailure at transfer-response' errors in Azure API Management

I am having intermittent connectivity problems with an old legacy API that sometimes causes a
'BackendConnectionFailure at transfer-response' error to be returned from Azure API Management. From my experience, retrying the request to the legacy API is usually successful. I have a retry policy similar to the one below that checks for 5xx status codes; however, the retries do not seem to take place.
<retry
    condition="@(context.Response.StatusCode == 500)"
    count="10"
    interval="10"
    max-interval="100"
    delta="10"
    first-fast-retry="false">
    <forward-request buffer-request-body="true" />
</retry>
Upon further research Application Insights seems to indicate that the Backend Dependency has a call status = false, but a Result Code = 200.
Is there any way to detect this condition so that a retry takes place, or any other policies that can be used?
In your policy above, retry covers only receipt of the response status code and headers from the backend. The response body is not proactively read by APIM; instead it is transferred directly from the backend to the client piece by piece. That is what "transfer response" means, and by that time all your policies have already completed.
One way to avoid that is to proactively buffer the response from the backend on the APIM side. Try adding this as the first thing in the outbound section:
<set-body>@(context.Response.Body.As<byte[]>())</set-body>
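A minimal sketch of how the two pieces could sit together (the section layout is illustrative; a >= 500 condition is used here to reflect the 5xx intent described in the question rather than the exact == 500 check):
<backend>
    <retry
        condition="@(context.Response.StatusCode >= 500)"
        count="10"
        interval="10"
        max-interval="100"
        delta="10"
        first-fast-retry="false">
        <forward-request buffer-request-body="true" />
    </retry>
</backend>
<outbound>
    <!-- Read the whole body at APIM so the transfer from the backend completes
         while policies are still running, instead of streaming it to the client -->
    <set-body>@(context.Response.Body.As<byte[]>())</set-body>
    <base />
</outbound>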

What is the difference between the ckanext-ngsiview and right_time_context plugins in CKAN?

What is the difference between the ckanext-ngsiview and ckanext-right_time_context plugins in CKAN?
I was using conwetlab's ckanext-ngsiview; they have made another release and renamed it to right_time_context.
While working with the latter plugin, I didn't receive the expected result that I used to get with ngsiview. (Screenshot attached.)
Also, do I need to enable any other plugin for right_time_context?
After adding the id ngsi_view in my development.ini file, I get an error.
The NGSI view is not rendered after adding right_time_context as the id in my .ini file.
The plugin has evolved and is not only a view for NGSI anymore, so we think the old name no longer represented its functionality. On the other hand, Telefónica has the previous name registered on PyPI, so we cannot make releases using that name. Apart from that, the new version is an evolution of the previous releases we made.
That message means that there is no view configured for that resource. I'm guessing you are complaining because the raw NGSI view was not configured automatically (in fact this can be perfectly OK, since you may want to add views manually).
To enable automatic configuration of the raw view, make sure you include the ngsi_view view in the ckan.views.default_views setting. The important detail here is that the id of the view has changed from ngsiview to ngsi_view in this new version. Take into account that this is not the id of the plugin, which is right_time_context; that id is the one you have to use to enable the plugin via ckan.plugins.
Also, do I need to enable any other plugin for right_time_context?
The resource_proxy plugin (which comes directly with CKAN but has to be enabled) is required for using the raw view, although it is optional if you don't need that view.
The ckanext-oauth2 plugin is required to make requests to secured Context Broker instances.
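Putting the answer together, a hypothetical development.ini excerpt could look like this (the other plugin and view ids are placeholders for whatever your instance already uses):
# Hypothetical excerpt: right_time_context is the plugin id, ngsi_view is the view id,
# and resource_proxy is the CKAN core plugin needed for the raw view.
ckan.plugins = stats text_view resource_proxy right_time_context
ckan.views.default_views = text_view ngsi_view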

How can I configure Polymer's platinum-sw-* to NOT cache one URL path?

How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json">
  </platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the result of the request the way the browser would if the service worker were not handling the request.
// Pass the request straight through to the network, as the browser
// would if no service worker were handling it.
var HoodieAPIFetchHandler = function(request, values, options) {
  return fetch(request);
};
What doesn't seem to be working correctly is that after user 1 has signed in and then signed out, and user 2 has signed in, the Network tab in Chrome DevTools shows that Hoodie regularly continues to make requests to BOTH users' API endpoints, like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
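For example, a throwaway logging variant of the handler from the question (a sketch; remove the logging once you've confirmed the handler is being hit) could look like:
// Temporary debug version of the handler: log every intercepted request so the
// service worker console (via chrome://serviceworker-internals > Inspect) shows
// whether this handler is actually being used for /_api requests.
var HoodieAPIFetchHandler = function(request, values, options) {
  console.log('HoodieAPIFetchHandler handling', request.url);
  return fetch(request);
};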
What you're seeing in the Network tab of the controlled page is expected—the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged—they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see if it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
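As a rough illustration (the URL to remove is a placeholder; it depends on which personalized resource your app actually caches), a cleanup call inside a custom handler script could look like:
// Sketch: in a script loaded via <platinum-sw-import-script>, sw-toolbox is
// available as the global `toolbox`; uncache() removes a previously cached URL.
toolbox.uncache('/user-profile.html').then(function() {
  console.log('stale cache entry removed');
});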