Couchbase - Can the N1QL DP4 release handle the stale option being set on a query?

I have been running some tests using the DP4 release of N1QL.
It seems that if I write to the database (save a document), I can access it by key straight away, but if I run a query to find it by document type and another matching value, it doesn't come back in the results for between 1 and 10 seconds.
After this time has passed, the query returns the expected result.
I have seen the issue raised here: https://issues.couchbase.com/browse/MB-10944
The issue says it is resolved in DP4 but there is no confirmation of this or documentation on how to use the new feature.
Has anybody figured out how to do this or could one of the Couchbase developers lend a hand?

Yes, but that feature is currently not available via the N1QL shell; you will need to use the HTTP REST API directly to pass those parameters.
e.g.
curl -v http://localhost:8093/query/service -d 'statement=select * from default&scan_consistency=REQUEST_PLUS'
By setting the scan_consistency parameter to 'REQUEST_PLUS', N1QL will set stale=false internally for the view scan.
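For reference, the same request can be written with separate -d flags, since curl joins multiple -d values with an ampersand (same statement and parameter as above):
curl -v http://localhost:8093/query/service \
  -d 'statement=select * from default' \
  -d 'scan_consistency=REQUEST_PLUS'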


Is there a way to find the workitems for a release in Azure DevOps Api 5.1?

I can find a release in Azure DevOps Api 5.1 by a request to https://vsrm.dev.azure.com/mycompany/myproject/_apis/release/releases/myreleaseid?api-version=5.1
How can I get the workitems of that release as shown on the devops portal under Deployment - Stages - Workitems?
My naive approach just using https://vsrm.dev.azure.com/mycompany/myproject/_apis/release/releases/myreleaseid/workitems?api-version=5.1
resulted in a 404.
There is a stakeholder in the work item and I want to send them a notification after the release.
How can I get the workitems of that release as shown on the devops portal under Deployment - Stages - Workitems?
Hard to say; I couldn't find any documentation about this topic, so I used the browser's F12 developer tools to watch the portal's network requests. Here's the call I finally found:
Get:https://vsrm.dev.azure.com/mycompany/myproject/_apis/Release/releases/myreleaseId/workitems?baseReleaseId={my baseReleaseId}&%24top=250&artifactAlias={my artifactAlias}
It returns the IDs of the work items for the release.
After you get the IDs, it's easy to fetch the details if you need them using Get Work Items Batch or a similar API.
In addition:
1. myreleaseId is the release ID. (On my side, the ID is 7 if it's Release-7.)
2. My artifactAlias is the alias of the artifact linked to the release.
3. For my baseReleaseId, I'm not 100% sure about its meaning. I think it could be something like ReleaseToCompareAgainst (hint from Daniel). On my side, if my releaseId=7, then baseReleaseId=6 (7-1) works to get the correct work item IDs. (I suggest you use F12 on that web page to check your corresponding URL.)
And according to Mathias F: the baseReleaseId really is the last previous release that has a deployment (-1 in some cases).
4. The same F12 technique works for finding other REST APIs that may not be documented: open the portal page that shows the data you want, open the browser's network tab, and watch the requests it makes.
Hope all above helps :)

Supplying 1 JDBC request result to multiple threads in JMeter

I'm facing an issue in JMeter. The API I am testing gets its parameters from a prior JDBC request.
This works fine when there is only 1 thread, but when I run multiple threads it throws the error below:
{"Message": "A transient error has occurred. Please try again. (1205)","Data":null}
I need to run 5 threads without having to run the JDBC request 5 times.
I can retrieve 5 results in 1 JDBC call and supply them sequentially, one to each thread. Is this possible? How can I do it?
Worst-case scenario, I will have to manually set up a CSV file instead of using JDBC calls.
Normally people use a setUp Thread Group for test data preparation and a tearDown Thread Group for eventual clean-up. I would suggest moving your JDBC Request under a setUp Thread Group and running it with 1 virtual user.
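One caveat: JMeter variables are local to the thread group that created them, so results produced in a setUp Thread Group are not directly visible to the main threads. The usual workaround is to copy them into JMeter properties, e.g. with a JSR223 PostProcessor (language: javascript) attached to the JDBC Request; this sketch assumes the myVar_* variable names shown below:
for (var i = 1; i <= 5; i++) {
    // copy each JDBC result variable into a global JMeter property
    props.put('myVar_' + i, vars.get('myVar_' + i));
}
Samplers in other thread groups can then read the values with ${__P(myVar_${__threadNum},)}.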
If you have to keep the test plan structure as it is and can amend the SQL query to return more results, be aware that according to the JDBC Request sampler documentation the results look like:
myVar_#=5
myVar_1=foo
myVar_2=bar
myVar_3=baz
myVar_4=qux
myVar_5=corge
Therefore you can access the values using a combination of the __V() and __threadNum() functions, like:
${__V(myVar_${__threadNum},)}

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints, in a single getData() call (they should all be the same).
I can't post the code here (client project), but it's substantially the same framework as the Data Connector code in Google's docs. I have caching and backoff implemented.
Looking for any ideas, or has anyone experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will feature sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
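One practical consequence: you can detect those calls and cap the amount of data you fetch for them. A minimal sketch, where getFields() is the schema helper from Google's connector template and fetchRowsFromApi() is a hypothetical placeholder for your own API call:
function getData(request) {
  var requestedFields = getFields().forIds(
    request.fields.map(function (f) { return f.name; })
  );
  // Semantic-detection requests only need a handful of rows, so cap the fetch.
  var limit = request.sampleExtraction ? 25 : undefined;
  return {
    schema: requestedFields.build(),
    rows: fetchRowsFromApi(request, requestedFields, limit) // placeholder helper
  };
}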
If the GDS report includes multiple widgets with different dimension/metric configurations, then GDS might fire a separate getData call for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter retrieves data via API calls to third-party services, and that data is agnostic to the request.fields property sent by GDS, then those API calls are multiplied by N+1 (where N = the amount of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's getData request (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve the API responses, paginating in a loop, and buffer the results.
Once it has finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it buffered so far inside "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing prior to the primary getData call, so we add a small delay for requests that look like search filters / widgets going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
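For illustration, a minimal Apps Script sketch of that hash, assuming request.dateRange and the connector's configParams cover all the moving parts:
var hashOfReportParameters = Utilities.base64Encode(
  Utilities.computeDigest(
    Utilities.DigestAlgorithm.MD5,
    // serialize everything that can change the underlying data set
    JSON.stringify([request.dateRange, request.configParams])
  )
);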
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final"; in case that fails, it's always a good idea to have a backup plan, which is to allow the call to traverse the API again. We have encountered a ~2% error rate when retrieving data we cached.
With the cached result (or the buffered API responses), you just transform the response into the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem: Google's cache service is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and have come up with a smart solution: split the big chunk you need cached across multiple cache keys, and glue them back together into one object when retrieving.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
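The underlying idea is roughly the following (a hedged sketch, not the linked library's actual API):
// Store a large string across several cache keys, ~90KB each, plus a chunk count.
function putLarge(cache, key, value, ttlSeconds) {
  var chunkSize = 90 * 1024; // stay safely under the ~100KB per-key limit
  var count = Math.ceil(value.length / chunkSize);
  for (var i = 0; i < count; i++) {
    cache.put(key + '_' + i, value.slice(i * chunkSize, (i + 1) * chunkSize), ttlSeconds);
  }
  cache.put(key + '_count', String(count), ttlSeconds);
}
// Reassemble the chunks; treat any missing chunk as a cache miss.
function getLarge(cache, key) {
  var count = Number(cache.get(key + '_count'));
  if (!count) return null;
  var parts = [];
  for (var i = 0; i < count; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null;
    parts.push(part);
  }
  return parts.join('');
}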
I cannot share the final solution we implemented, as it is too specific to a client, but I hope this at least gives you a good idea of how to approach the problem.
Caching the full API result is a good idea in general: it avoids needless round trips and server load, as long as near-realtime data is good enough for your needs.

Socrata distance_in_meters(...) returns errors even on 2.1 endpoint

When I make a query from the CMS Nursing Home Compare data and attempt to order using distance_in_meters(...) I get the following error:
Error: function distance_in_meters is not defined in SoQL.
The docs say distance_in_meters(...) works with the 2.1 endpoint: https://dev.socrata.com/docs/functions/distance_in_meters.html (at the time of this writing, the link to further information on endpoints at the top of that page is stuck in a redirect loop, but you can get the page from the Google cache).
I have confirmed the data set is using the 2.1 endpoint.
To be sure it wasn't an issue with just the order clause, I also set it up in the select. Two variations:
https://data.medicare.gov/resource/4pq5-n9py.json?$select=*,%20distance_in_meters(location,%20%27POINT%20(-78.9627624%2043.0171854)%27)%20AS%20range&$where=within_circle(location,43.0171854,-78.9627624,16093.44)
https://data.medicare.gov/resource/4pq5-n9py.json?$where=within_circle(location,43.0171854,-78.9627624,16093.44)&$order=distance_in_meters(location,%20%27POINT%20(-78.9627624%2043.0171854)%27)
So, the question ultimately is: does the 2.1 endpoint not support distance_in_meters(...), or am I missing something stupidly obvious?
It looks like, although you're reading the API docs for the 2.1 endpoint, you're actually using the 2.0 endpoint in your queries. You should use:
https://data.medicare.gov/resource/b27b-2uc7.json
... instead of:
https://data.medicare.gov/resource/4pq5-n9py.json
I changed the root endpoint to the 2.1 version, and the query returns the results you'd expect:
https://data.medicare.gov/resource/b27b-2uc7.json?$where=within_circle(location,43.0171854,-78.9627624,16093.44)&$order=distance_in_meters(location,%20%27POINT%20(-78.9627624%2043.0171854)%27)

How to ping from Zabbix agent?

Is it possible to ping from the Zabbix agent and pass that data to the Zabbix server? I would like to be able to get the response time from the agent.
I read that it is possible by using fping; it would be great if someone could guide me down the correct path.
Thank you,
Rijath Mohammed
While that is not currently available out of the box, you can implement such functionality using a feature called "user parameters". This forum thread has a simple example:
UserParameter=myping[*],/etc/zabbix/fping -q $1;echo $?
Although for you the path to fping is likely to be /usr/sbin/fping or /usr/bin/fping.
You can read more about user parameters in the official manual: https://www.zabbix.com/documentation/3.0/manual/config/items/userparameters
While I haven't ever configured that, it would be similar on Windows - see this forum thread for some inspiration.
And if you would like to see this feature implemented out of the box, make sure to vote on this feature request.
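Note that the fping example above returns an exit code rather than the response time. If it's the round-trip time you're after, a sketch along these lines might work (the myping.time key is hypothetical; fping's -C mode prints per-packet round-trip times on stderr, so the path and parsing may need adjusting for your version):
UserParameter=myping.time[*],/usr/sbin/fping -C1 -q $1 2>&1 | awk '{print $NF}'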
Got it working using the PowerShell script below :)
$Test = Test-Connection google.com -Count 1
$Test.ResponseTime
This just returns the response time for google.com, and that value is passed to Zabbix using the user parameter below:
UnsafeUserParameters=1
UserParameter=ping.google,C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe C:\zabbix\pinggoogle.ps1
I am calling this parameter from Zabbix using the key "ping.google"
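If the agent has trouble launching the script that way, a common variant is to invoke it explicitly with -File and relax the execution policy for the call (these are standard powershell.exe switches):
UserParameter=ping.google,C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\zabbix\pinggoogle.ps1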