GSI Instantiation Failed in Couchbase

When I try to run my N1QL query, which uses GSI, the log says:
GSI instantiation failed: Post /_metakv: missing port in address
I have tried googling it, but the main result I get is issue MB-15001.
For example, when I fire the query:
CREATE INDEX ko ON beer-sample(name);
Result is:
{
  "requestID": "63ba3eae-528c-4042-a8ba-807a7096144d",
  "signature": null,
  "results": [],
  "status": "success",
  "metrics": {
    "elapsedTime": "6.092730473s",
    "executionTime": "6.092483222s",
    "resultCount": 0,
    "resultSize": 0
  }
}
But when I fire the same query using GSI, i.e.
CREATE INDEX new ON beer-sample(name) USING GSI ;
Result:
{
  "requestID": "a864c2a7-475d-4794-b267-cca89efb9b9e",
  "signature": null,
  "results": [],
  "errors": [
    {
      "code": 12005,
      "msg": "Indexer not implemented GSI may not be enabled"
    }
  ],
  "status": "errors",
  "metrics": {
    "elapsedTime": "1.194775ms",
    "executionTime": "1.008475ms",
    "resultCount": 0,
    "resultSize": 0,
    "errorCount": 1
  }
}
In the Logger:
time=2015-07-22T12:07:18+05:30 level=ERROR _msg=GSIC[default; beer-sample] GSI instantiation failed: Post /metakv: missing port in address
time=2015-07-22T12:07:18+05:30 _level=WARN _msg=Error loading GSI indexes for keyspace beer-sample. Error GSI client instantiation failed - cause: Post /metakv: missing port in address
Please provide a detailed solution.

I'm not sure what OS you are using, but if it is Amazon Linux, be sure to run
yum install openssl098e
before installing Couchbase. I had the same issue, and since installing openssl098e everything has been working great.
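For reference, the full sequence on a fresh Amazon Linux box would look roughly like this; the RPM filename is illustrative and depends on the Couchbase version you downloaded:
# Install the compatibility OpenSSL library first
sudo yum install -y openssl098e
# Then install (or reinstall) Couchbase Server
sudo rpm -i couchbase-server-enterprise-4.0.0-centos6.x86_64.rpm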

APIM logger resource referring to AI in other subscription

Trying to enable Application Insights on an API Management Service. The Application Insights instance is in another subscription. The parameter "ApplicationInsightsInstanceRI" contains the full resource id of the AI instance.
Error:
InvalidResourceType: The resource type could not be found in the namespace 'Microsoft.Insights' for api version '2019-12-01'.
"type": "Microsoft.ApiManagement/service/loggers",
"name": "[concat(parameters('apiManagementServiceName'), '/', parameters('ApplicationInsightsInstanceName'))]",
"dependsOn": ["[resourceId('Microsoft.ApiManagement/service', parameters('apiManagementServiceName'))]"],
"apiVersion": "2018-06-01-preview",
"properties": {
"loggerType": "applicationInsights",
"description": "Logger resources to APIM",
"resourceid": "[parameters('ApplicationInsightsInstanceRI')]"
"credentials": {
"instrumentationKey": "[reference(resourceId('Microsoft.Insights/component', parameters('ApplicationInsightsInstanceName')), '2019-12-01', 'Full').properties.InstrumentationKey]",
Any idea why this error occurs?
This error is due to an invalid instrumentation key. After specifying the instrumentation key directly in my template, I was able to get the desired result, and the API call is working fine.
Below is the template that worked for me.
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "apiVersion": "2022-04-01-preview",
  "name": "[concat(parameters('service_HelloWorld_APimanagement_name'), '/sangammigrationmetrics')]",
  "dependsOn": [
    "[resourceId('Microsoft.ApiManagement/service', parameters('service_HelloWorld_APimanagement_name'))]"
  ],
  "properties": {
    "loggerType": "applicationInsights",
    "credentials": {
      "instrumentationKey": "{{<INSTRUMENTATION_KEY>}}"
    },
    "isBuffered": true,
    "resourceId": "[parameters('components_SangamMigrationMetrics_externalid')]"
  }
}
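If it helps, a template like this can be deployed with the Azure CLI; the resource group, file name and parameter values below are only illustrative:
az deployment group create \
  --resource-group <resource-group> \
  --template-file apim-logger.json \
  --parameters service_HelloWorld_APimanagement_name=<apim-name> components_SangamMigrationMetrics_externalid=<ai-resource-id>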

FluxMonitor locally: FROM address in transaction is wrong

I'm trying to run decentralized-model locally. I've managed to deploy:
Link contract
AggregatorProxy
FluxAggregator
Consumer contract
Oracle node (offchain)
External adapters (coingecko + coinapi)
I'm mainly struggling with the last piece, which is creating a job that uses the FluxMonitor initiator.
I've created the following job, where "0x5379A65A620aEb405C5C5338bA1767AcB48d6750" is the address of the FluxAggregator contract:
{
  "initiators": [
    {
      "type": "fluxmonitor",
      "params": {
        "address": "0x5379A65A620aEb405C5C5338bA1767AcB48d6750",
        "requestData": {
          "data": {
            "from": "ETH",
            "to": "USD"
          }
        },
        "feeds": [
          { "bridge": "coinapi_cl_ea" },
          { "bridge": "coingecko_cl_ea" }
        ],
        "threshold": 1,
        "absoluteThreshold": 1,
        "precision": 8,
        "pollTimer": {
          "period": "15m0s"
        },
        "idleTimer": {
          "duration": "1h0m0s"
        }
      }
    }
  ],
  "tasks": [
    {
      "type": "NoOp"
    }
  ]
}
Unfortunately, it doesn't work; it makes my local Ganache fail with the error "Error: The nonce generation function failed, or the private key was invalid".
I've put my Ganache in debug mode in order to log requests to the blockchain, and noticed the following call:
eth_call
{
  "jsonrpc": "2.0",
  "id": 28,
  "method": "eth_call",
  "params": [
    {
      "data": "0xfeaf968c",
      "from": "0x0000000000000000000000000000000000000000",
      "to": "0x5379a65a620aeb405c5c5338ba1767acb48d6750"
    },
    "latest"
  ]
}
The function signature is correct:
"latestRoundData()": "feaf968c"
However, what seems weird is that the from address is "0x0". Any idea why my Oracle node doesn't use its key to sign the transaction?
Thanks a lot.
The problem comes from Ganache. In fact, I wrote a Truffle script which:
calls "latestRoundData()" populating the "FROM" with a valid address
calls "latestRoundData()" populating the "FROM" with a 0x0 address
Then I ran the script twice:
Connecting to ganache-cli --> the 1st call is successful, while the 2nd call fails
Connecting to the Kovan testnet --> both calls are successful
I've just opened an issue for ganache-cli team: https://github.com/trufflesuite/ganache-cli/issues/840
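For what it's worth, the two calls can also be reproduced outside Truffle by sending the eth_call JSON-RPC request directly with curl; the Ganache endpoint (http://localhost:8545) and the funded from address are assumptions for illustration:
# latestRoundData() (selector 0xfeaf968c) with a zero from address -- fails against ganache-cli
curl -s -X POST http://localhost:8545 -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{"from":"0x0000000000000000000000000000000000000000","to":"0x5379A65A620aEb405C5C5338bA1767AcB48d6750","data":"0xfeaf968c"},"latest"]}'
# The same call with a funded Ganache account as from -- succeeds
curl -s -X POST http://localhost:8545 -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"eth_call","params":[{"from":"0x90F8bf6A479f320ead074411a4B0e7944Ea8c9C1","to":"0x5379A65A620aEb405C5C5338bA1767AcB48d6750","data":"0xfeaf968c"},"latest"]}'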

Get host status by CheckMK Web-API

I'm trying to get the status of a host with the CheckMK Web API. Can someone point me in the right direction on how to get this data?
We're currently using CheckMK enterprise 1.4.0.
I've tried:
https://<monitoringhost.tld>/<site>/check_mk/webapi.py?action=get_host&_username=<user>&_secret=<secret>&output_format=json&effective_attributes=1&request={"hostname": "<hostname>"}
But the response does not contain any relevant information about the host itself (e.g. state up/down, uptime, etc.):
{
  "result": {
    "attributes": {
      "network_scan": {
        "scan_interval": 86400,
        "exclude_ranges": [],
        "ip_ranges": [],
        "run_as": "api"
      },
      "tag_agent": "cmk-agent",
      "snmp_community": null,
      "ipv6address": "",
      "alias": "",
      "management_protocol": null,
      "site": "testjke",
      "tag_address_family": "ip-v4-only",
      "tag_criticality": "prod",
      "contactgroups": [
        true,
        []
      ],
      "network_scan_result": {
        "start": null,
        "state": null,
        "end": null,
        "output": ""
      },
      "parents": [],
      "management_address": "",
      "tag_networking": "lan",
      "ipaddress": "",
      "management_snmp_community": null
    },
    "hostname": "<host>",
    "path": ""
  },
  "result_code": 0
}
The Web API is only for getting/setting the configuration of a host or other objects. If you want to get the live status of a host, use Livestatus.
If you enabled Livestatus on port 6557 (the default), you can query the status of a host over the network. If you are logged into a shell locally, you can use 'lq'.
OMD[mysite]:~$ lq "GET hosts\nColumns: name"
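To get the live state of one specific host, a query along these lines should work; the host name is illustrative (state is 0 = UP, 1 = DOWN, 2 = UNREACHABLE):
OMD[mysite]:~$ lq "GET hosts\nColumns: name state\nFilter: name = myhost"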
Why:
The CheckMK Web API is for accessing WATO. WATO is the source for creating the Nagios configuration. Nagios does the monitoring of the hosts, and the Livestatus API is an extension of the Nagios core.
http://<monitoringhost.tld>/<site>/check_mk/view.py?view_name=allhosts&output_format=csv
You can use all the views that you see in the web UI by adding output_format=[csv|json|python].
You will get the data of the table that you see.
You also need to add the credentials, as seen in your question.
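Putting that together, a request along these lines should return the host table (including state columns) as JSON; the user and secret are placeholders, as in your question:
curl 'https://<monitoringhost.tld>/<site>/check_mk/view.py?view_name=allhosts&output_format=json&_username=<user>&_secret=<secret>'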

Spring-data-rest testing in POSTMAN

I am developing an application in Spring Data REST. I am testing POST requests from the POSTMAN client to check whether the data is inserted into the DB. In my DB I have a cartItems table. I am able to POST the data when I POST the following JSON (merchandise, cart and merchandiseType are foreign-key references):
{
  "rate": 500,
  "quantity": 1,
  "amount": 500,
  "createdAt": "2015-04-12T23:40:00.000+0000",
  "updatedAt": "2015-04-14T21:35:20.000+0000",
  "merchandise": "http://localhost:8080/sportsrest/merchandises/10",
  "cart": "http://localhost:8080/sportsrest/carts/902",
  "merchandiseType": "http://localhost:8080/sportsrest/merchandiseTypes/1"
}
But when I POST the data as below, I get an error. Instead of the URL for merchandise, I placed the merchandise JSON itself (which I had already tested with a POST request against the merchandise table):
{
  "rate": 500,
  "quantity": 1,
  "amount": 500,
  "createdAt": "2015-04-12T23:40:00.000+0000",
  "updatedAt": "2015-04-14T21:35:20.000+0000",
  "merchandise": {
    "id": 4,
    "shortDescription": "white football",
    "rate": 500,
    "updatedAt": "2015-04-24T18:30:00.000+0000",
    "createdAt": "2015-04-20T18:30:00.000+0000",
    "longDescription": "test description for binary 1001",
    "type": "1"
  },
  "cart": "http://localhost:8080/sportsrest/carts/902",
  "merchandiseType": "http://localhost:8080/sportsrest/merchandiseTypes/1"
}
I am getting the following error:
{
  "cause": {
    "cause": {
      "cause": null,
      "message": "Template must not be null or empty!"
    },
    "message": "Template must not be null or empty! (through reference chain: co.vitti.sports.bean.CartItem[\"merchandise\"])"
  },
  "message": "Could not read JSON: Template must not be null or empty! (through reference chain: co.vitti.sports.bean.CartItem[\"merchandise\"]); nested exception is com.fasterxml.jackson.databind.JsonMappingException: Template must not be null or empty! (through reference chain: co.vitti.sports.bean.CartItem[\"merchandise\"])"
}
Can someone please help me understand why I am getting this error?
Thanks.
I guess you didn't provide the Content-Type: application/json header.
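For example, a request along these lines should succeed; the /cartItems endpoint is an assumption based on the table name mentioned in the question:
curl -X POST http://localhost:8080/sportsrest/cartItems \
  -H 'Content-Type: application/json' \
  -d '{
        "rate": 500,
        "quantity": 1,
        "amount": 500,
        "merchandise": "http://localhost:8080/sportsrest/merchandises/10",
        "cart": "http://localhost:8080/sportsrest/carts/902",
        "merchandiseType": "http://localhost:8080/sportsrest/merchandiseTypes/1"
      }'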
Try to annotate your "MerchandiseRepository" with "@RestResource(exported = false)".

How to rename a bucket in couchbase?

I have a bucket named 0001. When I use the following N1QL statement, I get a code 5000 syntax error:
cbq> Select * from 0001;
{
  "requestID": "f2b70856-f80c-4c89-ab37-740e82d119b5",
  "errors": [
    {
      "code": 5000,
      "msg": "syntax error"
    }
  ],
  "status": "fatal",
  "metrics": {
    "elapsedTime": "349.733us",
    "executionTime": "204.442us",
    "resultCount": 0,
    "resultSize": 0,
    "errorCount": 1
  }
}
I think it treats 0001 as a number rather than as a bucket name. Is there an easy way to rename it?
In this case you can use backticks in N1QL to escape the bucket name:
cbq> Select * from `0001`;
{
  "requestID": "f48527e6-6035-47e7-a34f-90efe9f90d4f",
  "signature": {
    "*": "*"
  },
  "results": [
    {
      "0001": {
        "Hello": "World"
      }
    }
  ],
  "status": "success",
  "metrics": {
    "elapsedTime": "2.410929ms",
    "executionTime": "2.363788ms",
    "resultCount": 1,
    "resultSize": 80
  }
}
Currently there is no way to rename a bucket; instead, you could do one of the following:
Back up the bucket using cbbackup, then recreate it and restore the data using cbrestore (sketched below).
Create a second cluster and use XDCR to transfer the data to the new cluster with the correctly named bucket.
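A rough sketch of the backup/restore route; the host, credentials and backup path are illustrative:
# Back up the old bucket
cbbackup http://localhost:8091 /tmp/0001-backup -u Administrator -p password -b 0001
# Recreate the bucket under the new name, then restore the data into it
cbrestore /tmp/0001-backup http://localhost:8091 -u Administrator -p password -b 0001 -B new_bucket_name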
There is no way that I can see to rename. I checked the CLI as well, and found nothing. Your best bet, if you can, is to create a new bucket with the settings you want and then use cbtransfer to move the data over from the old bucket to the new one. This is an online operation.
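A rough sketch of that cbtransfer run, assuming the new bucket already exists; the host and credentials are illustrative:
cbtransfer http://localhost:8091 http://localhost:8091 -b 0001 -B new_bucket_name -u Administrator -p password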