How to rename a bucket in Couchbase?

I have a bucket named 0001. When I use the following N1QL statement, I get a code 5000 syntax error:
cbq> Select * from 0001;
{
    "requestID": "f2b70856-f80c-4c89-ab37-740e82d119b5",
    "errors": [
        {
            "code": 5000,
            "msg": "syntax error"
        }
    ],
    "status": "fatal",
    "metrics": {
        "elapsedTime": "349.733us",
        "executionTime": "204.442us",
        "resultCount": 0,
        "resultSize": 0,
        "errorCount": 1
    }
}
I think it treats 0001 as a number rather than a bucket name. Is there an easy way to rename it?

In this case you can use backticks in N1QL to escape the bucket name:
cbq> Select * from `0001`;
{
    "requestID": "f48527e6-6035-47e7-a34f-90efe9f90d4f",
    "signature": {
        "*": "*"
    },
    "results": [
        {
            "0001": {
                "Hello": "World"
            }
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "2.410929ms",
        "executionTime": "2.363788ms",
        "resultCount": 1,
        "resultSize": 80
    }
}
Currently there is no way to rename a bucket. Instead, you could do one of the following:
Back up the bucket using cbbackup, then recreate it under the new name and restore the data using cbrestore (sketched below).
Create a second cluster and use XDCR to transfer the data to the new cluster with the correctly named bucket.
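For the cbbackup/cbrestore route, a minimal sketch, assuming a single node at localhost:8091, Administrator credentials, a backup directory of /tmp/0001-backup, and that the new bucket (here hypothetically named prices) has already been created:

cbbackup http://localhost:8091 /tmp/0001-backup -u Administrator -p password -b 0001
cbrestore /tmp/0001-backup http://localhost:8091 -u Administrator -p password -b 0001 -B prices

The -b/-B pair on cbrestore (bucket-source/bucket-destination) is what lets the backed-up data land in a bucket with a different name.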

There is no way that I can see to rename a bucket; I checked the CLI as well and found nothing. Your best bet, if you can, is to create a new bucket with the settings you want and then use cbtransfer to move the data over from the old bucket to the new one. This is an online operation.
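A sketch of that cbtransfer move, assuming the same single cluster at localhost:8091 and a hypothetical destination bucket named prices created beforehand with the settings you want:

cbtransfer http://localhost:8091 http://localhost:8091 -b 0001 -B prices -u Administrator -p password

Here -b names the source bucket and -B the destination; because cbtransfer copies documents while the cluster stays up, this is the online operation mentioned above.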

Related

Looking for a way to filter data within an Azure API call

I am looking for a way to extract data out of an Azure environment. The problem I'm currently having is that when I use my API call, I receive about 60 lines of JSON while I only need 4 of those lines. To reduce load, increase efficiency, and remove the need for parsing within the other environment where I need the data, I want to find a way to filter the data in the API call. Currently my call looks like this:
https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Web/sites/{application or resource}/providers/microsoft.insights/metrics?api-version=2021-05-01&metricnames=IoWriteBytesPerSecond,IoReadBytesPerSecond&timeSpan=PT1M
Now the output looks something like this:
{
    "cost": 0,
    "timespan": "2022-10-11T10:18:00Z/2022-10-11T10:19:00Z",
    "interval": "PT1M",
    "value": [
        {
            "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoWriteBytesPerSecond",
            "type": "Microsoft.Insights/metrics",
            "name": {
                "value": "IoWriteBytesPerSecond",
                "localizedValue": "IO Write Bytes Per Second"
            },
            "displayDescription": "The rate at which the app process is writing bytes to I/O operations. For WebApps and FunctionApps.",
            "unit": "BytesPerSecond",
            "timeseries": [
                {
                    "metadatavalues": [],
                    "data": [
                        {
                            "timeStamp": "2022-10-11T10:18:00Z",
                            "total": 288.0
                        }
                    ]
                }
            ],
            "errorCode": "Success"
        },
        {
            "id": "/subscriptions//resourceGroups//providers/Microsoft.Web/sites//providers/Microsoft.Insights/metrics/IoReadBytesPerSecond",
            "type": "Microsoft.Insights/metrics",
            "name": {
                "value": "IoReadBytesPerSecond",
                "localizedValue": "IO Read Bytes Per Second"
            },
            "displayDescription": "The rate at which the app process is reading bytes from I/O operations. For WebApps and FunctionApps.",
            "unit": "BytesPerSecond",
            "timeseries": [
                {
                    "metadatavalues": [],
                    "data": [
                        {
                            "timeStamp": "2022-10-11T10:18:00Z",
                            "total": 284.0
                        }
                    ]
                }
            ],
            "errorCode": "Success"
        }
    ],
    "namespace": "Microsoft.Web/sites",
    "resourceregion": "westeurope"
}
Out of all these lines I only need about 4 objects. Is it possible to use the $filter function within the URL API call? If so, can someone point me to a forum, doc, or example where this is used?
Thanks, regards
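Whether or not the server can trim the payload, the response shape above makes a client-side reduction cheap; a minimal Node.js sketch, assuming Node 18+ (global fetch), with the call above in METRICS_URL and a valid bearer token in AZ_TOKEN (both placeholder names):

// Fetch the metrics response and keep only metric name + timestamp/total pairs.
const METRICS_URL = process.env.METRICS_URL; // the management.azure.com URL above
const AZ_TOKEN = process.env.AZ_TOKEN;       // bearer token for the ARM endpoint

async function main() {
    const res = await fetch(METRICS_URL, {
        headers: { Authorization: `Bearer ${AZ_TOKEN}` },
    });
    const body = await res.json();

    // Flatten value[].timeseries[].data[] into { name, timeStamp, total }.
    const points = body.value.flatMap(metric =>
        metric.timeseries.flatMap(series =>
            series.data.map(d => ({
                name: metric.name.value,
                timeStamp: d.timeStamp,
                total: d.total,
            }))
        )
    );
    console.log(points);
}

main().catch(console.error);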

Amadeus flights API error: carrier code is a 2 or 3 alphanum except YY and YYY

I am using the following SDK to search for and purchase flights via Amadeus:
https://github.com/autotune/amadeus/pull/1/files
This was a previously abandoned project that I have decided to take on and make work. As part of that project, I am trying to purchase a ticket in the sandbox environment and am getting the following error:
{
    "errors": [
        {
            "code": 477,
            "title": "INVALID FORMAT",
            "detail": "carrier code is a 2 or 3 alphanum except YY and YYY",
            "source": {
                "pointer": "/data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode",
                "example": "AF"
            },
            "status": 400
        }
    ]
}
Here is the JSON data being sent:
{
    "type": "flight-order",
    "travelers": [
        {
            "id": "1",
            "dateOfBirth": "1990-02-15",
            "name": {
                "firstName": "Foo",
                "lastName": "Bar"
            },
            "gender": "MALE",
            "contact": {
                "emailAddress": "foo@bar.com",
                "phones": [
                    {
                        "deviceType": "MOBILE",
                        "countryCallingCode": "33",
                        "number": "5555555555"
                    }
                ]
            }
        }
    ],
    "ticketingAgreement": {
        "option": "DELAY_TO_CANCEL",
        "delay": "6D"
    },
    "remarks": {},
    "operating": {
        "carrierCode": "UA"
    }
}
Any help appreciated!
The error suggests that the payload being sent is invalid. I'd advise using a tool like curl or Postman to verify you're following the right API documentation before debugging the actual code.
After further reading your PR and checking the API reference at:
https://developers.amadeus.com/self-service/category/air/api-doc/flight-create-orders/api-reference
I think you need to confirm that the carrier code being passed is present in the segments under:
flightOffers > itineraries > segments
Although the API reference doesn't show operating > carrierCode at the top level the way you used it in the data you sent, my guess, after seeing the API error response you shared, is that they are validating against the flight offers passed.
I suggest you check the results returned when you call the flight offers search and also add them to the payload sent to the sandbox.
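Based on the error pointer (/data/flightOffers[0]/itineraries[1]/segments[0]/operating/carrierCode), the operating carrier appears to be expected per segment inside the flight offer rather than at the top level of the order; a hypothetical fragment of that shape (all values are placeholders):

"flightOffers": [
    {
        "itineraries": [
            {
                "segments": [
                    {
                        "carrierCode": "UA",
                        "number": "123",
                        "operating": {
                            "carrierCode": "UA"
                        }
                    }
                ]
            }
        ]
    }
]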

Azure Data Factory - attempting to add params to dynamic content in the body of a REST API request

In Azure Data Factory, I'm attempting to add params to the body of a copy task (connected to a REST API POST request as the source). I want to use dynamic content to do so, but I'm struggling to find the proper nomenclature. Here's what I have so far.
(screenshots: copy task, dynamic content)
{
    "datatable": {
        "start": 0,
        "length": 10000,
        "filters": [
            {
                "name": "Arrival Dates",
                "start": "pipeline().parameters.pDate1",
                "end": "pipeline().parameters.pDate2"
            }
        ],
        "sort": [
            {
                "name": "start_date",
                "order": "ASC"
            }
        ]
    }
}
You'll notice that I've added params for the dates. Is this the correct nomenclature for adding dynamic content? The editor tried to add the @ sign at the beginning of the code block, which will cause the entire thing to error out. I've tried adding it before each parameter instead, but that isn't actually reading the dynamic values either.
This is not correct. You need to use concat to concatenate the different variables, something like this:
@concat('{ "datatable": { "start":0, "length": 10000, "filters": [ { "name": "Arrival Dates", "start": "',pipeline().parameters.pDate1,'", "end": "',pipeline().parameters.pDate2,'" } ], "sort": [ { "name": "start_date", "order": "ASC" } ] } }')
This is also documented in this SO question.
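If you'd rather keep the body readable as JSON, ADF dynamic content also supports @{...} string interpolation; a sketch of an equivalent body using the same parameters, assuming the whole body is entered as dynamic content:

{
    "datatable": {
        "start": 0,
        "length": 10000,
        "filters": [
            {
                "name": "Arrival Dates",
                "start": "@{pipeline().parameters.pDate1}",
                "end": "@{pipeline().parameters.pDate2}"
            }
        ],
        "sort": [
            {
                "name": "start_date",
                "order": "ASC"
            }
        ]
    }
}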

How to use nested queries for DynamoDB using the AWS CLI

I'm trying to delete some items in DynamoDB where a certain attribute is currently not null. I'm not sure how many items there will be, so first I've used a scan with filter attributes to get those items; then I'll feed that result set into delete-item.
So far I have been able to perform the first step, but I couldn't figure out how nested queries might work with DynamoDB (using the AWS CLI).
filter.json
{
    "demo": {
        "ComparisonOperator": "NOT_NULL"
    }
}
First Query:
aws dynamodb scan --table-name test --scan-filter file://D:\filter.JSON
Now I need to find a way to feed the result set of the above query into delete-item.
UPDATE 1
Output of the scan query:
{
    "Count": 2,
    "Items": [
        {
            "demo": {
                "S": "Hai"
            },
            "id": {
                "S": "123"
            }
        },
        {
            "demo": {
                "S": "Welcome"
            },
            "id": {
                "S": "124"
            }
        }
    ],
    "ScannedCount": 3643,
    "ConsumedCapacity": null
}
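One way to wire the scan into delete-item is a small shell pipeline; a sketch assuming id is the table's only key attribute and that jq is available (neither is confirmed above):

aws dynamodb scan --table-name test --scan-filter file://filter.json --output json \
    | jq -c '.Items[] | {id: .id}' \
    | while read -r key; do
          aws dynamodb delete-item --table-name test --key "$key"
      done

Each line produced by jq looks like {"id":{"S":"123"}}, which is exactly the --key document that delete-item expects. For a large table you would also need to handle scan pagination (the scan output above shows a ScannedCount well above the Count).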

How to upload multiple documents from multiple JSON files to Cloudant DB via Node.js?

Currently, I have a requirement to reprocess the failure records that sit in a Cloudant DB, say a "fail" DB. I need to take records from there for a particular day, say 20 records, and place them in a "reprocess" DB. Can you please help me bulk insert the 20 failure records, which are stored as 20 different JSON files, using Node.js?
Sample request:
{
    "docs": [
        {
            "_id": "XXX",
            "_rev": "1-XXX",
            "timestamp": "2018-01-06T14:36:09.834Z",
            "DocType": "CustFail",
            "RequestPayload": {},
            "CustID": "4",
            "Response": "Fail"
        },
        {
            "_id": "XXX",
            "_rev": "1-XXX",
            "timestamp": "2018-01-06T14:36:09.834Z",
            "DocType": "CustFail",
            "RequestPayload": {},
            "CustID": "42",
            "Response": "Fail"
        }
    ]
}
Thanks!!
If you are using the nodejs-cloudant library, you should be able to call bulk(), passing in the array of JSON docs to be inserted.
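A minimal sketch of that, assuming the @cloudant/cloudant package, a target database named reprocess, a ./failures directory holding the JSON files, and credentials in a CLOUDANT_URL environment variable (all of these names are assumptions):

// Read the failure records from disk and bulk-insert them into the reprocess DB.
const Cloudant = require('@cloudant/cloudant');
const fs = require('fs');

const cloudant = Cloudant({ url: process.env.CLOUDANT_URL });
const reprocessDb = cloudant.db.use('reprocess');

// Collect the individual JSON files into one docs array.
const docs = fs.readdirSync('./failures')
    .filter(f => f.endsWith('.json'))
    .map(f => {
        const doc = JSON.parse(fs.readFileSync(`./failures/${f}`, 'utf8'));
        delete doc._id;  // let the target DB assign new ids
        delete doc._rev; // a _rev from the source DB is not valid in the target
        return doc;
    });

// One round trip inserts all the documents.
reprocessDb.bulk({ docs }, (err, results) => {
    if (err) {
        console.error('bulk insert failed:', err);
    } else {
        console.log(`inserted ${results.length} documents`);
    }
});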