Couchbase - update child in self-referencing document (N1QL)

I have a self referencing document in my couchbase 6.0.0 instance that can be an arbitrary number of levels deep.
{
"branchEndDateTime": "2020-09-22 10:00 am",
"branchEndX": 0,
"branchId": "id-652c12fe-000e-4b42-a7e6-e4817d123456",
"branchName": "Root",
"branchStartDateTime": "1975-09-22 10:00 am",
"branchStartX": 0,
"children": [
{
"branchEndDateTime": "1984-09-22 10:00 am",
"branchEndX": 100,
"branchId": "id-15c1737f-1ab5-417e-b74c-14f3ee3f3461",
"branchName": "Test Child Level 1",
"branchStartDateTime": "1980-09-22 10:00 am",
"branchStartX": 0,
"children": [
{
"branchEndDateTime": "1984-09-22 10:00 am",
"branchEndX": 100,
"branchId": "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467",
"branchName": "Test Child Level 2",
"branchStartDateTime": "1980-09-22 10:00 am",
"branchStartX": 0,
"children": [
{
"branchEndDateTime": "1984-09-22 10:00 am",
"branchEndX": 100,
"branchId": "id-15c1737f-1ab5-417e-b74c-14f3ee3fxxxx",
"branchName": "Test Child Level 3",
"branchStartDateTime": "1980-09-22 10:00 am",
"branchStartX": 0,
"children": [],
"type": "Branch"
}
],
"type": "Branch"
}
],
"type": "Branch"
}
],
"type": "Branch"
}
Each parent has a children array of fragments, and every fragment has a branchId:
"branchId": "id-15c1737f-1ab5-417e-b74c-14f3ee3f3461",
Given that I know the branchId, is there any way to update a single property on a child fragment?
example:
I want to change the branchName "Test Child Level 2" to "Custardy underpants competition" given I know the branchId is "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467"
Is there an elegant solution to this problem?
I have tried the following, but it's not working out
UPDATE BucketName AS l
SET o.branchName ='Custardy underpants competition' FOR o IN l.children END
WHERE l.branchId = "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467"
Many thanks!
References https://www.youtube.com/watch?v=RA68D8hOuSw

The matching branchId can be at the root of the document or anywhere in the children.
To update the root of the document you must use d.branchName, i.e. the first SET in the following statement.
The second SET takes care of all the children, however deep they are, irrespective of structure (WITHIN).
You also need the WITHIN in the WHERE clause so that a document with no matching branchId is never mutated.
UPDATE default d
SET d.branchName = CASE WHEN d.branchId = "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467" THEN "Custardy underpants competition" ELSE d.branchName END,
    b.branchName = "Custardy underpants competition" FOR b WITHIN d WHEN b.branchId = "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467" AND b.type = "Branch" END
WHERE ANY b WITHIN d SATISFIES b.type = "Branch" AND b.branchId = "id-15c1737f-1ab5-417e-b74c-14f3ee3f3467" END;
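As an alternative, if you read the document into application code, the same mutation can be sketched recursively in Python (a hypothetical helper, independent of the Couchbase SDK; it stops at the first matching fragment):

```python
def update_branch(node, branch_id, new_name):
    """Walk the branch tree and rename the fragment whose branchId
    matches. Returns True if a fragment was updated, else False."""
    if node.get("branchId") == branch_id:
        node["branchName"] = new_name
        return True
    # any() short-circuits, so only the first match is renamed
    return any(update_branch(child, branch_id, new_name)
               for child in node.get("children", []))
```

You would then write the whole document back, which loses the atomicity of the single N1QL UPDATE.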

Related

get the latest dated dictionary from list of dictionaries

I have list of dictionary objects with date and name and status.
elements = [
{
"PersonId": 301178,
"Wwid": "10692133",
"FullNm": "abc",
"CompletionDt": "2015-04-29",
"status": "Complete",
},
{
"PersonId": 301178,
"Wwid": "10692133",
"FullNm": "abc",
"CompletionDt": "2019-07-30",
"status": "complete",
},
{
"PersonId": 301178,
"Wwid": "10692133",
"FullNm": "abc",
"CompletionDt": "2016-08-01",
"status": "Inclomplete",
},
{
"PersonId": 301178,
"Wwid": "10692133",
"FullNm": "abc",
"CompletionDt": "2017-04-10",
"status": "Completed",
},
]
Given this list of dictionaries, how do I pick the object with the latest date using Python?
In the above example, the expected result is:
result= {
"PersonId": 301178,
"Wwid": "10692133",
"FullNm": "abc",
"CompletionDt": "2019-07-30",
"status" : "complete"
}
from datetime import datetime
result = sorted(
elements, key=lambda x: datetime.strptime(x["CompletionDt"], "%Y-%m-%d")
)[-1]
Or you can try Python's built-in max:
result = max(elements, key=lambda x: datetime.strptime(x["CompletionDt"], "%Y-%m-%d"))
Output:
{'PersonId': 301178, 'Wwid': '10692133', 'FullNm': 'abc', 'CompletionDt': '2019-07-30', 'status': 'complete'}
Here I am going to assume that your objects' IDs grow as time goes on (the later an object is created, the larger its ID), since comparing datetime objects can be a pain, or unnecessary.
qs = Person.objects.order_by('-id')[0]
Basically you sort by reversed ID order (largest to smallest) and then retrieve the first item, which will be the latest created object in the queryset,
if my assumption above holds.
Since ISO-formatted dates sort lexicographically, you can also skip the parsing, but use a key function so that max returns the whole dictionary rather than just the date string:
result = max(elements, key=lambda x: x['CompletionDt'])

How do I selectively filter and aggregate jsonb in Postgres

{
"timeStamp": 1593664441878,
"timingRecords": [
{
"task": "extendedClean",
"time": 31,
"modules": [
"main"
]
},
{
"task": "clean",
"time": 35,
"modules": [
"lint"
]
},
{
"task": "compile",
"time": 35,
"modules": [
"test"
]
}
]
}
This is my JSON data in the table; I have multiple rows of similar records.
I am looking for the sum of all times where the task is in (extendedClean, clean).
So my final expected result would look like
timestamp     | sum(time)
1593664441878 | 66
1593664741878 | 22
It's a bit unclear how you need that in the context of a complete query. But given a single JSON value as shown in your question, you can do this:
select sum( (e ->> 'time')::int )
from the_table
cross join jsonb_array_elements(the_json_column -> 'timingRecords') as e
where e ->> 'task' in ('extendedClean', 'clean');
Online example
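If the rows were pulled to the client instead, the same filter-and-aggregate logic could be sketched in Python (illustrative helper, assuming each row's JSON has already been decoded into a dict):

```python
def sum_task_times(rows, tasks=("extendedClean", "clean")):
    """For each document, sum timingRecords[].time over the records
    whose task is in `tasks`, keyed by the document's timeStamp."""
    return {
        doc["timeStamp"]: sum(rec["time"] for rec in doc["timingRecords"]
                              if rec["task"] in tasks)
        for doc in rows
    }
```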

Generate child array json from given hierarchy levels object list

"questionnaireData": [
{
"questionaireType": "Q_AUTO_APPLICATION",
"cardID": "<blank>",
"setID": "<blank>",
"setName": "<blank>",
"sectionID": "<blank>",
"sectionName": "<blank>",
"pageID": "Person",
"fieldName": "LASTNAME",
"content": "Steve",
"fieldCode": "1111"
},
{
"questionaireType": "Q_AUTO_APPLICATION1",
"cardID": "Card 1",
"setID": "<blank>",
"setName": "<blank>",
"sectionID": "<blank>",
"sectionName": "<blank>",
"pageID": "Person",
"fieldName": "LASTNAME",
"content": "Steve",
"fieldCode": "1111"
}
]
The JSON array above shows 2 objects (a sample list); each object has the same set of fields in the same order. The first field (questionaireType) is the root level.
For the other fields, each preceding value should act as the parent node, provided that parent's value is not <blank>; if the parent is <blank>, the field should attach to the nearest ancestor that is not <blank>.
Below, I have added comments on each line as explanation.
Please help me generate the JSON object for this kind of list so I can use it in an Angular tree (i.e. I need to generate a nested children JSON array from this flat list).
"questionaireType": "Q_AUTO_APPLICATION", // root parent
"cardID": "<blank>", // should not consider as value is <blank>
"setID": "2", // should display as root parent's child as
// cardID is blank
"setName": "SET1", // setID & setName are in same object
"sectionID": "<blank>", //should not consider as value is <blank>
"sectionName": "<blank>", //should not consider as value is <blank>
"pageID": "Person", //pageID should come under setID as
//sectionID is <blank>
"fieldName": "FIRSTNAME", //fieldName,content & fieldCode are in same
//object and come under pageID
"content": "John",
"fieldCode": "1111"
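One way to sketch the folding described above in Python (a hypothetical build_tree helper; the name/children node shape is an assumption, adapt it to whatever your Angular tree component expects):

```python
BLANK = "<blank>"

def build_tree(rows):
    """Fold flat questionnaire rows into a nested children tree,
    skipping any level whose value is "<blank>" so that a field
    attaches to the nearest non-blank ancestor."""
    root = {"name": "root", "children": []}
    for row in rows:
        # levels from root to leaf; paired fields travel together
        levels = [
            row["questionaireType"],
            row["cardID"],
            (row["setID"], row["setName"]),
            (row["sectionID"], row["sectionName"]),
            row["pageID"],
        ]
        node = root
        for level in levels:
            key = level[0] if isinstance(level, tuple) else level
            if key == BLANK:
                continue  # blank level: stay on the current ancestor
            child = next((c for c in node["children"] if c["name"] == key), None)
            if child is None:
                child = {"name": key, "children": []}
                node["children"].append(child)
            node = child
        node["children"].append({
            "name": row["fieldName"],
            "content": row["content"],
            "fieldCode": row["fieldCode"],
        })
    return root
```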

Insert into existing map a map structure in DynamoDB using Nodejs

Structure of an item in database is as shown below:
{
"cars": {
"x": [
{
"time": 1485700907669,
"value": 23
}
]
},
"date": 1483214400000,
"id":"1"
}
I have to add a new item "z" of type list to cars like
{
"cars": {
"x": [
{
"time": 1485700907669,
"value": 23
}
],
"z": [
{
"time": 1485700907669,
"value": 23
}
]
},
"date": 1483214400000,
"id": "1"
}
What would the update expression in Node.js look like if I want to achieve something like this?
So far this is what I came up with:
set #car.#model= list_append(if_not_exists(#car.#model, :empty_list), :value)
However, if the map does not exist at the time of creation, it throws an error. Any idea how to do this?
This is the updated parameter object I am using; it still doesn't work:
var params = {
TableName:table,
Key:{
"id": id,
"date": time.getTime()
},
ReturnValues: 'ALL_NEW',
UpdateExpression: 'SET #car.#model = if_not_exists(#car.#model, :empty_list)',
ExpressionAttributeNames: {
'#car': 'cars',
'#model':"z"
},
ExpressionAttributeValues: {
':empty_list': [],
}
};
The solution is to do the update operation in two steps: first create an empty map for the parent, since it does not exist in the first place.
So, in my case:
SET #car= :empty_map
where :empty_map = {}
After doing this, run the other update expression:
SET #car.#model = list_append(if_not_exists(#car.#model, :empty_list), :value)
where :empty_list=[] and :value= {
"time": 1485700907669,
"value": 23
}
Break your update expression apart into two separate expressions:
SET #car.#model = if_not_exists(#car.#model, :empty_list)
SET #car.#model = list_append(#car.#model, :value)
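The effect of those two expressions can be illustrated with a plain-Python simulation of the if_not_exists/list_append semantics on a local dict (illustration only, not a DynamoDB call):

```python
def set_model_list(item, car_attr, model, value):
    """Local sketch of the two-step update: ensure the parent map
    exists, then append to the (possibly missing) nested list."""
    item.setdefault(car_attr, {})         # SET #car = :empty_map
    item[car_attr].setdefault(model, [])  # if_not_exists(#car.#model, :empty_list)
    item[car_attr][model].append(value)   # list_append(#car.#model, :value)
    return item
```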

JQ Array to new fields

I have sample JSON data like:
{
"phone_number": "780-414-2085",
"city": "Edmonton",
"updated": "2015-10-19T00:03:10",
"name": "Sir William Place ",
"url": "http://www.bwalk.com/en-CA/Rent/Details/Alberta/Edmonton/Sir-William-Place",
"last_visited": "2015-10-19T00:03:10",
"rooms": [{
"available": "Available",
"bathrooms": ["1"],
"suite_type": "1 Bedroom",
"square_feet": ["594", "649"],
"deposit": ["$499"],
"price_range": ["$1059", "$1209"]
}, {
"available": "Available",
"bathrooms": ["1"],
"suite_type": "1 Bedroom + Den",
"square_feet": ["771"],
"deposit": ["$499"],
"price_range": ["$1169", "$1249"]
}, {
"available": "Available",
"bathrooms": ["1", "2"],
"suite_type": "2 Bedroom",
"square_feet": ["894", "970"],
"deposit": ["$499"],
"price_range": ["$1344", "$1494"]
}, {
"available": "Available",
"bathrooms": ["2"],
"deal": ["October FREE and $299 Security Deposit on 12 month leases "],
"suite_type": "2 Bedroom Bi-level",
"square_feet": ["894"],
"deposit": ["$499"],
"price_range": ["$1344", "$1394"]
}, {
"available": "Waiting List",
"bathrooms": ["1"],
"suite_type": "Bachelor",
"square_feet": ["540"],
"deposit": ["$499"],
"price_range": ["$1004", "$1054"]
}],
"address": "8830-85 St., Edmonton, Alberta, T6C 3C3",
"zip_code": "T6C 3C3"
}
And I am running a jq expression like:
'{phone_number, city, updated, name, address, zip_code, url, last_visited} + (.rooms[] | {suite_type, price_range_start: .price_range[0], price_range_end: .price_range[1]} + {available, square_foot_start:.square_feet[0], square_foot_end:.square_feet[1], deposit:.deposit[0], bathrooms:.bathrooms[0]})'
This gives me OK output, but it repeats the common fields because I just list the rooms array: with the sample here I end up with 5 entries, one per room, and name, for instance, is repeated 5 times. I want each item of the rooms array mapped to something like room1, room2, room3, etc., keeping everything in a single entry. I think I need to map the rooms to something, but I'm not sure how.
Can someone advise on how to do this?
You can update the elements in the array while keeping the other elements as is, like this:
'.rooms[] |= {suite_type, price_range_start: .price_range[0],
price_range_end: .price_range[1]} + {available,
square_foot_start:.square_feet[0], square_foot_end:.square_feet[1],
deposit:.deposit[0], bathrooms:.bathrooms[0]}'
Here is a solution which uses functions.
def common_columns:
"phone_number", "city", "updated", "name", "address", "zip_code", "url", "last_visited"
;
def common:
.phone_number, .city, .updated, .name, .address, .zip_code, .url, .last_visited
;
def room_columns(n):
range(n)
| (
"available_\(.)", "bathrooms_\(.)", "suite_type_\(.)",
"square_feet_start_\(.)", "square_feet_end_\(.)", "deposit_\(.)",
"price_range_start_\(.)", "price_range_end_\(.)"
)
;
def rooms(n):
. as $r
| range(n)
| $r.rooms[.]
| (
.available, .bathrooms[0], .suite_type,
.square_feet[0,1], .deposit[0], .price_range[0,1]
)
;
[ common_columns, room_columns(6) ]
, [ common, rooms(6) ]
| #csv
You can change the 6 to however many sets of room columns you need.
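The same flattening can be sketched client-side in Python (an illustrative flatten_listing helper; the column names mirror the jq answer but are assumptions):

```python
def flatten_listing(listing, n_rooms):
    """Flatten one listing into a single flat dict, expanding up to
    n_rooms entries of the rooms array into suffixed columns."""
    common = ["phone_number", "city", "updated", "name",
              "address", "zip_code", "url", "last_visited"]
    row = {k: listing.get(k) for k in common}
    for i, room in enumerate(listing.get("rooms", [])[:n_rooms]):
        row[f"available_{i}"] = room.get("available")
        row[f"bathrooms_{i}"] = (room.get("bathrooms") or [None])[0]
        row[f"suite_type_{i}"] = room.get("suite_type")
        sq = room.get("square_feet") or []
        row[f"square_feet_start_{i}"] = sq[0] if sq else None
        row[f"square_feet_end_{i}"] = sq[1] if len(sq) > 1 else None
        row[f"deposit_{i}"] = (room.get("deposit") or [None])[0]
        pr = room.get("price_range") or []
        row[f"price_range_start_{i}"] = pr[0] if pr else None
        row[f"price_range_end_{i}"] = pr[1] if len(pr) > 1 else None
    return row
```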