Want to sum inner JSON elements using N1QL (Couchbase)

When I run the query below:
SELECT * FROM myBucket WHERE ANY x IN transactions SATISFIES x.type in [0,4] END;
Result:
{
  "_type": "Company",
  "created": "2015-12-01T18:30:00.000Z",
  "transactions": [
    {
      "amount": "96.5",
      "date": "2016-01-03T18:30:00.000Z",
      "type": 0
    },
    {
      "amount": "483.7",
      "date": "2016-01-10T18:30:00.000Z",
      "type": 0
    }
  ]
}
I get multiple JSON documents like this. Then I tried to sum the amounts:
SELECT sum(transactions[*].amount) FROM Inheritx WHERE ANY x IN transactions SATISFIES x.type in [0,4] END;
Result:
[
  {
    "$1": null
  }
]
Now I want the sum of all these amounts. How can I do it?

transactions[*].amount returns an array, so you first need to use the array function ARRAY_SUM, and then apply SUM() as below.
SELECT sum(ARRAY_SUM(transactions[*].amount)) FROM Inheritx WHERE ANY x IN transactions SATISFIES x.type in [0,4] END;
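Note that in the sample document the amounts are strings ("96.5"), and the WHERE clause only checks that at least one matching transaction exists; it does not restrict which transactions get summed. If you also want to coerce the strings to numbers and sum only the matching types, a sketch along these lines (same bucket, using TONUMBER and an ARRAY comprehension) should work:
SELECT SUM(ARRAY_SUM(ARRAY TONUMBER(t.amount) FOR t IN transactions WHEN t.type IN [0,4] END)) AS total
FROM Inheritx
WHERE ANY x IN transactions SATISFIES x.type IN [0,4] END;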

Related

JSON object: Query a value from unknown node based on a condition

I'm trying to query two values (DISCOUNT_TOTAL and ITEM_TOTAL) from a JSON object in a PostgreSQL database. Take the following query as reference:
SELECT
    mt.customer_order,
    totals -> 0 -> 'amount' -> 'centAmount' AS DISCOUNT_TOTAL,
    totals -> 1 -> 'amount' -> 'centAmount' AS ITEM_TOTAL
FROM
    my_table mt,
    to_jsonb(my_table.my_json -> 'data' -> 'order' -> 'totals') totals
WHERE
    mt.customer_order in ('1000001', '1000002')
The query works just fine; the big problem is that, for reasons out of my control, the DISCOUNT_TOTAL and ITEM_TOTAL values sometimes change positions in the JSON object from one customer_order to another:
(screenshot of the JSON object)
So I cannot rely on totals -> 0 -> 'amount' -> 'centAmount' containing the value for type: DISCOUNT_TOTAL (same for type: ITEM_TOTAL). Is there any workaround to get the correct centAmount for each type?
Use a path query instead of hardcoding the array positions:
with sample (jdata) as (
  values (
    '{
      "data": {
        "order": {
          "email": "something",
          "totals": [
            {
              "type": "ITEM_TOTAL",
              "amount": {
                "centAmount": 14990
              }
            },
            {
              "type": "DISCOUNT_TOTAL",
              "amount": {
                "centAmount": 6660
              }
            }
          ]
        }
      }
    }'::jsonb)
)
select jsonb_path_query_first(
         jdata,
         '$.data.order.totals[*] ? (@.type == "DISCOUNT_TOTAL").amount.centAmount'
       ) as discount_total,
       jsonb_path_query_first(
         jdata,
         '$.data.order.totals[*] ? (@.type == "ITEM_TOTAL").amount.centAmount'
       ) as item_total
from sample;
db<>fiddle here
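Applied to the table from the question (a sketch; my_table, my_json and customer_order are the names assumed from the question, and jsonb_path_query_first requires PostgreSQL 12+):
select mt.customer_order,
       jsonb_path_query_first(mt.my_json::jsonb,
         '$.data.order.totals[*] ? (@.type == "DISCOUNT_TOTAL").amount.centAmount') as discount_total,
       jsonb_path_query_first(mt.my_json::jsonb,
         '$.data.order.totals[*] ? (@.type == "ITEM_TOTAL").amount.centAmount') as item_total
from my_table mt
where mt.customer_order in ('1000001', '1000002');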
EDIT: In case your PostgreSQL version does not support JSON path queries, you can do it by expanding the array into rows and then pivoting with CASE and SUM:
with sample (order_id, jdata) as (
  values ( 1,
    '{
      "data": {
        "order": {
          "email": "something",
          "totals": [
            {
              "type": "ITEM_TOTAL",
              "amount": {
                "centAmount": 14990
              }
            },
            {
              "type": "DISCOUNT_TOTAL",
              "amount": {
                "centAmount": 6660
              }
            }
          ]
        }
      }
    }'::jsonb)
)
select order_id,
       sum(
         case
           when el->>'type' = 'DISCOUNT_TOTAL' then (el->'amount'->'centAmount')::int
           else 0
         end
       ) as discount_total,
       sum(
         case
           when el->>'type' = 'ITEM_TOTAL' then (el->'amount'->'centAmount')::int
           else 0
         end
       ) as item_total
from sample
cross join lateral jsonb_array_elements(jdata->'data'->'order'->'totals') as a(el)
group by order_id;
db<>fiddle here
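The same expand-and-pivot approach against the question's table would look roughly like this (a sketch under the same assumed names; the ::jsonb cast is only needed if the column is json):
select mt.customer_order,
       sum(case when el->>'type' = 'DISCOUNT_TOTAL' then (el->'amount'->>'centAmount')::int else 0 end) as discount_total,
       sum(case when el->>'type' = 'ITEM_TOTAL' then (el->'amount'->>'centAmount')::int else 0 end) as item_total
from my_table mt
cross join lateral jsonb_array_elements(mt.my_json::jsonb->'data'->'order'->'totals') as a(el)
where mt.customer_order in ('1000001', '1000002')
group by mt.customer_order;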

How to map nested array items with N1QL?

I have documents in a bucket called blocks in the following format:
{
  "random_field": 1,
  "transactions": [{
    "id": "CCCCC",
    "inputs": [{
      "tx_id": "AAAAA",
      "index": 0
    },{
      "tx_id": "BBBBB",
      "index": 1
    }]
  }]
}
{
  "transactions": [{
    "id": "AAAAA",
    "outputs": [{
      "field1": "value123",
      "field2": "value456"
    },{
      "field1": "ignore",
      "field2": "ignore"
    }]
  }]
}
{
  "transactions": [{
    "id": "BBBBB",
    "outputs": [{
      "field1": "ignored",
      "field2": "ignored"
    },{
      "field1": "value999",
      "field2": "value888"
    }]
  }]
}
and I need to map the inputs from the first document to the corresponding outputs of the second and third documents. The way to do it manually is to, for each input, find a transaction with id equal to the input's tx_id, and then get the item from the outputs array based on the index of the input. To exemplify, this is the object I would like to return in this scenario:
{
  "random_field": 1,
  "transactions": [{
    "id": "CCCCC",
    "inputs": [{
      "tx_id": "AAAAA",
      "index": 0,
      "output": {
        "field1": "value123",
        "field2": "value456"
      }
    },{
      "tx_id": "BBBBB",
      "index": 1,
      "output": {
        "field1": "value999",
        "field2": "value888"
      }
    }]
  }]
}
I managed to come up with the following query:
SELECT b.random_field,
b.transactions -- how to map this?
FROM blocks b
UNNEST b.transactions t
UNNEST t.inputs input
JOIN blocks `source` ON (ANY tx IN `source`.transactions SATISFIES tx.`id` = input.tx_id END)
UNNEST `source`.transactions source_tx
UNNEST source_tx.outputs o
WHERE (ANY tx IN b.transactions SATISFIES tx.`id` = 'AAAAA' END) LIMIT 1;
I suppose there should be a way to map b.transactions.inputs by using source_tx.outputs, but I couldn't find how.
I came across this other answer, but I don't really understand how it applies to my scenario. Maybe it does, but I am very new to Couchbase, so I am very much lost: How to map array values in one document to another and display in result
Basically you want to inline another document into the current document based on a condition.
Instead of JOINs + GROUP BY, use subquery expressions and correlated subqueries. (SELECT b.*, "abc" AS transactions selects all the fields of b and adds transactions; if the field already exists it is overwritten, otherwise it is added.)
CREATE INDEX ix1 ON blocks (ALL ARRAY ot.id FOR ot IN transactions END);
SELECT b.*,
       (SELECT t.*,
               (SELECT i.*,
                       (SELECT RAW oto
                        FROM blocks AS o
                        UNNEST o.transactions AS ot
                        UNNEST ot.outputs AS oto
                        WHERE i.tx_id = ot.id AND i.`index` = UNNEST_POS(oto))[0] AS output
                FROM t.`inputs` AS i) AS inputs
        FROM b.transactions AS t) AS transactions
FROM blocks AS b
WHERE ANY tx IN b.transactions SATISFIES tx.`inputs` IS NOT NULL END;
OR
SELECT b.*,
       (SELECT t.*,
               (SELECT i.*,
                       (SELECT RAW ot.outputs[i.`index`]
                        FROM blocks AS o
                        UNNEST o.transactions AS ot
                        WHERE i.tx_id = ot.id
                        LIMIT 1)[0] AS output
                FROM t.`inputs` AS i) AS inputs
        FROM b.transactions AS t) AS transactions
FROM blocks AS b
WHERE ANY tx IN b.transactions SATISFIES tx.`inputs` IS NOT NULL END;
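To see the overwrite behaviour described above in isolation, a minimal sketch (the string literal is just a placeholder):
SELECT b.*, "replaced" AS transactions
FROM blocks AS b
LIMIT 1;
The projected object contains every field of b, with transactions replaced by the literal; the correlated subqueries above rely on the same mechanism to substitute the rebuilt transactions array.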

Expand Postgresql Nested Array Json Field

I have a table (log_table), and in this table there is a nested-array JSON field (activities). Using this activities field, I want to normalize my rows.
log_table:
- id:long
- activities:json
- date:timestamp
example activities field:
[
  {
    "actionType": "NOTIFICATION",
    "items": null
  },
  {
    "actionType": "MUTATION",
    "items": [
      {
        "id": 387015007,
        "name": "epic",
        "value": {
          "currency": "USD",
          "amount": 1.76
        }
      },
      {
        "id": 386521039,
        "name": "test",
        "value": {
          "currency": "USD",
          "amount": 1.76
        }
      }
    ]
  }
]
As a query, I've tried:
select *
from log_table l,
     json_array_elements(l.activities) elems,
     json_array_elements(elems->'items') obj;
With this query, I got an error like the one below:
ERROR: cannot call json_array_elements on a scalar
Is there any suggestion?
The missing items need to be represented as [null], not as a scalar null, before they reach jsonb_array_elements. You can use a CASE expression to correct this, e.g.:
select elems->>'actionType' as action_type, obj
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(case elems->'items' when 'null' then '[null]' else elems->'items' end) obj;
action_type | obj
--------------+---------------------------------------------------------------------------------
NOTIFICATION | null
MUTATION | {"id": 387015007, "name": "epic", "value": {"amount": 1.76, "currency": "USD"}}
MUTATION | {"id": 386521039, "name": "test", "value": {"amount": 1.76, "currency": "USD"}}
(3 rows)
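To flatten the matched items further into scalar columns, the same FROM clause can feed ordinary ->> extractions (a sketch; the output column names are just examples):
select elems->>'actionType' as action_type,
       obj->>'name' as item_name,
       obj->'value'->>'currency' as currency,
       (obj->'value'->>'amount')::numeric as amount
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(case elems->'items' when 'null' then '[null]' else elems->'items' end) obj;
For the NOTIFICATION row, obj is a JSON null, so all of the item columns simply come out as NULL.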

N1QL Array Query for WHERE condition to check inner element

I have JSON like below
{
  "_id": "000fad10-b2de-11e6-92de-632a9b1d21d9",
  "_type": "Company",
  "status": 1,
  "transactions": [
    {
      "completed": 1,
      "currency": "USD",
      "date": "2015-12-01T18:30:00.000Z",
      "method": 0,
      "type": 0
    }
  ]
}
I want to run a query like the one below:
select * from MyBucket where transactions.method in (0,3);
How can I do it in N1QL?
Try this:
SELECT * FROM MyBucket b UNNEST b.transactions t WHERE t.method in [0,3];
Keep this cheatsheet handy.
SELECT * FROM MyBucket WHERE ANY x IN transactions SATISFIES x.method IN [1,0] END;
I got the answer with this.
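Note the difference between the two forms: UNNEST returns one row per matching transaction (a document with several matching transactions appears several times), while ANY … SATISFIES returns each document once. Either way, an array index along these lines (the index name is just an example) can serve the predicate:
CREATE INDEX idx_tx_method ON MyBucket (DISTINCT ARRAY x.method FOR x IN transactions END);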

Couchbase N1QL array query

Document sample from my giata_properties bucket: link
Relevant JSON paste:
{
  "propertyCodes": {
    "provider": [
      {
        "code": [
          {
            "value": [
              {
                "value": "304387"
              }
            ]
          }
        ],
        "providerCode": "hotelbeds",
        "providerType": "gds"
      },
      {
        "code": [
          {
            "value": [
              {
                "name": "Country Code",
                "value": "EG"
              },
              {
                "name": "City Code",
                "value": "HRG"
              },
              {
                "name": "Hotel Code",
                "value": "91U"
              }
            ]
          }
        ],
        "providerCode": "gta",
        "providerType": "gds"
      }
    ]
  },
  "name": "Arabia Azur Resort"
}
I want a query (and an index) to retrieve a document based on propertyCodes.provider.code.value.value and propertyCodes.provider.providerCode. I've managed to do each separately but I'm not sure how to merge both of them in a single query.
SELECT meta().id FROM giata_properties AS gp USE INDEX(`#primary`) WHERE ANY v WITHIN gp.propertyCodes.provider[*].code SATISFIES v.`value` = '150613' END;
SELECT meta().id FROM giata_properties AS gp USE INDEX(`#primary`) WHERE ANY v within gp.propertyCodes.provider[*].providerCode SATISFIES v = 'hotelbeds' END;
So, for example, I want to fetch the document that has a propertyCodes.provider.code.value.value of 304387 and whose providerCode is hotelbeds, because a code value can be duplicated across documents, but the combination of code and providerCode is unique.
Here are the query and the indexes.
The query.
SELECT META().id
FROM giata_properties AS gp
WHERE ANY p IN propertyCodes.provider SATISFIES ( ANY v WITHIN p.code SATISFIES v.`value` = '304387' END ) AND p.providerCode = 'hotelbeds' END;
The indexes.
CREATE INDEX idx_value ON giata_properties
( DISTINCT ARRAY ( DISTINCT ARRAY v.`value` FOR v WITHIN p.code END ) FOR p IN propertyCodes.provider END );
CREATE INDEX idx_providerCode ON giata_properties
( DISTINCT ARRAY p.providerCode FOR p IN propertyCodes.provider END );
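For instance, the same pattern fetches the sample document through its gta code, with the values taken from the JSON above:
SELECT META().id
FROM giata_properties AS gp
WHERE ANY p IN propertyCodes.provider SATISFIES ( ANY v WITHIN p.code SATISFIES v.`value` = 'HRG' END ) AND p.providerCode = 'gta' END;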