In PostGraphile query filter 'or' operator is not working - postgraphile

PostGraphile's 'or' filter behaves the same as 'and'.
When I tried 'or' in a query filter and checked the explain output, I found the condition had been translated to 'and', which is why the query always behaves as 'and' and never as 'or'.
{
  allCashierViewsList(
    filter: { locationId: { equalTo: "454" }, or: { name: { equalTo: "STORE MANAGER" } } }
  ) {
    posId
    name
    locationId
  }
}
And this is what the explain output says.
{
"data": {
"allCashierViewsList": []
},
"explain": [
{
"query": "select to_json(((__local_0__.\"pos_id\"))::text) as \"posId\", to_json((__local_0__.\"name\")) as \"name\", to_json(((__local_0__.\"location_id\"))::text) as \"locationId\" from (select __local_0__.*\nfrom \"public\".\"cashier_view\" as __local_0__\n\nwhere (((__local_0__.\"name\" = $1)) and ((__local_0__.\"location_id\" = $2))) and (TRUE) and (TRUE)\n\n\n) __local_0__",
"plan": "Subquery Scan on __local_0__ (cost=3542.55..3543.19 rows=1 width=96)\n Filter: (((__local_0__.name)::text = 'STORE MANAGER'::text) AND (__local_0__.location_id = '454'::bigint))\n -> Unique (cost=3542.55..3542.71 rows=31 width=24)\n -> Sort (cost=3542.55..3542.63 rows=31 width=24)\n Sort Key: c.pos_id\n -> Nested Loop (cost=8.38..3541.78 rows=31 width=24)\n -> Hash Join (cost=8.09..3390.49 rows=478 width=16)\n Hash Cond: (v.data_point_mapping_id = dpm.id)\n -> Seq Scan on value_table v (cost=0.00..2855.33 rows=136633 width=24)\n -> Hash (cost=8.08..8.08 rows=1 width=8)\n -> Hash Join (cost=4.78..8.08 rows=1 width=8)\n Hash Cond: (dpm.data_point_id = d.id)\n -> Seq Scan on data_point_mapping dpm (cost=0.00..2.93 rows=93 width=16)\n -> Hash (cost=4.77..4.77 rows=1 width=8)\n -> Hash Join (cost=2.18..4.77 rows=1 width=8)\n Hash Cond: (d.reference_table_id = r.id)\n -> Seq Scan on data_point d (cost=0.00..2.47 rows=47 width=16)\n -> Hash (cost=2.16..2.16 rows=1 width=8)\n -> Index Scan using reference_table_type_unique on reference_table r (cost=0.14..2.16 rows=1 width=8)\n Index Cond: ((type)::text = 'Cashier'::text)\n -> Index Scan using cashier_loc_pos_date_idx on cashier c (cost=0.29..0.31 rows=1 width=24)\n Index Cond: ((location_id = v.location_id) AND (pos_id = v.pos_id))"
}
]
}

I figured out how to use the 'or' operator in the PostGraphile filter. I am posting it here because it might help others: 'or' takes a list of filter conditions.
{
  allCashierViewsList(
    filter: {
      or: [
        { locationId: { equalTo: "454" } }
        { name: { equalTo: "STORE DIRECTOR" } }
      ]
    }
  ) {
    posId
    name
    locationId
  }
}
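For completeness: sibling fields inside a filter are combined with and, so a plain field can sit next to an 'or' list to express locationId = 454 AND (name = A OR name = B). A sketch, assuming the same connection-filter plugin that provides the filter argument:

```graphql
{
  allCashierViewsList(
    filter: {
      locationId: { equalTo: "454" }
      or: [
        { name: { equalTo: "STORE MANAGER" } }
        { name: { equalTo: "STORE DIRECTOR" } }
      ]
    }
  ) {
    posId
    name
    locationId
  }
}
```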

Related

JSON object: Query a value from unknown node based on a condition

I'm trying to query two values (DISCOUNT_TOTAL and ITEM_TOTAL) from a JSON object in a PostgreSQL database. Take the following query as reference:
SELECT
    mt.customer_order,
    totals -> 0 -> 'amount' -> 'centAmount' AS DISCOUNT_TOTAL,
    totals -> 1 -> 'amount' -> 'centAmount' AS ITEM_TOTAL
FROM
    my_table mt,
    to_jsonb(my_table.my_json -> 'data' -> 'order' -> 'totals') AS totals
WHERE
    mt.customer_order IN ('1000001', '1000002')
The query works just fine; the big problem is that, for some reason out of my control, the DISCOUNT_TOTAL and ITEM_TOTAL values sometimes change position in the JSON object from one customer_order to another:
(screenshot of the JSON object)
So I cannot point at totals -> 0 -> 'amount' -> 'centAmount' assuming that it contains the value for type: DISCOUNT_TOTAL (and the same for type: ITEM_TOTAL). Is there any workaround to get the correct centAmount for each type?
Use a path query instead of hardcoding the array positions:
with sample (jdata) as (
values (
'{
"data": {
"order": {
"email": "something",
"totals": [
{
"type": "ITEM_TOTAL",
"amount": {
"centAmount": 14990
}
},
{
"type": "DISCOUNT_TOTAL",
"amount": {
"centAmount": 6660
}
}
]
}
}
}'::jsonb)
)
select jsonb_path_query_first(
jdata,
'$.data.order.totals[*] ? (@.type == "DISCOUNT_TOTAL").amount.centAmount'
) as discount_total,
jsonb_path_query_first(
jdata,
'$.data.order.totals[*] ? (@.type == "ITEM_TOTAL").amount.centAmount'
) as item_total
from sample;
EDIT: In case your PostgreSQL version does not support JSON path queries (they require version 12 or later), you can do it by expanding the array into rows and then pivoting with CASE and SUM:
with sample (order_id, jdata) as (
values ( 1,
'{
"data": {
"order": {
"email": "something",
"totals": [
{
"type": "ITEM_TOTAL",
"amount": {
"centAmount": 14990
}
},
{
"type": "DISCOUNT_TOTAL",
"amount": {
"centAmount": 6660
}
}
]
}
}
}'::jsonb)
)
select order_id,
sum(
case
when el->>'type' = 'DISCOUNT_TOTAL' then (el->'amount'->>'centAmount')::int
else 0
end
) as discount_total,
sum(
case
when el->>'type' = 'ITEM_TOTAL' then (el->'amount'->>'centAmount')::int
else 0
end
) as item_total
from sample
cross join lateral jsonb_array_elements(jdata->'data'->'order'->'totals') as a(el)
group by order_id;
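Both queries implement the same pick-by-type idea; an illustrative Python sketch over the sample document from the answer above (field names taken from that sample):

```python
# Sample document from the answer above; the position of ITEM_TOTAL and
# DISCOUNT_TOTAL inside "totals" is not fixed, so index by "type" instead.
sample = {
    "data": {
        "order": {
            "email": "something",
            "totals": [
                {"type": "ITEM_TOTAL", "amount": {"centAmount": 14990}},
                {"type": "DISCOUNT_TOTAL", "amount": {"centAmount": 6660}},
            ],
        }
    }
}

# Map each entry's "type" to its centAmount, ignoring array positions.
by_type = {t["type"]: t["amount"]["centAmount"]
           for t in sample["data"]["order"]["totals"]}
print(by_type["DISCOUNT_TOTAL"])  # 6660
print(by_type["ITEM_TOTAL"])  # 14990
```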

How to extract JSON value from VARCHAR column in Snowflake?

I have a VARCHAR column storing JSON data.
Here is one row:
{
"id": null,
"ci": null,
"mr": null,
"meta_data":
{
"product":
{
"product_id": "123xyz",
"sales":
{
"d_code": "UK",
"c_code": "5814"
},
"amount":
{
"currency": "USD",
"value": -1230
},
"entry_mode": "virtual",
"transaction_date": "2020-01-01",
"transaction_type": "purchase",
"others":
[]
}
}
}
Example data:
WITH t1 AS (
SELECT '{"id":null,"ci":null,"mr":null,"meta_data":{"product":{"product_id":"123xyz","sales":{"d_code":"UK","c_code":"5814"},"amount":{"currency":"USD","value":-1230},"entry_mode":"virtual","transaction_date":"2020-01-01","transaction_type":"purchase","others":[]}}}'::varchar AS value
)
In Postgres, I would do it like this. How can I extract the same values in Snowflake?
SELECT
value,
value -> 'meta_data' -> 'product' ->> 'product_id' AS product_id,
value -> 'meta_data' -> 'product' -> 'sales' ->> 'd_code' AS d_code,
value -> 'meta_data' -> 'product' -> 'sales' ->> 'c_code' AS c_code,
value -> 'meta_data' -> 'product' -> 'amount' ->> 'currency' AS currency,
value -> 'meta_data' -> 'product' ->> 'entry_mode' AS entry_mode,
value -> 'meta_data' -> 'product' ->> 'transaction_type' AS transaction_type
FROM t1
It is possible in Snowflake too. The key is to use TRY_PARSE_JSON/PARSE_JSON:
PARSE_JSON
Interprets an input string as a JSON document, producing a VARIANT value.
WITH t1 AS (
SELECT '{"id":null,"ci":null,"mr":null,"meta_data":{"product":{"product_id":"123xyz","sales":{"d_code":"UK","c_code":"5814"},"amount":{"currency":"USD","value":-1230},"entry_mode":"virtual","transaction_date":"2020-01-01","transaction_type":"purchase","others":[]}}}'::varchar AS value
)
SELECT TRY_PARSE_JSON(t1.value) AS v
,v:meta_data:product:product_id::TEXT AS product_id
,v:meta_data:product:sales:d_code::TEXT AS d_code
-- ...
FROM t1;
Or with another CTE:
WITH t1 AS (
SELECT '{"id":null,"ci":null,"mr":null,"meta_data":{"product":{"product_id":"123xyz","sales":{"d_code":"UK","c_code":"5814"},"amount":{"currency":"USD","value":-1230},"entry_mode":"virtual","transaction_date":"2020-01-01","transaction_type":"purchase","others":[]}}}'::varchar AS value
), t1_cast AS (
SELECT *,TRY_PARSE_JSON(t1.value) AS v
FROM t1
)
SELECT
v:meta_data:product:product_id::TEXT AS product_id
,v:meta_data:product:sales:d_code::TEXT AS d_code
-- ...
FROM t1_cast;
In Snowflake you use the VARIANT data type to store semi-structured data such as JSON. First, you should convert the VARCHAR string to VARIANT with the function PARSE_JSON, then you can query like this:
WITH t1 AS (
SELECT parse_json('{"id":null,"ci":null,"mr":null,"meta_data":{"product":{"product_id":"123xyz","sales":{"d_code":"UK","c_code":"5814"},"amount":{"currency":"USD","value":-1230},"entry_mode":"virtual","transaction_date":"2020-01-01","transaction_type":"purchase","others":[]}}}'::varchar) AS value
)
select value:meta_data:product:product_id as product_id,
value:meta_data:product:sales:d_code as d_code,
value:meta_data:product:sales:c_code AS c_code,
value:meta_data:product:amount:currency AS currency,
value:meta_data:product:entry_mode AS entry_mode,
value:meta_data:product:transaction_type AS transaction_type
from t1;

Postgres query on json with empty value

I have a query that filters data based on values stored inside a JSON field.
Table name: audit_rules
Column name: rule_config (json)
The rule_config column contains JSON with an 'applicable_category' attribute.
Example
{
"applicable_category":[
{
"status":"active",
"supported":"yes",
"expense_type":"Meal",
"acceptable_variation":0.18,
"minimum_value":25.0
},
{
"status":"active",
"supported":"yes",
"expense_type":"Car Rental",
"acceptable_variation":0.0,
"minimum_value":25.0
},
{
"status":"active",
"supported":"yes",
"expense_type":"Airfare",
"acceptable_variation":0.0,
"minimum_value":75
},
{
"status":"active",
"supported":"yes",
"expense_type":"Hotel",
"acceptable_variation":0.0,
"minimum_value":75
}
],
"minimum_required_keys":[
"amount",
"date",
"merchant",
"location"
],
"value":[
0,
0.5
]
}
But some of the rows have no data at all, or lack the 'applicable_category' attribute.
So when running the following query, I get an error:
select s.*,j from
audit_rules s
cross join lateral json_array_elements ( s.rule_config#>'{applicable_category}' ) as j
WHERE j->>'expense_type' in ('Direct Bill');
Error: SQL Error [22023]: ERROR: cannot call json_array_elements on a scalar
You can restrict the result to only rows that contain an array:
select j.*
from audit_rules s
cross join lateral json_array_elements(s.rule_config#>'{applicable_category}') as j
WHERE json_typeof(s.rule_config -> 'applicable_category') = 'array'
and j ->> 'expense_type' in ('Meal')

create index on couchbase for ARRAY_REMOVE

I want to execute this query:
UPDATE `bucket` SET etats= ARRAY_REMOVE( etats, etats[2])
My question is how to create an index so this query can run; I don't want to use the
Couchbase primary index.
The goal of the query is to remove an element from the 'etats' array.
Example document:
{
"lastUpdateTime": "2019-03-31T22:02:00.164",
"origin": "origin1",
"etats": [
{
"dateTime": "2019-03-28T17:13:49.766",
"etat": "etat1",
"code": "code1"
},
{
"dateTime": "2019-03-29T15:26:48.577",
"etat": "etat2",
"code": "code2"
},
{
"dateTime": "2019-03-31T22:01:59.843",
"etat": "etat3",
"code": "code3"
}
],
"etatType": "type1"
}
The query must have a WHERE clause for a secondary index to be chosen; otherwise the only option is a primary index.
In general, check that the element is present and only then update the field.
The following query removes the object whose code value is "code2" from the array:
CREATE INDEX ix1 ON default (DISTINCT ARRAY v.code FOR v IN etats END) WHERE etatType = "type1";
UPDATE default AS d
SET d.etats = ARRAY v FOR v IN d.etats WHEN v.code != "code2" END
WHERE d.etatType = "type1" AND ANY v IN d.etats SATISFIES v.code = "code2" END;
If you really want an index for your original query only:
CREATE INDEX ix1 ON `bucket` (etatType);
UPDATE `bucket` SET etats= ARRAY_REMOVE( etats, etats[2])
WHERE etatType = "type1";
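The ARRAY ... FOR ... WHEN ... END comprehension in the first query rebuilds the array without the matching element; an illustrative Python equivalent over the sample document from the question:

```python
# Sample document mirroring the 'etats' structure from the question.
doc = {
    "etatType": "type1",
    "etats": [
        {"dateTime": "2019-03-28T17:13:49.766", "etat": "etat1", "code": "code1"},
        {"dateTime": "2019-03-29T15:26:48.577", "etat": "etat2", "code": "code2"},
        {"dateTime": "2019-03-31T22:01:59.843", "etat": "etat3", "code": "code3"},
    ],
}

# Keep every element whose code is not "code2", like
# ARRAY v FOR v IN d.etats WHEN v.code != "code2" END.
doc["etats"] = [e for e in doc["etats"] if e["code"] != "code2"]
print([e["code"] for e in doc["etats"]])  # ['code1', 'code3']
```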

Couchbase N1QL query sum from sub document array

I have the following document model in my couchbase db
{
type:"account"
id : "123",
transactions: [
{
type : "credit",
value : 100
},
{
type : "debit",
value : 10
}
]
}
How do I query all the account ids together with the sum of their credits?
Using ARRAY functions: https://docs.couchbase.com/server/6.0/n1ql/n1ql-language-reference/arrayfun.html
SELECT d.id,
ARRAY_SUM(ARRAY v.`value` FOR v IN d.transactions WHEN v.type = "credit" END) AS s
FROM default AS d
WHERE d.type = "account";
OR
Using subquery expression https://docs.couchbase.com/server/6.0/n1ql/n1ql-language-reference/subqueries.html
SELECT d.id,
(SELECT RAW SUM(d1.`value`)
FROM d.transactions AS d1
WHERE d1.type = "credit")[0] AS s
FROM default AS d
WHERE d.type = "account";
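The first query's ARRAY_SUM over a filtered array comprehension does, per document, the same computation as this illustrative Python sketch (sample document from the question):

```python
# Sample account document from the question.
doc = {
    "type": "account",
    "id": "123",
    "transactions": [
        {"type": "credit", "value": 100},
        {"type": "debit", "value": 10},
    ],
}

# Same computation as
# ARRAY_SUM(ARRAY v.`value` FOR v IN d.transactions WHEN v.type = "credit" END)
credit_sum = sum(t["value"] for t in doc["transactions"] if t["type"] == "credit")
print(credit_sum)  # 100
```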