In Postgres, say I have a schema like this:
CREATE TABLE items (
    type varchar(40),
    entity_id bigint,
    entity_type varchar(40),
    user_id bigint
);
And I want to query the table to get info shaped like this:
{
"typeA": {
"count": 3,
"me": true
},
"typeC": {
"count": 3,
"me": false
},
"typeE": {
"count": 3,
"me": false
},
"typeR": {
"count": 3,
"me": true
}
}
From a query where the main data is this:
SELECT ARRAY_AGG(x)
FROM
(
SELECT type,
count(*),
(CASE
WHEN (SELECT id
FROM items as i
WHERE i.entity_type = 'sometype'
AND i.entity_id = 234
AND i.user_id = 32
AND i.type = items.type) is not null
THEN true
ELSE false
END) AS me
FROM items
WHERE items.entity_type = 'sometype'
AND items.entity_id = 234
GROUP BY type
) as x
This returns an array of the info I need (type, count, and me), but I need it formatted like the example above instead of:
[
{
"type": "typeA",
"count": 3,
"me": true
},
{
"type": "typeC",
"count": 3,
"me": false
},
{
"type": "typeE",
"count": 3,
"me": false
},
{
"type": "typeR",
"count": 3,
"me": true
}
]
which is how it is currently formatted. I have been unable to find a way to build the JSON object I need. I was able to get separate JSON objects like that, but I need them nested in one object.
Not exactly what you want, but based on PostgreSQL - Aggregate Functions, I would guess you can try json_object_agg(name, value), e.g.:
SELECT JSON_OBJECT_AGG(type, x)
FROM
(
SELECT type,
count(*),
(CASE
WHEN (SELECT id
FROM items as i
WHERE i.entity_type = 'sometype'
AND i.entity_id = 234
AND i.user_id = 32
AND i.type = items.type) is not null
THEN true
ELSE false
END) AS me
FROM items
WHERE items.entity_type = 'sometype'
AND items.entity_id = 234
GROUP BY type, me
) as x
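If the nested objects should contain only count and me (without repeating the type key), a small refinement is possible. This is a sketch assuming the same items table; it also uses EXISTS, which avoids an error if the correlated subquery matches more than one row:
SELECT JSON_OBJECT_AGG(x.type, JSON_BUILD_OBJECT('count', x.count, 'me', x.me))
FROM
(
SELECT type,
       count(*) AS count,
       -- true if this user has a row of this type for the same entity
       EXISTS (SELECT 1
               FROM items AS i
               WHERE i.entity_type = 'sometype'
               AND i.entity_id = 234
               AND i.user_id = 32
               AND i.type = items.type) AS me
FROM items
WHERE items.entity_type = 'sometype'
AND items.entity_id = 234
GROUP BY type
) AS x;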
I have documents in a bucket called blocks in the following format:
{
"random_field": 1,
"transactions": [{
"id": "CCCCC",
"inputs": [{
"tx_id": "AAAAA",
"index": 0
},{
"tx_id": "BBBBB",
"index": 1
}]
}]
}
{
"transactions": [{
"id": "AAAAA",
"outputs": [{
"field1": "value123",
"field2": "value456"
},{
"field1": "ignore",
"field2": "ignore"
}]
}]
}
{
"transactions": [{
"id": "BBBBB",
"outputs": [{
"field1": "ignored",
"field2": "ignored"
},{
"field1": "value999",
"field2": "value888"
}]
}]
}
and I need to map the inputs from the first document to the corresponding outputs of the second and third documents. The way to do it manually is to, for each input, find a transaction with id equal to the input's tx_id, and then get the item from the outputs array based on the index of the input. To exemplify, this is the object I would like to return in this scenario:
{
"random_field": 1,
"transactions": [{
"id": "CCCCC",
"inputs": [{
"tx_id": "AAAAA",
"index": 0,
"output": {
"field1": "value123",
"field2": "value456"
}
},{
"tx_id": "BBBBB",
"index": 1,
"output": {
"field1": "value999",
"field2": "value888"
}
}]
}]
}
I managed to come up with the following query:
SELECT b.random_field,
b.transactions -- how to map this?
FROM blocks b
UNNEST b.transactions t
UNNEST t.inputs input
JOIN blocks `source` ON (ANY tx IN `source`.transactions SATISFIES tx.`id` = input.tx_id END)
UNNEST `source`.transactions source_tx
UNNEST source_tx.outputs o
WHERE (ANY tx IN b.transactions SATISFIES tx.`id` = 'AAAAA' END) LIMIT 1;
I suppose there should be a way to map b.transactions.inputs by using source_tx.outputs, but I couldn't find how.
I came across this other answer, but I don't really understand how it applies to my scenario. Maybe it does, but I am very new to Couchbase, so I am very much lost: How to map array values in one document to another and display in result
Basically, you want to inline another document into the current document based on a condition.
Instead of JOINs + GROUP BY, use subquery expressions with correlated subqueries. (SELECT b.*, "abc" AS transactions selects all the fields of b and adds a transactions field; if the field already exists it is overwritten, otherwise it is added.)
CREATE INDEX ix1 ON blocks (ALL ARRAY ot.id FOR ot IN transactions END);
SELECT b.*,
(SELECT t.*,
(SELECT i.*,
(SELECT RAW oto
FROM blocks AS o
UNNEST o.transactions AS ot
UNNEST ot.outputs AS oto
WHERE i.tx_id = ot.id AND i.`index` = UNNEST_POS(oto))[0] AS output
FROM t.`inputs` AS i) AS inputs
FROM b.transactions AS t) AS transactions
FROM blocks AS b
WHERE ANY tx IN b.transactions SATISFIES tx.`inputs` IS NOT NULL END ;
OR
SELECT b.*,
(SELECT t.*,
(SELECT i.*,
(SELECT RAW ot.outputs[i.`index`]
FROM blocks AS o
UNNEST o.transactions AS ot
WHERE i.tx_id = ot.id
LIMIT 1)[0] AS output
FROM t.`inputs` AS i) AS inputs
FROM b.transactions AS t) AS transactions
FROM blocks AS b
WHERE ANY tx IN b.transactions SATISFIES tx.`inputs` IS NOT NULL END ;
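The difference between the two versions: the first flattens every outputs array and matches on UNNEST_POS(oto), which gives the position of the unnested element within its array, while the second picks the output directly by position with ot.outputs[i.`index`] and so avoids unnesting outputs at all.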
I have a table (log_table), and in this table there is a nested JSON array field (activities). Using this activities field, I want to normalize my rows.
log_table:
- id:long
- activities:json
- date:timestamp
example activities field:
[
{
"actionType":"NOTIFICATION",
"items":null
},
{
"actionType":"MUTATION",
"items":[
{
"id":387015007,
"name":"epic",
"value":{
"currency":"USD",
"amount":1.76
}
},
{
"id":386521039,
"name":"test",
"value":{
"currency":"USD",
"amount":1.76
}
}
]
}
]
As a query, I've tried:
select
*
from
log_table l,
json_array_elements(l.activities) elems,
json_array_elements(elems->'items') obj;
With this query, I got the following error:
ERROR: cannot call json_array_elements on a scalar
Any suggestions?
The lack of items should be marked as [null], not null. You can use a CASE expression to correct this, e.g.:
select elems->>'actionType' as action_type, obj
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(case elems->'items' when 'null' then '[null]' else elems->'items' end) obj
action_type | obj
--------------+---------------------------------------------------------------------------------
NOTIFICATION | null
MUTATION | {"id": 387015007, "name": "epic", "value": {"amount": 1.76, "currency": "USD"}}
MUTATION | {"id": 386521039, "name": "test", "value": {"amount": 1.76, "currency": "USD"}}
(3 rows)
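An alternative sketch (assuming the same log_table): checking jsonb_typeof and using LEFT JOIN LATERAL keeps the NOTIFICATION row while yielding a SQL NULL instead of a JSON null for obj:
select elems->>'actionType' as action_type, obj
from log_table l
cross join lateral jsonb_array_elements(l.activities::jsonb) elems
-- jsonb_array_elements returns no rows for a non-array input,
-- so LEFT JOIN ... ON TRUE preserves the row with obj = NULL
left join lateral jsonb_array_elements(
    case when jsonb_typeof(elems->'items') = 'array' then elems->'items' end
) obj on true;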
I have a document called
player::id
for each player, where id is the player's ID (auto-incremented).
How can I run search operations on the array below, such as checking the IDs or counts? This array is stored in a player's save document.
"inventory": {
"0": {
"count": 1,
"id": 6
},
"1": {
"count": 1,
"id": 13
},
"2": {
"count": 1,
"id": 142
},
"3": {
"count": 1,
"id": 144
}
},
There is no ARRAY in the object you have posted.
If you want to check whether id 13 is present in the document and get the corresponding count, you can use the OBJECT_PAIRS() function, which converts a dynamic object into an ARRAY, as described at https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/objectfun.html
SELECT op.val.id, op.val.count, op.name AS pos
FROM default AS d
UNNEST OBJECT_PAIRS(d.inventory) AS op
WHERE op.val.id = 13
OR
SELECT d.*
FROM default AS d
WHERE ANY op IN OBJECT_PAIRS(d.inventory) SATISFIES op.val.id = 13 END;
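For reference, OBJECT_PAIRS(d.inventory) turns the inventory object into an array of name/val pairs, which for the sample document would look roughly like this:
[
{ "name": "0", "val": { "count": 1, "id": 6 } },
{ "name": "1", "val": { "count": 1, "id": 13 } },
{ "name": "2", "val": { "count": 1, "id": 142 } },
{ "name": "3", "val": { "count": 1, "id": 144 } }
]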
I have the following document structure:
{
"customerId": "",
"schemeId": "scheme-a",
"type": "account",
"events": [
{
"dateTime": "2019-03-14T02:23:58.573Z",
"id": "72998bbf-94a6-4031-823b-6c304707ad49",
"type": "DebitDisabled",
"authorisedId": ""
},
{
"dateTime": "2018-05-04T12:40:15.439Z",
"transactionReference": "005171-15-1054-7571-60990-20180503165536",
"id": "005171-15-1054-7571-60990-20180503165536-1",
"type": "Credit",
"authorisedId": ",
"value": 34,
"funder": "funder-a"
},
{
"dateTime": "2019-03-06T04:14:54.564Z",
"transactionReference": "000000922331",
"eventDescription": {
"language": "en-gb",
"text": "
},
"id": "000000922331",
"type": "Credit",
"authorisedId": "",
"value": 16,
"funder": "funder-b"
},
{
"dateTime": "2019-03-10T04:24:17.903Z",
"transactionReference": "000001510154",
"eventDescription": {
"language": "en-gb",
"text": ""
},
"id": "000001510154",
"type": "Credit",
"authorisedId": "",
"value": 10,
"funder": "funder-c"
}
]
}
And the following indexes:
CREATE INDEX `scheme-a_customers_index`
ON `default`(`type`,`schemeId`,`customerId`)
WHERE ((`schemeId` = "scheme-a") and (`type` = "account"))
WITH { "num_replica":1 }
CREATE INDEX `scheme-a_credits_index`
ON `default`(
`type`,
`schemeId`,
`customerId`,
(distinct (array (`e`.`funder`) for `e` in `events` when ((`e`.`type`) = "Credit") end))
)
WHERE ((`type` = "scheme") and (`schemeId` = "scheme-a"))
WITH { "num_replica":1 }
I am trying to query all the customerIds, and for each of them the events where type = "Credit" and funder LIKE "funder%".
Below is my query:
SELECT
customerId,
(ARRAY v.`value` FOR v IN p.events WHEN v.type = "Credit" AND v.funder like "funder%" END) AS credits
FROM default AS p
WHERE p.type = "account" AND p.schemeId = "scheme-a"
AND (ANY e IN p.events SATISFIES e.funder = "funder-a" END)
I am expecting the query to use the index scheme-a_credits_index; instead it is using scheme-a_customers_index. I can't understand why! Isn't the query supposed to use scheme-a_credits_index?
Your query doesn't have a predicate on customerId, so the query can only push two predicates to the indexer, and both indexes qualify. scheme-a_customers_index is more efficient because a non-array index has fewer entries.
You should try the following.
CREATE INDEX `ix1` ON `default`
(DISTINCT ARRAY e.funder FOR e IN events WHEN e.type = "Credit" END, `customerId`)
WHERE ((`schemeId` = "scheme-a") and (`type` = "account")) ;
SELECT
customerId,
(ARRAY v.`value` FOR v IN p.events WHEN v.type = "Credit" AND v.funder like "funder%" END) AS credits
FROM default AS p
WHERE p.type = "account" AND p.schemeId = "scheme-a"
AND (ANY e IN p.events SATISFIES e.funder LIKE "funder%" AND e.type = "Credit" END);
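With this index, the ANY clause in the rewritten query matches the index's array key expression exactly (e.funder filtered by e.type = "Credit"), so the planner should be able to push both the type check and the LIKE range down into the array index scan. Note that the document is still fetched to build the credits array, since events itself is not an index key.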
I have a table defined like this:
CREATE TABLE data_table (
id bigserial,
"name" text NOT NULL,
"value" text NOT NULL,
CONSTRAINT data_table_pk PRIMARY KEY (id)
);
INSERT INTO data_table ("name", "value") VALUES
('key_1', 'value_1'),
('key_2', 'value_2');
I would like to get a JSON object from this table content, which will look like this:
{
"key_1":"value_1",
"key_2":"value_2"
}
Now I'm using the client application to parse the result set into JSON format. Is it possible to accomplish this with a PostgreSQL query?
If you're on 9.4 you can do the following:
$ select json_object_agg("name", "value") from data_table;
json_object_agg
----------------------------------------------
{ "key_1" : "value_1", "key_2" : "value_2" }
select
format(
'{%s}',
string_agg(format(
'%s:%s',
to_json("name"),
to_json("value")
), ',')
)::json as json_object
from data_table;
json_object
---------------------------------------
{"key_1":"value_1","key_2":"value_2"}
In a generic scenario you can nest more than one json_object_agg function on top of subqueries. The inner subqueries should always have at least one column that the outer query uses as keys for its json_object_agg call.
In the example, the values of the action column from subquery C are used as keys in subquery B, and the values of the role column from subquery B are used as keys in query A.
-- query A
select json_object_agg(q1.role, q1.actions) from (
-- subquery B
select q2.role, json_object_agg(q2.action, q2.permissions) as actions from (
-- subquery C
select r.name as role, a.name as action, json_build_object (
'enabled', coalesce(a.bit & bit_and(p.actionids) <> 0, false),
'guestUnsupported', r.name = 'guest' and a."guestUnsupported"
) as permissions
from role r
left join action a on a.entity = 'route'
left join permission p on p.roleid = r.id
and a.entity = p.entityname
and (p.entityid = 1 or p.entityid is null)
where
1 = 1
and r.enabled
and r.deleted is null
group by r.name, a.id
) as q2 group by q2.role
) as q1
The result is a single row/single column with the following content:
{
"Role 1": {
"APIPUT": {
"enabled": false,
"guestUnsupported": false
},
"APIDELETE": {
"enabled": false,
"guestUnsupported": false
},
"APIGET": {
"enabled": true,
"guestUnsupported": false
},
"APIPOST": {
"enabled": true,
"guestUnsupported": false
}
},
"Role 2": {
"APIPUT": {
"enabled": false,
"guestUnsupported": false
},
"APIDELETE": {
"enabled": false,
"guestUnsupported": false
},
"APIGET": {
"enabled": true,
"guestUnsupported": false
},
"APIPOST": {
"enabled": false,
"guestUnsupported": false
}
}
}
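As a minimal, self-contained sketch of the same nesting pattern (hypothetical data supplied through a VALUES list):
-- inner json_object_agg builds each group's object,
-- outer json_object_agg keys the result by group
select json_object_agg(q1.grp, q1.members) as json_object from (
    select v.grp, json_object_agg(v.name, v.score) as members
    from (values ('a', 'x', 1), ('a', 'y', 2), ('b', 'z', 3)) as v(grp, name, score)
    group by v.grp
) as q1;
-- result (roughly): { "a" : { "x" : 1, "y" : 2 }, "b" : { "z" : 3 } }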