N1QL nested json, query on field inside object inside array - couchbase

I have JSON documents in my Couchbase cluster that look like this:
{
  "giata_properties": {
    "propertyCodes": {
      "provider": [
        {
          "code": [
            {
              "value": [
                { "name": "Country Code", "value": "EG" },
                { "name": "City Code", "value": "HRG" },
                { "name": "Hotel Code", "value": "91U" }
              ]
            }
          ],
          "providerCode": "gta",
          "providerType": "gds"
        },
        {
          "code": [
            { "value": [ { "value": "071801" } ] },
            { "value": [ { "value": "766344" } ] }
          ],
          "providerCode": "restel",
          "providerType": "gds"
        },
        {
          "code": [
            { "value": [ { "value": "HRG03Z" } ] },
            { "value": [ { "value": "HRG04Z" } ] }
          ],
          "providerCode": "5VF",
          "providerType": "tourOperator"
        }
      ]
    }
  }
}
I'm trying to create a query that fetches a single document based on the value of giata_properties.propertyCodes.provider.code.value.value and a specific providerType.
So, for example, if my input is 071801 and restel, I want a query that will fetch the document I pasted above (because it contains these values).
I'm pretty new to N1QL, so what I tried so far (without the providerType input) is:
SELECT * FROM giata_properties AS gp
WHERE ANY `field` IN `gp.propertyCodes.provider.code.value` SATISFIES `field.value` = '071801' END;
This returns an empty result set. I'm probably doing all of this wrong.
edit1:
According to geraldss's answer, I was able to achieve my goal via 2 different queries.
1st (More general) ~2m50.9903732s
SELECT * FROM giata_properties AS gp WHERE ANY v WITHIN gp SATISFIES v.`value` = '071801' END;
2nd (More specific) ~2m31.3660388s
SELECT * FROM giata_properties AS gp WHERE ANY v WITHIN gp.propertyCodes.provider[*].code SATISFIES v.`value` = '071801' END;
The bucket has around 550K documents. No indexes but the primary currently.
Question part 2
When I run either of the above queries, a result is streamed to my shell very quickly, and then I spend the rest of the query time waiting for the engine to finish iterating over all documents. I'm sure that I'll only be getting 1 result from future queries, so I thought I could use LIMIT 1 to make the engine stop searching at the first result. I tried something like:
SELECT * FROM giata_properties AS gp WHERE ANY v WITHIN gp SATISFIES v.`value` = '071801' END LIMIT 1;
But that made no difference: I get a document written to my shell and then keep waiting until the query finishes completely. How can this be configured correctly?
edit2:
I've upgraded to the latest Enterprise 4.5.1-2844. I have only the primary index created on the giata_properties bucket; when I execute the query with the LIMIT 1 keyword it still takes the same time and doesn't stop any quicker.
I've also tried creating the array index you suggested, but the query is not using it; it keeps insisting on the #primary index (even if I use a USE INDEX clause).
I tried removing SELF from the index you suggested; it took much longer to build, and now the query can use this new index, but I'm honestly not sure what I'm doing here.
So 3 questions:
1) Why doesn't LIMIT 1 using the primary index make the query stop at the first result?
2) What's the difference between the index you suggested with and without SELF? I tried to look for documentation on the SELF keyword but I couldn't find anything.
This is how both indexes look in the Web UI:
Index 1 (Your original suggestion) - Not working
CREATE INDEX `gp_idx1` ON `giata_properties`((distinct (array (`v`.`value`) for `v` within (array_star((((self.`giata_properties`).`propertyCodes`).`provider`)).`code`) end)))
Index 2 (Without SELF)
CREATE INDEX `gp_idx2` ON `giata_properties`((distinct (array (`v`.`value`) for `v` within (array_star(((self.`propertyCodes`).`provider`)).`code`) end)))
3) What would be the query for a specific giata_properties.propertyCodes.provider.code.value.value and a specific providerCode? I managed to do both separately but I wasn't successful in merging them.
Thanks for all your help.

Here is a query without the providerType.
EXPLAIN SELECT *
FROM giata_properties AS gp
WHERE ANY v WITHIN gp.giata_properties.propertyCodes.provider[*].code SATISFIES v.`value` = '071801' END;
You can also index this in Couchbase 4.5.0 and above.
CREATE INDEX idx1 ON giata_properties( DISTINCT ARRAY v.`value` FOR v WITHIN SELF.giata_properties.propertyCodes.provider[*].code END );
Edit to answer question edits
The performance has been addressed in 4.5.x. You should try the following on Couchbase 4.5.1 and post the execution times here:
1. Test on 4.5.1.
2. Create the index.
3. Use the LIMIT. In 4.5.1, the limit is pushed down to the index.
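For question 3 (filtering on both the code value and the providerCode), one option is to nest the quantifiers so that both conditions are checked against the same provider element. This is an untested sketch following the document structure above:

```
SELECT *
FROM giata_properties AS gp
WHERE ANY p IN gp.giata_properties.propertyCodes.provider SATISFIES
        p.providerCode = 'restel'
        AND ANY v WITHIN p.code SATISFIES v.`value` = '071801' END
      END;
```

Scoping both tests inside the outer ANY guarantees they match the same provider entry, rather than the value coming from one provider and the providerCode from another.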

Related

Postgres - Performance of select for large jsonb column

We are using the Postgres jsonb type in one of our database tables. The table structure is shown below:
CREATE TABLE IF NOT EXISTS public.draft_document (
id bigserial NOT NULL PRIMARY KEY,
...
document jsonb NOT NULL,
ein_search character varying(11) NOT NULL
);
CREATE INDEX IF NOT EXISTS count_draft_document_idx ON public.draft_document USING btree (ein_search);
CREATE INDEX IF NOT EXISTS read_draft_document_idx ON public.draft_document USING btree (id, ein_search);
The json structure of document column may vary. Below is one example of a possible schema for document:
"withholdingCredit": {
"type": "array",
"items": {
"$ref": "#/definitions/withholding"
}
}
Where the withholding structure (array elements) respects:
"withholding": {
"properties": {
...
"proportionalityIndicator": {
"type": "boolean"
},
"tribute": {
"$ref": "#/definitions/tribute"
},
"payingSourceEin": {
"type": "string"
},
"value": {
"type": "number"
}
...
}
...
},
"tribute": {
"type": "object",
"properties": {
"code": {
"type": "number"
},
"additionalCode": {
"type": "number"
}
...
}
}
Here is an example of the json into document jsonb column:
{
  "withholdingCredit": [
    {
      "value": 15000,
      "tribute": {
        "code": 1216,
        "additionalCode": 2
      },
      "payingSourceEin": "03985506123132",
      "proportionalityIndicator": false
    },
    ...
    {
      "value": 98150,
      "tribute": {
        "code": 3155,
        "additionalCode": 1
      },
      "payingSourceEin": "04185506123163",
      "proportionalityIndicator": false
    }
  ]
}
The maximum number of elements in the array can vary, up to a limit of 100,000 (one hundred thousand) elements; it is a business limit.
We need a paged select query that returns the withholding array disaggregated (1 element per row), where each row also brings the sum of the withholding elements' value fields and the array length.
The query also needs to return the withholdings ordered by proportionalityIndicator, tribute-->code, tribute-->additionalCode, payingSourceEin. Something like:
id    | sum       | jsonb_array_length | jsonb_array_elements
30900 | 1.800.027 | 2300               | {"value":15000,"tribute":{"code":1216,...}, ...}
...   | ...       | ...                | { ... }
30900 | 1.800.027 | 2300               | {"value":98150,"tribute":{"code":3155,...}, ...}
We have defined the following query:
SELECT dft.id,
SUM((elem->>'value')::NUMERIC),
jsonb_array_length(dft.document->'withholdingCredit'),
jsonb_array_elements(jsonb_agg(elem
ORDER BY
elem->>'proportionalityIndicator',
(elem->'tribute'->>'code')::NUMERIC,
(elem->'tribute'->>'additionalCode')::NUMERIC,
elem->>'payingSourceEin'))
FROM
draft_document dft
CROSS JOIN LATERAL jsonb_array_elements(dft.document->'withholdingCredit') arr(elem)
WHERE (dft.document->'withholdingCredit') IS NOT NULL
AND dft.id = :id
AND dft.ein_search = :ein_search
GROUP BY dft.id
LIMIT :limit OFFSET :offset;
This query works, but performance suffers when there is a large number of elements in the jsonb array.
Any suggestion on how to improve it is welcome.
BTW, we are using Postgres 9.6.
Your weird query, which breaks the array apart, aggregates it, and breaks it apart again, does seem to trigger some pathological memory-management issue in PostgreSQL (tested on 15dev). Maybe you should file a bug report on that.
But you can avoid the problem by just breaking it apart one time. Then you need a window function so that the tabulations you want include all rows, even those removed by the offset and limit.
SELECT dft.id,
SUM((elem->>'value')::NUMERIC) over (),
count(*) over (),
elem
FROM
draft_document dft
CROSS JOIN LATERAL jsonb_array_elements(dft.document->'withholdingCredit') arr(elem)
WHERE (dft.document->'withholdingCredit') IS NOT NULL
AND dft.id = 4
AND dft.ein_search = '4'
ORDER BY
elem->>'proportionalityIndicator',
(elem->'tribute'->>'code')::NUMERIC,
(elem->'tribute'->>'additionalCode')::NUMERIC,
elem->>'payingSourceEin'
limit 4 offset 500;
In my hands this gives the same answer as your query, but takes 370 ms rather than 13,789 ms.
At higher offsets than that, my query still works while yours leads to a total lock up requiring a hard reset.
If anyone wants to reproduce the poor behavior, I generated the data by:
insert into draft_document
select 4,
       jsonb_build_object('withholdingCredit',
         jsonb_agg(jsonb_build_object(
           'value', floor(random()*99999)::int,
           'tribute', '{"code": 1216, "additionalCode": 2}'::jsonb,
           'payingSourceEin', floor(random()*99999999)::int,
           'proportionalityIndicator', false))),
       '4'
from generate_series(1,100000)
group by 1,3;

Importing json in neo4J

[PROBLEM - My final solution below]
I'd like to import a json file containing my data into Neo4J.
However, it is super slow.
The JSON file is structured as follows:
{
  "graph": {
    "nodes": [
      { "id": 3510982, "labels": ["XXX"], "properties": { ... } },
      { "id": 3510983, "labels": ["XYY"], "properties": { ... } },
      { "id": 3510984, "labels": ["XZZ"], "properties": { ... } },
      ...
    ],
    "relationships": [
      { "type": "bla", "startNode": 3510983, "endNode": 3510982, "properties": {} },
      { "type": "bla", "startNode": 3510984, "endNode": 3510982, "properties": {} },
      ...
    ]
  }
}
It is similar to the one proposed here: How can I restore data from a previous result in the browser?
By looking at that answer, I discovered that I can use:
CALL apoc.load.json("file:///test.json") YIELD value AS row
WITH row, row.graph.nodes AS nodes
UNWIND nodes AS node
CALL apoc.create.node(node.labels, node.properties) YIELD node AS n
SET n.id = node.id
and then
CALL apoc.load.json("file:///test.json") YIELD value AS row
with row
UNWIND row.graph.relationships AS rel
MATCH (a) WHERE a.id = rel.endNode
MATCH (b) WHERE b.id = rel.startNode
CALL apoc.create.relationship(a, rel.type, rel.properties, b) YIELD rel AS r
return *
(I have to do it in two passes, because otherwise there are duplicate relationships due to the two UNWINDs.)
But this is super slow, because I have a lot of entities and I suspect the program searches over all of them for each relationship.
At the same time, I know "startNode": 3510983 refers to a node.
So the question: is there any way to speed up the import process by using the ids as an index, or something else?
Note that my nodes have different types, so I did not find a way to create an index for all of them, and I suppose that would be too huge (memory).
[MY SOLUTION]
CALL apoc.load.json('file:///test.json') YIELD value
WITH value.graph.nodes AS nodes, value.graph.relationships AS rels
UNWIND nodes AS n
CALL apoc.create.node(n.labels, apoc.map.setKey(n.properties, 'id', n.id)) YIELD node
WITH rels, COLLECT({id: n.id, node: node, labels:labels(node)}) AS nMap
UNWIND rels AS r
MATCH (w{id:r.startNode})
MATCH (y{id:r.endNode})
CALL apoc.create.relationship(w, r.type, r.properties, y) YIELD rel
RETURN rel
[EDITED]
This approach may work more efficiently:
CALL apoc.load.json("file:///test.json") YIELD value
WITH value.graph.nodes AS nodes, value.graph.relationships AS rels
UNWIND nodes AS n
CALL apoc.create.node(n.labels, apoc.map.setKey(n.properties, 'id', n.id)) YIELD node
WITH rels, apoc.map.mergeList(COLLECT({id: n.id, node: node})) AS nMap
UNWIND rels AS r
CALL apoc.create.relationship(nMap[r.startNode], r.type, r.properties, nMap[r.endNode]) YIELD rel
RETURN rel
This query does not use MATCH at all (and does not need indexing), since it just relies on an in-memory mapping from the imported node ids to the created nodes. However, this query could run out of memory if there are a lot of imported nodes.
It also avoids invoking SET by using apoc.map.setKey to add the id property to n.properties.
The 2 UNWINDs do not cause a cartesian product, since this query uses the aggregating function COLLECT (before the second UNWIND) to condense all the preceding rows into one (because the grouping key, rels, is a singleton).
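If memory does become a problem with the in-memory map, a batched variant is worth trying. This is only a sketch (it assumes the nodes were already created with their id property set, and ideally indexed), using APOC's apoc.periodic.iterate to commit the relationships in batches instead of one huge transaction:

```
CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file:///test.json') YIELD value
   UNWIND value.graph.relationships AS rel
   RETURN rel",
  "MATCH (a {id: rel.startNode})
   MATCH (b {id: rel.endNode})
   CALL apoc.create.relationship(a, rel.type, rel.properties, b) YIELD rel AS r
   RETURN count(r)",
  {batchSize: 10000, parallel: false})
```

Each batch is committed separately, so the whole relationship set never has to fit in one transaction's memory.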
Have you tried indexing the nodes before the load? This may not be tenable since you have multiple node labels, but if they are limited you can create a placeholder node, create an index, and then delete the placeholder. After this, run the load:
CREATE (n:YourLabel {indx: 'xxx'})
CREATE INDEX ON :YourLabel(indx)
MATCH (n:YourLabel) DELETE n
The index will speed up the matching or merging.

Couchbase Index and N1QL Query

I have created a new bucket, FooBar, on my Couchbase server.
I have a JSON document which is a list with some properties, and it is in my Couchbase bucket as follows:
[
  {
    "Venue": "Venue1",
    "Country": "AU",
    "Locale": "QLD"
  },
  {
    "Venue": "Venue2",
    "Country": "AU",
    "Locale": "NSW"
  },
  {
    "Venue": "Venue3",
    "Country": "AU",
    "Locale": "NSW"
  }
]
How do I get the Couchbase query to return a list of locations when using a N1QL query?
For instance: SELECT * FROM FooBar WHERE Locale = 'QLD'
Please let me know of any indexes I would need to create as well. Additionally, how can I return only results where the object is of type Location, and not, say, another object which may have the Locale property?
Chud
PS - I have also created some indexes, however I would like an unbiased answer on how to achieve this.
Typically you would store these as separate documents, rather than in a single document as an array of objects, which is how the data is currently shown.
Since you can mix document structures, the usual pattern to distinguish them is to have something like a 'type' field. ('type' is in no way special, just the most common choice.)
So your example would look like:
{
  "Venue": "Venue1",
  "Country": "AU",
  "Locale": "QLD",
  "type": "location"
}
...
{
  "Venue": "Venue3",
  "Country": "AU",
  "Locale": "NSW",
  "type": "location"
}
where each JSON object would be a separate document with a unique document ID. (If you have some predefined data you want to load, look at cbimport for how to add it to your database. There are a few different formats for doing it. You can also have it generate document IDs for you.)
Then, what #vsr wrote is correct: you'd create an index on the Locale field, which will be optimal for the query you want. Note that you could also index every document with CREATE INDEX ix1 ON FooBar(Locale); in this simple case it doesn't really make a difference. Read about the query Explain feature of the admin console for help using it to understand and optimize queries.
Finally, the query #vsr wrote is also correct:
SELECT * FROM FooBar WHERE type = "Location" AND Locale = "QLD";
CREATE INDEX ix1 ON FooBar(Locale);
https://dzone.com/articles/designing-index-for-query-in-couchbase-n1ql
CREATE INDEX ix1 ON FooBar(Locale) WHERE type = "Location";
SELECT * FROM FooBar WHERE type = "Location" AND Locale = "QLD";
If it is an array and the field name is list:
CREATE INDEX ix1 ON FooBar(DISTINCT ARRAY v.Locale FOR v IN list END) WHERE type = "Location";
SELECT * FROM FooBar WHERE type = "Location" AND ANY v IN list SATISFIES v.Locale = "QLD" END;

N1QL Distinct Query on Nested Arrays

(Couchbase 4.5) Suppose I have the following object stored in my couchbase instance:
{
  "parentArray": [
    {
      "childArray": [ { "value": "v1" }, { "value": "v2" } ]
    },
    {
      "childArray": [ { "value": "v1" }, { "value": "v3" } ]
    }
  ]
}
Now I want to select the distinct elements from childArray, which should return an array equal to ['v1', 'v2', 'v3'].
I have a couple solutions to this. My first thought was to go ahead and use the UNNEST operation:
SELECT DISTINCT ca.value FROM `my-bucket` AS b UNNEST b.parentArray AS pa UNNEST pa.childArray AS ca WHERE _class="someclass" AND dataType="someDataType";
With this approach I get a polynomial explosion in the number of scanned elements (due to the unnesting of two arrays), and the query takes a while to complete (for my real data, on the order of 24 seconds). When I remove the UNNESTs and simply query for distinct elements on the top-level fields (those adjacent to parentArray), it takes on the order of milliseconds.
Another solution is to handle this in the application code, by iterating through the returned values and finding the distinct values myself. This approach is bad because it brings too much data into the application space.
Any help please!
Thank you!
UPDATE: It looks like that without a "WHERE" clause the "UNNEST" queries perform quickly. So do I need array indexes here?
UPDATE: Never mind the previous update, since there are no indexed elements in the WHERE clause. I do notice, though, that if I remove the UNNEST or the WHERE, the query is fast. Moreover, looking at the EXPLAIN after adding a GSI for the compound index (_class, dataType), I can see an "IndexScan" on the provided index.
INSERT INTO default values("3",{ "parentArray" : [ { "childArray": [{"value": 'v1'}, {"value":'v2'}] }, { "childArray": [{"value": 'v1'}, {"value": 'v3'}] } ] });
SELECT ARRAY_DISTINCT(ARRAY v.`value` FOR v WITHIN parentArray END) FROM default;
OR
SELECT ARRAY_DISTINCT(ARRAY_FLATTEN(
ARRAY ARRAY v.`value` FOR v IN ca.childArray END FOR ca IN parentArray END,
2)) FROM default;
You can add a WHERE clause. If this is required across documents, use the following:
INSERT INTO default values("4",{ "parentArray" : [ { "childArray": [{"value": 'v5'}, {"value":'v2'}] }, { "childArray": [{"value": 'v1'}, {"value": 'v3'}] } ] });
SELECT ARRAY_DISTINCT(ARRAY_FLATTEN(ARRAY_AGG(ARRAY v.`value` FOR v WITHIN parentArray END),2)) FROM default;
SELECT ARRAY_DISTINCT(ARRAY_FLATTEN(ARRAY_AGG(ARRAY_FLATTEN(ARRAY ARRAY v.`value` FOR v IN ca.childArray END FOR ca IN parentArray END,2)),2)) FROM default;

Possible to chain results in N1ql?

I'm currently trying to write a somewhat complex N1QL query for a project I'm working on. Theoretically I could do all of this processing in multiple N1QL calls, parsing the results each time, but if possible I'd like for this to be contained in one call.
What I would like to do is:
1. Filter all documents that contain a "dataSync.test.id" field with more than 1 id.
2. Read back all other ids in that list.
3. Use that list to get other documents containing those ids.
4. Get the "dataSync.test._channels" field for those documents (optionally a filter by docType might help parsing).
This would probably return a list of "dataSync.test._channels" values.
Is this possible in N1QL? It appears like it might be but I can't get the syntax right.
My data structures look a little like
{
  "dataSync": {
    "test": {
      "_channels": [
        "RP"
      ],
      "id": [
        "dataSync_user_1015",
        "dataSync_user_1010",
        "dataSync_user_1005"
      ],
      "_lastUpdatedBy": "TEST"
    }
  },
  ...
}
{
  "dataSync": {
    "test": {
      "_channels": [
        "RSD"
      ],
      "id": [
        "dataSync_user_1010"
      ],
      "_lastUpdatedBy": "TEST"
    }
  },
  ...
}
Yes, I think you can do all of this.
The initial set of IDs, with filtering, can be retrieved as a subquery, and then you can get the subsequent documents by joins:
SELECT fulldoc
FROM (select meta().id as dockey from doc where a=1) as mydoc
INNER JOIN doc fulldoc ON KEYS mydoc.dockey;
There are optimizations that can be done here. Try the sequencing first to ensure you get the job done.
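Putting the four steps together, here is an untested sketch (with `doc` standing in for your bucket name): the subquery collects every id appearing in a document whose dataSync.test.id array has more than one entry, and the outer query returns the _channels of any document containing one of those ids.

```
SELECT DISTINCT d.dataSync.test.`_channels`
FROM doc AS d
WHERE ANY i IN d.dataSync.test.id SATISFIES
        i IN (SELECT RAW uid
              FROM doc AS src
              UNNEST src.dataSync.test.id AS uid
              WHERE ARRAY_LENGTH(src.dataSync.test.id) > 1)
      END;
```

A docType filter, if you have one, would be added to both WHERE clauses to cut down the scans.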