Update multiple elements of a list using Couchbase N1QL

Context
Somewhere in my Couchbase documents, I have a node looking like this:
"metadata": {
"configurations": {
"AU": {
"enabled": false,
"order": 2147483647
},
"BE": {
"enabled": false,
"order": 2147483647
},
"BG": {
"enabled": false,
"order": 2147483647
} ...
}
}
and it goes along with a list of country codes and their "enabled" state.
What I want to achieve
Update this document to mark it as disabled ("enabled" = false) for all countries.
To do this I hoped this syntax would work (let's say I'm trying to update the document with id 03c53a2d-6208-4a35-b9ec-f61e74d81dab):
UPDATE `data` t
SET country.enabled = false
FOR country IN t.metadata.configurations END
where meta(t).id = "03c53a2d-6208-4a35-b9ec-f61e74d81dab";
but it seems like it doesn't change anything in my document.
Any hints? :)
Thanks guys,

As the field names are dynamic, you can generate them using OBJECT_NAMES() and use them when updating the fields.
UPDATE data t USE KEYS "03c53a2d-6208-4a35-b9ec-f61e74d81dab"
SET t.metadata.configurations.[v].enabled = false FOR v IN OBJECT_NAMES(t.metadata.configurations) END ;
In the above example OBJECT_NAMES(t.metadata.configurations) generates ["AU", "BE", "BG"].
When a JSON field is referenced as .[v], the expression v is evaluated and its value becomes the field name.
So during the looping construct, t.metadata.configurations.[v].enabled becomes
t.metadata.configurations.`AU`.enabled,
t.metadata.configurations.`BE`.enabled,
t.metadata.configurations.`BG`.enabled
depending on the value of v.
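If you want to double-check which field names the loop will iterate over before mutating anything, a quick sketch (assuming the same document id as above) is to select OBJECT_NAMES() on its own:
SELECT OBJECT_NAMES(t.metadata.configurations) AS country_codes
FROM `data` AS t
USE KEYS "03c53a2d-6208-4a35-b9ec-f61e74d81dab";
which should return ["AU", "BE", "BG", ...] for the document shown in the question.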

This query should work:
update data
use keys "03c53a2d-6208-4a35-b9ec-f61e74d81dab"
set country.enabled = true for country within metadata.configurations
when country.enabled is defined end
The WITHIN allows "country" to be found at any level of the metadata.configurations structure, and we use the "WHEN country.enabled IS DEFINED" to make sure we are looking at the correct type of "country" structure.
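If you want to verify the result afterwards, here is a minimal sketch (assuming the same document id). ANY ... WITHIN walks every nesting level just like the WITHIN in the UPDATE, so it reports whether any country object still carries the old value:
SELECT ANY c WITHIN t.metadata.configurations SATISFIES c.enabled = false END AS still_disabled
FROM `data` AS t
USE KEYS "03c53a2d-6208-4a35-b9ec-f61e74d81dab";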

Related

How to update a specific field in all the documents of a bucket using Couchbase UI

I have a bucket in Couchbase which has many documents, for example:
{
  "id": "1",
  "isAvailable": false
},
{
  "id": "2",
  "isAvailable": false
},
{
  "id": "3",
  "isAvailable": true
},
{
  "id": "4"
}
Now I want to iterate through all the documents in this bucket and check whether each document has isAvailable: false.
If yes, then I need to update that document to isAvailable: true.
I want to do all of this from the Couchbase UI.
I think an UPDATE statement would work for you.
Something like:
UPDATE mybucket SET isAvailable = true
"check if this document has isAvailable: false" I don't think you don't need to check if isAvailable is false, since you're just setting all of the isAvailable to true.
If you want to just verify that isAvailable is actually in the document (no matter what its value is), you can do something like this:
UPDATE mybucket
SET isAvailable = true
WHERE isAvailable IS NOT MISSING
Index selection is based on the WHERE clause, and the WHERE clause also controls which documents are mutated. With no WHERE clause, all the documents are mutated. Mutations rewrite the full document and are expensive, so only mutate when needed by supplying a WHERE clause (and if you have to repeat the statement because of a CAS error, it will not update all of the documents again).
CREATE INDEX ix1 ON mybucket(isAvailable);
UPDATE mybucket AS b
SET b.isAvailable = true
WHERE b.isAvailable = false;
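To confirm the index is actually chosen for that statement, you can prefix it with EXPLAIN (a quick sketch; the resulting plan should show an index scan on ix1 rather than a primary scan):
EXPLAIN UPDATE mybucket AS b
SET b.isAvailable = true
WHERE b.isAvailable = false;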

How can Postgres extract parts of json, including arrays, into another JSON field?

I'm trying to convince PostgreSQL 13 to pull out parts of a JSON field into another field, including a subset of properties within an array based on a discriminator (type) property. For example, given a data field containing:
{
  "id": 1,
  "type": "a",
  "items": [
    { "size": "small", "color": "green" },
    { "size": "large", "color": "white" }
  ]
}
I'm trying to generate new_data like this:
{
  "items": [
    { "size": "small" },
    { "size": "large" }
  ]
}
items can contain any number of entries. I've tried variations of SQL something like:
UPDATE my_table
SET new_data = (
CASE data->>'type'
WHEN 'a' THEN
json_build_object(
'items', json_agg(json_array_elements(data->'items') - 'color')
)
ELSE
null
END
);
but I can't seem to get it working. In this case, I get:
ERROR: set-returning functions are not allowed in UPDATE
LINE 6: 'items', json_agg(json_array_elements(data->'items')...
I can get a set of items using json_array_elements(data->'items') and thought I could roll this up into a JSON array using json_agg and remove unwanted keys using the - operator. But now I'm not sure if what I'm trying to do is possible. I'm guessing it's a case of PEBCAK. I've got about a dozen different types each with slightly different rules for how new_data should look, which is why I'm trying to fit the value for new_data into a type-based CASE statement.
Any tips, hints, or suggestions would be greatly appreciated.
One way is to handle the set json_array_elements() returns in a subquery.
UPDATE my_table
SET new_data = CASE
WHEN data->>'type' = 'a' THEN
(SELECT json_build_object('items',
json_agg(jae.item::jsonb - 'color'))
FROM json_array_elements(data->'items') jae(item))
END;
Also note that the - operator isn't defined for json, only for jsonb. So unless your columns are actually jsonb, you need a cast. And you don't need an explicit ELSE NULL in a CASE expression; NULL is already the default value if no ELSE branch is specified.
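For completeness, here is a sketch of the same statement assuming the columns are actually of type jsonb (a hypothetical schema), in which case no casts are needed because jsonb has its own element and aggregate functions:
UPDATE my_table
SET new_data = CASE
        WHEN data->>'type' = 'a' THEN
            (SELECT jsonb_build_object('items',
                    jsonb_agg(jae.item - 'color'))
             FROM jsonb_array_elements(data->'items') AS jae(item))
    END;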

Get last element of array by parsing JSON with Neo4j APOC

Short task description: I need to get the last element of an array/list in one of the fields of a nested JSON. Here is the input JSON file:
{
  "origin": [
    {
      "label": "Alcohol drinks",
      "tag": [],
      "type": "string",
      "xpath": []
    },
    {
      "label": "Wine",
      "tag": ["red", "white"],
      "type": "string",
      "xpath": ["Alcohol drinks"]
    },
    {
      "label": "Port wine",
      "tag": ["Portugal", "sweet", "strong"],
      "type": "string",
      "xpath": ["Alcohol drinks", "Wine"]
    },
    {
      "label": "Sandeman Cask 33",
      "tag": ["red", "expensive"],
      "type": "string",
      "xpath": ["Alcohol drinks", "Wine", "Port wine"]
    }
  ]
}
I need to get the last element of the "xpath" field in order to create a relationship with the appropriate "label". Here is the code, which creates relationships to all elements mentioned in "xpath"; I need just the relationship to the last one:
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.origin as or
MERGE(label:concept{name:or.label})
ON CREATE SET label.type = or.type
FOREACH(tagName IN or.tag | MERGE(tag:concept{name:tagName})
MERGE (tag)-[r:link]-(label)
ON CREATE SET r.Weight=1
ON MATCH SET r.Weight=r.Weight+1)
FOREACH(xpathName IN or.xpath | MERGE (xpath:concept{name:xpathName})
MERGE (label)-[r:link]-(xpath))
Probably there is something like:
apoc.agg.last(or.xpath)
which returns just an array of arrays of all "xpath" values from all 4 records of "origin".
I will appreciate any help; there are probably some workarounds (not necessarily the one I proposed) to solve this issue. Thank you in advance!
N.B. All this should be done from an app, not from within Neo4j browser.
Probably the easiest way would be to split this query into two queries if you want to only take the xpath array of the last element in the origin object.
Query 1:
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.origin as or
MERGE(label:concept{name:or.label})
ON CREATE SET label.type = or.type
FOREACH(tagName IN or.tag | MERGE(tag:concept{name:tagName})
MERGE (tag)-[r:link]-(label)
ON CREATE SET r.Weight=1
ON MATCH SET r.Weight=r.Weight+1)
Query 2:
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
WITH value.origin[-1] as or
MATCH(label:concept{name:or.label})
FOREACH(xpathName IN or.xpath | MERGE (xpath:concept{name:xpathName})
MERGE (label)-[r:link]-(xpath))
Combining these two queries into a single one feels hacky anyway and I would avoid it, but I guess you can do the following.
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.origin as or
MERGE(label:concept{name:or.label})
ON CREATE SET label.type = or.type
FOREACH(tagName IN or.tag | MERGE(tag:concept{name:tagName})
MERGE (tag)-[r:link]-(label)
ON CREATE SET r.Weight=1
ON MATCH SET r.Weight=r.Weight+1)
// Any aggregation function will break the UNWIND loop
// and return a single row as we want to write it only once
WITH value.origin[-1] as last, count(*) as agg
FOREACH(xpathName IN last.xpath |
MERGE(label:concept{name:last.label})
MERGE (xpath:concept{name:xpathName})
MERGE (label)-[r:link]-(xpath))
Sounds like you're looking for the last() function? This will return the last element of a list.
In this case, since you UNWIND the origin to 4 rows, you'll get the last element of the list for each of those rows.
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.origin as or
RETURN last(or.xpath) as last
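If you instead want to link each label to the last entry of its own xpath list, a minimal sketch combining the question's MERGE logic with last() could look like this (it assumes one :link per label to the final xpath entry only, and skips records whose xpath list is empty):
WITH "file:///D:/project/neo_proj/input.json" AS url
CALL apoc.load.json(url) YIELD value
UNWIND value.origin AS or
MERGE (label:concept {name: or.label})
ON CREATE SET label.type = or.type
WITH or, label
WHERE size(or.xpath) > 0
MERGE (xpath:concept {name: last(or.xpath)})
MERGE (label)-[:link]-(xpath)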

Couchbase Index and N1QL Query

I have created a new bucket, FooBar, on my Couchbase server.
I have a JSON document which is a list of objects with some properties, and it is in my Couchbase bucket as follows:
[
  {
    "Venue": "Venue1",
    "Country": "AU",
    "Locale": "QLD"
  },
  {
    "Venue": "Venue2",
    "Country": "AU",
    "Locale": "NSW"
  },
  {
    "Venue": "Venue3",
    "Country": "AU",
    "Locale": "NSW"
  }
]
How do I get the Couchbase query to return a list of locations when using a N1QL query?
For instance, SELECT * FROM FooBar WHERE Locale = 'QLD'
Please let me know of any indexes I would need to create as well. Additionally, how can I return only results where the object is of type Location, and not, say, another object which may have the 'Locale' property?
Chud
PS - I have also created some indexes, however I would like an unbiased answer on how to achieve this.
Typically you would store these as separate documents, rather than in a single document as an array of objects, which is how the data is currently shown.
Since you can mix document structures, the usual pattern to distinguish them is to have something like a 'type' field. ('type' is in no way special, just the most common choice.)
So your example would look like:
{
  "Venue": "Venue1",
  "Country": "AU",
  "Locale": "QLD",
  "type": "Location"
}
...
{
  "Venue": "Venue3",
  "Country": "AU",
  "Locale": "NSW",
  "type": "Location"
}
where each JSON object would be a separate document with a unique document ID. (If you have some predefined data you want to load, look at cbimport for how to add it to your database. There are a few different formats for doing it. You can also have it generate document IDs for you.)
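For a handful of documents you can also insert them directly with N1QL. A small sketch (the document keys below are made up; use whatever key convention you like):
INSERT INTO FooBar (KEY, VALUE)
VALUES ("location::venue1", {"Venue": "Venue1", "Country": "AU", "Locale": "QLD", "type": "Location"}),
       ("location::venue2", {"Venue": "Venue2", "Country": "AU", "Locale": "NSW", "type": "Location"});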
Then, what #vsr wrote is correct. You'd create an index on the Locale field; that will be optimal for the query you want. Note you could also create an index over every document with CREATE INDEX ix1 ON FooBar(Locale); in this simple case it doesn't really make a difference. Read about the query Explain feature of the admin console for help understanding and optimizing queries.
Finally, the query #vsr wrote is also correct:
SELECT * FROM FooBar WHERE type = "Location" AND Locale = "QLD";
CREATE INDEX ix1 ON FooBar(Locale);
https://dzone.com/articles/designing-index-for-query-in-couchbase-n1ql
CREATE INDEX ix1 ON FooBar(Locale) WHERE type = "Location";
SELECT * FROM FooBar WHERE type = "Location" AND Locale = "QLD";
If it is an array and the field name is list:
CREATE INDEX ix1 ON FooBar(DISTINCT ARRAY v.Locale FOR v IN list END) WHERE type = "Location";
SELECT * FROM FooBar WHERE type = "Location" AND ANY v IN list SATISFIES v.Locale = "QLD" END;

How to add nested json object to Lucene Index

I need a little help regarding Lucene index files; I thought maybe some of you guys could help me out.
I have json like this:
[
  {
    "Id": 4476,
    "UrlName": null,
    "PhoneData": [
      {
        "PhoneType": "O",
        "PhoneNumber": "0065898"
      },
      {
        "PhoneType": "F",
        "PhoneNumber": "0065898"
      }
    ],
    "Contact": [],
    "Services": [
      {
        "ServiceId": 10,
        "ServiceGroup": 2
      },
      {
        "ServiceId": 20,
        "ServiceGroup": 1
      }
    ]
  }
]
Adding first two fields is relatively easy:
// add lucene fields mapped to db fields
doc.Add(new Field("Id", sampleData.Id.Value.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("UrlName", sampleData.UrlName.Value ?? "null" , Field.Store.YES, Field.Index.ANALYZED));
But how can I add PhoneData and Services to the index so they can be connected to the unique Id?
For indexing JSON objects I would go this way:
Store the whole value under a payload field, named for example $json. This field would be stored but not indexed.
For each (indexable) property (possibly nested), create an indexable field whose name is an XPath-like expression identifying the property, for example PhoneData.PhoneType.
If it is OK that all nested properties will be indexed, then it's simple: just iterate over all of them, generating an indexable field for each (see the sketch after this list).
But if you don't want to index all of them (a more realistic case), knowing which property is indexable is another problem; in this case you could:
Accept from the client the path expressions of the index fields to be created when storing the document, or
Put JSON Schema into play to describe your data (assuming your JSON records have a common schema), and extend it with a custom property that would allow you to tag which properties are indexable.
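Here is a minimal sketch of the "index everything" variant, not a drop-in implementation: it assumes Newtonsoft.Json (Json.NET) and the same Lucene.Net Field API used in the question. Each record of the JSON array becomes one Lucene document; the raw JSON goes into an unindexed "$json" payload field, and every leaf property is indexed under its path-like name, e.g. "Id", "PhoneData[0].PhoneType", "Services[1].ServiceGroup".
using System.Linq;
using Lucene.Net.Documents;
using Newtonsoft.Json.Linq;

static class JsonIndexer
{
    public static Document ToLuceneDocument(JObject record)
    {
        var doc = new Document();

        // payload field: stored so the original JSON can be returned, but not indexed
        doc.Add(new Field("$json", record.ToString(), Field.Store.YES, Field.Index.NO));

        foreach (var leaf in record.Descendants().OfType<JValue>())
        {
            if (leaf.Type == JTokenType.Null)
                continue; // skip nulls such as "UrlName": null

            // leaf.Path is the XPath-like field name, e.g. "PhoneData[0].PhoneType"
            doc.Add(new Field(leaf.Path, leaf.ToString(),
                              Field.Store.NO, Field.Index.NOT_ANALYZED));
        }
        return doc;
    }
}
// usage (writer is an already opened Lucene.Net IndexWriter; DeepClone detaches each
// record so leaf.Path does not carry the array index prefix):
//   foreach (var record in JArray.Parse(json).Children<JObject>())
//       writer.AddDocument(JsonIndexer.ToLuceneDocument((JObject)record.DeepClone()));
A search can then hit the path-named fields, while the stored $json field is used to return the original record.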
I have created a library doing this (and much more) that maybe can help you.
You can check it at https://github.com/brutusin/flea-db