How to update multiple JSON fields at root level with Postgres?

I am trying to update the age and city fields of a JSON document using:
select jsonb_set(d,'{0,age,city}',d || '{"age":30,"city":"los angeles"}')
from (
values ('{"name":"john", "age":26,"city":"new york city"}'::jsonb)
) t(d);
but what I get back is:
{"age": 26, "city": "new york city", "name": "john"}
instead of the expected:
{"age": 30, "city": "los angeles", "name": "john"}
which means that none of the fields I wanted to change were updated.
I have already looked at:
postgres jsonb_set multiple keys update
and went through the relevant documentation, but I cannot get it right. Any help?

From the documentation:
All the items of the path parameter of jsonb_set as well as jsonb_insert except the last item must be present in the target.
The path given in the query does not meet that condition. In fact, jsonb_set() cannot update multiple keys at the root level in a single call; the way to do it is with the || (concatenation) operator:
select d || '{"age":30,"city":"los angeles"}'
from (
values ('{"name":"john", "age":26,"city":"new york city"}'::jsonb)
) t(d);
?column?
----------------------------------------------------
{"age": 30, "city": "los angeles", "name": "john"}
(1 row)
It might seem logical that you could pass an empty path:
select jsonb_set(d, '{}', d || '{"age":30,"city":"los angeles"}')
Unfortunately, the jsonb developers did not provide that possibility.
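To apply this to stored rows, here is a minimal sketch, assuming a hypothetical table people with a jsonb column d:
-- Hypothetical table/column names; || overwrites matching root-level keys and keeps the rest.
UPDATE people
SET d = d || '{"age": 30, "city": "los angeles"}'::jsonb
WHERE d ->> 'name' = 'john';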

Related

Query and JSON

While using the BigQuery console, we were instructed to enter a query.
The following example was given for guidance:
SELECT * FROM `bigquery-public-data.new_york_citibike.citibike_trips`
Afterward, I ran the query as instructed, with the LIMIT clause removed.
The JSON results produced this information ...
[{
"tripduration": "2083",
"starttime": "2016-04-18T13:02:24",
"stoptime": "2016-04-18T13:37:08",
"start_station_id": "417",
"start_station_name": "Barclay St \\u0026 Church St",
"start_station_latitude": "40.71291224",
"start_station_longitude": "-74.01020234",
"end_station_id": "309",
"end_station_name": "Murray St \\u0026 West St",
"end_station_latitude": "40.7149787",
"end_station_longitude": "-74.013012",
"bikeid": "23641",
"usertype": "Subscriber",
"birth_year": "1945",
"gender": "male",
"customer_plan": ""
},]
I trimmed some of the results above.
I attempted to find the longest citibike ride as well as the shortest one, in terms of distance.
I didn't set the query up correctly, so I didn't get any result for that calculation.
Since JSON is a format for query results, will it produce the same response each time it is used against public datasets?
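For what it's worth, here is a minimal sketch of how the distance comparison could be written with BigQuery's GIS functions; the column names come from the sample above, and the SAFE_CAST calls are an assumption in case the coordinates are returned as strings:
-- A sketch only: longest trip by straight-line distance between start and end stations.
SELECT
  tripduration,
  ST_DISTANCE(
    ST_GEOGPOINT(SAFE_CAST(start_station_longitude AS FLOAT64),
                 SAFE_CAST(start_station_latitude AS FLOAT64)),
    ST_GEOGPOINT(SAFE_CAST(end_station_longitude AS FLOAT64),
                 SAFE_CAST(end_station_latitude AS FLOAT64))
  ) AS distance_meters
FROM `bigquery-public-data.new_york_citibike.citibike_trips`
WHERE end_station_latitude IS NOT NULL
ORDER BY distance_meters DESC   -- ASC would give the shortest ride instead
LIMIT 1;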

Query from MySQL to MongoDB

This is my MySQL query:
SELECT country, vaccines, MAX(people_fully_vaccinated_per_hundred) as vaccinated_precentage
FROM country_vaccinations
WHERE people_fully_vaccinated_per_hundred > 60
GROUP BY country, vaccines
ORDER BY MAX(people_fully_vaccinated_per_hundred) DESC;
It basically lists all countries that have fully vaccinated more than 60% of their people, along with the types of vaccine offered in each country.
I am trying to do the same on MongoDB:
db.country_vaccinations.aggregate([
{$project: {_id:0,
country: 1,
vaccines: 1,
people_fully_vaccinated_per_hundred: 1},
},
{$match: {"people_fully_vaccinated_per_hundred":{$gt:60}}}
])
However, I am not sure why it returns "No Records Found" when I add the $match stage to retrieve documents where people_fully_vaccinated_per_hundred is greater than 60.
Can someone advise me on what my mistake is? I would really appreciate it, as I am new to NoSQL and am not sure why this happens.
I am not sure this does exactly the same thing, so test it before using it; if it doesn't work, share some sample data and the expected output if you can, so we can test it.
db.country_vaccinations.aggregate(
[{"$match": {"people_fully_vaccinated_per_hundred": {"$gt": 60}}},
{"$group":
{"_id": {"country": "$country", "vaccines": "$vaccines"},
"vaccinated_precentage": {"$max": "$people_fully_vaccinated_per_hundred"}}},
{"$sort": {"vaccinated_precentage": -1}},
{"$project":
{"_id": 0,
"country": "$_id.country",
"vaccines": "$_id.vaccines",
"vaccinated_precentage": 1}}])

PostgreSQL - Count of elements in nested JSON blob

I have a Postgres statement that extracts and iterates over a JSON blob in the value column of a table. I am able to get a count one level deep using the query below, but I can't count any deeper. I was using:
select jsonb_array_length(value -> 'team') as team_count
This returns the proper count, but I can't seem to leverage it to count the names under each team.
In a perfect world my results would come back as four lines like this (a title and the matching count of names):
Product Owner, 2
Technical Product Manager, 2
Data Modeler, 0
Engineer, 0
How would I go about amending this query to give me the count of names under each team? I tried all sorts of things, but nothing got me close.
Sample JSON is below.
"team":[
{
"title":"Product Owner",
"names":[
"John Smith",
"Jane Doe"
]
},
{
"title":"Technical Project Manager",
"names":[
"Fred Flintstone",
"Barney Rubble"
]
},
{
"title":"Data Modeler"
},
{
"title":"Engineer"
}
]
}
You seem to be looking for
SELECT
role -> 'title' AS team_role,
jsonb_array_length(role -> 'names') AS member_count
FROM jsonb_array_elements(value -> 'team') AS team(role)
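A slightly fuller sketch, assuming a hypothetical table docs with a jsonb column value; coalesce turns the entries without a names array (Data Modeler, Engineer) into 0, and ->> returns the title as plain text:
-- Hypothetical table/column names; one output row per team entry, with 0 for a missing "names" array.
SELECT
  role ->> 'title' AS team_role,
  coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM docs,
     jsonb_array_elements(docs.value -> 'team') AS team(role);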

Update nested fields in MongoDB

I have a JSON document for a vendor:
{
"id": 1,
"contact": {
"address": "abc",
"phone": "123456"
}
}
If the update is {"contact": {"address":"xyz"}}, the address should be updated to xyz, and phone is still there, i.e. not deleted.
I know $set and dot notation (https://docs.mongodb.org/manual/reference/operator/update/set/), for example, {$set: {"contact.address":"xyz"}}, can do this.
However, what I am trying to do is come up with a generic solution, in the sense that it can be applied to models nested more than two levels deep. In other words, given the update in JSON form, the solution should ONLY update the fields specified in the update and leave the other fields intact.

Using N1QL with document keys

I'm fairly new to Couchbase and have tried to find the answer to a particular query I'm trying to create, with not much success so far.
I've debated between using a view or N1QL for this particular case and settled on N1QL, but I haven't managed to get it to work, so maybe a view is better after all.
Basically I have the document key (Group_1) for the following document:
Group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
I also have 'store' documents whose keys are listed in this document's StoreList (Store_2, Store_4, Store_6, and they have a storeID value of 2, 4 and 6). I basically want to obtain all three of the listed documents.
What I do have working is obtaining this document by its id:
var result = CouchbaseManager.Bucket.Get<dynamic>(couchbaseKey);
mygroup = JsonConvert.DeserializeObject<Group> (result.ToString());
I can then loop through its StoreList and obtain all of its stores in the same manner, but I don't need anything else from the group; all I want are the stores, and I would have preferred to do this in a single operation.
Does anyone know how to run a N1QL query directly against a specified document value?
Something like this (totally imaginary, non-working code; I'm just trying to clearly illustrate what I'm after):
SELECT * FROM mycouchbase WHERE documentkey IN
Group_1.StoreList
Thanks
UPDATE:
So Nic's solution does not work;
This is the closest I get to what I need atm:
SELECT b from DataBoard c USE KEYS ["Group_X"] UNNEST c.StoreList b;
"results":[{"b":2},{"b":4},{"b":6}]
This returns the list of IDs of the stores I want for any given group (Group_X). I haven't found a way to get the full store documents instead of just the IDs in the same statement yet.
Once I have, I'll post the full solution as well as all the speed bumps I've encountered in the process.
I apologize if I've misunderstood your question, but I'm going to give it my best shot. If I have misunderstood, please let me know and we'll work from there.
Let's use the following scenario:
group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
store_2
{
"cbType": "store",
"ID": 2,
"name": "some store name"
}
store_4
{
"cbType": "store",
"ID": 4,
"name": "another store name"
}
store_6
{
"cbType": "store",
"ID": 6,
"name": "last store name"
}
Now let's say you want to query the stores from a particular group (group_1), but include no other information about the group. You essentially want to use N1QL's UNNEST and JOIN operators.
This might leave you with a query like so:
SELECT
stores.name
FROM `bucket-name-here` AS groups
UNNEST groups.StoreList AS groupstore
JOIN `bucket-name-here` AS stores ON KEYS ("store_" || groupstore.ID)
WHERE
META(groups).id = 'group_1';
A few assumptions are made here: both types of documents exist in the same bucket, and you only want to select from group_1. Of course, you could use a LIKE and switch the group id to a percent wildcard.
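For illustration, a hedged sketch of that wildcard variant; TO_STRING is used here (as in the answer further down) because StoreList holds plain numbers rather than objects:
-- Same assumed bucket layout as above; matches every group whose key starts with "group_".
SELECT
  stores.name
FROM `bucket-name-here` AS groups
UNNEST groups.StoreList AS groupstore
JOIN `bucket-name-here` AS stores ON KEYS ("store_" || TO_STRING(groupstore))
WHERE
  META(groups).id LIKE 'group_%';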
Let me know if something doesn't make sense.
Best,
Try this query:
select Name
from bucketname a join bucketname b ON KEYS a.StoreList
where Name="Group Atlas 3"
Based on your update, you can do the following:
SELECT b, s
FROM DataBoard c USE KEYS ["Group_X"]
UNNEST c.StoreList b
JOIN store_bucket s ON KEYS "Store_" || TO_STRING(b);
I have a similar requirement and I got what I needed with a query like this:
SELECT store
FROM `bucket-name-here` group
JOIN `bucket-name-here` store ON KEYS group.StoreList
WHERE group.cbType = 'group'
AND group.ID = 1