I have a Postgres statement that extracts/iterates over a JSON blob in the value column of a table. I can get a count one level deep using the query below, but I can't count any deeper. I was using:
select jsonb_array_length(value -> 'team') as team_count
This returns the proper count, but I can't seem to leverage it to count the names under each team.
In a perfect world I would like my results to return 4 lines like this (title and a matching count of names):
Product Owner, 2
Technical Project Manager, 2
Data Modeler, 0
Engineer, 0
How would I go about amending this query to give me the count of names under each team? I tried all sorts of stuff but nothing got me close.
Sample JSON is below.
"team": [
  {
    "title": "Product Owner",
    "names": [
      "John Smith",
      "Jane Doe"
    ]
  },
  {
    "title": "Technical Project Manager",
    "names": [
      "Fred Flintstone",
      "Barney Rubble"
    ]
  },
  {
    "title": "Data Modeler"
  },
  {
    "title": "Engineer"
  }
]
You seem to be looking for
SELECT
role -> 'title' AS team_role,
jsonb_array_length(role -> 'names') AS member_count
FROM jsonb_array_elements(value -> 'team') AS team(role)
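To make that fully runnable you still need the table itself in the FROM clause, and if you want 0 rather than NULL for the teams that have no names array (plus the title as plain text instead of a quoted JSON string), something along these lines should work; your_table is a placeholder for whatever table holds the value column:

SELECT
  role ->> 'title'                                 AS team_role,
  coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM your_table                                     -- placeholder table name
CROSS JOIN LATERAL jsonb_array_elements(value -> 'team') AS team(role);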
I've done some basic JSON parsing before in TSQL but am running into something a bit more complex.
The actual field within the JSON object I'm attempting to parse is an array with two objects in it.
For example:
{
"Channel":[],
"Account":[],
"OrderId": 4568,
"ParentAccount"null,
"Groups":[
{"Name":"List 1", "Include": false, "SalesDetails"[{
"Manufacturer":[], "DateRange":{"Start":"01/01/2021", "End:"12/31/2021"},
"State":"NC"}]
},
{"Name":"List 2", "Include": true, "SalesDetails"[{
"Manufacturer":[], "DateRange":{"Start":"01/01/2022", "End:"01/10/2022"},
"State":"SC"}]
}
],
"IsCustomer":true,
"ReferenceNumber": 554673
}
What I'd like to do within SQL is parse out the account, order id, and then the groups. Does anyone know how to parse out the multiple objects within the groups array? That's the part I haven't figured out.
My goal is to have a report where each order is on a single row.
order    groups object 1 name    groups object 2 name
4568     list 1                  list 2
I'm also trying to get the other values besides the names, such as Include, and to have the SalesDetails be its own column.
So far the following has gotten me closest to what I'm after:
SELECT
JSON_QUERY(data, '$.account') AS 'Account',
JSON_QUERY(data, '$.orderid') AS 'Order',
JSON_QUERY(data, '$.groups') AS 'Group_Detail'
FROM table
I haven't gotten the info within the groups field parsed out into its own individual columns though.
Assuming I correctly fixed the serialization issues, maybe something like this:
declare @json nvarchar(max)=N'{
"Channel":[],
"Account":[],
"OrderId": 4568,
"ParentAccount":null,
"Groups":[
{"Name":"List 1", "Include": false, "SalesDetails":[{
"Manufacturer":[], "DateRange":{"Start":"01/01/2021", "End":"12/31/2021"},
"State":"NC"}]
},
{"Name":"List 2", "Include": true, "SalesDetails":[{
"Manufacturer":[], "DateRange":{"Start":"01/01/2022", "End":"01/10/2022"},
"State":"SC"}]
}
],
"IsCustomer":true,
"ReferenceNumber": 554673
}';
select OrderId,
grp1.[Name] [groups object 1 name],
grp2.[Name] [groups object 2 name]
from openjson(@json) with (OrderId int,
Groups nvarchar(max) as json) oj
cross apply openjson(oj.Groups, '$[0]') with ([Name] nvarchar(4000)) grp1
cross apply openjson(oj.Groups, '$[1]') with ([Name] nvarchar(4000)) grp2;
OrderId    groups object 1 name    groups object 2 name
4568       List 1                  List 2
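Since you also want Include and the SalesDetails as their own columns, the same pattern extends by listing more columns in each WITH clause; a sketch building on the query above (untested, column names taken from your sample JSON, SalesDetails kept as raw JSON):

select OrderId,
       grp1.[Name]         [groups object 1 name],
       grp1.[Include]      [groups object 1 include],
       grp1.[SalesDetails] [groups object 1 salesdetails],
       grp2.[Name]         [groups object 2 name],
       grp2.[Include]      [groups object 2 include],
       grp2.[SalesDetails] [groups object 2 salesdetails]
from openjson(@json) with (OrderId int,
                           Groups nvarchar(max) as json) oj
cross apply openjson(oj.Groups, '$[0]')
            with ([Name] nvarchar(4000),
                  [Include] bit,
                  [SalesDetails] nvarchar(max) as json) grp1
cross apply openjson(oj.Groups, '$[1]')
            with ([Name] nvarchar(4000),
                  [Include] bit,
                  [SalesDetails] nvarchar(max) as json) grp2;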
Here is my json data:
{
"TransactionId": "1",
"PersonApplicant": [
{
"PersonalId": "1005",
"ApplicantPhone": [
{
"PhoneType": "LANDLINE",
"PhoneNumber": "8085063644",
"IsPrimaryPhone": true
}
]
},
{
"PersonalId": "1006",
"ApplicantPhone": [
{
"PhoneType": "LANDLINE",
"PhoneNumber": "9643645364",
"IsPrimaryPhone": true
},
{
"PhoneType": "HOME",
"PhoneNumber": "987654321",
"IsPrimaryPhone": false
}
]
}
]
}
I want to get the phone numbers of the people who have PhoneType LANDLINE.
How do I do that?
I tried this approach:
#find phoneNumber when phoneType='LANDLINE'
SELECT
@path_to_name := json_unquote(json_search(applicationData, 'one', 'LANDLINE')) AS path_to_name,
@path_to_parent := trim(TRAILING '.PhoneType' from @path_to_name) AS path_to_parent,
@event_object := json_extract(applicationData, @path_to_parent) as event_object,
json_unquote(json_extract(#event_object, '$.PhoneNumber')) as PhoneNumber
FROM application;
The issue with this is that I am using 'one', so I only get a single result, but my JSON has 2 people whose type is LANDLINE.
Using json_search with 'all' I get an array of path values, and I can't work out how to extract those array values in a way that lets me use each path.
SELECT
@path_to_name := json_unquote(json_search(applicationData, 'all', 'LANDLINE')) from application;
result:
As you can see, in the 3rd and 4th rows I am getting two values as an array.
How do I store this data to get the appropriate result?
I also tried one more query but was not able to retrieve results for an array of data.
I cannot use a stored procedure and I have to use MySQL Workbench.
Please note that I am a fresher, so I don't know how to approach this for more complex queries where I may have to retrieve the id of a person whose type is LANDLINE (multiple people in a single array).
SELECT test.id, jsontable.*
FROM test
CROSS JOIN JSON_TABLE(test.data,
'$.PersonApplicant[*]'
COLUMNS ( PersonalId INT PATH '$.PersonalId',
PhoneType VARCHAR(255) PATH '$.ApplicantPhone[0].PhoneType',
PhoneNumber VARCHAR(255) PATH '$.ApplicantPhone[0].PhoneNumber')) jsontable
WHERE jsontable.PhoneType = 'LANDLINE';
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=4089207ccfba5068a48e06b52865e759
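Note that the query above only reads the first element of ApplicantPhone. If an applicant can have several phones and you want every LANDLINE row, a NESTED PATH clause can expand the inner array as well; a sketch along the same lines (same test table and data column as in the fiddle):

-- Expands every phone entry per applicant, then filters on the type.
SELECT test.id, jsontable.PersonalId, jsontable.PhoneType, jsontable.PhoneNumber
FROM test
CROSS JOIN JSON_TABLE(test.data,
       '$.PersonApplicant[*]'
       COLUMNS ( PersonalId INT PATH '$.PersonalId',
                 NESTED PATH '$.ApplicantPhone[*]'
                 COLUMNS ( PhoneType   VARCHAR(255) PATH '$.PhoneType',
                           PhoneNumber VARCHAR(255) PATH '$.PhoneNumber'))) jsontable
WHERE jsontable.PhoneType = 'LANDLINE';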
I have a report in JSON format stored in a field in a PostgreSQL database table.
Say the (simplified) table format is:
Column | Type
-------------------+----------------------------
id | integer
element_id | character varying(256)
report | json
and the structure of the data in the reports is like this
{
  "section1": {
    "test1": {
      "outcome": "nominal",
      "results": {
        "value1": 34.0,
        "value2": 56.0
      }
    },
    "test2": {
      "outcome": "warning",
      "results": {
        "avg": 4.5,
        "std": 21.0
      }
    },
    ...
  },
  "sectionN": {
    ...
  }
}
That is, there are N keys at the first level (the sections), each of them being an object with a set of keys (the tests), each test having an outcome and a variable set of results in the form of (key, value) pairs.
I need to do filtering based on internal JSON keys. More specifically, in this example, I want to know if it is possible, using SQL alone, to obtain the elements that have, for example, the std value in their results above a certain threshold, say 10. I may even know that std is in test2, but I do not know a priori in which section. With this filter (test2.std > 10), for example, the record with the sample data shown above would match, since the std variable in the test2 test equals 21 (> 10).
Another, simpler filter could be to request all the records for which test2.outcome is not nominal.
One way is jsonb_each, like:
select section.key as section_key
     , test.key as test_key
from t1
cross join
     jsonb_each(t1.col1) section
cross join
     jsonb_each(section.value) test
where (test.value->'results'->>'std')::numeric > 10
Example at SQL Fiddle.
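For the second, simpler filter (all records whose test2.outcome is not nominal) the same expansion works, stopping at the section level; a sketch with the same generic table and column names as above:

select section.key as section_key
from t1
cross join
     jsonb_each(t1.col1) section
where section.value -> 'test2' ->> 'outcome' <> 'nominal'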
I'm fairly new to Couchbase and have tried to find the answer to a particular query I'm trying to create, with not much success so far.
I've debated between using a view or N1QL for this particular case and settled on N1QL, but I haven't managed to get it to work, so maybe a view is better after all.
Basically I have the document key (Group_1) for the following document:
Group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
I also have 'store' documents; their keys are listed in this document's StoreList (Store_2, Store_4, Store_6, and they have a storeID value of 2, 4 and 6). I basically want to obtain all 3 documents listed.
What I do have that works is I obtain this document with its id by doing:
var result = CouchbaseManager.Bucket.Get<dynamic>(couchbaseKey);
mygroup = JsonConvert.DeserializeObject<Group> (result.ToString());
I can then loop through its StoreList and obtain all its stores in the same manner, but I don't need anything else from the group; all I want are the stores, and I would have preferred to do this in a single operation.
Does anyone know how to run a N1QL query directly against a specified document value?
Something like this (totally imaginary, non-working code; I'm just trying to clearly illustrate what I'm trying to get at):
SELECT * FROM mycouchbase WHERE documentkey IN
Group_1.StoreList
Thanks
UPDATE:
So Nic's solution does not work.
This is the closest I've gotten to what I need at the moment:
SELECT b from DataBoard c USE KEYS ["Group_X"] UNNEST c.StoreList b;
"results":[{"b":2},{"b":4},{"b":6}]
This returns the list of IDs of the Stores I want for any given group (Group_X). I haven't found a way to get the full Stores instead of just the IDs in the same statement yet.
Once I have, I'll post the full solution as well as all the speed bumps I've encountered in the process.
I apologize if I have a misunderstanding of your question, but I'm going to give it my best shot. If I misunderstood, please let me know and we'll work from there.
Let's use the following scenario:
group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
store_2
{
"cbType": "store",
"ID": 2,
"name": "some store name"
}
store_4
{
"cbType": "store",
"ID": 4,
"name": "another store name"
}
store_6
{
"cbType": "store",
"ID": 6,
"name": "last store name"
}
Now let's say you want to query the stores from a particular group (group_1), but include no other information about the group. You essentially want to use N1QL's UNNEST and JOIN operators.
This might leave you with a query like so:
SELECT
stores.name
FROM `bucket-name-here` AS groups
UNNEST groups.StoreList AS groupstore
JOIN `bucket-name-here` AS stores ON KEYS ("store_" || groupstore.ID)
WHERE
META(groups).id = 'group_1';
A few assumptions are made in this. Both your documents exist in the same bucket and you only want to select from group_1. Of course you could use a LIKE and switch the group id to a percent wildcard.
Let me know if something doesn't make sense.
Best,
Try this query:
select Name
from bucketname a join bucketname b ON KEYS a.StoreList
where Name="Group Atlas 3"
Based on your update, you can do the following:
SELECT b, s
FROM DataBoard c USE KEYS ["Group_X"]
UNNEST c.StoreList b
JOIN store_bucket s ON KEYS "Store_" || TO_STRING(b);
I have a similar requirement and I got what I needed with a query like this:
SELECT store
FROM `bucket-name-here` group
JOIN `bucket-name-here` store ON KEYS group.StoreList
WHERE group.cbType = 'group'
AND group.ID = 1
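One caveat for the last two queries: in the original group_1 document, StoreList holds bare numeric IDs (2, 4, 6) rather than full document keys, so the keys still have to be built before the lookup join, as in the TO_STRING answer above. A rough, untested sketch combining the two (bucket name is a placeholder):

SELECT s
FROM `bucket-name-here` AS grp USE KEYS ["Group_1"]
UNNEST grp.StoreList AS store_id
JOIN `bucket-name-here` AS s ON KEYS ("Store_" || TO_STRING(store_id));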
I am trying to learn MongoDB. Suppose there are two tables and they are related. For example like this:
1st table has
First name- Fred, last name- Zhang, age- 20, id- s1234
2nd table has
id- s1234, course- COSC2406, semester- 1
id- s1234, course- COSC1127, semester- 1
id- s1234, course- COSC2110, semester- 1
How do I insert this data into MongoDB? I wrote it like this, but I'm not sure whether it is correct or not:
db.users.insert({
given_name: 'Fred',
family_name: 'Zhang',
Age: 20,
student_number: 's1234',
Course: ['COSC2406', 'COSC1127', 'COSC2110'],
Semester: 1
});
Thank you in advance
This assumes that what you want to model has the "student_number" and the "Semester" as what is basically a unique identifier for the entries. But there is a way to do this without accumulating the array contents in code.
You can make use of the upsert functionality in the .update() method, with the help of a few other operators in the statement.
I am going to assume you are doing this inside a loop of sorts, so everything on the right-hand side is actually a variable:
db.users.update(
{
"student_number": student_number,
"Semester": semester
},
{
"$setOnInsert": {
"given_name": given_name,
"family_name": family_name,
"Age": age
},
"$addToSet": { "courses": course }
},
{ "upsert": true }
)
What this does in an "upsert" operation is first looks for a document that may exist in your collection that matches the query criteria given. In this case a "student_number" with the current "Semester" value.
When that match is found, the document is merely "updated". What is being done here is using the $addToSet operator to "update" only unique values into the "courses" array element. It seems sensible to keep courses unique, but if that is not your case then you can simply use the $push operator instead. Either way, that is the operation you want to happen every time, whether the document was "matched" or not.
In the case where no "matching" document is found, a new document will then be inserted into the collection. This is where the $setOnInsert operator comes in.
So the point of that section is that it will only be called when a new document is created as there is no need to update those fields with the same information every time. In addition to this, the fields you specified in the query criteria have explicit values, so the behavior of the "upsert" is to automatically create those fields with those values in the newly created document.
After a new document is created, then the next "upsert" statement that uses the same criteria will of course only "update" the now existing document, and as such only your new course information would be added.
Overall working like this allows you to "pre-join" the two tables from your source with an appropriate query. Then you are just looping the results without needing to write code for trying to group the correct entries together and simply letting MongoDB do the accumulation work for you.
Of course you can always just write the code to do this yourself and it would result in fewer "trips" to the database in order to insert your already accumulated records if that would suit your needs.
As a final note, though it does require some additional complexity, you can get better performance out of the operation by using the newly introduced "batch updates" functionality. For this your MongoDB server version will need to be 2.6 or higher. But that is one way of reducing the logic while keeping fewer actual "over the wire" writes to the database.
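If it helps, a rough sketch of that batched variant in the 2.6+ shell might look like the following; rows stands in for whatever pre-joined source you are looping over, and the field variables are the same ones used in the update above:

// Sketch only: the same upsert, queued through the Bulk API so many
// operations travel to the server in far fewer round trips.
var bulk = db.users.initializeOrderedBulkOp();

rows.forEach(function(row) {
    bulk.find({
        "student_number": row.student_number,
        "Semester": row.semester
    }).upsert().updateOne({
        "$setOnInsert": {
            "given_name": row.given_name,
            "family_name": row.family_name,
            "Age": row.age
        },
        "$addToSet": { "courses": row.course }
    });
});

bulk.execute();   // sends the queued operations in batches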
You can either have two separate collections, one with the student details and the other with the courses, and link them with "id".
Or you can have a single document with the courses as inner documents in the form of an array, as below:
{
"FirstName": "Fred",
"LastName": "Zhang",
"age": 20,
"id": "s1234",
"Courses": [
{
"courseId": "COSC2406",
"semester": 1
},
{
"courseId": "COSC1127",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 2
}
]
}
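For completeness, inserting and reading that embedded shape from the shell could look roughly like this; the collection name students is an assumption:

// Insert one document per student, courses embedded as an array.
db.students.insert({
    "FirstName": "Fred",
    "LastName": "Zhang",
    "age": 20,
    "id": "s1234",
    "Courses": [
        { "courseId": "COSC2406", "semester": 1 },
        { "courseId": "COSC1127", "semester": 1 },
        { "courseId": "COSC2110", "semester": 1 }
    ]
});

// Read the courses back for a given student.
db.students.find({ "id": "s1234" }, { "Courses": 1, "_id": 0 });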