I'm fairly new to Couchbase and have tried to find the answer to a particular query I'm trying to create, with not much success so far.
I've debated between using a view or N1QL for this particular case and settled on N1QL, but I haven't managed to get it to work, so maybe a view is better after all.
Basically I have the document key (Group_1) for the following document:
Group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
I also have 'store' documents whose keys are listed in this document's StoreList (Store_2, Store_4, Store_6; they have storeID values of 2, 4 and 6 respectively). I basically want to obtain all 3 documents listed.
What I do have working is obtaining this document by its key:
var result = CouchbaseManager.Bucket.Get<dynamic>(couchbaseKey);
mygroup = JsonConvert.DeserializeObject<Group>(result.ToString());
I can then loop through its StoreList and obtain all its stores in the same manner, but I don't need anything else from the group; all I want are the stores, and I would have preferred to do this in a single operation.
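For reference, that loop looks something like this (Store here is my own POCO mirroring the store documents, so the names are just illustrative):
// One Get per store key; it works, but it is N round trips instead of one query.
var stores = new List<Store>();
foreach (var storeId in mygroup.StoreList)
{
    var storeResult = CouchbaseManager.Bucket.Get<dynamic>("Store_" + storeId);
    stores.Add(JsonConvert.DeserializeObject<Store>(storeResult.ToString()));
}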
Does anyone know how to run a N1QL query directly against a specified document's values?
Something like this (totally imaginary, non-working code; I'm just trying to clearly illustrate what I'm trying to get at):
SELECT * FROM mycouchbase WHERE documentkey IN
Group_1.StoreList
Thanks
UPDATE:
So Nic's solution does not work.
This is the closest I've got to what I need at the moment:
SELECT b from DataBoard c USE KEYS ["Group_X"] UNNEST c.StoreList b;
"results":[{"b":2},{"b":4},{"b":6}]
This returns the list of IDs of the stores I want for any given group (Group_X). I haven't found a way to get the full store documents instead of just the IDs in the same statement yet.
Once I have, I'll post the full solution as well as all the speed bumps I've encountered in the process.
I apologize if I have a misunderstanding of your question, but I'm going to give it my best shot. If I misunderstood, please let me know and we'll work from there.
Let's use the following scenario:
group_1
{
"cbType": "group",
"ID": 1,
"Name": "Group Atlas 3",
"StoreList": [
2,
4,
6
]
}
store_2
{
"cbType": "store",
"ID": 2,
"name": "some store name"
}
store_4
{
"cbType": "store",
"ID": 4,
"name": "another store name"
}
store_6
{
"cbType": "store",
"ID": 6,
"name": "last store name"
}
Now let's say you want to query the stores from a particular group (group_1), but include no other information about the group. You essentially want to use N1QL's UNNEST and JOIN operators.
This might leave you with a query like so:
SELECT
stores.name
FROM `bucket-name-here` AS groups
UNNEST groups.StoreList AS groupstore
/* each unnested groupstore is a bare numeric ID (2, 4, 6), so convert it to build the key */
JOIN `bucket-name-here` AS stores ON KEYS ("store_" || TO_STRING(groupstore))
WHERE
META(groups).id = 'group_1';
A few assumptions are made here: both document types exist in the same bucket, and you only want to select from group_1. Of course, you could use LIKE and switch the group id to a percent wildcard, as shown below.
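For example, to match every group rather than just group_1, that last line might become:
WHERE META(groups).id LIKE 'group_%';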
Let me know if something doesn't make sense.
Best,
Try this query (ON KEYS needs document key strings, so the numeric StoreList entries have to be converted):
SELECT b.Name
FROM bucketname a
JOIN bucketname b ON KEYS ARRAY "Store_" || TO_STRING(s) FOR s IN a.StoreList END
WHERE a.Name = "Group Atlas 3"
Based on your update, you can do the following:
SELECT b, s
FROM DataBoard c USE KEYS ["Group_X"]
UNNEST c.StoreList b
JOIN store_bucket s ON KEYS "Store_" || TO_STRING(b);
I have a similar requirement and I got what I needed with a query like this (group is a reserved word in N1QL, so the group alias is shortened to g):
SELECT store
FROM `bucket-name-here` g
JOIN `bucket-name-here` store ON KEYS g.StoreList
WHERE g.cbType = 'group'
AND g.ID = 1
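One caveat: ON KEYS expects document key strings. If StoreList holds bare numeric IDs, as in the group document shown earlier, rather than full keys, you would need to build the keys first, along these lines:
SELECT store
FROM `bucket-name-here` g
JOIN `bucket-name-here` store ON KEYS ARRAY "store_" || TO_STRING(id) FOR id IN g.StoreList END
WHERE g.cbType = 'group'
AND g.ID = 1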
I have a Postgres statement that extracts/iterates over a JSON blob in the value column of a table. I can get a count one level deep using the query below, but I can't count any deeper. I was using:
select jsonb_array_length(value -> 'team') as team_count
This returns the proper count, but I can't seem to leverage it to count the names under each team.
In a perfect world, I would like my results to return 4 lines (a title and the matching count of names), like this:
Product Owner, 2
Technical Product Manager, 2
Data Modeler, 0
Engineer, 0
How would I go about amending this query to give me the count of names under each team? I tried all sorts of stuff but nothing got me close.
Sample JSON is below.
"team":[
{
"title":"Product Owner",
"names":[
"John Smith",
"Jane Doe"
]
},
{
"title":"Technical Project Manager",
"names":[
"Fred Flintstone",
"Barney Rubble"
]
},
{
"title":"Data Modeler"
},
{
"title":"Engineer"
}
You seem to be looking for
SELECT
role ->> 'title' AS team_role,
coalesce(jsonb_array_length(role -> 'names'), 0) AS member_count
FROM your_table, jsonb_array_elements(value -> 'team') AS team(role)
Here your_table stands for whichever table holds the value column; ->> returns the title as text, and coalesce turns the NULL you would otherwise get for a missing names array into the 0 you want.
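As a self-contained check, with your sample trimmed down and t standing in for your table:
with t(value) as (
  select '{"team": [{"title": "Product Owner", "names": ["John Smith", "Jane Doe"]}, {"title": "Data Modeler"}]}'::jsonb
)
select role ->> 'title' as team_role,
       coalesce(jsonb_array_length(role -> 'names'), 0) as member_count
from t, jsonb_array_elements(value -> 'team') as team(role);
This yields "Product Owner, 2" and "Data Modeler, 0", matching the shape you described.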
How can I control the order of the fields in the JSON that the query returns? This is in Couchbase with N1QL.
example:
select rol, count(*) as cantidad from PPS where type='Usuario'
group by rol
result
[
{
"cantidad": 2,
"rol": "8847cda1-cf52-4af0-880c-5f7c5a281348"
},
{
"cantidad": 2,
"rol": "ef35059f-5953-4da7-b5d5-ee0f9a1c893f"
}
]
I need rol first
I'm sorry, but what you're asking for isn't possible. Within each object the fields are returned in sorted order by name. You could rename the fields to something like "1_rol" and "2_cantidad", but that's the best that N1QL can do.
You might also alias the attributes in the select so that they auto order the way you want:
"select rol as `1_rol`, cantidad as `2_cantidad`..."
Or, order them into an array:
"select [rol, cantidad] as _res..."
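Applied to the query above, the array form would look something like:
select [rol, count(*)] as _res
from PPS where type='Usuario'
group by rol;
which returns each pair as a two-element array, in that order.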
I am new to Couchbase and I have been going through Couchbase documents and other online resources for a while, but I couldn't get my query working. Below is the data structure and my query:
Table1:
{
"jobId" : "101",
"jobName" : "abcd",
"jobGroup" : "groupa",
"created" : " "2018-05-06T19:13:43.318Z",
"region" : "dev"
},
{
"jobId" : "102",
"jobName" : "abcd2",
"jobGroup" : "groupa",
"created" : " "2018-05-06T22:13:43.318Z",
"region" : "dev"
},
{
"jobId" : "103",
"jobName" : "abcd3",
"jobGroup" : "groupb",
"created" : " "2018-05-05T19:11:43.318Z",
"region" : "test"
}
I need to get the jobId which has the latest job information (max on created timestamp) for a given jobGroup and region (group by jobGroup and region).
My SQL-style query using a self-join on the document key doesn't get me there.
Query:
/* Idea is to pull out the job which was executed latest for all possible
groups and regions, and print the details of that particular job. */
select * from (select max(DATE_FORMAT_STR(j.created,'1111-11-11T00:00:00+00:00')) as latest, j.jobGroup, j.region from table1 j
group by jobGroup, region) as viewtable
join table1 t
on keys meta(t).id
where viewtable.latest in t.created and t.jobGroup = viewtable.jobGroup and
viewtable.region = t.region
Error Result: No result displayed
Desired result :
{
"jobId" : "102",
"jobName":"abcd2",
"jobGroup":"groupa",
"latest" :"2018-05-06T22:13:43.318Z",
"region":"dev"
},
{
"jobId" : "103",
"jobName" : "abcd3",
"jobGroup" : "groupb",
"created" : " "2018-05-05T19:11:43.318Z",
"region" : "test"
}
If I understand your query correctly, this can be answered using 'group by' and no join. I tried entering your sample data and the following query gives the correct result:
select max([created,d])[1] max_for_group_region
from default d
group by jobGroup, region;
How does it work? It uses 'group by' to group documents by jobGroup and region, then creates a two-element array holding, for every document in the group:
the 'created' timestamp field
the document where the timestamp came from
It then applies the MAX function to the set of two-element arrays. The max of a set of arrays looks for the maximum value in the first array position, and if there's a tie it looks at the second position, and so on. In this case we are getting the two-element array with the max timestamp.
Now we have an array [ timestamp, document ], so we apply [1] to extract just the document.
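If you only need specific fields rather than the whole document, you can subscript and then pick a path, along the lines of:
select max([d.created, d])[1].jobId as jobId,
       max([d.created, d])[1].created as latest
from default d
group by d.jobGroup, d.region;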
I'm seeing some inconsistencies and invalid JSON in your examples, so I'm going to do the best I can. First off, I'm using Couchbase Server 5.5 which provides the new ANSI JOIN syntax. There might be a way to do this in an earlier version of Couchbase Server.
Next, I created an index on the created field: CREATE INDEX ix_created ON bucketname(created).
Then, I use a subquery to get the latest date, aggregated by jobGroup and region. I then join the latest date from this query to the entire bucket and select the fields that (I think) you want in your desired result:
SELECT k.jobId, k.jobName, k.jobGroup, k.created AS latest, k.region
FROM (
SELECT j.jobGroup, j.region, MAX(j.created) as latestDate
FROM so j
GROUP BY j.jobGroup, j.region
) dt
LEFT JOIN so k ON k.created = dt.latestDate;
Problems with this approach:
If two documents have the exact same date, this isn't a reliable way to determine the latest. You can add a LIMIT 1 to the subquery, which would just pick one arbitrarily, or you could ORDER BY whatever your preference is.
Subquery performance: I don't know how large your data set is, but this could be pretty slow.
Requires Couchbase Server 5.5, which is currently in beta.
If you are using a different version of Couchbase Server, you may want to consider asking in the Couchbase N1QL Forums for a more expert answer.
I am trying to learn MongoDB. Suppose there are two tables and they are related. For example, like this:
1st table has
First name- Fred, last name- Zhang, age- 20, id- s1234
2nd table has
id- s1234, course- COSC2406, semester- 1
id- s1234, course- COSC1127, semester- 1
id- s1234, course- COSC2110, semester- 1
How do I insert this data into MongoDB? I wrote it like this, but I'm not sure whether it is correct:
db.users.insert({
given_name: 'Fred',
family_name: 'Zhang',
Age: 20,
student_number: 's1234',
Course: ['COSC2406', 'COSC1127', 'COSC2110'],
Semester: 1
});
Thank you in advance
This would be fine, assuming that what you want to model treats the "student_number" and the "Semester" as what is basically a unique identifier for the entries. But there is a way to do this without accumulating the array contents in code.
You can make use of the upsert functionality in the .update() method, with the help of a few other operators in the statement.
I am going to assume you are doing this inside a loop of sorts, so every value on the right-hand side is actually a variable:
db.users.update(
{
"student_number": student_number,
"Semester": semester
},
{
"$setOnInsert": {
"given_name": given_name,
"family_name": family_name,
"Age": age
},
"$addToSet": { "courses": course }
},
{ "upsert": true }
)
What this does is an "upsert" operation: it first looks for a document in your collection that matches the query criteria given, in this case a "student_number" with the current "Semester" value.
When that match is found, the document is merely "updated". So what is being done here is using the $addToSet operator in order to "update" only unique values into the "courses" array element. It makes sense for courses to be unique, but if that is not your case you can simply use the $push operator instead (see the one-line swap below). Either way, that is the operation you want to happen every time, whether the document was "matched" or not.
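For reference, swapping in $push would change just that one line of the statement:
"$push": { "courses": course }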
In the case where no "matching" document is found, a new document will then be inserted into the collection. This is where the $setOnInsert operator comes in.
So the point of that section is that it will only be called when a new document is created as there is no need to update those fields with the same information every time. In addition to this, the fields you specified in the query criteria have explicit values, so the behavior of the "upsert" is to automatically create those fields with those values in the newly created document.
After a new document is created, then the next "upsert" statement that uses the same criteria will of course only "update" the now existing document, and as such only your new course information would be added.
Overall, working like this allows you to "pre-join" the two tables from your source with an appropriate query (sketched below). Then you just loop over the results, without writing code to group the correct entries together, and simply let MongoDB do the accumulation work for you.
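On the relational side, that "pre-join" is just an ordinary query; hypothetically, with the two source tables called students and courses and joined on their shared id:
SELECT s.id, s.first_name, s.last_name, s.age, c.course, c.semester
FROM students s
JOIN courses c ON c.id = s.id
ORDER BY s.id, c.semester;
Each returned row then feeds one iteration of the upsert loop above.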
Of course you can always just write the code to do this yourself and it would result in fewer "trips" to the database in order to insert your already accumulated records if that would suit your needs.
As a final note, though it does require some additional complexity, you can get better performance out of the operation by using the newly introduced "batch updates" functionality. For this, your MongoDB server version will need to be 2.6 or higher. It is one way of reducing the logic while keeping the number of actual "over the wire" writes to the database down.
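In shell terms, a minimal sketch of that might look like this (values hard-coded here where your loop would supply variables):
// Queue several upserts and send them in one round trip (MongoDB 2.6+).
var bulk = db.users.initializeUnorderedBulkOp();
bulk.find({ "student_number": "s1234", "Semester": 1 }).upsert().updateOne({
    "$setOnInsert": { "given_name": "Fred", "family_name": "Zhang", "Age": 20 },
    "$addToSet": { "courses": "COSC2406" }
});
bulk.find({ "student_number": "s1234", "Semester": 1 }).upsert().updateOne({
    "$addToSet": { "courses": "COSC1127" }
});
bulk.execute();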
You can either have two separate collections, one with student details and the other with courses, linked by "id" (a sketch of this appears after the example below).
Or you can have a single document with the courses as an inner array of documents, as below:
{
"FirstName": "Fred",
"LastName": "Zhang",
"age": 20,
"id": "s1234",
"Courses": [
{
"courseId": "COSC2406",
"semester": 1
},
{
"courseId": "COSC1127",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 1
},
{
"courseId": "COSC2110",
"semester": 2
}
]
}
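A minimal sketch of the first option (the collection names are just examples):
// Student details in one collection...
db.students.insert({
    "id": "s1234", "FirstName": "Fred", "LastName": "Zhang", "age": 20
});
// ...and one document per course in another, linked by "id".
db.courses.insert([
    { "id": "s1234", "courseId": "COSC2406", "semester": 1 },
    { "id": "s1234", "courseId": "COSC1127", "semester": 1 },
    { "id": "s1234", "courseId": "COSC2110", "semester": 1 }
]);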
I am trying to get my head around which direction to even start with the following.
Imagine a dynamic form (JSON) that I store in SQL Server 2016+. So far, I have seen/tried a couple of dynamic queries to take the dynamic JSON and flatten it out as columns.
Given the "dynamic" nature, it is hard to "store" that flatten out data. I have been looking at temporary/temporal/memory tables to store that dynamic flattened data for a "relatively short period" of time (say an hour or two).
I have also been asked if it is possible to use the dynamic JSON data in building a cube within Analysis Services. Again, given the dynamic nature of this, would something like that even be possible?
I guess my question is two-fold:
Pointers to flatten out dynamic JSON within SQL Server
Is it possible to take dynamic JSON, flatten out to columns and somehow use within Analysis Services? i.e. ultimately to use within a cube?
Realise the above is a bit vague, but any pointers to get me going in the correct direction would be appreciated!
Many thanks.
Dynamically converting JSON into columns can get tricky, especially if you are NOT certain of the structure. That said, have you considered converting the JSON into a hierarchy via a Recursive CTE?
Example
declare @json varchar(max)='
[
{
"url": "https://www.google.com",
"image-url": "https://www.google.com/imghp",
"labels": [
{
"source": "Bob, Inc",
"name": "Whips",
"info": "Ouch"
},
{
"source": "Weezles of Oregon",
"name": "Chains",
"info": "Let me go"
}
],
"Fact": "Fictional"
}
]';
;with cte0 as (
Select *
,[Level]=1
,[Path]=convert(varchar(max),row_number() over(order by (select null)))
From OpenJSON(@json,'$')
Union All
Select R.*
,[Level]=p.[Level]+1
,[Path]=concat(P.[Path],'\',row_number() over(order by (select null)))
From cte0 p
Cross Apply OpenJSON(p.value,'$') R
Where P.[Type]>3
)
Select [Level]
,[Path]
,Title = replicate('|---',[Level]-1)+[Key]
,Item = [Key]
,Value = case when [type]<4 then Value else null end
From cte0
Order By [Path]
Returns the JSON flattened into rows, one per element, with Level, Path, an indented Title that shows the hierarchy, Item, and Value columns.