MongoDB: how to replace GROUP_CONCAT in MySQL - mysql

This is the mongo shell query:
db.product.aggregate([
  {
    $lookup: {
      from: "orders",
      let: { id: "$_id", multiId: "$multi_id" },
      as: "orders1",
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $in: [ { $substr: ["$pid", 0, -1] }, { $split: ["$$multiId", ","] } ] }
              ]
            }
          }
        }
      ]
    }
  },
  {
    $project: {
      price: 1,
      orders: 1,
      multi_id: 1,
      orders1: 1
    }
  }
])
and the result is:
but the result I hope for is different: how can I convert the array "orders" to a String?
This is the result that I want.
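For reference: MongoDB has no GROUP_CONCAT, but the same joining can be done with $reduce plus $concat over the array (or client-side after the query). Below is a plain-JavaScript sketch of the logic such a stage performs; the comma separator is taken from the pipeline above, and the inline comment shows one plausible aggregation-expression equivalent, not a tested stage from the question.

```javascript
// Plain-JS sketch of MongoDB's usual GROUP_CONCAT replacement:
// join an array of values into one separator-delimited string.
// Roughly equivalent aggregation expression (an assumption, for illustration):
//   { $reduce: { input: "$orders1.pid", initialValue: "",
//       in: { $concat: [ "$$value",
//             { $cond: [ { $eq: ["$$value", ""] }, "", "," ] },
//             "$$this" ] } } }
function groupConcat(values, separator = ",") {
  return values.reduce(
    (acc, v) => (acc === "" ? String(v) : acc + separator + String(v)),
    ""
  );
}

console.log(groupConcat(["1", "2", "3"])); // "1,2,3"
```

The $cond guard in the sketched expression avoids a leading separator, exactly like the `acc === ""` check in the JS version.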

Related

How to query a nested array in MongoDB

I am trying to query a document in my MongoDB collection.
Document:
{
  _id: '111',
  subEntities: [
    {
      subId: '999',
      dateOfStart: '2098-01-01',
      dateOfTermination: '2099-12-31'
    },
    {
      subId: '998',
      dateOfStart: '2088-01-01',
      dateOfTermination: '2089-12-31'
    }
  ]
}
My Query:
{
  "$and": [
    { "subEntities.dateOfStart": { "$lte": "2098-01-02" } },
    { "subEntities.dateOfTermination": { "$gte": "2099-12-30" } },
    { "subEntities.subId": { "$in": ["998"] } }
  ]
}
As you can see, I am trying to apply a date value and an ID to the subentities.
The date value should be between dateOfStart and dateOfTermination.
The query returns a match, although the date value only matches the first subentity and the ID query matches the second subentity.
How can I make it so that there is only one match when both queries match the same subentity?
Can I aggregate the subentities?
Thanks a lot!
When you query arrays, MongoDB by default "flattens" them, which means each condition of the query gets executed independently.
You want to be using $elemMatch, this allows you to query full objects from within an array, like so:
db.collection.find({
  subEntities: {
    $elemMatch: {
      dateOfStart: { "$lte": "2098-01-02" },
      dateOfTermination: { "$gte": "2099-12-30" },
      subId: { "$in": ["998"] }
    }
  }
})
Mongo Playground
If you want to filter dates between dateOfStart and dateOfTermination, you should invert the $gte and $lte conditions:
{
  "$and": [
    { "subEntities.dateOfStart": { "$gte": "2098-01-02" } },
    { "subEntities.dateOfTermination": { "$lte": "2099-12-30" } },
    { "subEntities.subId": { "$in": ["998"] } }
  ]
}
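The difference between the default ("flattened") matching and $elemMatch can be sketched in plain JavaScript. The data is the document from the question; the comparisons rely on string ordering, which is safe for these ISO-formatted dates.

```javascript
// The subentities from the question's document.
const subEntities = [
  { subId: "999", dateOfStart: "2098-01-01", dateOfTermination: "2099-12-31" },
  { subId: "998", dateOfStart: "2088-01-01", dateOfTermination: "2089-12-31" },
];

// The three conditions from the original query.
const conds = [
  (e) => e.dateOfStart <= "2098-01-02",
  (e) => e.dateOfTermination >= "2099-12-30",
  (e) => ["998"].includes(e.subId),
];

// Default behaviour: each condition may be satisfied by a DIFFERENT element.
const flattenedMatch = conds.every((c) => subEntities.some(c));

// $elemMatch behaviour: one single element must satisfy ALL conditions.
const elemMatch = subEntities.some((e) => conds.every((c) => c(e)));

console.log(flattenedMatch); // true  (the document matches, surprisingly)
console.log(elemMatch);      // false (no single subentity matches everything)
```

This is exactly why the original $and query returned a match while the $elemMatch version does not.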

Find the average value in MongoDB from JSON

In my MongoDB (exported from a JSON file) I have a database "dab" with documents structured like this:
id:"1"
datetime:"2020-05-08 5:09:56"
name:"namea"
lat:55.826738
lon:45.0423412
analysis:"[{"0":0.36965591924860347},{"5":0.10391287134268598},{"10":0.086884394..."
I'm using that db for spark analysis via MongoDB-Spark Connector.
My problem is the field "analysis": I need the average result of all the values from every interval ("0", "5", "10", ..., "1000"). So I have to sum 0.36965591924860347 + 0.10391287134268598 + 0.086884394 + ..., divide by the number of intervals (I have 200 intervals in every column), and finally multiply the result by 100.
My solution would be this one:
db.collection.aggregate([
  {
    $set: {
      analysis: {
        $map: {
          input: "$analysis",
          in: { $objectToArray: "$$this" }
        }
      }
    }
  },
  {
    $set: {
      analysis: {
        $map: {
          input: "$analysis",
          in: { $first: "$$this.v" }
        }
      }
    }
  },
  { $set: { average: { $multiply: [ { $avg: "$analysis" }, 100 ] } } }
])
Mongo playground
You can use $reduce on that array, sum the values, then divide by the number of elements, and then multiply by 100:
db.collection.aggregate([
  {
    "$addFields": {
      "average": {
        "$multiply": [
          {
            "$divide": [
              {
                "$reduce": {
                  "input": "$analysis",
                  "initialValue": 0,
                  "in": {
                    "$let": {
                      "vars": {
                        "sum": "$$value",
                        "data": "$$this"
                      },
                      "in": {
                        "$add": [
                          "$$sum",
                          {
                            "$arrayElemAt": [
                              {
                                "$arrayElemAt": [
                                  {
                                    "$map": {
                                      "input": { "$objectToArray": "$$data" },
                                      "as": "m",
                                      "in": [ "$$m.k", "$$m.v" ]
                                    }
                                  },
                                  0
                                ]
                              },
                              1
                            ]
                          }
                        ]
                      }
                    }
                  }
                }
              },
              { "$size": "$analysis" }
            ]
          },
          100
        ]
      }
    }
  }
])
You can test the code here
But this approach has one drawback: the data is stored as documents, and MongoDB
doesn't have a function like get(document, $$k). The new MongoDB v5.0 has $getField, but it still accepts only constant field names, not variables.
I mean, in your case we can't do getField(doc, "5").
So we pay the cost of converting each document to an array with $objectToArray.
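To make the arithmetic concrete, here is a plain-JavaScript sketch of what both pipelines above compute, assuming "analysis" has already been parsed from its exported string form into an array of single-key objects (the sample values below are made up for illustration):

```javascript
// Sketch of the averaging logic: each element of "analysis" is a
// single-key object like { "5": 0.1039 }; take its only value,
// average the values, and multiply by 100.
function averageAnalysis(analysis) {
  const values = analysis.map((obj) => Object.values(obj)[0]); // ~ $objectToArray + first v
  const sum = values.reduce((acc, v) => acc + v, 0);           // ~ $reduce / $avg
  return (sum / values.length) * 100;                          // ~ $divide + $multiply
}

// Made-up sample intervals (the real export has 200 per document).
const analysis = [{ "0": 0.25 }, { "5": 0.5 }, { "10": 0.75 }];
console.log(averageAnalysis(analysis)); // 50
```

Note that in the export shown in the question "analysis" is a string, so it must be parsed (e.g. with JSON.parse, or fixed at import time) before either pipeline can treat it as an array.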

MongoDB Split document field into two fields

I have a MongoDB collection with over 2.8m documents of common passwords (hashed with SHA1) and their popularity.
Currently I've imported the documents with the following schema
{"_id":"5ded1a559015155eb8295f48","password":"20EABE5D64B0E216796E834F52D61FD0B70332FC:2512537"}
Although I'd like to split this so I can have the popularity value and it would look something like this
{"_id":"5ded1a559015155eb8295f48","password":"20EABE5D64B0E216796E834F52D61FD0B70332FC","popularity":2512537}
The question is, I'm unsure how I can split the password field into two fields, password and popularity, using ":" to split the string.
You can use Aggregation Framework to split current password into two fields. You need to start with $indexOfBytes to get the position of : and then you need $substr to create new fields based on evaluated position.
db.collection.aggregate([
  {
    $addFields: {
      colonPos: { $indexOfBytes: ["$password", ":"] }
    }
  },
  {
    $addFields: {
      password: { $substr: [ "$password", 0, "$colonPos" ] },
      popularity: { $substr: [ "$password", { $add: [ "$colonPos", 1 ] }, { $strLenBytes: "$password" } ] }
    }
  },
  {
    $project: {
      colonPos: 0
    }
  }
])
Mongo Playground
As a last step you can use $out which takes all your aggregation results and writes them into new or existing collection.
EDIT: Alternative approach using $split (thanks to @matthPen):
db.collection.aggregate([
  {
    $addFields: {
      password: { $arrayElemAt: [ { "$split": [ "$password", ":" ] }, 0 ] },
      popularity: { $arrayElemAt: [ { "$split": [ "$password", ":" ] }, 1 ] }
    }
  }
])
Mongo Playground
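For intuition, the same split can be sketched in plain JavaScript. This mirrors the $indexOfBytes/$substr approach and additionally parses popularity to a number, matching the desired output in the question (in the pipeline itself that last step would be $toInt, available since MongoDB 4.0):

```javascript
// Split "<sha1-hash>:<count>" into password and numeric popularity.
function splitPassword(value) {
  const colonPos = value.indexOf(":");             // ~ $indexOfBytes
  return {
    password: value.slice(0, colonPos),            // ~ $substr [0, colonPos]
    popularity: Number(value.slice(colonPos + 1))  // ~ $substr after the colon, then $toInt
  };
}

const doc = splitPassword("20EABE5D64B0E216796E834F52D61FD0B70332FC:2512537");
console.log(doc.password);   // 20EABE5D64B0E216796E834F52D61FD0B70332FC
console.log(doc.popularity); // 2512537
```

Starting the second slice at colonPos + 1 is what keeps the ":" itself out of the popularity value.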

Return this dataset as an array of user schema objects in mongoDB

I am using MongoDB Aggregation $lookup to query two different schema collections.
What I want to do is return all the users that have been added to each artist collection.
Here is the Artist Schema
{
  "_id" : ObjectId("59f7a13163241a5c8a580832"),
  "artistID" : "34657839393",
  "artistName" : "Mc squared",
  "userID" : ObjectId("599f14855e9fcf95d0fe11a7"),
  "__v" : 0
}
Artist.aggregate([
  {
    $match: { artistID }
  },
  {
    $lookup: {
      from: "users",
      localField: "userID",
      foreignField: "_id",
      as: "UsersWithMatchedArtist"
    }
  },
  {
    $project: {
      UsersWithMatchedArtist: 1
    }
  }
])
This returns the following data structure:
[
  {
    "_id": "59f8f40686f2fa623d815256",
    "UsersWithMatchedArtist": [{Users Schema}]
  },
  {
    "_id": "59f8f40686f2f12345678901",
    "UsersWithMatchedArtist": [{Users Schema}]
  }
]
I wish to have the data returned in the following structure:
[
  {Users Schema},
  {Users Schema}
]
Any suggestions on how to do this would be much appreciated! Cheers!
I got the dataset that I was after using this query below:
Artist.aggregate([
  {
    $match: { artistID }
  },
  {
    $lookup: {
      from: "users",
      localField: "userID",
      foreignField: "_id",
      as: "UsersWithMatchedArtist"
    }
  },
  {
    $project: {
      user: { $arrayElemAt: ["$UsersWithMatchedArtist", 0] }
    }
  },
  {
    $replaceRoot: { newRoot: "$user" }
  }
])
This returns the dataset:
[
  {Users Schema},
  {Users Schema}
]
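The reshaping done by the $arrayElemAt and $replaceRoot stages can be sketched in plain JavaScript (the user documents' contents are made up, since the Users Schema isn't shown):

```javascript
// Pick the first user out of each lookup array and promote it to the
// top level: ~ $arrayElemAt 0 followed by $replaceRoot.
function flattenUsers(results) {
  return results.map((doc) => doc.UsersWithMatchedArtist[0]);
}

const results = [
  { _id: "59f8f40686f2fa623d815256", UsersWithMatchedArtist: [{ name: "user1" }] },
  { _id: "59f8f40686f2f12345678901", UsersWithMatchedArtist: [{ name: "user2" }] },
];
console.log(flattenUsers(results));
```

Note that $arrayElemAt keeps only the first user per artist document; if the lookup array can hold several users, $unwind followed by $replaceRoot would keep them all.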

Is there a way to see if a query matches any element in an array in Elasticsearch?

I have this query (e.g. 'hello') and this id (e.g. '12345'), and I want to search for something that matches both the query in a 'text' field and the id in a 'thread' field. But the given ids are in an array, so the logic is something like:
function runThisQuery(query, ids) {
  client.search({
    index: '_all',
    type: 'text',
    body: {
      query: {
        bool: {
          must: {
            match: { text: query }
          },
          should: [
            { match: { thread: { query: ids[0], operator: 'AND' } } },
            { match: { thread: { query: ids[1], operator: 'AND' } } }
          ],
          minimum_should_match: 1
        }
      }
    }
  })
}
Is there like an $in operator (like in MongoDB) that matches the thread if it's in the 'ids' array? Thanks!
You can use an ids query like this:
{
  "filter": {
    "ids": {
      "type": "my_type",
      "values": [
        "12345", "67891", "12346"
      ]
    }
  }
}
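A note on the general case: the ids query is specific to the _id field. For an arbitrary field like 'thread', the closest Elasticsearch analogue of MongoDB's $in is a terms query, which matches when the field holds any of the listed values. Here is a sketch of building such a body for any number of ids (the field names follow the question; this constructs the request body only, it doesn't run a search):

```javascript
// Build a bool query: text must match, and thread must be one of the ids
// (terms ~ MongoDB's $in), for arbitrarily many ids.
function buildThreadQuery(query, ids) {
  return {
    query: {
      bool: {
        must: [
          { match: { text: query } },
          { terms: { thread: ids } }
        ]
      }
    }
  };
}

const body = buildThreadQuery("hello", ["12345", "67891"]);
console.log(JSON.stringify(body, null, 2));
```

Unlike the hardcoded ids[0]/ids[1] should clauses in the question, this handles an ids array of any length.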