Mongo query to get comma separated value - json

I have a query which only matches the tracking number at the start of the string.
Example:
{
    "orderStatus": "SUBMITTED",
    "orderNumber": "785654",
    "orderLine": [
        {
            "lineNumber": "E1000",
            "trackingNumber": "12345,67890",
            "lineStatus": "IN-PROGRESS",
            "lineStatusCode": 50
        }
    ],
    "accountNumber": 9076
}
find({'orderLine.trackingNumber' : { $regex: "^12345.*"} })
When I use the above query I get the entire document, but I want the document to be found when I search with the value 67890 as well.
At any given time I will only ever query with a single tracking number, either 12345 or 67890.
The tracking number value can also grow over time, e.g. 12345,56789,01234,56678.
I need to pull the whole document no matter which position the tracking number is in.
OUTPUT should be the whole document:
{
    "orderStatus": "SUBMITTED",
    "orderNumber": "785654",
    "orderLine": [
        {
            "lineNumber": "E1000",
            "trackingNumber": "12345,67890",
            "lineStatus": "IN-PROGRESS",
            "lineStatusCode": 50
        }
    ],
    "accountNumber": 9076
}
Also, I have created an index on the trackingNumber field. Need help here. Thanks in advance.

The following will match either 12345 or 67890. It is similar to a SQL LIKE condition:
find({'orderLine.trackingNumber' : { $regex: /12345/} })
find({'orderLine.trackingNumber' : { $regex: /67890/} })
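Note that an unanchored regex like /6789/ would also match 67890, which may not be what you want. A minimal sketch (assuming the field stays a single comma-separated string): anchor the tracking number between commas or string boundaries so that only a complete value matches:

// Match 67890 only as a whole comma-separated token
db.order.find({ 'orderLine.trackingNumber': { $regex: /(^|,)67890(,|$)/ } })

Also be aware that only prefix-anchored patterns such as ^12345 can use a regular index efficiently; unanchored patterns have to scan all indexed values.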
There's also an alternative way to do this
Create a text index
db.order.createIndex({'orderLine.trackingNumber': "text"})
You can make use of this index to search for a value in the trackingNumber field. This works for any of the comma-separated values because the text index tokenizes the string on delimiters such as the comma, so 12345 and 67890 are indexed as separate terms.
db.order.find({$text: {$search: '12345'}})
db.order.find({$text: {$search: '67890'}})
// Do take note that you can't search on a few in-between characters;
// the following query won't return any results:
db.order.find({$text: {$search: '6789'}}) // the trailing 0 has been removed on purpose
To further understand how $text searches work, please go through the MongoDB documentation on $text.


How do I push values into a column in Google Sheets Apps Script?

I'm searching through the documentation as best I can; I'm just on a time limit here, so if someone can tell me, that would be great.
I need to insert data into a column and have it push the existing data in the column down when it's inserted. For example, I need to add the word "Good" at the top of the column; "Bad" was at the top, but when I push in "Good", "Bad" takes the number two spot, the number two spot becomes the number three spot, etc. It needs to do this without deleting or moving the rows themselves, because I'm reading data from two columns in the sheet and then writing to a third column.
Thanks in advance!
Welcome to StackOverflow.
From what I understand from reading your question, you have already been able to read data from two columns and now you just want to store some of that data in a separate column. Apologies if I misunderstood your question.
If I understood you right, I would suggest creating a list of requests and submitting them as a single batch update, which would help you avoid hitting the write quota.
So, here is how it goes:
request = []
request.append({
    "updateCells": {
        "rows": [
            {
                "values": [
                    {
                        "userEnteredValue": {
                            "numberValue": 546564  # assuming your value is an integer
                        },
                        "userEnteredFormat": {
                            "horizontalAlignment": "CENTER",
                            "verticalAlignment": "MIDDLE"
                        }
                    }
                ]
            }
        ],
        "fields": "*",
        "range": {
            # Replace these with your actual values
            "sheetId": sheetId,
            "startRowIndex": startRow,  # indexing starts from 0
            "endRowIndex": endRow,
            "startColumnIndex": startColumn,
            "endColumnIndex": endColumn,
        }
    }
})
# You can add more requests like this to the list and then execute them all at once
body = {
    "requests": request
}
# Here `sheet` is the Sheets API service object built with googleapiclient
response = sheet.spreadsheets().batchUpdate(
    spreadsheetId=spreadsheet_id,
    body=body).execute()
# If you are using gspread, then you can use this instead
sheet.batch_update({"requests": request})
This will update the cells with your given value. For detailed information and other formatting options, follow the documentation.
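Note that updateCells writes values in place. If you specifically need to push the existing values in the column down, as the question asks, one option (a sketch, reusing the sheetId/startColumn/endColumn variables above) is to prepend an insertRange request, which shifts the existing cells within the range before the new value is written:

request.insert(0, {
    "insertRange": {
        "range": {
            "sheetId": sheetId,
            "startRowIndex": 0,  # make room at the top of the column
            "endRowIndex": 1,
            "startColumnIndex": startColumn,
            "endColumnIndex": endColumn
        },
        "shiftDimension": "ROWS"  # shift existing values down, not sideways
    }
})

Because only the cells in that column range are shifted, the rows themselves (and the other columns) stay where they are.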

Possible to chain results in N1ql?

I'm currently trying to do a bit of complex N1QL for a project I'm working on. Theoretically I could do all of this processing with multiple N1QL calls, parsing the results each time, but if possible I'd like this to be contained in one call.
What I would like to do is:
filter all documents that contain a "dataSync.test.id" field with more than 1 id
Read back all other ids in that list
Use that list to get other documents containing those ids
Get the "dataSync.test._channels" field for those documents (optionally a filter by docType might help parsing)
This would probably return a list of "dataSync.test._channels"
Is this possible in N1QL? It seems like it might be, but I can't get the syntax right.
My data structures look a little like this:
{
    "dataSync": {
        "test": {
            "_channels": [
                "RP"
            ],
            "id": [
                "dataSync_user_1015",
                "dataSync_user_1010",
                "dataSync_user_1005"
            ],
            "_lastUpdatedBy": "TEST"
        }
    },
    ...
}
{
    "dataSync": {
        "test": {
            "_channels": [
                "RSD"
            ],
            "id": [
                "dataSync_user_1010"
            ],
            "_lastUpdatedBy": "TEST"
        }
    },
    ...
}
Yes, I think you can do all of this.
The initial set of IDs, with filtering, can be retrieved in a subquery, and then you can get the subsequent documents via a join:
SELECT fulldoc
FROM (SELECT META().id AS dockey FROM doc WHERE a = 1) AS mydoc
INNER JOIN doc fulldoc ON KEYS mydoc.dockey;
There are optimizations that can be done here. Try the sequencing first to make sure it gets the job done.
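As a rough sketch adapted to the data in the question (assuming a bucket named mybucket, and that the values in dataSync.test.id are the document keys of the related documents, which is what an ON KEYS join requires; adjust the names to your setup):

SELECT related.dataSync.test._channels
FROM (
    SELECT ARRAY_FLATTEN(ARRAY_AGG(d.dataSync.test.id), 1) AS otherIds
    FROM mybucket AS d
    WHERE ARRAY_LENGTH(d.dataSync.test.id) > 1
) AS src
INNER JOIN mybucket AS related ON KEYS src.otherIds;

ON KEYS accepts an array of keys, so the flattened id list produced by the subquery drives the join.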

How does Simulating Joins work in Couchbase?

I have documents where one depends on another. First:
{
    "doctype": "closed_auctions",
    "seller": {
        "person": "person11304"
    },
    "buyer": {
        "person": "person0"
    },
    "itemref": {
        "item": "item1"
    },
    "price": 50.03,
    "date": "11/17/2001",
    "quantity": 1,
    "type": "Featured",
    "annotation": {
        "author": {
            "person": "person8597"
        }
    }
}
Here you can see that doc.buyer.person refers to another document, like this:
{
    "doctype": "people",
    "id": "person0",
    "name": "Kasidit Treweek",
    "profile": {
        "income": 20186.59,
        "interest": [
            {
                "category": "category251"
            }
        ],
        "education": "Graduate School",
        "business": "No"
    },
    "watch": [
        {
            "open_auction": "open_auction8747"
        }
    ]
}
How can I get the buyer's name from these two documents? I mean, doc.buyer.person is connected to the second document's id. It is a join, and the documentation doesn't make it clear how to do one. http://docs.couchbase.com/couchbase-manual-2.0/#solutions-for-simulating-joins
Well, first off, let me point out that the very first sentence of the documentation section that you referenced says (I added the emphasis):
"Joins between data, even when the documents being examined are contained within the same bucket, are **not possible directly** within the view system."
So, the quick answer to your question is that you have lots of options. Here are a few of them:
Assume you need only the name for a rather small subset of people: create a view that outputs the person id as key and the name as value, then query the view for a specific name each time you need it (see the sketch after this list).
Assume you need many people joined to many auctions: download the full contents of the basic index from #1 and execute the join using LINQ.
Assume you need many properties of the person, not just the name: download the Person document for each auction item.
Assume you need a small subset from both Auction and People: index the fields from each that you need, include a type field, and emit all of them under the key of the person. You will then be able to query the view for all items belonging to that person.
The last approach was used in the example you linked to in your question. For performance, it will be necessary to tailor the approach to your usage scenario.
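A minimal sketch of the view from the first option (the doctype check follows the sample documents; everything else is an assumption to adapt):

// Map function: index people by their id, emitting only the name as the value
function (doc, meta) {
    if (doc.doctype === "people") {
        emit(doc.id, doc.name);
    }
}

Querying this view with key="person0" then returns the buyer's name without fetching the whole person document.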
Another solution consists of merging the data in a custom reduce function.
// view
function (doc, meta) {
    if (doc.doctype === "people") {
        emit(doc.id, doc);
    }
    if (doc.doctype === "closed_auctions") {
        emit(doc.buyer.person, doc);
    }
}
// custom reduce
function (keys, values, rereduce) {
    var peoples = values.filter(function (doc) {
        return doc.doctype === "people";
    });
    for (var key in peoples) {
        var people = peoples[key];
        people.closed_auctions = (function (peopleId) {
            return values.filter(function (doc) {
                return doc.doctype === "closed_auctions" && doc.buyer.person === peopleId;
            });
        })(people.id);
    }
    return peoples;
}
And then you can query one user with "key" or multiple users with "keys".
I don't know, though, what the performance implications of this method are.
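For example, over the views REST API that would look something like this (the design document and view names are assumptions; group=true makes the reduce run once per emitted key, i.e. per person):

curl 'http://localhost:8092/default/_design/people/_view/join?key="person0"&group=true'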

How to add a nested JSON object to a Lucene index

I need a little help regarding Lucene index files; I thought maybe some of you guys could help me out.
I have JSON like this:
[
    {
        "Id": 4476,
        "UrlName": null,
        "PhoneData": [
            {
                "PhoneType": "O",
                "PhoneNumber": "0065898"
            },
            {
                "PhoneType": "F",
                "PhoneNumber": "0065898"
            }
        ],
        "Contact": [],
        "Services": [
            {
                "ServiceId": 10,
                "ServiceGroup": 2
            },
            {
                "ServiceId": 20,
                "ServiceGroup": 1
            }
        ]
    }
]
Adding the first two fields is relatively easy:
// add lucene fields mapped to db fields
doc.Add(new Field("Id", sampleData.Id.Value.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("UrlName", sampleData.UrlName.Value ?? "null" , Field.Store.YES, Field.Index.ANALYZED));
But how can I add PhoneData and Services to the index so that they stay connected to the unique Id?
For indexing JSON objects I would go this way:
Store the whole value in a payload field, named for example $json. This field would be stored but not indexed.
For each (possibly nested) indexable property, create an indexable field whose name is an XPath-like expression identifying the property, for example PhoneData.PhoneType.
If it's OK for all nested properties to be indexed, then it's simple: just iterate over all of them, generating these indexable fields (see the sketch after this list).
But if you don't want to index all of them (a more realistic case), knowing which properties are indexable is another problem; in this case you could:
Accept from the client the path expressions of the index fields to be created when storing the document, or
Put JSON Schema into play to describe your data (assuming your JSON records share a common schema), and extend it with a custom property that lets you tag which properties are indexable.
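A rough sketch of the flattening step (assuming Newtonsoft.Json for parsing and the same Lucene.Net Field API as above; all names are illustrative):

// Recursively walk a JToken and add one indexable field per leaf property,
// naming each field with a path expression such as "PhoneData.PhoneType".
using Lucene.Net.Documents;
using Newtonsoft.Json.Linq;

static void AddJsonFields(Document doc, JToken token, string path)
{
    switch (token)
    {
        case JObject obj:
            foreach (var prop in obj.Properties())
                AddJsonFields(doc, prop.Value, path == "" ? prop.Name : path + "." + prop.Name);
            break;
        case JArray arr:
            foreach (var item in arr) // array items share the same path
                AddJsonFields(doc, item, path);
            break;
        default: // leaf value
            doc.Add(new Field(path, token.ToString(), Field.Store.NO, Field.Index.ANALYZED));
            break;
    }
}

Because these fields are added to the same Lucene Document as the Id field, the nested data stays connected to the unique Id.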
I have created a library that does this (and much more); maybe it can help you.
You can check it out at https://github.com/brutusin/flea-db

Using addToSet inside an array with MongoDB

I'm trying to track daily stats for an individual.
I'm having a hard time adding a new day inside "history", and I could also use a pointer on updating "walkingSteps" as new data comes in.
My schema looks like:
{
    "_id": {
        "$oid": "50db246ce4b0fe4923f08e48"
    },
    "history": [
        {
            "_id": {
                "$oid": "50db2316e4b0fe4923f08e12"
            },
            "date": {
                "$date": "2012-12-24T15:26:15.321Z"
            },
            "walkingSteps": 10,
            "goalStatus": 1
        },
        {
            "_id": {
                "$oid": "50db2316e4b0fe4923f08e13"
            },
            "date": {
                "$date": "2012-12-25T15:26:15.321Z"
            },
            "walkingSteps": 5,
            "goalStatus": 0
        },
        {
            "_id": {
                "$oid": "50db2316e4b0fe4923f08e14"
            },
            "date": {
                "$date": "2012-12-26T15:26:15.321Z"
            },
            "walkingSteps": 8,
            "goalStatus": 0
        }
    ]
}
db.history.update( ? )
I've been browsing (and experimenting with) the MongoDB documentation, but they don't quite break it all the way down for dummies like myself... I couldn't quite translate their examples to my setup.
Thanks for any help.
E = noob trying to learn programming
Adding a day:
user = {_id: ObjectId("50db246ce4b0fe4923f08e48")}
day = {_id: ObjectId(), date: ISODate("2013-01-07"), walkingSteps:0, goalStatus: 0}
db.users.update(user, {$addToSet: {history:day}})
Updating walkingSteps:
user = ObjectId("50db246ce4b0fe4923f08e48")
day = ObjectId("50db2316e4b0fe4923f08e13") // second day in your example
query = {_id: user, 'history._id': day}
db.users.update(query, {$set: {"history.$.walkingSteps": 6}})
This uses the $ positional operator.
It might be easier to have a separate history collection though.
[Edit] On the separate collection:
Adding days grows the document in size, and it might need to be relocated on disk. This can lead to performance issues and fragmentation.
Deleting days won't shrink the document size on disk.
It makes querying easier/more straightforward (e.g. searching for a period of time).
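A minimal sketch of that alternative (the collection and field names are assumptions):

// One document per user per day in a dedicated collection
db.user_history.insert({
    userId: ObjectId("50db246ce4b0fe4923f08e48"),
    date: ISODate("2013-01-07"),
    walkingSteps: 0,
    goalStatus: 0
})
// Searching for a period of time becomes a plain range query
db.user_history.find({
    userId: ObjectId("50db246ce4b0fe4923f08e48"),
    date: { $gte: ISODate("2013-01-01"), $lt: ISODate("2013-02-01") }
})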
Even though #Justin Case gives the right answer, he doesn't explain a few things in it especially well.
You will notice first of all that he drops the time resolution on the dates, storing just the date instead of date and time, like so:
day = {_id: ObjectId(), date: ISODate("2013-01-07"), walkingSteps: 0, goalStatus: 0}
This means that all your dates will have 00:00:00 for their time instead of the exact time you are using at the moment. This makes querying per day easier, so you can do something like:
db.col.update(
    {"_id": ObjectId("50db246ce4b0fe4923f08e48"),
     "history.date": ISODate("2013-01-07")},
    {$inc: {"history.$.walkingSteps": 0}}
)
and other similar queries.
This also makes $addToSet actually enforce its rules; however, since the data in this subdocument can change (walkingSteps will be incremented), $addToSet will not work well here anyway.
This is something I would change from the ticked answer: I would probably use $push instead, since $addToSet is heavier and won't really do anything useful here.
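Using the user and day variables from the accepted answer, that would be (a sketch):

db.users.update(user, {$push: {history: day}})

Unlike $addToSet, $push appends unconditionally; it does no set-membership check against the existing array elements.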
The reason for a separate history collection, in my view, would be what you said earlier:
Yes, the amount of history items for that day.
So this array contains a set of days, which is fine, but it sounds like the figure you wish to derive walkingSteps from, a set of history items, should live in another collection, and you set walkingSteps according to the count of items in that other collection for today:
db.history_items.find({date: ISODate("2013-01-07")}).count();
Referring to the MongoDB Manual, $ is the positional operator, which identifies an element in an array field to update without explicitly specifying the position of the element in the array. The positional $ operator, when used with the update() method, acts as a placeholder for the first match of the update query selector.
So, if you issue a command to update your collection like this:
db.history.update(
    { someCriterion: someValue },
    { $push: { "history": {
        "_id": { "$oid": "50db2316e4b0fe4923f08e12" },
        "date": { "$date": "2012-12-24T15:26:15.321Z" },
        "walkingSteps": 10,
        "goalStatus": 1
    } } }
)
MongoDB will reject the $oid and $date keys here, because field names beginning with $ are reserved for operators such as the positional $, $set and $push. So it is better to avoid this special character in MongoDB field names.
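A corrected sketch of the same update, using the shell's native constructors instead of the Extended JSON $oid and $date keys:

db.history.update(
    { someCriterion: someValue },
    { $push: { "history": {
        "_id": ObjectId("50db2316e4b0fe4923f08e12"),
        "date": ISODate("2012-12-24T15:26:15.321Z"),
        "walkingSteps": 10,
        "goalStatus": 1
    } } }
)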