I have this JSON structure:
{
"groups" : {
"-KBxo9-RoY0eowWKeHkU" : {
"author" : "rsenov",
"members" : {
"-KBxo7ZU6McsmDOxyias" : true,
"-KBxo8_TUTW6NZze6xcd" : true,
"rsenov" : true
},
"name" : "Prueba 3"
}
},
"users" : {
"-KBxo7ZU6McsmDOxyias" : {
"avatar" : "owl2",
"groups" : {
"-KBxo9-RoY0eowWKeHkU" : true
},
"isUser" : false,
"name" : "Pepa"
},
"-KBxo8_TUTW6NZze6xcd" : {
"avatar" : "monkey",
"groups" : {
"-KBxo9-RoY0eowWKeHkU" : true
},
"isUser" : false,
"name" : "Lolas"
},
"rsenov" : {
"avatar" : "guest",
"groups" : {
"-KBxo9-RoY0eowWKeHkU" : true
},
"isUser" : true,
"name" : "Ruben",
}
}
}
and the Security & Rules file is:
{
"rules": {
".read": true,
".write": true,
"users": {
".indexOn": ["email", "groups"]
},
"groups": {
".indexOn": ["author", "name"]
}
}
}
I'm trying to run a query in order to get the ChildChanged snapshot:
DataService.dataService.USERS_REF.queryOrderedByChild("groups").queryEqualToValue(currentGroup.groupKey).observeEventType(.ChildChanged, withBlock: {snapshot in
print(snapshot)
})
DataService.dataService.USERS_REF corresponds to the URL that points to the "users" key, and currentGroup.groupKey is equal to -KBxo9-RoY0eowWKeHkU in this case.
According to this query, I should get the snapshot of the child that has changed. For example, if I replace the user name "Pepa" to "Test", I should get the snapshot:
"-KBxo7ZU6McsmDOxyias" : {
"avatar" : "owl2",
"groups" : {
"-KBxo9-RoY0eowWKeHkU" : true
},
"isUser" : false,
"name" : "Test"
}
but this query never gets called...
Is there something wrong in my query?
"I'm trying to run a query in order to get the ChildChanged snapshot:" is a little odd.
You can query for data, or observe a node via ChildChanged.
If you just want to be notified of changes within the users node, add an observer to that node. When Pepa changes to Test, your app will be notified and given a snapshot of the user node that changed.
let ref = Firebase(url: DataService.dataService.USERS_REF)
ref.observeEventType(.ChildChanged, withBlock: { snapshot in
    print("the changed user is: \(snapshot.value)")
})
Oh, and no need for queryOrderedByChild since the snapshot will only contain the single node that changed.
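To illustrate why no query is needed: a ChildChanged event delivers only the single child whose value changed. Here's a plain-JavaScript sketch (not the Firebase SDK; the helper name changedChildren is hypothetical) that diffs the users node before and after Pepa's rename, producing exactly the snapshot shape shown above:

```javascript
// Hypothetical helper: diff two states of the "users" node and return
// the children whose values changed (what a ChildChanged event delivers).
function changedChildren(before, after) {
  return Object.keys(after)
    .filter((key) => JSON.stringify(before[key]) !== JSON.stringify(after[key]))
    .map((key) => ({ key, value: after[key] }));
}

const before = {
  "-KBxo7ZU6McsmDOxyias": { avatar: "owl2", name: "Pepa" },
  rsenov: { avatar: "guest", name: "Ruben" },
};
const after = {
  "-KBxo7ZU6McsmDOxyias": { avatar: "owl2", name: "Test" },
  rsenov: { avatar: "guest", name: "Ruben" },
};

const changed = changedChildren(before, after);
console.log(changed); // only the renamed user, keyed by its push id
```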
I have a number of TSV files as Azure blobs that have following as the first four tab-separated columns:
metadata_path, document_url, access_date, content_type
I want to index them as described here: https://learn.microsoft.com/en-us/azure/search/search-howto-index-csv-blobs
My request for creating an indexer has the following body:
{
"name" : "webdata",
"dataSourceName" : "webdata",
"targetIndexName" : "webdata",
"schedule" : { "interval" : "PT1H", "startTime" : "2017-01-09T11:00:00Z" },
"parameters" : { "configuration" : { "parsingMode" : "delimitedText", "delimitedTextHeaders" : "metadata_path,document_url,access_date,content_type" , "firstLineContainsHeaders" : true, "delimitedTextDelimiter" : "\t" } },
"fieldMappings" : [ { "sourceFieldName" : "document_url", "targetFieldName" : "id", "mappingFunction" : { "name" : "base64Encode", "parameters" : "useHttpServerUtilityUrlTokenEncode" : false } } }, { "sourceFieldName" : "document_url", "targetFieldName" : "url" }, { "sourceFieldName" : "content_type", "targetFieldName" : "content_type" } ]
}
I am receiving an error:
{
"error": {
"code": "",
"message": "Data source does not contain column 'document_url', which is required because it maps to the document key field 'id' in the index 'webdata'. Ensure that the 'document_url' column is present in the data source, or add a field mapping that maps one of the existing column names to 'id'."
}
}
What do I do wrong?
In your case, the JSON you supply is invalid (the braces in fieldMappings are unbalanced). The following is the general shape of the request for creating an indexer; for details, refer to the documentation:
{
"name" : "Required for POST, optional for PUT. The name of the indexer",
"description" : "Optional. Anything you want, or null",
"dataSourceName" : "Required. The name of an existing data source",
"targetIndexName" : "Required. The name of an existing index",
"schedule" : { Optional. See Indexing Schedule below. },
"parameters" : { Optional. See Indexing Parameters below. },
"fieldMappings" : { Optional. See Field Mappings below. },
"disabled" : Optional boolean value indicating whether the indexer is disabled. False by default.
}
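The invalidity is easy to confirm: in the original body, the mappingFunction's "parameters" value is missing its opening brace, so the request is not parseable JSON at all. A quick sketch with any JSON parser:

```javascript
// Original mappingFunction fragment: "parameters" is missing the opening
// brace around its value, so parsing fails.
const broken =
  '{ "parameters" : "useHttpServerUtilityUrlTokenEncode" : false } }';

// Corrected fragment: the parameter is wrapped in an object.
const fixed =
  '{ "parameters" : { "useHttpServerUtilityUrlTokenEncode" : false } }';

function isValidJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (e) {
    return false;
  }
}

console.log(isValidJson(broken)); // false
console.log(isValidJson(fixed));  // true
```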
If we want to create an indexer with the REST API, we need three steps, demonstrated below. If the Azure Search SDK is acceptable, you could also refer to another SO thread.
1. Create a data source.
POST https://[service name].search.windows.net/datasources?api-version=2015-02-28-Preview
Content-Type: application/json
api-key: [admin key]
{
"name" : "my-blob-datasource",
"type" : "azureblob",
"credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
"container" : { "name" : "my-container", "query" : "<optional, my-folder>" }
}
2. Create an index.
{
"name" : "my-target-index",
"fields": [
{ "name": "metadata_path","type": "Edm.String", "key": true, "searchable": true },
{ "name": "document_url", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "access_date", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "content_type", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
]
}
3. Create an indexer.
Below is the request body that works:
{
"name" : "webdata",
"dataSourceName" : "webdata",
"targetIndexName" : "webdata",
"schedule" :
{
"interval" : "PT1H",
"startTime" : "2017-01-09T11:00:00Z"
},
"parameters" :
{
"configuration" :
{
"parsingMode" : "delimitedText",
"delimitedTextHeaders" : "document_url,content_type,link_text" ,
"firstLineContainsHeaders" : true,
"delimitedTextDelimiter" : "\t",
"indexedFileNameExtensions" : ".tsv"
}
},
"fieldMappings" :
[
{
"sourceFieldName" : "document_url",
"targetFieldName" : "id",
"mappingFunction" : {
"name" : "base64Encode",
"parameters" : {
"useHttpServerUtilityUrlTokenEncode" : false
}
}
},
{
"sourceFieldName" : "document_url",
"targetFieldName" : "document_url"
},
{
"sourceFieldName" : "content_type",
"targetFieldName" : "content_type"
},
{
"sourceFieldName" : "link_text",
"targetFieldName" : "link_text"
}
]
}
I have three posts in my collection, from three different users.
I am trying to fetch the posts in my views (HTML/CSS) section,
but I need to filter out the posts from the two users I have blocked,
because I have block functionality in my view section. All posts are
allowed, except that a blocked user's posts are not visible to me and mine are not visible to him.
BlockedByUser (this is my post JSON data):
{
"_id" : ObjectId("591729b52bb30a19afc9b89d"),
"createdTime" : ISODate("2017-05-13T15:43:49.381Z"),
"isDeleted" : false,
"Message" : "Message Two",
"postedBy" : ObjectId("598adbefb3bf0b85f92edc3b"),
"recipient" : [
ObjectId("598ae453b3bf0b85f92ee331"),
ObjectId("5910691ae2bcdeab80e875f0")
],
"updatedTime" : ISODate("2017-05-20T09:24:39.124Z")
}
Below are the post data of the two users I have blocked.
Each stores my id in its recipient array key: recipient [598adbefb3bf0b85f92edc3b]
Block User One :
{
"_id" : ObjectId("591729b52bb30a19afc9b89d"),
"createdTime" : ISODate("2017-05-13T15:43:49.381Z"),
"isDeleted" : false,
"Message" : "Message One",
"postedBy" : ObjectId("598ae453b3bf0b85f92ee331"),
"recipient" : [
ObjectId("598adbefb3bf0b85f92edc3b"),
ObjectId("5910691ae2bcdeab80e875f0"),
ObjectId("598ac93cb3bf0b85f92ece44"),
],
"updatedTime" : ISODate("2017-05-20T09:24:39.124Z")
}
Same as above
Block User Two :
{
"_id" : ObjectId("591729b52bb30a19afc9b89d"),
"createdTime" : ISODate("2017-05-13T15:43:49.381Z"),
"isDeleted" : false,
"Message" : "Message One",
"postedBy" : ObjectId("598ac93cb3bf0b85f92ece44"),
"recipient" : [
ObjectId("598adbefb3bf0b85f92edc3b"),
ObjectId("5910691ae2bcdeab80e875f0"),
ObjectId("598ae453b3bf0b85f92ee331")
],
"updatedTime" : ISODate("2017-05-20T09:24:39.124Z")
}
This is the block collection that I have created, with the two blocked user ids under the blockUserId key.
Block Details One :
{
"_id" : ObjectId("598da2f0b88b0c2b0c735234"),
"blockUserId" : ObjectId("598ae453b3bf0b85f92ee331"),
"blockById" : ObjectId("598adbefb3bf0b85f92edc3b"),
"updatedDate" : ISODate("2017-08-11T12:28:32.145Z"),
"createdDate" : ISODate("2017-08-11T12:28:32.145Z"),
"isBlock" : "true",
"__v" : 0
Block Details Two :
{
"_id" : ObjectId("598da558b88b0c2b0c735236"),
"blockUserId" : ObjectId("598ac93cb3bf0b85f92ece44"),
"blockById" : ObjectId("598adbefb3bf0b85f92edc3b"),
"updatedDate" : ISODate("2017-08-11T12:38:48.772Z"),
"createdDate" : ISODate("2017-08-11T12:38:48.772Z"),
"isBlock" : "true",
"__v" : 0
}
I have fetched this block collection and stored the two users' blockUserId values in an array:
arrOne = ["598ae453b3bf0b85f92ee331", "598ac93cb3bf0b85f92ece44"]
And I am applying this query in Mongoose:
query = {
    $or: [
        { $and: [
            { $or: [
                { postedBy: req.params.id },
                { recipient: req.params.id }
            ]},
            { createdTime: { $gt: endTime, $lt: startTime } }
        ]},
        { postedBy: { $ne: arrOne } }
    ]
};
But it returns undefined.
I am trying to fetch only my posts, or other users' posts where the user has not blocked me; a blocked user's posts should not be visible to me.
I had to implement a where clause, and it works:
var query = { $and: [{ $or: [{ postedBy: req.params.id }, { recipient: req.params.id }] }, { "createdTime": { $gt: endTime, "$lt": startTime } }] };
CollectionName.find(query).where('postedBy').nin(blocklist).exec(function(err, response) { /* ... */ });
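The underlying reason the original query failed is that $ne compares against the array as a whole, while excluding every id in a list needs $nin (which is exactly what Mongoose's .nin() helper adds). A plain-JavaScript sketch of the intended filter logic, with ids as strings and sample posts abridged from the data above:

```javascript
// Plain-JS model of the intended query: keep posts I posted or received,
// except those posted by someone in my block list ($nin semantics).
const myId = "598adbefb3bf0b85f92edc3b";
const blocklist = ["598ae453b3bf0b85f92ee331", "598ac93cb3bf0b85f92ece44"];

const posts = [
  { Message: "Message Two", postedBy: myId, recipient: ["598ae453b3bf0b85f92ee331"] },
  { Message: "Message One", postedBy: "598ae453b3bf0b85f92ee331", recipient: [myId] },
  { Message: "Message One", postedBy: "598ac93cb3bf0b85f92ece44", recipient: [myId] },
];

const visible = posts.filter(
  (p) =>
    (p.postedBy === myId || p.recipient.includes(myId)) && // the $or clause
    !blocklist.includes(p.postedBy)                        // the $nin clause
);

console.log(visible.map((p) => p.Message)); // only my own post survives
```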
I'm trying to determine the best way to calculate the elapsed time each operation (a series of actions) took. Looking at my example data below, how might I take the min/max of the "actions" array for each corresponding operation that includes 'take' and 'throw' actions:
{
"name" : "test",
"location" : "here",
"operation" "hammer use",
"actions" : [
{
"action" : "take",
"object" : "hammer",
"timestamp" : "12332234234"
},
{
"action" : "drop",
"object" : "hammer",
"timestamp" : "12332234255"
},
{
"action" : "take",
"object" : "hammer",
"timestamp" : "12332234266"
},
{
"action" : "throw",
"object" : "hammer",
"timestamp" : "12332234277"
}
]
},
{
"name" : "test 2",
"location" : "there",
"operation" : "rock use",
"actions" : [
{
"action" : "take",
"object" : "rock",
"timestamp" : "12332534277"
},
{
"action" : "drop",
"object" : "rock",
"timestamp" : "12332534288"
},
{
"action" : "take",
"object" : "rock",
"timestamp" : "12332534299"
},
{
"action" : "throw",
"object" : "rock",
"timestamp" : "12332534400"
}
]
},
{
"name" : "test 3",
"location" : "elsewhere",
"operation" : "seal hose",
"actions" : [
{
"action" : "create",
"object" : "grommet",
"timestamp" : "12332534277"
},
{
"action" : "place",
"object" : "grommet",
"timestamp" : "12332534288"
},
{
"action" : "tighten",
"object" : "hose",
"timestamp" : "12332534299"
}
]
}
Expected output:
{
"operation" : "hammer use",
"elapsed_time" : 123
},
{
"operation" : "rock use",
"elapsed_time" : 123
}
I'm still new to rethinkdb and trying to get a hang for it. So far, I've come up with the following query to pick the specific records, i'm interested in, from the table:
r.db('test').table('operations').filter(function(row) {
return row('actions').contains(function(x) {
return x('action').eq('take')}).and(
row('actions').contains(function(x) { return x('action').eq('throw') })
);
});
I'm still trying to figure out how to aggregate the results by taking the min/max of the timestamp and subtracting them from each other.
I hope there's enough detail there to get an idea for the goal at hand. Let me know otherwise. Any help greatly appreciated.
Well, nobody tugged on this, so I had to solve it without any help. It took a bit longer, but I finally figured it out. Here's the query for finding the min/max on the nested fields above, and elapsed_time:
r.db('test').table('operations').filter(function(row) {
return row('actions').contains(function(x) { return x('action').eq("take") }).and(
row('actions').contains(function(x) { return x('action').eq("throw") })
);
}).map(function(doc) {
return {
operation: doc('operation'),
min: doc('actions')('timestamp').min(),
max: doc('actions')('timestamp').max(),
elapsed_time: doc('actions')('timestamp').max().sub(doc('actions')('timestamp').min())
}
})
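The same min/max arithmetic can be sanity-checked outside the database. Here's a plain-JavaScript equivalent of the ReQL filter and map above, run over abridged sample documents (timestamps parsed as integers; only documents containing both a take and a throw are kept):

```javascript
// Plain-JS equivalent of the ReQL filter + map: keep operations with both
// a "take" and a "throw", then report max(timestamp) - min(timestamp).
function elapsedTimes(docs) {
  return docs
    .filter((doc) => {
      const acts = doc.actions.map((a) => a.action);
      return acts.includes("take") && acts.includes("throw");
    })
    .map((doc) => {
      const ts = doc.actions.map((a) => parseInt(a.timestamp, 10));
      return {
        operation: doc.operation,
        elapsed_time: Math.max(...ts) - Math.min(...ts),
      };
    });
}

const docs = [
  { operation: "hammer use", actions: [
    { action: "take", timestamp: "12332234234" },
    { action: "throw", timestamp: "12332234277" },
  ]},
  { operation: "seal hose", actions: [
    { action: "create", timestamp: "12332534277" },
    { action: "tighten", timestamp: "12332534299" },
  ]},
];

const result = elapsedTimes(docs);
console.log(result); // [ { operation: "hammer use", elapsed_time: 43 } ]
```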
I'm trying MongoDB and I need to translate the following SQL query.
SELECT * FROM infos_cli
WHERE MATCH(denomination) AGAINST('cafe')
AND code_postal LIKE '34%'
My full text index definition:
db.infos_cli.createIndex(
{ "code_postal": 1,
"denomination": "text"
},
{default_language: "french"},
{name: "indexSerch"}
)
And my query in mongoDb:
db.infos_cli.find({code_postal : /34/, $text: {$search: "cafe"}})
But it's not working.
Can anyone explain what I have to do?
In this case, create separate indexes: one for the postal field and one for text search.
db.articles.createIndex({ author : 1 }) // postal... in your case
db.articles.createIndex(
    { "denomination" : "text" },
    { default_language : "french", name : "indexSerch" }
)
My query:
db.getCollection('articles').find({
$text : {
$search : "coffee",
$language : "french"
}
}).explain()
The result shows that there is a TEXT phase and an IXSCAN, which is what we want!
Result from explain:
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "testCode.articles",
"indexFilterSet" : false,
"parsedQuery" : {
"$text" : {
"$search" : "coffee",
"$language" : "french",
"$caseSensitive" : false,
"$diacriticSensitive" : false
}
},
"winningPlan" : {
"stage" : "TEXT",
"indexPrefix" : {},
"indexName" : "denomination_text",
"parsedTextQuery" : {
"terms" : [
"coffe"
],
"negatedTerms" : [],
"phrases" : [],
"negatedPhrases" : []
},
"inputStage" : {
"stage" : "TEXT_MATCH",
"inputStage" : {
"stage" : "TEXT_OR",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"_fts" : "text",
"_ftsx" : 1
},
"indexName" : "denomination_text",
"isMultiKey" : false,
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 1,
"direction" : "backward",
"indexBounds" : {}
}
}
}
},
"rejectedPlans" : []
},
"serverInfo" : {
"host" : "gbernas3-lt",
"port" : 27017,
"version" : "3.2.0",
"gitVersion" : "45d947729a0315accb6d4f15a6b06be6d9c19fe7"
},
"ok" : 1
}
any comments welcome!
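One more detail worth noting when translating LIKE '34%': the regex /34/ in the original find() matches "34" anywhere in the string, while the SQL pattern only matches a prefix. The anchored regex /^34/ is the faithful translation (and, as a prefix match, can use a regular index on code_postal). A quick sketch:

```javascript
// LIKE '34%' means "starts with 34"; an unanchored regex also matches
// postal codes that merely contain "34" somewhere.
const codes = ["34000", "34170", "13400", "75034"];

const unanchored = codes.filter((c) => /34/.test(c));  // too broad
const anchored = codes.filter((c) => /^34/.test(c));   // matches LIKE '34%'

console.log(unanchored); // [ "34000", "34170", "13400", "75034" ]
console.log(anchored);   // [ "34000", "34170" ]
```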
I have a Mongo JSON object as follows:
{
"_id" : new BinData(3, "RDHABb22XESWvP83FplqJw=="),
"name" : "NEW NODE",
"host" : null,
"aet" : null,
"studies" : ["1.3.12.2.1107.5.99.3.30000008061114424970500000589"],
"testcases" : [new BinData(3, "Zhl+zIXomkqAd8NIkRiTjQ==")],
"sendentries" : [{
"_id" : "1.3.12.2.1107.5.99.3.30000008061114424970500000589",
"Index" : 0,
"Type" : "Study"
}, {
"_id" : "cc7e1966-e885-4a9a-8077-c3489118938d",
"Index" : 1,
"Type" : "TestCase"
}]
}
The fields "Studies" and "TestCases" are now obsolete and I am now storing that information in a new field called SendEntries. I would like to get rid of the Studies and TestCases from the old entries and unmap those fields going forward. I want to know how I can update my current collections to get rid of the Studies and TestCases fields.
I'm just a few weeks into Mongo.
You can use the $unset operator with update.
db.collection.update({},
    { $unset: {
        "studies": "",
        "testcases": ""
    } },
    { "upsert": false, "multi": true }
)
That will remove those fields from all of the documents in your collection.
Use $unset, there's a manual page e.g.
db.yourCollection.update( { },
    { $unset: {
        studies: "",
        testcases: ""
    }
    },
    { multi: true }
)
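Per-document, $unset simply deletes the named properties and leaves everything else intact. A plain-JavaScript sketch of that semantics (the helper name unsetFields is hypothetical), using the fields from the sample document above:

```javascript
// Model of { $unset: { studies: "", testcases: "" } } applied to one document:
// the named fields are removed, all other fields are untouched.
function unsetFields(doc, fields) {
  const copy = { ...doc };
  for (const f of fields) delete copy[f];
  return copy;
}

const doc = {
  name: "NEW NODE",
  studies: ["1.3.12.2.1107.5.99.3.30000008061114424970500000589"],
  testcases: ["cc7e1966-e885-4a9a-8077-c3489118938d"],
  sendentries: [{ Type: "Study" }, { Type: "TestCase" }],
};

const cleaned = unsetFields(doc, ["studies", "testcases"]);
console.log(Object.keys(cleaned)); // [ "name", "sendentries" ]
```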