MongoDB: Update Modifier semantics of "$unset"

In MongoDB, the $unset update modifier works as follows:
Consider a MongoDB database db with a collection users. users contains a document of the following format:
// Document for a user with username: joe
{
    "_id" : ObjectId("4df5b9cf9f9a92b1584fff16"),
    "relationships" : {
        "enemies" : 2,
        "friends" : 33,
        "terminated" : "many"
    },
    "username" : "joe"
}
If I want to remove the terminated key, I have to specify the $unset update modifier as follows:
>db.users.update({"username":"joe"},{"$unset":{"relationships.terminated": "many"}});
My question is: why do I have to specify the ENTIRE KEY-VALUE PAIR for $unset to work, instead of simply specifying:
>db.users.update({"username":"joe"},{"$unset":{"relationships.terminated"}});
Mon Jun 13 13:25:57 SyntaxError: missing : after property id (shell):1
Why not?
EDIT:
If the way to $unset is to specify the entire key-value pair (to comply with the JSON specification), or to pass "1" as the value, why can't the shell do the "1" substitution itself? Why isn't such a feature provided? Are there any pitfalls to providing such support?

The short answer is that {"relationships.terminated"} is not a valid JSON/BSON object. A JSON object is composed of a key and a value, and {"relationships.terminated"} only has a key (or a value, depending on how you look at it).
Fortunately, to unset a field in Mongo you do not need to supply the field's actual value. You can use any value (1 is commonly used in the Mongo docs), regardless of the actual value of relationships.terminated:
db.users.update({"username":"joe"},{"$unset":{"relationships.terminated" : 1}});

Related

JSON contains with exclusion or wildcard

Is there a way to either wildcard or exclude fields in a json_contains statement (Postgres w/ SQLAlchemy)?
For example, let's say one of the rows of my database has a field called MyField which has a typical JSON value of ...
MyField : {Store: "HomeDepot", Location: "New York"}
Now, I am doing a json contains on that with a larger json variable called larger_json...
larger_json : {Store: "HomeDepot", Location: "New York", Customer: "Bob" ... }
In SQLAlchemy, I could use MyTable.MyField.comparator.contained_by(larger_json) and in this case that would work fine. But what if, for example, I later removed Location as a field in my variable... so I still have the value in my database, but it no longer exists in larger_json:
MyField : {Store: "HomeDepot", Location: "New York"}
larger_json : {Store: "HomeDepot", Customer: "Bob" ... }
Assume that I know when this happens, i.e. I know that the database has Location but the larger_json does not. Is there a way for me to either wildcard Location, i.e. something like this...
{Store: "HomeDepot", Location: "*", Customer: "Bob" ... }
or to exclude it from the json value? Something like this?
MyTable.MyField.exclude_fields().comparator.contained_by(larger_json)
Or is there another recommended approach for dealing with this?
Not sure if that's what you need, but you could remove Location as a key from the values you search:
... WHERE (tab.myfield - 'Location') <@ larger_json
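If you prefer to stay in SQLAlchemy rather than raw SQL, a sketch of the same idea (strip the key with the jsonb "-" operator, then test containment with "<@") could look like the following; the model, table and column names are illustrative assumptions, not taken from the question:

from sqlalchemy import Integer, Text, cast, select
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class MyTable(Base):
    __tablename__ = "my_table"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    my_field: Mapped[dict] = mapped_column(JSONB)

larger_json = {"Store": "HomeDepot", "Customer": "Bob"}

# Renders roughly as: (my_table.my_field - 'Location') <@ :larger_json
without_location = MyTable.my_field.op("-")(cast("Location", Text))
stmt = select(MyTable).where(without_location.bool_op("<@")(cast(larger_json, JSONB)))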

How to differentiate between explicitly assigning null to a field and a field that is not in the JSON, for #PATCH

I am using #PATCH for the partial update of the record.
My table is emp:
empId - Int
lastName -String
firstName - String
city - String
desc - String
Patch json file for emp/1:
{"city" : "NULL", "lastName": "newLastName"}
How can I pass this to the PL/SQL procedure (how can the SQL query be constructed to update only city and lastName), and how can I differentiate between city being explicitly set to NULL and desc not being present in the JSON?
There are multiple ways to deal with that kind of situation; I have implemented at least two different ones:
Send a JSON document that represents the updated resource
Either you require the client to always send the complete JSON document (not a partial one) so you always get all values in a single request (easy). Absent values mean that the field is supposed to be null.
Or you allow the client to send a partial JSON document, but then you need a way to distinguish between an absent field (not to be touched) and a field that must be overridden with null (harder but not impossible; see the sketch below).
Send a JSON document that contains operations on a resource, e.g. { "operation": "update", "field": "city", "value": "New City" }, which replaces the old value of city with New City, or { "operation": "delete", "field": "description" }, which deletes description (this requires parsing patches differently).
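A minimal Python sketch of the partial-document variant: after parsing the body, a key that is present with a null value is treated as an explicit NULL, while a missing key leaves the column untouched. The field whitelist, table name and bind-parameter style are illustrative assumptions, not from the original question:

import json

UPDATABLE_FIELDS = ["lastName", "firstName", "city", "desc"]  # whitelist, so field names are safe to interpolate

def build_update(patch_body: str):
    patch = json.loads(patch_body)
    set_clauses, params = [], {}
    for field in UPDATABLE_FIELDS:
        if field not in patch:
            continue                      # absent -> do not touch the column
        set_clauses.append(f"{field} = :{field}")
        params[field] = patch[field]      # may be None -> column set to NULL
    sql = "UPDATE emp SET " + ", ".join(set_clauses) + " WHERE empId = :empId"
    return sql, params

sql, params = build_update('{"city": null, "lastName": "newLastName"}')
print(sql)     # UPDATE emp SET lastName = :lastName, city = :city WHERE empId = :empId
print(params)  # {'lastName': 'newLastName', 'city': None}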

How to build a URL for a JSON object with a multi-column primary key?

Step 1:
If i have a json object
{
    "person" : {
        "id" : 1,
        "lastName" : "Hammer",
        "firstName" : "Mike",
        ...
    }
}
I can address the object by its name and ID
GET http://host/persons/1/
Step 2:
Now I have a data model containing a primary key consisting of multiple attributes.
For example, the primary key is (firstName, lastName). There is no single primary key like "id".
{
    "person" : {
        "lastName" : "Hammer",
        "firstName" : "Mike",
        ...
    }
}
What is the syntax to build a URL for this?
GET http://host/persons/???
I believe the rendering of:
GET http://host/persons/1/
is based on custom code.
Then you could use the following scheme for multi-field keys:
GET http://host/persons/field1/field2
In the given case:
GET http://host/persons/lastName/firstName
In general, URIs as defined by RFC 3986 (see Section 2: Characters) may contain any of the following characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=. Any other character needs to be encoded with percent-encoding (%hh).
You could read up on good practices for RESTful URIs on the web, such as: http://blog.2partsmagic.com/restful-uri-design/
If you would like to include metadata such as field names in the URL, you could put each field of a multi-part primary key in key=value format, separated by semicolons.
ex: GET http://host/persons/lastName=Doe;firstname=John/address/home
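A small Python sketch of building such a URL, percent-encoding each key part so that any character outside the RFC 3986 set stays legal (the host and path layout are assumptions for illustration):

from urllib.parse import quote

def person_url(last_name: str, first_name: str) -> str:
    # Encode each value so separators and reserved characters cannot break the path.
    parts = f"lastName={quote(last_name, safe='')};firstName={quote(first_name, safe='')}"
    return f"http://host/persons/{parts}"

print(person_url("Hammer", "Mike"))        # http://host/persons/lastName=Hammer;firstName=Mike
print(person_url("O'Brien", "Mary Jane"))  # http://host/persons/lastName=O%27Brien;firstName=Mary%20Jane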

MongoDB: how to select an empty-key subdocument?

Ahoy! I'm having a very funny issue with MongoDB and, possibly more in general, with JSON. Basically, I accidentally created some MongoDB documents whose subdocuments contain an empty key, e.g. (I stripped ObjectIDs to make the code look nicer):
{
    "_id" : ObjectId("..."),
    "stats" : {
        "violations" : 0,
        "cost" : 170
    },
    "parameters" : {
        "" : "../instances/comp/comp20.ectt",
        "repetition" : 29,
        "time" : 600000
    },
    "batch" : ObjectId("..."),
    "system" : "Linux 3.5.0-27-generic",
    "host" : "host3",
    "date_started" : ISODate("2013-05-14T16:46:46.788Z"),
    "date_stopped" : ISODate("2013-05-14T16:56:48.483Z"),
    "copy" : false
}
Of course, the problem is this line:
"" : "../instances/comp/comp20.ectt"
since I cannot get back the value of the field. If I query using:
db.experiments.find({"batch": ObjectId("...")}, { "parameters.": 1 })
what I get is the full content of the parameters subdocument. My guess is that . is probably ignored when followed by an empty selector. From the JSON specification (15.12.*) it looks like empty keys are allowed. Do you have any ideas about how to solve this?
Is this a known behavior? Is there a use for it?
Update: I tried to $rename the field, but that won't work, for the same reason: keys that end with . are not allowed.
Update: filed an issue on the MongoDB issue tracker.
Thanks,
Tommaso
I have this same problem. You can select your sub-documents with something like this:
db.foo.find({"parameters.":{$exists:true}})
The dot at the end of "parameters" tells Mongo to look for an empty key in that sub-document. This works for me with Mongo 2.4.x.
Empty keys are not well supported by Mongo; I don't think they are officially supported, but you can insert data with them. So you shouldn't be using them, and you should find the place in your system where these keys are inserted and eliminate it.
I just checked the code and this does not currently seem possible, for the reasons you mention. Since it is allowed to create documents with zero-length field names, I would consider this a bug. You can report it here: https://jira.mongodb.org
By the way, ironically, you can query on it:
> db.c.save({a:{"":1}})
> db.c.save({a:{"":2}})
> db.c.find({"a.":1})
{ "_id" : ObjectId("519349da6bd8a34a4985520a"), "a" : { "" : 1 } }

MongoDB find() order is different from schema order

db.blog.save({ title : "My First Post", author: {name : "Jane", id : 1}})
What should the query below return, given that the key order does not match?
db.blog.find({"author" : {"id" : 1, "name" : "Jane"}})
EDIT:
Based on the official MongoDB documentation, the key order must match (at least for findOne()). It won't return the matching object when using db.blog.findOne({"author" : {"id" : 1, "name" : "Jane"}}).
The order of the keys in your query selector is irrelevant. It doesn't need to match the order of the keys you used when adding the document you're searching for.
UPDATE
If you're just looking for an order-independent way to query based on an embedded document, you need to use dot notation:
db.blog.find({"author.id" : 1, "author.name" : "Jane"})
Normally, as @JohnnyHK states, the order of the query keys does not matter, except for the kind of example you have shown:
db.blog.find({"author" : {"id" : 1, "name" : "Jane"}})
This query will only return documents whose author subdocument matches exactly, key order included. Using the query he shows:
db.blog.find({"author.id" : 1, "author.name" : "Jane"})
will be key-order independent. The reason for this difference is that in the first query you are searching by a whole object, so the query engine looks for exactly that object (in the simplest terms). The same applies to indexes created on a field which contains subdocuments: there, the order does matter.
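A quick way to see the difference from a script is the following pymongo sketch (a throwaway local "test" database is assumed); note that Python dicts (3.7+) preserve insertion order, so the exact-subdocument behavior is reproducible:

from pymongo import MongoClient

blog = MongoClient("mongodb://localhost:27017")["test"]["blog"]
blog.drop()
blog.insert_one({"title": "My First Post", "author": {"name": "Jane", "id": 1}})

print(blog.count_documents({"author": {"id": 1, "name": "Jane"}}))    # 0: wrong key order
print(blog.count_documents({"author": {"name": "Jane", "id": 1}}))    # 1: exact subdocument match
print(blog.count_documents({"author.id": 1, "author.name": "Jane"}))  # 1: dot notation, order-independent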
According to the JSON definition, the key order doesn't matter.
An object is an unordered collection of zero or more name/value pairs
I don't know anything about MongoDB, but I assume it follows the normal rules of JSON, at which point it should return the "My First Post" entry.