How to find and rename field names in MongoDB 3.2 - json

How can I rename field names in MongoDB?
I want to replace the leading $ with & in all field names that start with $.
Thanks!

I looked at some related links and put together a solution for your problem.
First you need to get all of your keys, which you can do with MapReduce:
mr = db.runCommand({
    "mapreduce": "my_collection",
    "map": function() {
        for (var key in this) { emit(key, null); }
    },
    "reduce": function(key, stuff) { return null; },
    "out": "my_collection" + "_keys"
});
Then run distinct on the resulting collection to find all the keys:
columns = db[mr.result].distinct("_id")
Finally, rename all the matching keys:
columns.forEach(function(columnName) {
    if (columnName.indexOf('$') == 0) {
        var newColumnName = columnName.replace('$', '&');
        var rename_query = { '$rename': {} };
        rename_query['$rename'][columnName] = newColumnName;
        db.my_collection.updateMany({}, rename_query);
    }
});
Reference links:
MongoDB Get names of all keys in collection
MongoDB $rename javascript variable for key name

Related

node.js - if statement not working as expected

This piece of node.js code is run against a Spark History Server API.
What its supposed to do is find any jobs where the name matches the value passed in by uuid and return the id for only that job.
What the below code actually does is if the uuid is found in any job name, the id for every job is returned.
I think this has something to do with the way I'm parsing the JSON but I'm not entirely sure.
How do I change this so it works as I would like it to?
var arrFound = Object.keys(json).filter(function(key) {
    console.log("gel json[key].name" + json[key].name);
    return json[key].name;
}).reduce(function(obj, key){
    if (json[key].name.indexOf(uuid)) {
        obj = json[key].id;
        return obj;
    }
reduce is the wrong method for that. Use find or filter. You can even do that in the filter callback that you already have. And then you can chain a map to that to get the id property values for each matched key:
var arrFound = Object.keys(json).filter(function(key) {
    console.log("gel json[key].name " + json[key].name);
    return json[key].name && json[key].name.includes(uuid);
}).map(function(key) {
    return json[key].id;
});
console.log(arrFound); // array of matched id values
Note also that your use of indexOf is wrong. You need to compare that value with -1 (not found). But nowadays you can use includes which returns a boolean.
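To see the difference in isolation (using a string in the shape of the question's job names):

```javascript
// indexOf returns a position: -1 means "not found" but is truthy,
// while a match at position 0 returns 0, which is falsy.
var jobName = 'job: 2a2912c5';

console.log(jobName.indexOf('zzz'));          // -1 -> truthy, despite no match
console.log(jobName.indexOf('job'));          // 0  -> falsy, despite a match

// Correct checks:
console.log(jobName.indexOf('2a29') !== -1);  // true
console.log(jobName.includes('2a29'));        // true (ES2015)
```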
Note that with Object.values you list the objects instead of the keys, which is more interesting in your case:
var arrFound = Object.values(json).filter(function(obj) {
    console.log("gel obj.name " + obj.name);
    return obj.name && obj.name.includes(uuid);
}).map(function(obj) {
    return obj.id;
});
console.log(arrFound); // array of matched id values
While the accepted answer provides working code, I feel it's worth pointing out that reduce is a good way to solve this problem, and to me makes more sense than chaining filter and map:
const jobs = {
    1: {
        id: 1,
        name: 'job: 2a2912c5-9ec8-4ead-9a8f-724ab44fc9c7'
    },
    2: {
        id: 2,
        name: 'job: 30ea8ab2-ae3f-4427-8e44-5090d064d58d'
    },
    3: {
        id: 3,
        name: 'job: 5f8abe54-8417-4b3c-90f1-a7f4aad67cfb'
    },
    4: {
        id: 4,
        name: 'job: 30ea8ab2-ae3f-4427-8e44-5090d064d58d'
    }
}
const matchUUID = uuid =>
    (acc, job) => job.name.includes(uuid) ? [ ...acc, job.id ] : acc

const target = '30ea8ab2-ae3f-4427-8e44-5090d064d58d'
const matchTarget = matchUUID(target)

// [ 2, 4 ]
console.log(Object.values(jobs).reduce(matchTarget, []))
reduce is appropriate for these kinds of problems: taking a larger, more complex or complete value, and reducing it to the data you require. On large datasets, it could also be more efficient since you only need to traverse the collection once.
If you're Node version-constrained or don't want to use array spread, here's a slightly more 'traditional' version:
var result = Object.keys(jobs).reduce(
    function (acc, key) {
        if (jobs[key].name.includes(uuid)) {
            acc.push(jobs[key].id)
        }
        return acc
    },
    []
)
Note use of Object.keys, since Object.values is ES2017 and may not always be available. String.prototype.includes is ES2015, but you could always use indexOf if necessary.
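For completeness, a sketch of the same reducer in fully ES5-compatible form, with an explicit indexOf comparison in place of includes (the sample data is abbreviated from the jobs object above):

```javascript
// ES5 variant: Object.keys + indexOf, for environments without
// Object.values or String.prototype.includes.
var jobs = {
    1: { id: 1, name: 'job: 30ea8ab2' },
    2: { id: 2, name: 'job: 5f8abe54' }
};
var uuid = '30ea8ab2';

var ids = Object.keys(jobs).reduce(function (acc, key) {
    if (jobs[key].name.indexOf(uuid) !== -1) {
        acc.push(jobs[key].id);
    }
    return acc;
}, []);

console.log(ids); // [ 1 ]
```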

Firebase + Aurelia: how to process the returned key=>value format by Firebase?

I'm retrieving the following structure from Firebase:
"bills" : {
    "1" : { // the customer id
        "orders" : {
            "-KVMs10xKfNdh_vLLj_k" : [ { // auto generated
                "products" : [ {
                    "amount" : 3,
                    "name" : "Cappuccino",
                    "price" : 2.6
                } ],
                "time" : "00:15:14"
            } ]
        }
    }
}
I'm looking for a way to process this with Aurelia. I've written a value converter that allows my repeat.for to loop the object keys of orders, sending each order to an order-details component. The problem is, this doesn't pass the key, which I need for deleting a certain order ("-KVMs10xKfNdh_vLLj_k")
Should I loop over each order and add the key as an attribute myself?
Is there a better/faster way?
This answer might be a little late (sorry OP), but for anyone else looking for a solution you can convert the snapshot to an array that you can iterate in your Aurelia views using a repeat.for, for example.
This is a function that I use in all of my Aurelia + Firebase applications:
export const snapshotToArray = (snapshot) => {
    const returnArr = [];

    snapshot.forEach((childSnapshot) => {
        const item = childSnapshot.val();
        item.uid = childSnapshot.key;
        returnArr.push(item);
    });

    return returnArr;
};
You would use it like this:
firebase.database().ref(`/bills`)
    .once('value')
    .then((snapshot) => {
        const arr = snapshotToArray(snapshot);
    });
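To answer the question's last point directly: yes, looping and attaching the key yourself also works for plain objects, without the snapshot API. A minimal sketch (the helper name objectToArray is just illustrative):

```javascript
// Plain-object equivalent of snapshotToArray: copy each value and
// attach its key as `uid`, so the view can reference it for deletes.
const objectToArray = (obj) =>
    Object.keys(obj).map((key) => Object.assign({}, obj[key], { uid: key }));

const orders = {
    '-KVMs10xKfNdh_vLLj_k': { time: '00:15:14' }
};

const arr = objectToArray(orders);
console.log(arr); // [ { time: '00:15:14', uid: '-KVMs10xKfNdh_vLLj_k' } ]
```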

Update/Insert an element in a nested stringified array of a mongoDB document

My collection has this structure:
{
    "_id" : "7ZEc8dkbs4tLwhW24",
    "title" : "title",
    "json" :
    {
        \"cells\":[
            {
                \"type\":\"model\",
                \"size\":{\"width\":100,\"height\":40},
                \"id\":\"11dc3b6f-2f61-473c-90d7-08f16e7d277a\",
                \"attrs\":{
                    \"text\":{\"text\":\"content\"},
                    \"a\":{\"xlink:href\":\"http://website.com\",\"xlink:show\":\"replace\",\"cursor\":\"pointer\"}
                }
            }
        ]
    }
}
Now I need to insert/update the field json.cells.attrs.a in my Meteor app. All the information I have is _id (the document ID) and id (the id of the element in the cells array). If a doesn't exist, it should be created.
My attempt is not correct, as the query isn't searching for elemID to get the correct element in the cells array:
var linkObject = {"xlink:href":"http://newURL.com","xlink:show":"replace","cursor":"pointer"};
var docID = '7ZEc8dkbs4tLwhW24';
var elemID = '11dc3b6f-2f61-473c-90d7-08f16e7d277a'; // How to search for this 'id'?
var result = Collection.findOne({ _id: docID });
var json = JSON.parse(result.json);
// find id = elemID in 'json'
// add/update 'a'
// update json in mongoDB document
In your code you are only using the docID as criteria in your query. You must add the elemID to your query criteria as well. After that you can use dot notation and the $ positional operator to update the a property of the attrs object. You can do it like this:
Collection.update(
    { "_id": docID, "json.cells.id": elemID },
    { $set: { "json.cells.$.attrs.a": linkObject } }
)
$set will create the field if it does not exist, and the $ positional operator lets you update the matched array element without knowing its position.
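One caveat: if json is actually stored as a string (the escaped quotes in the document suggest it might be), dot notation cannot reach inside it on the server. In that case you have to finish the parse-modify-save approach from the question. A sketch under that assumption (the Collection.update call at the end is Meteor-specific and left as a comment):

```javascript
// Parse the stringified field, update/create `attrs.a` on the matching
// cell, then write the whole string back.
var linkObject = { 'xlink:href': 'http://newURL.com', 'xlink:show': 'replace', 'cursor': 'pointer' };
var elemID = '11dc3b6f-2f61-473c-90d7-08f16e7d277a';

// result.json as fetched from the document (abbreviated here)
var rawJson = JSON.stringify({
    cells: [{ type: 'model', id: elemID, attrs: { text: { text: 'content' } } }]
});

var json = JSON.parse(rawJson);
json.cells.forEach(function (cell) {
    if (cell.id === elemID) {
        cell.attrs = cell.attrs || {};
        cell.attrs.a = linkObject; // created if it does not exist
    }
});

// Persist the modified string, e.g.:
// Collection.update({ _id: docID }, { $set: { json: JSON.stringify(json) } });
```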

Creating CSV view from CouchDB

I know this should be easy, but I just can't work out how to do it despite having spent several hours looking at it today. There doesn't appear to be a straightforward example or tutorial online as far as I can tell.
I've got several "tables" of documents in a CouchDB database, with each "table" having a different value in a "schema" field in the document. All documents with the same schema contain an identical set of fields. All I want to do is be able to view the different "tables" in CSV format, and I don't want to have to specify the list of fieldnames in each schema.
The CSV output is going to be consumed by an R script, so I don't want any additional headers in the output if I can avoid them; just the list of fieldnames, comma separated, with the values in CSV format.
For example, two records in the "table1" format might look like:
{
    "schema": "table1",
    "field1": 17,
    "field2": "abc",
    ...
    "fieldN": "abc",
    "timestamp": "2012-03-30T18:00:00Z"
}
and
{
    "schema": "table1",
    "field1": 193,
    "field2": "xyz",
    ...
    "fieldN": "ijk",
    "timestamp": "2012-03-30T19:01:00Z"
}
My view is pretty simple:
"all": "function(doc) {
    if (doc.schema == 'table1') {
        emit(doc.timestamp, doc);
    }
}"
as I want to sort my records in timestamp order.
Presumably the list function will be something like:
"csv": "function(head, req) {
    var row;
    ...
    // Something here to iterate through the list of fieldnames and print them
    // comma separated
    for (row in getRow) {
        // Something here to iterate through each row and print the field values
        // comma separated
    }
}"
but I just can't get my head around the rest of it.
If I want to get CSV output looking like
"timestamp", "field1", "field2", ..., "fieldN"
"2012-03-30T18:00:00Z", 17, "abc", ..., "abc"
"2012-03-30T19:01:00Z", 193, "xyz", ..., "ijk"
what should my CouchDB list function look like?
Thanks in advance
The list function that works with your given map should look something like this:
function(head, req) {
    var headers, r;
    start({'headers': {'Content-Type': 'text/csv; charset=utf-8; header=present'}});
    while (r = getRow()) {
        if (!headers) {
            headers = Object.keys(r.value);
            send('"' + headers.join('","') + '"\n');
        }
        headers.forEach(function(v, i) {
            send(String(r.value[v]).replace(/"/g, '""').replace(/^|$/g, '"'));
            (i + 1 < headers.length) ? send(',') : send('\n');
        });
    }
}
Unlike Ryan's suggestion, the fields to include in the list are not configurable in this function, and any changes in order or included fields would have to be written in. You would also have to rewrite any quoting logic needed.
Here is some generic code that Max Ogden has written. While it is in node-couchapp form, you can probably get the idea:
var couchapp = require('couchapp')
  , path = require('path')
  ;

ddoc = { _id: '_design/csvexport' };

ddoc.views = {
    headers: {
        map: function(doc) {
            for (var key in doc) {
                emit(key, 1);
            }
        },
        reduce: "_sum"
    }
};

ddoc.lists = {
    /**
     * Generates a CSV from all the rows in the view.
     *
     * Takes in a url encoded array of headers as an argument. You can
     * generate this by querying /_list/urlencode/headers. Pass it in
     * as the headers get parameter, e.g.: ?headers=%5B%22_id%22%2C%22_rev%22%5D
     *
     * @author Max Ogden
     */
    csv: function(head, req) {
        if ('headers' in req.query) {
            var headers = JSON.parse(unescape(req.query.headers));
            var row, sep = '\n', headerSent = false, startedOutput = false;
            start({"headers": {"Content-Type": "text/csv; charset=utf-8"}});
            send('"' + headers.join('","') + '"\n');
            while (row = getRow()) {
                for (var header in headers) {
                    if (row.value[headers[header]]) {
                        if (startedOutput) send(",");
                        var value = row.value[headers[header]];
                        if (typeof(value) == "object") value = JSON.stringify(value);
                        if (typeof(value) == "string") value = value.replace(/"/g, '""');
                        send("\"" + value + "\"");
                    } else {
                        if (startedOutput) send(",");
                    }
                    startedOutput = true;
                }
                startedOutput = false;
                send('\n');
            }
        } else {
            send("You must pass in the urlencoded headers you wish to build the CSV from. Query /_list/urlencode/headers?group=true");
        }
    }
};

module.exports = ddoc;
Source:
https://github.com/kanso/kanso/issues/336

How do you return lower-cased JSON from a CFC in ColdFusion?

I have a ColdFusion component that will return some JSON data:
component
{
    remote function GetPeople() returnformat="json"
    {
        var people = entityLoad("Person");
        return people;
    }
}
Unfortunately, the returned JSON has all the property names in upper case:
[
    {
        FIRSTNAME: "John",
        LASTNAME: "Doe"
    },
    {
        FIRSTNAME: "Jane",
        LASTNAME: "Dover"
    }
]
Is there any way to force the framework to return JSON so that the property names are all lower-case (maybe a custom UDF/CFC that someone else has written)?
Yeah, unfortunately, that is just the way ColdFusion works. When setting some variables you can force lowercase, like with structs:
<cfset structName.varName = "test" />
will store the variable with an uppercase key name. But:
<cfset structName['varname'] = "test" />
will preserve the lowercase (or camelCase, depending on what you pass in).
But with the ORM stuff you are doing, I don't think you are going to be able to have any control over it. Someone correct me if I am wrong.
From http://livedocs.adobe.com/coldfusion/8/htmldocs/help.html?content=functions_s_03.html
Note: ColdFusion internally represents structure key names using all-uppercase characters, and, therefore, serializes the key names to all-uppercase JSON representations. Any JavaScript that handles JSON representations of ColdFusion structures must use all-uppercase structure key names, such as CITY or STATE. You also use the all-uppercase names COLUMNS and DATA as the keys for the two arrays that represent ColdFusion queries in JSON format.
If you're defining the variables yourself, you can use bracket notation (as Jason's answer shows), but with built-in stuff like ORM I think you're stuck - unless you want to create your own struct, and clone the ORM version manually, lower-casing each of the keys, but that's not really a great solution. :/
This should work as you described.
component
{
    remote function GetPeople() returnformat="json"
    {
        var people = entityLoad("Person");
        var rtn = [];
        for ( var i = 1; i <= arrayLen( people ); i++ ) {
            arrayAppend( rtn, {
                "firstname" = people[i].getFirstname(),
                "lastname" = people[i].getLastname()
            } );
        }
        return rtn;
    }
}
If any of your entity properties return null, the struct key won't exist.
To work around that, try this:
component
{
    remote function GetPeople() returnformat="json"
    {
        var people = entityLoad("Person");
        var rtn = [];
        for ( var i = 1; i <= arrayLen( people ); i++ ) {
            var i_person = {
                "firstname" = people[i].getFirstname(),
                "lastname" = people[i].getLastname()
            };
            if ( !structKeyExists( i_person, "firstname" ) ) {
                i_person["firstname"] = ""; // your default value
            }
            if ( !structKeyExists( i_person, "lastname" ) ) {
                i_person["lastname"] = ""; // your default value
            }
            arrayAppend( rtn, i_person );
        }
        return rtn;
    }
}