I am trying to loop through a list of bands that I have in a JSON file (using SwiftyJSON), but for some reason, when I get down to looping through the names, it returns null.
JSON
A small snippet of the JSON:
[
{
"Band":{
"ID":"1",
"Name":"The Kooks"
}
},
{
"Band":{
"ID":"2",
"Name":"The Killers"
}
}
]
Swift Code
for (_, value) in json {
for (_,band) in value["Band"] {
for (_,bandname) in band["Name"] {
print("Band name: \(bandname)")
}
}
}
The above code returns:
Band name: null
Band name: null
Band name: null
Band name: null
When I try this:
for (_, value) in json {
for (_, band) in value["Band"] {
print(band)
}
}
I get this result:
The Kooks
1
The Killers
2
Can anyone tell me what the issue is?
Since the value associated with the key "Name" is a simple string, you don't need another loop over it; just read it with .string:
for (_, value) in json {
    let band = value["Band"]
    if let bandname = band["Name"].string {
        print("Band name: \(bandname)")
    } else {
        print("No name specified")
    }
}
I am trying to project the date difference between two dates, but I am getting this error:
Invalid $project :: caused by :: Unknown expression $dateDiff
db.books.aggregate([{
$project:{
Date_diff:
{$dateDiff:{
start_dt:'$borrowers_list.borrowed_dt',
endDate:'$borrowers_list.return_dt',
unit: "day"
}
}
}
}])
The JSON document structure is like this:
_id:6188a5283543f7cc2f77c73f
branch_id:1
borrowers_list:Object
0:Object
borrowed_dt:2021-08-15T06:00:00.000+00:00
card_no:"ID000067"
return_dt:2021-08-25T06:00:00.000+00:00
I have no idea why the error says unknown expression $dateDiff, since my syntax looks correct to me. Does anyone have any suggestions?
Based on the JSON you provided, the document should look like the one below (correct me if it is incorrect):
{
_id: ObjectId("6188a5283543f7cc2f77c73f"),
branch_id: 1,
borrowers_list: {
0: {
borrowed_dt: ISODate("2021-08-15T06:00:00.000+00:00"),
card_no: "ID000067",
return_dt: ISODate("2021-08-25T06:00:00.000+00:00")
}
}
}
There is no start_dt field for $dateDiff; the correct field name is startDate.
Query
db.collection.aggregate([
{
$project: {
Date_diff: {
$dateDiff: {
startDate: "$borrowers_list.0.borrowed_dt",
endDate: "$borrowers_list.0.return_dt",
unit: "day"
}
}
}
}
])
Note: the above query only performs $dateDiff for the first entry in borrowers_list.
Sample Mongo Playground
In case you need to iterate over each entry (key-value pair) in borrowers_list and perform $dateDiff on each:
$set - Convert borrowers_list from an object to an array (via $objectToArray) into a new field borrowers.
$set - Iterate over each entry in the borrowers array from step (1) and perform $dateDiff.
$project - Decorate the output document, converting Date_diff from an array back to an object (via $arrayToObject).
Query
db.collection.aggregate([
{
$set: {
borrowers: {
"$objectToArray": "$borrowers_list"
}
}
},
{
$set: {
Date_diff: {
$map: {
input: "$borrowers",
as: "b",
in: {
k: "$$b.k",
v: {
$dateDiff: {
startDate: "$$b.v.borrowed_dt",
endDate: "$$b.v.return_dt",
unit: "day"
}
}
}
}
}
}
},
{
$project: {
Date_diff: {
"$arrayToObject": "$Date_diff"
}
}
}
])
Sample Mongo Playground (Iterate document with key-value pair)
I'm new to GraphQL and I'm using the following schema:
type Item {
id: String,
valueA: Float,
valueB: Float
}
type Query {
items(ids: [String]!): [Item]
}
My API can return multiple items in a single request for each type (A and B), but not for both at once, i.e.:
REST Request for typeA : api/a/items?id=[1,2]
Response:
[
{"id":1,"value":100},
{"id":2,"value":30}
]
REST Request for typeB : api/b/items?id=[1,2]
Response:
[
{"id":1,"value":50},
{"id":2,"value":20}
]
I would like to merge those two API endpoints into a single GraphQL response, like so:
[
{
id: "1",
valueA: 100,
valueB: 50
},
{
id: "2",
valueA: 30,
valueB: 20
}
]
Q: How would one write a resolver that runs a single fetch per type (each returning multiple items), while making sure no unnecessary fetch is triggered when the query does not request that type's field, i.e.:
{items(ids:["1","2"]) {
id
valueA
}}
The above example should only fetch api/a/items?id=[1,2], and the GraphQL response should be:
[
{
id: "1",
valueA: 100
},
{
id: "2",
valueA: 30
}
]
I'm assuming you are using JavaScript. What you need in this case is not a direct query, but rather fragments.
So the query would become:
{
items(ids:["1","2"]) {
...data
}}
fragment data on Item {
id
valueA
}
Next, in the resolver, we need to access these fragments to find the fields that are part of the fragment, and then resolve the data based on them. Below is a simple Node.js file showing this:
const util = require('util');
var { graphql, buildSchema } = require('graphql');
var schema = buildSchema(`
type Item {
id: String,
valueA: Float,
valueB: Float
}
type Query {
items(ids: [String]!): [Item]
}
`);
var root = { items: (source, args, root) => {
var fields = root.fragments.data.selectionSet.selections.map(f => f.name.value);
var ids = source["ids"];
var data = ids.map(id => {return {id: id}});
if (fields.indexOf("valueA") != -1)
{
// Query api/a/items?id=[ids]
//append to data;
console.log("calling API A")
data[0]["valueA"] = 0.12;
data[1]["valueA"] = 0.15;
}
if (fields.indexOf("valueB") != -1)
{
// Query api/b/items?id=[ids]
//append to data;
console.log("calling API B")
data[0]["valueB"] = 0.10;
data[1]["valueB"] = 0.11;
}
return data
},
};
graphql(schema, `{items(ids:["1","2"]) {
...data
}}
fragment data on Item {
id
valueA
}
`, root).then((response) => {
console.log(util.inspect(response, {showHidden: false, depth: null}));
});
If we run it, the output is
calling API A
{ data:
{ items: [ { id: '1', valueA: 0.12 }, { id: '2', valueA: 0.15 } ] } }
If we change the query to
{
items(ids:["1","2"]) {
...data
}}
fragment data on Item {
id
valueA
valueB
}
The output is
calling API A
calling API B
{ data:
{ items:
[ { id: '1', valueA: 0.12, valueB: 0.1 },
{ id: '2', valueA: 0.15, valueB: 0.11 } ] } }
So this demonstrates how you can avoid the call to API A or B when their fields are not needed, exactly as you asked. If you also want this to work for queries that don't use a fragment, see the sketch below.
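The resolver above relies on the client naming its fragment data (root.fragments.data). A more general option, shown here only as a sketch on the same graphql-js setup (requestedFields is a helper name introduced for illustration, not part of the answer), is to walk the selection set from the resolve-info argument, which covers plain fields, fragment spreads, and inline fragments:
// Sketch: collect the requested field names from the resolve info.
// `info` is the GraphQLResolveInfo that graphql-js passes to the resolver
// (the argument named `root` in the resolver above).
function requestedFields(info) {
  const names = new Set();
  const collect = (selectionSet) => {
    if (!selectionSet) return;
    for (const sel of selectionSet.selections) {
      if (sel.kind === "Field") {
        names.add(sel.name.value);
      } else if (sel.kind === "FragmentSpread") {
        collect(info.fragments[sel.name.value].selectionSet);
      } else if (sel.kind === "InlineFragment") {
        collect(sel.selectionSet);
      }
    }
  };
  info.fieldNodes.forEach((node) => collect(node.selectionSet));
  return names;
}
// Inside the resolver:
// var fields = requestedFields(root);
// if (fields.has("valueA")) { /* query api/a/items?id=[ids] */ }
// if (fields.has("valueB")) { /* query api/b/items?id=[ids] */ }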
I have a MongoDB collection where documents have approximately the following structure:
item{
data{"emailBody":
"{\"uniqueKey\":\" this is a stringified json\"}"
}
}
What I want to do is use uniqueKey as an indexed field, to make an "inner join" equivalent with items in a different collection.
I was thinking about running a loop over all the documents, parsing the JSON, and saving it as a new property called "parsedEmailBody".
Is there a better way to handle stringified JSON in MongoDB?
The only way is to loop through the collection, parse the field to JSON and update the document in the loop:
db.collection.find({ "item.data.emailBody": { "$type": 2 } })
.snapshot().forEach(function(doc){
parsedEmailBody = JSON.parse(doc.item.data.emailBody);
printjson(parsedEmailBody);
db.collection.updateOne(
{ "_id": doc._id },
{ "$set": { "item.data.parsedEmailBody": parsedEmailBody } }
);
});
For large collections, leverage the updates using the Bulk API:
var cursor = db.collection.find({ "item.data.emailBody": { "$type": 2 } }).snapshot(),
ops = [];
cursor.forEach(function(doc){
var parsedEmailBody = JSON.parse(doc.item.data.emailBody);
ops.push({
"updateOne": {
"filter": { "_id": doc._id },
"update": { "$set": { "item.data.parsedEmailBody": parsedEmailBody } }
}
});
if (ops.length === 500) {
db.collection.bulkWrite(ops);
ops = [];
}
});
if (ops.length > 0) { db.collection.bulkWrite(ops); }
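Once parsedEmailBody is in place, you can index the parsed uniqueKey and use it for the "inner join" you mentioned via $lookup. A rough sketch, assuming the other collection is named otherCollection and joins on a field also called uniqueKey (both names are placeholders, adjust to your schema):
// Index the parsed key; $lookup can use this index when this collection
// is the "from" (foreign) side of the join.
db.collection.createIndex({ "item.data.parsedEmailBody.uniqueKey": 1 });

// "otherCollection" and its "uniqueKey" field are placeholder names.
db.otherCollection.aggregate([
  {
    $lookup: {
      from: "collection",
      localField: "uniqueKey",
      foreignField: "item.data.parsedEmailBody.uniqueKey",
      as: "emails"
    }
  },
  // Inner-join semantics: drop documents that matched nothing.
  { $match: { "emails.0": { $exists: true } } }
]);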
I have a couple of JSON-like objects and I need to delete one of them if it contains specific information. For example, I need to delete an object if its state is RUNNING.
INPUT
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "alexandria.opt"
value {
name: "alexandria.opt"
state: RUNNING
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "android.opt"
state: COMPLETE
result: PASSED
}
}
OUTPUT
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "androids.opt"
state: COMPLETE
result: PASSED
}
}
Your structure isn't valid JSON. For such structures you need a more relaxed parser. Fortunately, the JSONY Perl module can parse it. From the docs:
JSONY is a data language that is similar to JSON, just more chill. All
valid JSON is also valid JSONY (and represents the same thing when
loaded), but JSONY lets you omit a lot of the syntax that makes JSON a
pain to write.
The following Perl code does what you want.
#!/usr/bin/env perl
use 5.014;
use warnings;
use JSONY;
my $string = slurp_file();
my $data = JSONY->new->load( $string );
for my $proj (@{$data}) {
next unless ref($proj);
next if $proj->{value}->{state} eq 'RUNNING';
pretty_print_proj($proj);
}
sub pretty_print_proj {
my $p = shift;
say "project {";
say qq{\tkey: "$p->{key}"};
say "\tvalue {";
say "\t\t$_: ", $p->{value}->{$_} for (qw(name state result));
say "\t}";
say "}";
}
sub slurp_file {
#change this for your real case...
return do { local $/; <DATA>};
}
__DATA__
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "alexandria.opt"
value {
name: "alexandria.opt"
state: RUNNING
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "android.opt"
state: COMPLETE
result: PASSED
}
}
prints:
project {
key: "ads_evenflow.opt"
value {
name: ads_evenflow.opt
state: COMPLETE
result: PASSED
}
}
project {
key: "android.opt"
value {
name: android.opt
state: COMPLETE
result: PASSED
}
}
Here's an SQL query I had to execute in Groovy:
def resultset_bio = sql.rows("SELECT author, isbn FROM Book WHERE genre = 'biography'")
I'm trying to convert this data into JSON. For that, I am using this code:
def json = new groovy.json.JsonBuilder()
json {
Biographies(resultset_bio.collect{[id: it]})
}
println json.toPrettyString()
The JSON output I expect should be like this:
{
"Biographies":
{
"SSS": ["XXX",456988]
}
}
But instead, I'm getting this:
{
"Biographies": [
{
"id": {
"author": "XXX",
"isbn": 456988,
}
}
]
}
How should I change my code? Please help.
Right now, id is being passed as a literal key.
Try:
json {
Biographies(resultset_bio.collect{[(it.id): it]})
}
You do not have the book title in your SELECT, so you can't build a mapping from title to book info.
To get each row's values as a list (each row is a map), grouped by author:
def authorBios = resultset_bio.groupBy { it.author }
def biographies = authorBios.collectEntries { author, row ->
[author, row*.values()]
}
json {
Biographies(biographies)
}