How can we structure an array of dictionaries in the Firebase Realtime Database? - json

I want to create the JSON structure below in the Firebase Realtime Database:
{
  "doner": [
    {
      "firstName": "Sandesh",
      "lastName": "Sardar",
      "location": [50.11, 8.68],
      "mobile": "100",
      "age": 21
    },
    {
      "firstName": "Akash",
      "lastName": "saw",
      "location": [50.85, 4.35],
      "mobile": "1200",
      "age": 22
    },
    {
      "firstName": "Sahil",
      "lastName": "abc",
      "location": [48.85, 2.35],
      "mobile": "325846",
      "age": 23
    },
    {
      "firstName": "ram",
      "lastName": "abc",
      "location": [46.2039, 6.1400],
      "mobile": "3257673",
      "age": 34
    }
  ]
}
But when I imported the file into the Firebase Realtime Database, it turned into a different structure, where each element is keyed by its array index.
I believe this is not an array of dictionaries, but a dictionary of dictionaries.
Is there any way to structure an array of dictionaries in Firebase?

The Firebase Realtime Database doesn't natively store arrays in the format you want. Instead it stores arrays as key-value pairs, with the key being the string representation of each item's index in the array.
When you read the data from Firebase (either through an SDK, or through the REST API), it converts this map back into an array.
So what you're seeing is the expected behavior, and there's no way to change it.
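For example, the doner array above ends up stored as a map keyed by the stringified indexes (only the first two entries shown here); note that the nested location arrays get converted the same way:
{
  "doner": {
    "0": {
      "firstName": "Sandesh",
      "lastName": "Sardar",
      "location": { "0": 50.11, "1": 8.68 },
      "mobile": "100",
      "age": 21
    },
    "1": {
      "firstName": "Akash",
      "lastName": "saw",
      "location": { "0": 50.85, "1": 4.35 },
      "mobile": "1200",
      "age": 22
    }
  }
}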
If you'd like to learn more about how Firebase deals with arrays, and why, I recommend checking out Kato's blog post here: Best Practices: Arrays in Firebase.
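As a rough sketch of the alternative that post recommends, you can let Firebase generate push IDs instead of array indexes (the databaseURL is a placeholder, and this uses the pre-v9 Firebase JavaScript SDK):
var firebase = require('firebase/app');
require('firebase/database');

firebase.initializeApp({ databaseURL: 'https://<your-project>.firebaseio.com' }); // placeholder URL

// each push() generates a unique, chronologically ordered key,
// so items behave like an ordered collection without array indexes
var donerRef = firebase.database().ref('doner');
donerRef.push({
  firstName: 'Sandesh',
  lastName: 'Sardar',
  location: [50.11, 8.68],
  mobile: '100',
  age: 21
});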

Related

AWS Glue - Crawl JSON file and insert into Redshift

Hi, I am trying to use AWS Glue to load an S3 file into Redshift. When I try to crawl a JSON file from my S3 bucket into a table, it doesn't seem to work: the result is a table with a single array column, as seen in the screenshot below. I have already tried using a JSON classifier with the path "$[*]", but that doesn't seem to work either. Any ideas?
The structure of the JSON file is as follows:
[
  {
    "firstname": "andrew",
    "lastname": "johnson",
    "subject": "Mathematics",
    "mark": 49
  },
  {
    "firstname": "mary",
    "lastname": "james",
    "subject": "Physics",
    "mark": ""
  },
  {
    "firstname": "Peter",
    "lastname": "Lloyd",
    "subject": "Soc. Studies",
    "mark": 89
  }
]
Below is a screenshot of the resulting table: a single array column that can't be mapped to the target table in Redshift.

How to parse nested JSON and write it to Redshift?

I have the following JSON structure:
{
  "firstname": "A",
  "lastname": "B",
  "age": 24,
  "address": {
    "streetAddress": "123",
    "city": "San Jone",
    "state": "CA",
    "postalCode": "394221"
  },
  "phonenumbers": [
    { "type": "home", "number": "123456789" },
    { "type": "mobile", "number": "987654321" }
  ]
}
I need to copy this JSON from S3 to a Redshift table.
I am currently using the COPY command with a JSON path file, but it loads the array as a single column.
I want the nested array to be parsed, so that the table looks like this:
firstname | lastname | age | streetaddress | city     | state | postalcode | type   | number
----------|----------|-----|---------------|----------|-------|------------|--------|----------
A         | B        | 24  | 123           | San Jone | CA    | 394221     | home   | 123456789
A         | B        | 24  | 123           | San Jone | CA    | 394221     | mobile | 987654321
Is there a way to do that?
You can use nested JSON paths by making use of a JSON path file. However, this does not work with the multiple phone number types.
If you can modify the dataset to have multiple records (one for mobile, one for home), then your path file would look similar to the below.
{
  "jsonpaths": [
    "$.firstname",
    "$.lastname",
    "$.age",
    "$.address.streetAddress",
    "$.address.city",
    "$.address.state",
    "$.address.postalCode",
    "$.phonenumbers[0].type",
    "$.phonenumbers[0].number"
  ]
}
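For completeness, a minimal sketch of the matching COPY command (the table, bucket, and role names here are hypothetical):
copy person_phones
from 's3://my-bucket/data/people.json'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
json 's3://my-bucket/jsonpaths/people_jsonpaths.json';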
If you are unable to change the format, you will need to perform an ETL step on load before the data can be consumed by Redshift. For this you could use an S3 object-creation event to trigger a Lambda function that performs the ETL for you before the data loads into Redshift.

Get by id from local JSON over HTTP

I have a fake users.json file and I can use http.get to list the JSON array.
I want to get a particular user by id, and since the data isn't stored in a database, I just use the fake JSON data:
[
  {
    "id": "cb55524d-1454-4b12-92a8-0437e8e6ede7",
    "name": "john",
    "age": "25",
    "country": "germany"
  },
  {
    "id": "ab55524d-1454-4b12-92a8-0437e8e6ede8",
    "name": "tom",
    "age": "28",
    "country": "canada"
  }
]
I can do this if the data is stored in a database, but I'm not sure how to proceed with the fake JSON data.
Any help is appreciated.
Thanks
If you just need the JSON as raw fake data, you can simply require it and use it as an object:
const jsonObj = require('path/to/file.json');
console.log(jsonObj[0].id); // <-- cb55524d-1454-4b12-92a8-0437e8e6ede7
Also, if you need a more dynamic solution, json-server is a good tool you can easily use for testing: check out its git repo.
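A rough sketch of that setup (json-server expects a top-level object whose keys become routes, so the array above would be wrapped under a "users" key; the file name and port are json-server's defaults):
// db.json — wrap the array shown above under a route key:
// { "users": [ /* the two user objects from above */ ] }

// start the server:
//   npx json-server --watch db.json

// fetch a single user by id:
//   GET http://localhost:3000/users/cb55524d-1454-4b12-92a8-0437e8e6ede7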
var _ = require('underscore');

var dummyJson = [
  {
    "id": "cb55524d-1454-4b12-92a8-0437e8e6ede7",
    "name": "john",
    "age": "25",
    "country": "germany"
  },
  {
    "id": "ab55524d-1454-4b12-92a8-0437e8e6ede8",
    "name": "tom",
    "age": "28",
    "country": "canada"
  }
];

// look up the object whose id matches the one we need
var requiredID = "cb55524d-1454-4b12-92a8-0437e8e6ede7";
var requiredObject = _.find(dummyJson, function (d) {
  return d.id === requiredID;
});
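On any modern Node.js version the same lookup also works without underscore, via the native Array.prototype.find:
var requiredObject = dummyJson.find(function (d) {
  return d.id === requiredID;
});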
Read users.json with fs.readFileSync() and parse it with JSON.parse(), storing the result in a variable users.
Loop through the array of users and, using an if condition on id, update the object if required.
Stringify the updated users array with JSON.stringify(users) and write that string back to users.json with fs.writeFileSync() in Node.js, so you will have the updated objects in your JSON file.
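A minimal Node.js sketch of that flow (the id and the updated field are placeholders):
const fs = require('fs');

// read and parse the fake data
const users = JSON.parse(fs.readFileSync('users.json', 'utf8'));

// find and update the matching user
for (const user of users) {
  if (user.id === 'cb55524d-1454-4b12-92a8-0437e8e6ede7') {
    user.age = '26'; // hypothetical update
  }
}

// persist the change back to the file
fs.writeFileSync('users.json', JSON.stringify(users, null, 2));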

Reading complex JSON data without iteration

I am working with data that is often nested, and I am required to perform CRUD operations based on the structure of the data I have. For instance, I have this JSON structure:
{
  "_id": "KnNLkJEhrDsvWedLu",
  "createdAt": {
    "$date": "2016-10-13T11:24:13.843Z"
  },
  "services": {
    "password": {
      "bcrypt": "$2a$30$1/cniPwPNCuwZ/MQDPQkLej..cAATkoGX.qD1TS4iHgf/pwZYE.j."
    },
    "email": {
      "verificationTokens": [
        {
          "token": "qxe_T9IS7jW7gntpK0Q7UQ35RJ9jO9m2lclnokO3z87",
          "address": "drwho@gmail.com",
          "when": {
            "$date": "2016-10-13T11:24:14.428Z"
          }
        }
      ]
    },
    "resume": {
      "loginTokens": []
    }
  },
  "username": "doctorwho",
  "emails": [
    {
      "address": "drwho@gmail.com",
      "verified": false
    }
  ],
  "persodata": {
    "lastlogin": {
      "$date": "2016-10-13T11:29:36.816Z"
    },
    "fname": "Doctor",
    "lname": "Who",
    "mobile": "+4480000000",
    "identity": "1",
    "email": "drwho@gmail.com",
    "gender": null
  }
}
I have several data sets with such a complex structure. I need to read the data, and also edit and delete it. Before I get to iteration, I was wondering how I can read the data without iteration, and then iterate only when I absolutely have to.
What rules should I keep in mind when reading such complex JSON structures, so that I can read any complex structure I come across?
I am currently using JavaScript, but I am looking for rules that apply in other languages as well.
Parsing JSON in JavaScript should be easy: http://www.json.org/js.html.
"Since JSON is a proper subset of JavaScript, the compiler will correctly parse the text and produce an object structure". Just follow the examples on that page.
If you want to use another language, in Java you could use Jackson or Gson to map those json strings to objects. Then using them becomes easy. Both libraries are annotation based, and wouldn't be difficult to implement.

Can we add an array of objects in Amazon CloudSearch in JSON format?

I am trying to create a domain and upload sample data that looks like this:
[
  {
    "type": "add",
    "id": "1371964",
    "version": 1,
    "lang": "eng",
    "fields": {
      "id": "1371964",
      "uid": "1200983280",
      "time": "2013-12-23 13:00:26",
      "orderid": "1200983280",
      "callerid": "66580662",
      "is_called": "1",
      "is_synced": "1",
      "is_sent": "1",
      "allcaller": [
        {
          "sno": "1085770",
          "uid": "1387783883.30547",
          "lastfun": null,
          "callduration": "00:00:46",
          "request_id": "1371964"
        }
      ]
    }
  }
]
When I upload the sample data while creating the domain, CloudSearch does not accept it.
If I remove the allcaller array, it accepts it smoothly.
If CloudSearch does not allow arrays of objects, how should I format this JSON?
Just found after searching the AWS forums: CloudSearch does not allow nested JSON (arrays of objects) :(
https://forums.aws.amazon.com/thread.jspa?messageID=405879&#405879
Time to try Elasticsearch.
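For reference, one common workaround (my suggestion, not from the forum thread) is to flatten each nested object into parallel multi-value fields, which CloudSearch does accept; the allcaller_* field names here are made up:
[
  {
    "type": "add",
    "id": "1371964",
    "version": 1,
    "lang": "eng",
    "fields": {
      "id": "1371964",
      "uid": "1200983280",
      "callerid": "66580662",
      "allcaller_sno": ["1085770"],
      "allcaller_uid": ["1387783883.30547"],
      "allcaller_callduration": ["00:00:46"]
    }
  }
]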