Node.js: Create Excel (csv/xlsx) buffer from JSON

I am looking for a library that will create an Excel file (csv and xlsx) from JSON.
I have an array of objects:
var arr = [
  {
    "name": "Ivy Dickson",
    "date": "2013-05-27T11:04:15-07:00",
    "number": 10
  },
  {
    "date": "2014-02-07T22:09:58-08:00",
    "name": "Walker Lynch",
    "number": 2
  },
  {
    "number": 5,
    "date": "2013-06-16T05:29:13-07:00",
    "name": "Maxwell U. Holden"
  },
  {
    "name": "Courtney Short",
    "date": "2014-03-14T07:32:34-07:00",
    "number": 6
  }
];
and I want to convert it into an Excel file buffer.
I am not sure which library is best for this use case.
Please suggest a suitable library.

There is the package json2xls; it is really easy to use, but it only converts JSON to an Excel (xlsx) file.
For converting JSON to CSV there is another package, json2csv.
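If all you need is a CSV buffer, you can also build one without any dependency. A minimal sketch, using the array from the question (the quoting logic here is hand-rolled for illustration; json2csv handles many more cases):

```javascript
// Build a CSV Buffer from an array of flat objects, with no dependencies.
const arr = [
  { name: "Ivy Dickson", date: "2013-05-27T11:04:15-07:00", number: 10 },
  { name: "Walker Lynch", date: "2014-02-07T22:09:58-08:00", number: 2 },
];

function toCsvBuffer(rows) {
  // Use the first object's keys as the header row.
  const headers = Object.keys(rows[0]);
  // Quote each cell and escape embedded quotes (RFC 4180 style).
  const escape = (v) => `"${String(v).replace(/"/g, '""')}"`;
  const lines = [
    headers.join(","),
    ...rows.map((row) => headers.map((h) => escape(row[h])).join(",")),
  ];
  return Buffer.from(lines.join("\n"), "utf8");
}

const buf = toCsvBuffer(arr);
console.log(buf.toString("utf8"));
```

The result is a Node Buffer you can write to disk or return from an HTTP response directly.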

How to import an image with the local path listed in a json file?

I am having a problem referencing an image in a JSON file with a local path.
I have a folder named Images and a Data JSON file, and I want the JSON file to reference my pictures locally.
{
  "name": "John",
  "id": "001",
  "price": "$995",
  "image": "./Images/picture1.jpg"
},
{
  "name": "John",
  "id": "001",
  "price": "$995",
  "image": "./Images/picture2.jpg"
},
{
  "name": "John",
  "id": "001",
  "price": "$995",
  "image": "./Images/picture1.jpg"
},
{
  "name": "John",
  "id": "001",
  "price": "$995",
  "image": "./Images/picture2.jpg"
},
Please tell me why this does not work.
JSON files only contain text, without any linking. So "./Images/picture2.jpg" is just a string, with no special meaning to your editor.
Some apps or editors might offer eye-candy features like the one you are looking for, but that is entirely a feature of the editor, not of JSON itself.
JSON files are plain text files that store key/value pairs.
I hope this helps.

Copy JSON Array data from REST data factory to Azure Blob as is

I used REST to get data from an API; the JSON output contains arrays. When I try to copy the JSON as is to Blob using a copy activity, I only get the first object's data and the rest is ignored.
The documentation says we can copy the JSON as is by skipping the schema section on both the dataset and the copy activity. I did exactly that and got the output below.
https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#export-json-response-as-is
I tried the copy activity without a schema, using the header as the first row, and writing the output files to Blob as .json and .txt.
Sample REST output:
{
  "totalPages": 500,
  "firstPage": true,
  "lastPage": false,
  "numberOfElements": 50,
  "number": 0,
  "totalElements": 636,
  "columns": {
    "dimension": {
      "id": "variables/page",
      "type": "string"
    },
    "columnIds": [
      "0"
    ]
  },
  "rows": [
    {
      "itemId": "1234",
      "value": "home",
      "data": [
        65
      ]
    },
    {
      "itemId": "1235",
      "value": "category",
      "data": [
        92
      ]
    }
  ],
  "summaryData": {
    "totals": [
      157
    ],
    "col-max": [
      123
    ],
    "col-min": [
      1
    ]
  }
}
The Blob output as text is below, which contains only the first object's data:
totalPages,firstPage,lastPage,numberOfElements,number,totalElements
500,True,False,50,0,636
If you want to write the JSON response as is, you can use the HTTP connector. However, please note that the HTTP connector doesn't support pagination.
If you want to keep using the REST connector and write a CSV file as output, can you please specify how you want the nested objects and arrays to be written?
CSV files cannot represent arrays. You could always use a custom activity or an Azure Function activity to call the REST API, parse the response the way you want, and write it to a CSV file.
Hope this helps.
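As a sketch of the parse-and-flatten step such an Azure Function (Node.js) could perform, assuming the sample response above: each entry of rows becomes one CSV line, and the data array is joined into a single cell (the column layout here is a made-up choice for illustration):

```javascript
// Flatten the "rows" array of the sample REST response into CSV text.
// `response` is trimmed to the fields this sketch actually uses.
const response = {
  totalPages: 500,
  rows: [
    { itemId: "1234", value: "home", data: [65] },
    { itemId: "1235", value: "category", data: [92] },
  ],
};

function rowsToCsv(resp) {
  const header = "itemId,value,data";
  // Join multi-valued "data" with ";" so each row stays a single CSV line.
  const lines = resp.rows.map(
    (r) => `${r.itemId},${r.value},${r.data.join(";")}`
  );
  return [header, ...lines].join("\n");
}

console.log(rowsToCsv(response));
// itemId,value,data
// 1234,home,65
// 1235,category,92
```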

Read first line of huge Json file with Spark using Pyspark

I'm pretty new to Spark, and to teach myself I have been using small JSON files, which work perfectly. I'm using PySpark with Spark 2.2.1. However, I don't understand how to read in a single line of data instead of the entire JSON file. Documentation on this seems scarce. I have to process a single large (larger than my RAM) JSON file (a Wikidata dump: https://archive.org/details/wikidata-json-20150316) and want to do this in chunks or line by line. I thought Spark was designed to do just that, but I can't figure out how, and when I naively request the top 5 observations I run out of memory. I have tried an RDD:
SparkRDD= spark.read.json("largejson.json").rdd
SparkRDD.take(5)
and a DataFrame:
SparkDF= spark.read.json("largejson.json")
SparkDF.show(5,truncate = False)
So in short:
1) How do I read in just a fraction of a large JSON file? (Show the first 5 entries.)
2) How do I filter a large JSON file line by line to keep just the required results?
Also: I don't want to predefine the data schema for this to work.
I must be overlooking something.
Thanks
Edit: With some help I have gotten a look at the first observation, but by itself it is already too large to post here, so I'll just include a fraction of it:
[
  {
    "id": "Q1",
    "type": "item",
    "aliases": {
      "pl": [{
        "language": "pl",
        "value": "kosmos"
      }, {
        "language": "pl",
        "value": "\u015bwiat"
      }, {
        "language": "pl",
        "value": "natura"
      }, {
        "language": "pl",
        "value": "uniwersum"
      }],
      "en": [{
        "language": "en",
        "value": "cosmos"
      }, {
        "language": "en",
        "value": "The Universe"
      }, {
        "language": "en",
        "value": "Space"
      }],
      ...etc
That's very similar to Select only first line from files under a directory in pyspark, hence something like this should work:
def read_firstline(filename):
    with open(filename, 'rb') as f:
        return f.readline()

# files is a list of filenames; sc is the SparkContext
# use map rather than flatMap: flatMap would iterate over the bytes
# of each returned line instead of keeping one line per file
rdd_of_firstlines = sc.parallelize(files).map(read_firstline)

Accessing JSON array within JSON object in VBA Excel using VBA-JSON

I am working on VBA macros to access live data from a website through API calls. I am using VBA-JSON library for it. There are mainly two scenarios when fetching data:
Scenario #1 (we can see data is a JSON array):
{
  "success": true,
  "data": [
    {
      "id": 4,
      "creator_user_id": {
        "id": 162,
        "name": "ASD",
        "email": "email id"
      },
      "user_id": {
        "id": 787878,
        "name": "XYZ",
        "email": "email id"
      },
      ...
    }
  ]
}
In scenario 1 I can fetch the values within the data array by using a For Each loop in Excel VBA through the VBA-JSON library.
Scenario #2:
{
  "success": true,
  "data": {
    "id": 123,
    "name": "ABC",
    "options": [
      {
        "id": 119,
        "label": "Name1"
      },
      {
        "id": 120,
        "label": "Name2"
      }
    ],
    "mandatory_flag": false
  }
}
But in the 2nd scenario I cannot access the data within the options array, because the JSON array is nested inside a JSON object. For example, for the entry with id 120 I want to read the value of label, which should return "Name2".
I have tried many ways but cannot get it. Can anyone tell me how this can be done in Excel VBA?
Any help appreciated.
Using the VBA-JSON library, you can access the value you want by calling:
Set Parsed = JsonConverter.ParseJson(<yourjsonstring>)
Debug.Print Parsed("data")("options")(2)("label")
Note that VBA-JSON parses arrays into VBA Collections, which are 1-based, so (2) selects the second element of options and prints "Name2".

Can we add array of objects in amazon cloudsearch in json format?

I am trying to create a domain and upload sample data like this:
[
  {
    "type": "add",
    "id": "1371964",
    "version": 1,
    "lang": "eng",
    "fields": {
      "id": "1371964",
      "uid": "1200983280",
      "time": "2013-12-23 13:00:26",
      "orderid": "1200983280",
      "callerid": "66580662",
      "is_called": "1",
      "is_synced": "1",
      "is_sent": "1",
      "allcaller": [
        {
          "sno": "1085770",
          "uid": "1387783883.30547",
          "lastfun": null,
          "callduration": "00:00:46",
          "request_id": "1371964"
        }
      ]
    }
  }
]
When I upload this sample data while creating the domain, CloudSearch does not accept it.
If I remove the allcaller array, it accepts the data smoothly.
If CloudSearch does not allow arrays of objects, how should I format this JSON?
Just found after searching the AWS forums: CloudSearch does not allow nested JSON (arrays of objects) :(
https://forums.aws.amazon.com/thread.jspa?messageID=405879&#405879
Time to try Elasticsearch.
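Before switching engines, one workaround (assuming parallel multi-value fields are acceptable for your searches) is to flatten each array of objects into separate arrays of scalars, since CloudSearch does accept arrays of plain values. A sketch in Node; the derived field names like allcaller_sno are made up for illustration:

```javascript
// Flatten an array-of-objects field into parallel arrays of scalars,
// e.g. allcaller -> allcaller_sno, allcaller_callduration, ...
function flattenArrayField(fields, key) {
  const flat = { ...fields };
  const items = flat[key] || [];
  delete flat[key]; // remove the nested array CloudSearch rejects
  for (const item of items) {
    for (const [k, v] of Object.entries(item)) {
      const flatKey = `${key}_${k}`;
      (flat[flatKey] = flat[flatKey] || []).push(v);
    }
  }
  return flat;
}

// Trimmed version of the "fields" object from the question.
const fields = {
  id: "1371964",
  allcaller: [
    { sno: "1085770", callduration: "00:00:46", request_id: "1371964" },
  ],
};

console.log(flattenArrayField(fields, "allcaller"));
```

The positional correspondence between the parallel arrays is preserved, though you lose the ability to query one caller as a unit, which is where Elasticsearch's nested documents genuinely help.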