Convert JSON to XLS and then get the data back to JSON

I have a JSON example that I would like to transform into an Excel file so that I can modify all the fields, and then export the Excel file to get back the updated JSON.
I tried some online tools such as http://www.convertcsv.com/csv-to-json.htm, but the result is not good: I am able to create a CSV file, but not able to convert the CSV file back to JSON.
Do you know a tool that can convert to CSV and then back to JSON?
JSON example:
[
  {
    "key": "keyExample",
    "type": "typeExample",
    "ref": "refExample",
    "items": [
      {
        "itemRef": "aaa",
        "count": 1,
        "desc": "aaaaaaaaa"
      },
      {
        "itemRef": "bbb",
        "count": 2,
        "desc": "bbbbbbb"
      },
      {
        "itemRef": "ccc",
        "count": 2,
        "desc": "ccccccc"
      }
    ]
  },
  {
    "key": "keyExample2",
    "type": "typeExample2",
    "ref": "refExample2",
    "items": [
      {
        "itemRef": "aaa",
        "count": 1,
        "desc": "aaaaaaaaa"
      },
      {
        "itemRef": "bbb",
        "count": 2,
        "desc": "bbbbbbb"
      },
      {
        "itemRef": "ccc",
        "count": 2,
        "desc": "ccccccc"
      }
    ]
  }
]
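A minimal Python sketch (not part of the original question) of one way to do this round trip with pandas: flatten the items into one row per item for Excel, then rebuild the nested JSON after editing. The file names data.json and data.xlsx are placeholders, and writing .xlsx assumes openpyxl is installed.

import json
import pandas as pd

# JSON -> Excel: one row per item, with the parent fields repeated on each row.
with open("data.json") as f:
    data = json.load(f)

df = pd.json_normalize(data, record_path="items", meta=["key", "type", "ref"])
df.to_excel("data.xlsx", index=False)  # needs the openpyxl package

# ...edit data.xlsx, keeping the key/type/ref columns intact...

# Excel -> JSON: group the rows back by the parent fields and nest the items.
df = pd.read_excel("data.xlsx")
result = [
    {
        "key": key,
        "type": typ,
        "ref": ref,
        "items": group[["itemRef", "count", "desc"]].to_dict(orient="records"),
    }
    for (key, typ, ref), group in df.groupby(["key", "type", "ref"], sort=False)
]

with open("data_updated.json", "w") as f:
    # default=int converts any NumPy integers pandas may hand back
    json.dump(result, f, indent=2, default=int)

Repeating key, type and ref on every row is what lets groupby reassemble the nesting after the spreadsheet has been edited.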

Related

How can I build a JSON payload from a CSV file that has string data separated by line, using Python?

As an overview, let's say I have a CSV file that has 5 entries of data (the real CSV file will have a large number of entries) that I need to use dynamically while building the JSON payload with Python (in Databricks).
test.csv
1a2b3c
2n3m6g
333b4c
2m345j
123abc
payload.json
{
  "records": {
    "id": "37c8323c",
    "names": [
      {
        "age": "1",
        "identity": "Dan",
        "powers": {
          "key": "plus",
          "value": "1a2b3c"
        }
      },
      {
        "age": "2",
        "identity": "Jones",
        "powers": {
          "key": "minus",
          "value": "2n3m6g"
        }
      },
      {
        "age": "3",
        "identity": "Kayle",
        "powers": {
          "key": "multiply",
          "value": "333b4c"
        }
      },
      {
        "age": "4",
        "identity": "Donnis",
        "powers": {
          "key": "divide",
          "value": "2m345j"
        }
      },
      {
        "age": "5",
        "identity": "Layla",
        "powers": {
          "key": "power",
          "value": "123abc"
        }
      }
    ]
  }
}
The payload above is what I need to construct, with multiple objects in the names array, and I would like the value property to be read dynamically from the CSV file.
I basically need to append the JSON object below to the existing names array, taking the value for the powers object from the CSV file.
{
  "age": "1",
  "identity": "Dan",
  "powers": {
    "key": "plus",
    "value": "1a2b3c"
  }
}
Since I'm a newbie in Python, any guidance would be appreciated. Thanks to the Stack Overflow team in advance.
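A minimal Python sketch (not from the original thread) of one way to build the payload from test.csv. The CSV only supplies the value field, so the identity and powers.key entries below are placeholders to be replaced with wherever those really come from.

import csv
import json

# Read the one-column CSV (no header) into a list of values.
with open("test.csv", newline="") as f:
    values = [row[0] for row in csv.reader(f) if row]

# Append one object to "names" per CSV value.
payload = {"records": {"id": "37c8323c", "names": []}}
for i, value in enumerate(values, start=1):
    payload["records"]["names"].append({
        "age": str(i),
        "identity": "<identity>",      # placeholder
        "powers": {
            "key": "<key>",            # placeholder
            "value": value,            # taken from the CSV
        },
    })

with open("payload.json", "w") as f:
    json.dump(payload, f, indent=2)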

Parsing Git JSON with JsonPath expressions

I am taking a GitHub JSON file and parsing it with the Java JsonPath library. I am having a problem parsing arrays that do not have labels.
I need to send an email every time a particular file is changed in our repository.
Here is the Git JSON:
{
  "trigger": "push",
  "payload": {
    "type": "GitPush",
    "before": "xxxxxxxx",
    "after": "yyyyyyyy",
    "branch": "branch-name",
    "ref": "refs/heads/branch-name",
    "repository": {
      "id": 42,
      "name": "repo",
      "title": "repo",
      "type": "GitRepository"
    },
    "beanstalk_user": {
      "type": "Owner",
      "id": 42,
      "login": "username",
      "email": "user@example.org",
      "name": "Name Surname"
    },
    "commits": [
      {
        "type": "GitCommit",
        "id": "ffffffff",
        "message": "Important changes.",
        "branch": "branch-name",
        "author": {
          "name": "Name Surname",
          "email": "user@example.org"
        },
        "beanstalk_user": {
          "type": "Owner",
          "id": 42,
          "login": "username",
          "email": "user@example.org",
          "name": "Name Surname"
        },
        "changed_files": {
          "added": [
            "NEWFILE"
          ],
          "deleted": [
            "Gemfile",
            "NEWFILE"
          ],
          "modified": [
            "README.md",
            "NEWFILE"
          ],
          "copied": []
        },
        "changeset_url": "https://subdomain.github.com/repository-name/changesets/ffffffff",
        "committed_at": "2014/08/18 13:30:29 +0000",
        "parents": [
          "afafafaf"
        ]
      }
    ]
  }
}
This is the expression I am using to get the commits:
$..changed_files
This returns the whole changed_files part, but I cannot explicitly select the name "NEWFILE".
I tried
$..changed_files.*[?(@.added == "NEWFILE")]
$..changed_files.*[?(@.* == "NEWFILE")]
It just returns an empty array.
I just want it to return NEWFILE and the type of change. Any ideas?
You can use the following JsonPath to retrieve the commits which list "NEWFILE" as an added file:
$.payload.commits[?(@.changed_files.added.indexOf("NEWFILE") != -1)]
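If a plain-Python check is an acceptable alternative to JsonPath, here is a minimal sketch (assuming the webhook payload above is saved as payload.json) that reports which kinds of change touched NEWFILE in each commit:

import json

with open("payload.json") as f:
    event = json.load(f)

# For each commit, collect the change types whose file lists contain NEWFILE.
for commit in event["payload"]["commits"]:
    changed = commit["changed_files"]
    kinds = [kind for kind in ("added", "deleted", "modified", "copied")
             if "NEWFILE" in changed.get(kind, [])]
    if kinds:
        print(commit["id"], kinds)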

How to convert CSV to a nested JSON structure

I have a JSON file that stores my data, and I convert it to CSV to edit the data. But when I convert it back to JSON, the structure is lost. How can I convert my CSV back into the same structure as my original JSON?
JSON
{
  "product": [
    {
      "id": "item0001",
      "category": "12",
      "name": "Name1",
      "tag": "tag1",
      "more": [
        {
          "id": "1",
          "name": "AL"
        },
        {
          "id": "1",
          "name": "BS"
        }
      ],
      "active": true
    },
    {
      "id": "item0002",
      "categoryId": "13",
      "name": "Name2",
      "tag": "tag2",
      "size": "2",
      "more": [
        {
          "id": "2",
          "name": "DL"
        },
        {
          "id": "2",
          "name": "AS"
        }
      ],
      "active": true
    }
  ]
}
CSV
id,categoryId,name,shortcut,more/0/optionId,more/0/price,more/1/optionId,more/1/price,active,more/2/optionId,more/2/price,spanSize
item0001,ab92d2c6-010e-4182-844d-65050e746617,Name1,Shortcut1,1,60,1,70,TRUE,,,
item0002,ab92d2c6-010e-4182-844d-65050e746617,Name2,Shortcut2,2,60,2,70,TRUE,2,2,4
You can use Miller (mlr) to convert your file both ways:
https://miller.readthedocs.io/en/latest/flatten-unflatten/
First from JSON to CSV:
mlr --ijson --ocsv cat test.json > test.csv
Then edit the CSV (VisiData is a very nice command-line tool for the job), and then convert it back to JSON:
mlr --icsv --ojson cat test.csv > test_v2.json
If you want JSON Lines output instead, use --ojsonl.
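If you would rather do the same flatten/unflatten step in Python instead of Miller, here is a rough sketch of the idea (not Miller's implementation); unlike Miller it reads every CSV cell back as a string, so numbers and booleans would still need casting afterwards.

import csv
import json

def flatten(obj, prefix=""):
    # Flatten nested dicts/lists into one dict with slash-separated keys.
    flat = {}
    if isinstance(obj, dict):
        for key, val in obj.items():
            flat.update(flatten(val, f"{prefix}{key}/"))
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            flat.update(flatten(val, f"{prefix}{i}/"))
    else:
        flat[prefix[:-1]] = obj
    return flat

def listify(node):
    # Turn dicts whose keys are all digits back into lists.
    if isinstance(node, dict):
        node = {k: listify(v) for k, v in node.items()}
        if node and all(k.isdigit() for k in node):
            return [node[k] for k in sorted(node, key=int)]
    return node

def unflatten(row):
    # Rebuild a nested structure from a flat dict with slash-separated keys.
    root = {}
    for path, value in row.items():
        if value in ("", None):
            continue  # ragged rows leave empty cells; skip them
        parts = path.split("/")
        node = root
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return listify(root)

# JSON -> CSV
with open("test.json") as f:
    products = json.load(f)["product"]
rows = [flatten(p) for p in products]
fieldnames = sorted({key for row in rows for key in row})
with open("test.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)

# ...edit test.csv...

# CSV -> JSON
with open("test.csv", newline="") as f:
    rebuilt = {"product": [unflatten(row) for row in csv.DictReader(f)]}
with open("test_v2.json", "w") as f:
    json.dump(rebuilt, f, indent=2)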

Convert nested JSON to CSV for a Sheets JSON API

I want to convert my JSON to CSV so that I can upload it to Google Sheets and serve it as a JSON API. Whenever the data changes, I will just change it in Google Sheets. But I'm having problems converting my JSON file to CSV because the conversion changes the variables. I'm using https://toolslick.com/csv-to-json-converter to convert my JSON file to CSV.
What is the best way to convert nested JSON to CSV?
JSON
{
  "options": [
    {
      "id": "1",
      "value": "Jumbo",
      "shortcut": "J",
      "textColor": "#FFFFFF",
      "backgroundColor": "#00000"
    },
    {
      "id": "2",
      "value": "Hot",
      "shortcut": "D",
      "textColor": "#FFFFFF",
      "backgroundColor": "#FFFFFF"
    }
  ],
  "categories": [
    {
      "id": "1",
      "order": 1,
      "name": "First Category",
      "active": true
    },
    {
      "id": "2",
      "order": 2,
      "name": "Second Category",
      "shortcut": "MT",
      "active": true
    }
  ],
  "products": [
    {
      "id": "03c6787c-fc2a-4aa8-93a3-5e0f0f98cfb2",
      "categoryId": "1",
      "name": "First Product",
      "shortcut": "First",
      "options": [
        {
          "optionId": "1",
          "price": 23
        },
        {
          "optionId": "2",
          "price": 45
        }
      ],
      "active": true
    },
    {
      "id": "e8669cea-4c9c-431c-84ba-0b014f0f9bc2",
      "categoryId": "2",
      "name": "Second Product",
      "shortcut": "Second",
      "options": [
        {
          "optionId": "1",
          "price": 11
        },
        {
          "optionId": "2",
          "price": 20
        }
      ],
      "active": true
    }
  ],
  "discounts": [
    {
      "id": "1",
      "name": "S",
      "type": 1,
      "amount": 20,
      "active": true
    },
    {
      "id": "2",
      "name": "P",
      "type": 1,
      "amount": 20,
      "active": true
    },
    {
      "id": "3",
      "name": "G",
      "type": 2,
      "amount": 5,
      "active": true
    }
  ]
}
Using Python, this can easily be done, or almost done. Maybe this code will help you understand the approach.
import csv
import json

# The JSON above is a single nested document, so load it in one go;
# json.loads per line only works for JSON Lines files.
with open('your_json_file_here.json') as file:
    data = json.load(file)

# Write the chosen columns of each record to CSV. The fieldnames and record
# keys below are placeholders; nested parts such as "options" would need to
# be flattened separately.
with open('create_new_file.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['header1', 'header2'])
    writer.writeheader()
    for record in data['products']:
        writer.writerow({'header1': record['specific_col_name1'],
                         'header2': record['specific_col_name2']})

How to Index & Search Nested JSON in Solr 4.9.0

I want to index and search nested JSON in Solr. Here is my JSON:
{
  "id": "44444",
  "headline": "testing US",
  "generaltags": [
    {
      "type": "person",
      "name": "Jayalalitha",
      "relevance": "0.334",
      "count": 1
    },
    {
      "type": "person",
      "name": "Kumar",
      "relevance": "0.234",
      "count": 1
    }
  ],
  "socialtags": {
    "type": "SocialTag",
    "name": "US",
    "importance": 2
  },
  "topic": {
    "type": "Topic",
    "name": "US",
    "score": "0.936"
  }
}
When I try to index it, I get the error "Error parsing JSON field value. Unexpected OBJECT_START".
When we tried to use a multivalued field and index it that way, we were not able to search on the multivalued field; it returns "undefined field".
Also, please advise whether I need to make any changes in the schema.xml file.
You are nesting child documents within your document. You need to use the proper syntax for nested child documents in JSON:
[
  {
    "id": "1",
    "title": "Solr adds block join support",
    "content_type": "parentDocument",
    "_childDocuments_": [
      {
        "id": "2",
        "comments": "SolrCloud supports it too!"
      }
    ]
  },
  {
    "id": "3",
    "title": "Lucene and Solr 4.5 is out",
    "content_type": "parentDocument",
    "_childDocuments_": [
      {
        "id": "4",
        "comments": "Lots of new features"
      }
    ]
  }
]
Have a look at this article which describes JSON child documents and block joins.
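For reference, a minimal Python sketch (not part of the original answer) that posts the first parent/child pair above to Solr's JSON update handler. The collection name demo and the localhost URL are assumptions to adapt to your setup.

import requests

# Parent document with a nested child, using the _childDocuments_ syntax.
docs = [
    {
        "id": "1",
        "title": "Solr adds block join support",
        "content_type": "parentDocument",
        "_childDocuments_": [
            {"id": "2", "comments": "SolrCloud supports it too!"}
        ],
    }
]

resp = requests.post(
    "http://localhost:8983/solr/demo/update",
    params={"commit": "true"},
    json=docs,  # sent with Content-Type: application/json
)
resp.raise_for_status()
print(resp.json())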
Using the format mentioned by @qux, you will face "Expected: OBJECT_START but got ARRAY_START at [16]" with "code": 400, because JSON starting with [...] is parsed as a JSON array.
{
  "id": "44444",
  "headline": "testing US",
  "generaltags": [
    {
      "type": "person",
      "name": "Jayalalitha",
      "relevance": "0.334",
      "count": 1
    },
    {
      "type": "person",
      "name": "Kumar",
      "relevance": "0.234",
      "count": 1
    }
  ],
  "socialtags": {
    "type": "SocialTag",
    "name": "US",
    "importance": 2
  },
  "topic": {
    "type": "Topic",
    "name": "US",
    "score": "0.936"
  }
}
The above format is correct.
Regarding searching, kindly use the index to search for the elements of the JSON array.
A workaround can be to keep the whole JSON object inside another JSON object and then index it.
I was suggesting keeping all the data inside another JSON object. You can try the following:
{
  "data": [
    {
      "id": "44444",
      "headline": "testing US",
      "generaltags": [
        {
          "type": "person",
          "name": "Jayalalitha",
          "relevance": "0.334",
          "count": 1
        },
        {
          "type": "person",
          "name": "Kumar",
          "relevance": "0.234",
          "count": 1
        }
      ],
      "socialtags": {
        "type": "SocialTag",
        "name": "US",
        "importance": 2
      },
      "topic": {
        "type": "Topic",
        "name": "US",
        "score": "0.936"
      }
    }
  ]
}
See the syntax at http://yonik.com/solr-nested-objects/:
$ curl http://localhost:8983/solr/demo/update?commitWithin=3000 -d '
[
{id : book1, type_s:book, title_t : "The Way of Kings", author_s : "Brandon Sanderson",
cat_s:fantasy, pubyear_i:2010, publisher_s:Tor,
_childDocuments_ : [
{ id: book1_c1, type_s:review, review_dt:"2015-01-03T14:30:00Z",
stars_i:5, author_s:yonik,
comment_t:"A great start to what looks like an epic series!"
}
,
{ id: book1_c2, type_s:review, review_dt:"2014-03-15T12:00:00Z",
stars_i:3, author_s:dan,
comment_t:"This book was too long."
}
]
}
]'
Supported since Solr 5.3.
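Once the parent/child blocks are indexed, they are usually searched with Solr's block join parent query parser. A minimal Python sketch against the curl example above (the core name demo and the field names come from that example; the rest is an assumption, not part of the original answer):

import requests

# Return parent (book) documents whose child review mentions "epic".
resp = requests.get(
    "http://localhost:8983/solr/demo/select",
    params={
        "q": '{!parent which="type_s:book"}comment_t:epic',
        "wt": "json",
    },
)
resp.raise_for_status()
for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("title_t"))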