I'm trying to import a JSON file (titled 'filename.json') into my Firebase database using 'Import JSON' under 'Database.'
However, I am getting an 'Invalid JSON file' error.
The following is the structure of the JSON I wish to import. Can you please help me figure out where I am going wrong:
{
"checklist": "XXX",
"notes": ""
}
{ "checklist": "XXX",
"notes": ""
}
{
"checklist": "XXX",
"notes": ""
}
{
"checklist": "XXX",
"notes": ""
}
Your objects need commas between them. Basically, after every } here (except the last one), add a comma. Then wrap the whole thing in [ ] so it's a valid JSON array.
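For illustration, a minimal Python check (the data values are the placeholders from the question) showing that the comma-joined, array-wrapped version parses cleanly:

```python
import json

# The four objects from the question, joined with commas and wrapped in [].
fixed = """
[
  {"checklist": "XXX", "notes": ""},
  {"checklist": "XXX", "notes": ""},
  {"checklist": "XXX", "notes": ""},
  {"checklist": "XXX", "notes": ""}
]
"""

data = json.loads(fixed)  # raises an error on invalid JSON; succeeds here
print(len(data))  # 4
```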
I'm playing around with Python and JSON files. I'm doing a simple game as a learning project, but I can't fetch a nested key from a list when I want to. In the example below I'm trying to get the name of the player.
This is the JSON file (player_sheet_daniel.json):
[
{
"sheet_header": {
"player name": "Daniel",
"character name": "Ulrik the Blob"
}
},
{
"prim_attr": {
"STR": "11",
"DEX": "12",
"HP": "15",
"SKI": "16"
}
}
]
I've tried:
import json
with open('player_sheet_daniel.json', 'r') as sheet_json:
    sheet_py = json.load(sheet_json)

for section in sheet_py:
    print(section['sheet_header']['player name'])
I get: KeyError: 'sheet_header'.
Your JSON example is an array which wraps two objects, and only the first object has a 'sheet_header' key, so the loop fails on the second element. To read just the first object, the correct Python syntax would be:
import json
with open('player_sheet_daniel.json', 'r') as sheet_json:
    sheet_py = json.load(sheet_json)

section = sheet_py[0]
print(section['sheet_header']['player name'])
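If you want to keep the loop from the question instead, one option (a sketch, with the file's data inlined so it runs standalone) is to skip sections that don't contain the key:

```python
# The parsed list from player_sheet_daniel.json, inlined here for a
# self-contained sketch.
sheet_py = [
    {"sheet_header": {"player name": "Daniel", "character name": "Ulrik the Blob"}},
    {"prim_attr": {"STR": "11", "DEX": "12", "HP": "15", "SKI": "16"}},
]

names = []
for section in sheet_py:
    if "sheet_header" in section:  # only the first dict has this key
        names.append(section["sheet_header"]["player name"])

print(names)  # ['Daniel']
```

This avoids the KeyError because the second object in the array only has a 'prim_attr' key.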
I am able to index simple JSON using Solr, but for complex JSON with nested structures like the one below I get an error. I am using curl to index the JSON file:
curl 'https://localhost:8983/solr/json_collection/update?commit=true' --data-binary @/home/mic.json -H 'Content-type:application/json'
Error:
Error - {"responseHeader":{"status":400,"QTime":12},"error":{"metadata":["error-class","org.apache.solr.common.SolrException"],"msg":"Error parsing JSON field value. Unexpected OBJECT_START","code":400}}
JSON:
[
{
"PART I, ITEM 1. BUSINESS": {
"GENERAL": {
"Our vision": {
"text": [
"Microsoft world."
]
},
"The ambitions that drive us": {
"text": [
"To carry ambitions:",
"* Create more personal computing."
],
"Create more personal computing": {
"text": [
"We strive available. website."
]
}
}
},
"ITEM 1A. RISK FACTORS": "Our opk."
}
}
]
Your JSON seems to be erroneous. Whether you send a single object or an array, your JSON should follow basic conventions.
For a single object, the syntax should be:
{ "key": "value" }
For an object whose fields hold arrays of values, the syntax can be:
{
  "key1": ["value1", "value2", ...],
  "key2": ["value12", "value22", ...]
}
I have a huge .json file like the one below. I want to convert that JSON to a DataFrame in Spark.
{
"movie": {
"id": 1,
"name": "test"
}
}
When I execute the following code, I get a _corrupt_record error:
val df = sqlContext.read.json("example.json")
df.first()
Lately I learned that Spark only supports one-line JSON files, like:
{ "movie": { "id": 1, "name": "test test" } }
How can I convert JSON text from multiple lines to a single line?
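One way to do the conversion (a sketch in Python, with the file contents inlined so it runs standalone) is to parse the text and re-serialize it without whitespace:

```python
import json

# Pretty-printed JSON like example.json, inlined for a self-contained demo.
multi_line = """{
  "movie": {
    "id": 1,
    "name": "test"
  }
}"""

# Re-serialize on one line so line-oriented JSON readers see a single record.
single_line = json.dumps(json.loads(multi_line), separators=(",", ":"))
print(single_line)  # {"movie":{"id":1,"name":"test"}}
```

Newer Spark versions can also read pretty-printed files directly with spark.read.option("multiLine", true).json("example.json"), which avoids the conversion step entirely.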
I have the following JSON and I have to import it into BigQuery. What schema should I specify for the JSON below? What should be the field names of the table? I am using the BigQuery WebUI.
{
"users": {
"userid1mohan": {
"password": "123456",
"username": "mohan"
},
"userid2kutubuddin": {
"password": "234567",
"username": "kutubuddin"
},
"userid3pankaj": {
"password": "345678",
"username": "pankaj"
},
"userid4vivek": {
"password": "456789",
"username": "vivek"
}
}
}
Please note that BigQuery will easily ingest CSVs and newline delimited JSONs, but not a plain JSON file like the one provided in the question.
Find a specification on the newline delimited JSON format here: http://dataprotocols.org/ndjson/
For a use case like this one, the NDJSON would need to look like:
{"userid":"userid1mohan","username":"mohan","password":"123456"}
{"userid":"userid2kutubuddin","username":"kutubuddin","password":"234567"}
{"userid":"userid3pankaj","username":"pankaj","password":"345678"}
{"userid":"userid4vivek","username":"vivek","password":"456789"}
So you'll need to transform the JSON object you have into multiple JSON objects, one per line, before ingesting it into BigQuery.
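As an illustration (a Python sketch; the field names follow the question's data, and keeping each userid key as its own column is a design choice, not a BigQuery requirement), the transformation could look like:

```python
import json

# The nested object from the question.
users_doc = {
    "users": {
        "userid1mohan": {"password": "123456", "username": "mohan"},
        "userid2kutubuddin": {"password": "234567", "username": "kutubuddin"},
        "userid3pankaj": {"password": "345678", "username": "pankaj"},
        "userid4vivek": {"password": "456789", "username": "vivek"},
    }
}

# One JSON object per line, promoting each userid key to a field.
lines = [
    json.dumps({"userid": uid, **fields})
    for uid, fields in users_doc["users"].items()
]
print("\n".join(lines))
```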
Should be a no-brainer, but I can't seem to access the elements returned from Newtonsoft's JSON deserializer.
Example json:
{
"ns0:Test": {
"xmlns:ns0": "http:/someurl",
"RecordCount": "6",
"Record": [{
"aaa": "1",
"bbb": "2",
},
{
"aaa": "1",
"bbb": "2",
}]
}
}
var result = Newtonsoft.Json.JsonConvert.DeserializeObject<dynamic>(somestring);
If I strip the JSON down to just the Record content, I can access the data without issue, i.e. result.RecordCount.
If I leave the JSON as shown above, can someone enlighten me on how to access RecordCount?
All inputs appreciated. Thanks!
For those JSON properties that have punctuation characters or spaces (such that they cannot be made into valid C# property names), you can use square bracket syntax to access them.
Try this:
int count = result["ns0:Test"].RecordCount;