Invalid JSON syntax error in configuration file on Homebridge

{
  "bridge": {
    "name": "Homebridge F8F5",
    "username": "0E:8F:12:8D:F8:F5",
    "port": 51739,
    "pin": "670-48-238"
  },
  "accessories": [
  ],
  "platforms": [
    {
      "name": "Config",
      "port": 8581,
      "platform": "config"
    }
  ]
}{
  "accessories": [
    {
      "name": "Roku",
      "accessory": "Roku",
      "ip": "http://10.204.1.238:8060",
    }
I am getting an error when I try to run this config file in Homebridge. What am I doing wrong? When I try to submit it through the web interface, it will not accept it and says "Config JSON error: invalid json syntax". Any help will be welcome! I ran it through an online JSON error finder and it narrowed the problem down to this snippet.

Ummm... looks like you tried to edit this file without knowing the basic concepts of JSON.
Start by reading JSON - Introduction on W3Schools.com
Also, if you're not sure, use an online JSON validator. Use your favorite search engine to look for "JSON cleaner". (I use the JSON Formatter & Validator at Curious Concept.)
Off the bat I can see a few issues with the JSON you provided.
the "}{" string ... what's that for? JSON cannot parse that ... either add "," between (if you wanted a new set) or (in this case) remove it.
you have two "accessories". JSON usually get parsed into an object or array ... one cannot have duplicates on the ket names. (In this case) remove the first one.
the second "accessories" array (opened with "[") is never closed (no "]")
the second object (opened with "{") is never closed (no "}")
there is a trailing comma after the "ip" value; JSON does not allow a comma right before a closing brace
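Putting those fixes together, the whole file should be a single object, with the Roku accessory merged into the existing "accessories" array (values copied straight from your snippet):
{
  "bridge": {
    "name": "Homebridge F8F5",
    "username": "0E:8F:12:8D:F8:F5",
    "port": 51739,
    "pin": "670-48-238"
  },
  "accessories": [
    {
      "name": "Roku",
      "accessory": "Roku",
      "ip": "http://10.204.1.238:8060"
    }
  ],
  "platforms": [
    {
      "name": "Config",
      "port": 8581,
      "platform": "config"
    }
  ]
}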


Processing JSON from a .txt file and converting to a DataFrame in Julia

Cross posting from Julia Discourse in case anyone here has any leads.
I'm just looking for some insight into why the code below returns a DataFrame containing just the first line of my JSON file. If you'd like to try the file I'm working with, you can download aminer_papers_0.zip from the Microsoft Open Academic Graph site; I'm using the first file in that group of files.
using JSON3, DataFrames, CSV
file_name = "path/aminer_papers_0.txt"
json_string = read(file_name, String)
js = JSON3.read(json_string)
df = DataFrame([js])
The resulting DataFrame has just one line, but the column titles are correct, as is the first line. To me the mystery is why the rest isn’t getting processed. I think I can rule out that read() is only reading the first JSON object, because I can index into the resulting object and see many JSON objects:
(screenshot: indexing into the parsed object in the REPL shows many JSON objects)
My first guess was that the newline \n was causing escape issues, and I tried to use chomp to get rid of the newlines, but I couldn't get it to work.
Anyway - any help would be greatly appreciated!
I think the problem is that the file is in JSON Lines format, and the JSON3 library only returns the first valid JSON value that it finds at the start of a string unless told otherwise.
tl;dr
Call JSON3.read with the keyword argument jsonlines=true.
Why?
By default, JSON3 interprets a string passed to its read function as a single "JSON text", defined by RFC 8259 section 2:
A JSON text is a serialized value....
(My emphasis on the use of the indefinite singular article "a.") A "JSON value" is defined in section 3:
A JSON value MUST be an object, array, number, or string, or one of the following three literal names: false, null, true.
A string with multiple JSON values in it is technically multiple "JSON texts." It is up to the parser to determine what part of the string argument you give it is a JSON text, and the authors of JSON3 chose as the default behavior to parse from the start of the string to the end of the first valid JSON value.
In order to get JSON3 to read the string as multiple JSON values, you have to give it the keyword option jsonlines=true, which is documented as:
jsonlines: A Bool indicating that the json_str contains newline delimited JSON strings, which will be read into a JSON3.Array of the JSON values. See jsonlines for reference. [default false]
Example
Take for example this simple string:
two_values = "3.14\n2.72"
Each one of these lines is a valid JSON serialization of a number. However, when passed to JSON3.read, only the first is parsed:
using JSON3
@assert JSON3.read(two_values) == 3.14
Using jsonlines=true, both values are parsed and returned as a JSON3.Array struct:
@assert JSON3.read(two_values, jsonlines=true) == [3.14, 2.72]
Other Packages
The JSON.jl library, which people might use by default given the name, does not implement parsing of JSON Lines strings at all, leaving it up to the caller to properly split the string as needed:
using JSON
JSON.parse(two_values)
# ERROR: Expected end of input
# Line: 1
# Around: ...3.14 2.72...
# ^
A simple way to implement reading multiple values is to use eachline:
@assert [JSON.parse(line) for line in eachline(IOBuffer(two_values))] == [3.14, 2.72]

Error trying to parse OData4 from a REST API using NiFi

I'm using a Microsoft REST API to query an Azure application; OAuth and the request work without problems.
The response from InvokeHTTP has this format:
{"@odata.context":"https://****.dynamics.com/api/data/v9.1/$metadata#endpoint",
"value":[ ...here come the actual JSON results, each in the format {"@odata.etag":"W/\"555598\"","field":"value",...}... ],
"@odata.nextLink":"https://****.dynamics.com/api/data/v9.1/endpoint?$skiptoken.....}
I need to extract the nextLink for pagination, and value to continue the flow and store the result.
When I try to parse it with InferAvroSchema so I can start working with it, it throws this error: "Illegal initial character: @odata.etag"
My idea was to InferAvroSchema, then EvaluateJsonPath to extract the odata tags, and then extract the values.
I tried using EvaluateJsonPath on the result, asking it to create an attribute for $.@odata.context, but it doesn't find the item either; I'm sure it's something about the @.
I could also replace every @ in the incoming flow with another char, but I don't know if that makes sense.
I feel like I'm not using the correct approach, but NiFi + OData doesn't give me results on Google or here.
I'm open to any suggestions!
thank you!
Schema fields cannot contain @. You could replace the @, however you must be sure not to replace it in actual content like email addresses. Another solution is to transform the API response using the JoltTransformJSON processor, so that your flow can work with it:
GenerateFlowFile: (screenshot: the sample response set as the flowfile content)
For the JoltTransformJSON processor, provide the following Jolt specification:
[
  {
    "operation": "shift",
    "spec": {
      "\\@odata.nextLink": "next"
    }
  }
]
Leave the default values for the other properties. You can play around with Jolt here: http://jolt-demo.appspot.com/
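Given the sample response above, the shift spec keeps only the key it matches and renames it, so the transformed flowfile would look roughly like this (unmatched fields are dropped by shift unless you add them to the spec):
{
  "next" : "https://****.dynamics.com/api/data/v9.1/endpoint?$skiptoken....."
}
From there, EvaluateJsonPath can pull $.next into an attribute.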
EvaluateJsonPath: (screenshot of the processor configuration)
Result: (screenshot of the resulting flowfile attributes)
Notice that the URL is now part of the flowfile attributes.
Your hunch is correct: you can only have valid characters for the field names in the schema type you are using, Avro or JSON.
You could get NiFi to remove illegal characters with the ReplaceText processor; have a read here on what is valid: http://avro.apache.org/docs/current/spec.html#names
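For example (an untested sketch, assuming the payload shape above): a ReplaceText configured with a literal Search Value of "@ and a Replacement Value of " would strip the @ from key names only, because in this payload an @ inside a value (such as an email address) is never directly preceded by a double quote. A value that itself starts with @ would still be caught, so check your data first.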

Is there any way we can load deformed JSON into a Python object?

I am getting JSON data back after hitting an API.
When I try to load that JSON into Python using json.loads(response.text), I get a delimiter error.
On checking, a few fields in the JSON do not have "," separating them.
{
"id":"142379",
"label":"1182_Mailer_PROD",
"location":"Bangalore, India",
"targetType":"HTTPS performance",
"frequency":"15",
"fails":"2764",
"totalUptime":"85.32"
"tests":[
{"date":"09-24-2019 09:31","status":"Could not resolve: mailer.accenture.com (DNS server returned answer with no data)","responseTime":"0.000","dnsTime":"0.000","connectTime":"0.000","redirectTime":"0.000","firstbyteTime":"0.000","lastbyteTime":"0.000","pingLoss":"0.00","pingMin":"0.000","pingAvg":"0.000","pingMax":"0.000","size":"0","md5hash":"(null)"}
]
}
,
{
"id":"158651",
"label":"11883-GR dd-WSP",
"location":"Chicago, IL",
"targetType":"Performance transaction",
"frequency":"15",
"fails":"5919",
"totalUptime":"35.14"
,"tests":[
{"date":"09-24-2019 09:26","status":"Keywords not found - Working","responseTime":"0.669","stepresults":[
{"stepid":"1","date":"09-24-2019 09:26","status":"OK","responseTime":"0.453","dnsTime":"0.000","connectTime":"0.025","redirectTime":"0.264","firstbyteTime":"0.141","lastbyteTime":"0.024","size":"22351","md5hash":"ca002cf662980511a9faa88286f2ee96"},
{"stepid":"2","date":"09-24-2019 09:26","status":"Keywords not found - Working","responseTime":"0.216","dnsTime":"0.000","connectTime":"0.023","redirectTime":"0.000","firstbyteTime":"0.171","lastbyteTime":"0.022","size":"22457","md5hash":"38327404e4f2392979aa7dfa27118f4e"}
]}]
}
This is a small chunk of data from the response; as you can see, "totalUptime":"85.32" doesn't have a comma after it.
Could you please let me know how I can load the data into a Python object even though the JSON is deformed?
Deformed JSON is not JSON, so obviously you can't load it with a standard procedure. There are only two possibilities for loading it:
Create your own parser
Modify the input to conform to the JSON standard
Both possibilities require you to define what format you want to import. If it is OK for your format not to have commas, then you have to define what your delimiters are.
From the example you posted it is difficult to make any definitive assessment of how the input format is defined, so you will probably have to write a rudimentary parser and fit it to the input by trial and error.
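For instance, if the only damage is an occasional missing comma between a value and the following key, and the response is a stream of top-level objects (both assumptions based on your snippet), a rough repair-and-parse pass could look like this (load_deformed is a hypothetical helper, not a library function):
import json
import re

def load_deformed(text):
    # A closing quote followed only by whitespace and another quote is
    # invalid JSON, so a comma is almost certainly what was meant there.
    patched = re.sub(r'"\s*\n(\s*)"', '",\n\\1"', text)
    # The snippet shows several top-level objects separated by stray
    # commas; raw_decode reads one JSON value at a time.
    decoder = json.JSONDecoder()
    objects, pos = [], 0
    while pos < len(patched):
        # Skip whitespace and separators between objects.
        while pos < len(patched) and patched[pos] in ' \t\r\n,':
            pos += 1
        if pos >= len(patched):
            break
        obj, pos = decoder.raw_decode(patched, pos)
        objects.append(obj)
    return objects
The regex is deliberately narrow; if the API can return other kinds of damage you will need a real parser instead.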

Parsing large JSON file with Scala and JSON4S

I'm working with Scala in IntelliJ IDEA 15, trying to parse a large JSON file of Twitter records and count the total number of hashtags. I am very new to Scala and the idea of functional programming. Each line in the JSON file is a JSON object (representing a tweet). Each line in the file starts like so:
{"in_reply_to_status_id":null,"text":"To my followers sorry..
{"in_reply_to_status_id":null,"text":"#victory","in_reply_to_screen_name"..
{"in_reply_to_status_id":null,"text":"I'm so full I can't move"..
I am most interested in a property called "entities", which contains a property called "hashtags" with a list of hashtags. Here is an example:
"entities":{"hashtags":[{"text":"thewayiseeit","indices":[0,13]}],"user_mentions":[],"urls":[]},
I've browsed the various Scala frameworks for parsing JSON and have decided to use json4s. I have the following code in my Scala script:
import org.json4s.native.JsonMethods._
var json: String = ""
for (line <- io.Source.fromFile("twitter38.json").getLines) json += line
val data = parse(json)
My logic here is that I am trying to read each line from twitter38.json into a string and then parse the entire string with parse(). The parse function is throwing an error claiming:
"Type mismatch, expected: Nothing, found:String."
I have seen examples that use parse() on strings that hold json objects such as
val jsontest =
"""{
|"name" : "bob",
|"age" : "50",
|"gender" : "male"
|}
""".stripMargin
val data = parse(jsontest)
but I have received the same error. I am coming from an object-oriented programming background; is there something fundamentally wrong with the way I am approaching this problem?
You have most likely incorrectly imported dependencies into your IntelliJ project, or modules into your file. Make sure you have the following line imported:
import org.json4s.native.JsonMethods._
Even if you correctly import this module, parse(json: String) will not work for you, because you have incorrectly formed JSON. Your JSON string will look like this:
"""{"in_reply_...":"someValue1"}{"in_reply_...":"someValues2"}"""
but it should look as follows to be valid JSON that can be parsed:
"""{{"in_reply_...":"someValue1"},{"in_reply_...":"someValues2"}}"""
i.e. you need starting and ending brackets for the JSON, and a comma between each line of tweets. Please read the json4s documentation for more information.
Although this question is almost 6 years old, I think it deserves another try.
The JSON format suffers from a few misunderstandings in people's minds, especially about how documents are stored and how they are read back.
JSON documents are stored either as a single object holding all the other fields, or as an array of multiple objects, possibly in the same format. This second part is important, because arrays in almost every programming language are delimited by square brackets, with values separated by commas (note that here I used a person object as my single value):
[
{"name":"John","surname":"Doe"},
{"name":"Jane","surname":"Doe"}
]
Also note that everything except brackets, numbers, and booleans is enclosed in quotes when written to a file.
However, there is another usage, not official but preferred for transferring datasets easily, where every object (or document, in NoSQL/Mongo parlance) is stored on its own line, like this:
{"name":"John","surname":"Doe"}
{"name":"Jane","surname":"Doe"}
So, for the question: the OP has a document written in this second form but tries an algorithm written to read the first form. The following code makes a few simple changes to bridge that gap, and the user must read the file knowing that:
var json: String = "[" // open a top-level array
for (line <- io.Source.fromFile("twitter38.json").getLines) json += line + "," // one object per line, comma-separated
json = json.splitAt(json.length() - 1)._1 // drop the trailing comma
json += "]" // close the array
val data = parse(json)
PS: Although @sbrannon has the correct idea, the example he/she gave mistakenly has curly braces instead of square brackets surrounding the data.
EDIT: I have added json = json.splitAt(json.length() - 1)._1 because the code would otherwise end with a trailing comma, which causes a parse error per the JSON format definition.

Verify whole JSON response in JMeter by value, or sort JSON

I'm not using JMeter too often, and I've run into a very specific issue.
My REST response is always "the same", but the nodes are not in the same order, for various reasons. Also, I can't post the whole response here due to sensitive data, so let's use this dummy one:
The first time, the response might be:
{
"properties":{
"prop1":false,
"prop2":false,
"prop3":165,
"prop4":"Audi",
"prop5":true,
"prop6":true,
"prop7":false,
"prop8":"1",
"prop9":"2.0",
"prop10":0
}
}
Then another time it might be like this:
{
"properties":{
"prop2":false,
"prop1":false,
"prop10":0,
"prop3":165,
"prop7":false,
"prop5":true,
"prop6":true,
"prop8":"1",
"prop9":"2.0",
"prop4":"Audi"
}
}
As you can see, the content itself is the same, but the order of the nodes is not. I have 160+ nodes and thousands of possible response orders.
Is there an easy way to compare two JSON responses by matching key-values, or at least to sort the response and then compare it with a sorted one in assertion patterns?
I'm not using any plugins, just basic Apache JMeter.
Thanks
I've checked this using Jython; you need to download the Jython library and save it to your JMeter lib directory.
I've checked 2 JSONs with Sampler1 and Sampler2. On Sampler1 I've added a BeanShell PostProcessor with this code:
vars.put("jsonSampler1",prev.getResponseDataAsString());
On Sampler2 I've added a BSF Assertion, specifying Jython as the language, with the following code:
import json
jsonSampler1 = vars.get("jsonSampler1")
jsonSampler2 = prev.getResponseDataAsString()
objectSampler1 = json.loads(jsonSampler1)
objectSampler2 = json.loads(jsonSampler2)
if ( objectSampler1 != objectSampler2 ):
    AssertionResult.setFailure(True)
    AssertionResult.setFailureMessage("JSON data didn't match")
This works because json.loads turns both responses into Python dicts, and dict equality ignores key order. You can find the whole .jmx in this GitHub Gist.
You will most probably have to do this with a JSR223 Assertion and Groovy.
http://jmeter.apache.org/usermanual/component_reference.html#JSR223_Assertion
http://docs.groovy-lang.org/latest/html/api/groovy/json/JsonSlurper.html
Note that if you know Python, you might look at using Jython + JSR223.
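As a sketch of that route (assuming the standard JSR223 bindings used above; expectedJson is a hypothetical variable you would populate yourself), you can canonicalize both documents before comparing, which also gives you the "sorted" form asked about:
import json

# Parsing already makes the comparison key-order-insensitive; dumping
# with sort_keys=True additionally yields a canonical string that can
# be logged or diffed.
def canonical(text):
    return json.dumps(json.loads(text), sort_keys=True, separators=(",", ":"))

expected = canonical(vars.get("expectedJson"))  # hypothetical variable
actual = canonical(prev.getResponseDataAsString())
if expected != actual:
    AssertionResult.setFailure(True)
    AssertionResult.setFailureMessage("JSON content differs")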
I would just set up 10 jp@gc - JSON Path Assertions. Documentation for figuring out the JSON Path format is here, and you can test how it would work here.
For your example you would add the assertion (Add > Assertion > jp@gc - JSON Path Assertions), then to test prop1 put:
$.properties.prop1
in the JSON Path field, check the Validate Against Expected Value checkbox, and put
false
in the Expected Value field. Repeat those steps for the other 9, changing the last part of the path to each key, with the value you expect back in the Expected Value field.
This extractor is a JMeter add-on, found here.