I have a large JSON string that I want to convert into an Erlang record.
I found the jiffy library, but it doesn't convert all the way to a record.
For example:
jiffy:decode(<<"{\"foo\":\"bar\"}">>).
gives
{[{<<"foo">>,<<"bar">>}]}
but I want the following output:
{ok,{obj,[{"foo",<<"bar">>}]},[]}
Is there a library that produces the desired output?
Or is there a library that can be used in combination with jiffy to further transform its output?
Keep in mind that the JSON string is large, and I want the conversion to take as little time as possible.
Take a look at ejson. From the documentation:
JSON library for Erlang on top of jsx. It gives a declarative interface for jsx by which we need to specify conversion rules and ejson will convert tuples according to the rules.
I made this library to make easy not just the encoding but rather the decoding of JSONs to Erlang records...
For ejson to take effect, the source files need to be compiled with the parse_transform ejson_trans. Any record that has a -json attribute can then be converted to JSON.
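If you would rather keep jiffy and just reshape its output into the {obj, ...} form from your question, a few lines of plain Erlang are enough. This is only a sketch (to_obj/1 is a made-up helper, not part of jiffy or ejson):

%% Sketch only: walk jiffy's {Proplist} tuples and rewrite them as
%% {obj, Proplist} with string keys, leaving other values untouched.
to_obj({Props}) when is_list(Props) ->
    {obj, [{binary_to_list(K), to_obj(V)} || {K, V} <- Props]};
to_obj(List) when is_list(List) ->
    [to_obj(E) || E <- List];
to_obj(Value) ->
    Value.

%% 1> to_obj(jiffy:decode(<<"{\"foo\":\"bar\"}">>)).
%% {obj,[{"foo",<<"bar">>}]}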
Related
I am using the YASON library in Common Lisp. I want to parse a JSON string, but I would like the parser to keep one of its nodes unparsed.
Typically, with an example like this:
{
"metadata1" : "mydata1",
"metadata2" : "mydata2",
"payload" : {...my long payload object},
"otherNodesToParse" : {...}
}
How can I set up the YASON parser to parse my JSON but skip the payload node, keeping it as a string in JSON format?
Use case: let's say I just want the envelope data (everything that's not the payload) and want to forward the payload as-is (as a JSON string) to another system.
If I parse the whole JSON (payload included) and then re-encode the payload to JSON, it is inefficient. The payload could also be pretty big.
How do you know where the end of the payload object is in the stream? Only by parsing the stream: if you don't parse it, you simply can't know where the object ends. That's the nature of JSON's syntax (as it is the nature of CL's default syntax). For instance, the only way you can know the difference between where to continue after
{x:1}
and after
{x:1.2}
is by parsing the two things.
So you must necessarily parse the whole thing.
The answer to your question, then, is: you can't do this.
You could (but not, I think, with YASON) decide that you did not want to build an object as a result of the parse. And perhaps, if the stream you are parsing corresponds to something with random access like a string or a file, you could note the start and end positions in the stream to later extract a string from it corresponding to the unparsed data (or you could perhaps build it up as you go).
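Here is a rough sketch of that position-noting idea in Common Lisp, assuming a seekable character stream; skip-json-value is a hypothetical helper standing in for any parser call that reads one JSON value just to find where it ends:

;; Sketch only: SKIP-JSON-VALUE is hypothetical -- any routine that parses
;; one JSON value from STREAM and discards the result would do.
(defun raw-json-value (stream)
  (let ((start (file-position stream)))
    (skip-json-value stream)               ; parse just to find the end
    (let* ((end (file-position stream))
           (raw (make-string (- end start))))
      (file-position stream start)         ; rewind to the value's start
      (read-sequence raw stream)           ; re-read it verbatim
      raw)))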
It looks as if some or all of this might be possible with CL-JSON, but you'd have to work at it.
Unless the objects you are reading are vast, the benefit of this seems questionable to none. If you really do want to do something like this efficiently, you need a serialisation scheme which tells you how long things are.
I am currently testing a system whose output is formatted JSON.
As part of my tests I need to extract and validate two values from the JSON record.
The values both have individual identifiers on them but don't appear in the same part of the record, so I can't just grab a single long string.
Loose format of the information in both cases:
"identifier1": [{"identifier2":"idname","values":["bit_I_want!]}]
In the case of the bit I want, this can either be a single quoted value (e.g. "12345") or multiple quoted values (e.g. "12345","23456","98765").
In both cases I'm only interested in validating the whole string of values, not individual values from the set.
Can anyone recommend which of the various extractors in Jmeter would be best to achieve this?
Many Thanks!
The most obvious choice seems to be the JSON Path Assertion (available via JMeter Plugins). It allows not only executing arbitrary JSON queries but also conditionally failing the sampler based on whether the actual and expected results match.
The recommended way of installing JMeter Plugins and keeping them up to date is to use the JMeter Plugins Manager.
JMeter 3.1 comes with a built-in JSON Extractor to parse JSON responses. You could use the expression $.identifier1[0].values as the JSON Path to extract the values.
If your JSON response is always going to be as simple as shown in your question, you could use the Regular Expression Extractor as well. The advantage is that it is faster than the JSON Extractor. The regular expression would be "values":\[(.*?)\]
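For instance, given a response like the following (the values are made up, and this assumes identifier1 sits at the top level of the JSON):

{"identifier1": [{"identifier2":"idname","values":["12345","23456","98765"]}]}

the JSON Path $.identifier1[0].values returns the whole array ["12345","23456","98765"], and capture group 1 of the regular expression above matches "12345","23456","98765".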
Reference: http://www.testautomationguru.com/jmeter-response-data-extractors-comparison/
In MarkLogic 8, I'm using a custom XML-to-JSON conversion with json:transform-to-json, and I've got it working just about right, except that the conversion outputs numbers as strings.
Is there a way to specify that the value of a particular element should be a number value, not a string?
I don't see anything in the doc for json:config, but just in case there's something I've missed, or if you have a neat post-processing trick, I'd love to hear about how to solve this problem.
You can do that by defining an XML Schema for the non-string-typed elements. Just make sure it is available in the context (by loading it into xdmp:schemas-database()) and that it is recognized (your XML needs to have a namespace that matches the XML Schema, and you might want to use import schema).
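A minimal sketch of such a schema; the namespace, element name, and type are placeholders for whatever your converted XML actually uses:

<!-- Sketch only: replace the namespace, element name, and type with your own. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/my-data"
           xmlns="http://example.com/my-data"
           elementFormDefault="qualified">
  <xs:element name="price" type="xs:decimal"/>
</xs:schema>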
HTH!
"JSON" could mean a JSON type or a JSON string.
It starts to confuse me when different libraries use "json" with two different meanings.
I wonder how other people name these variables to distinguish them.
For all practical purposes, "JSON" has exactly one meaning: a string representing a JavaScript object, following a specific syntax.
JSON is parsed into a JavaScript object using JSON.parse, and a JavaScript object is converted into a JSON string using JSON.stringify.
The problem is that all too many people have gotten into the bad habit of referring to plain old JavaScript objects as JSON. That is either confused or sloppy or both. {a: 1} is a JS object. '{"a": 1}' is a JSON string.
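A minimal illustration of the distinction and of the round trip between the two:

const obj = { a: 1 };               // a plain JavaScript object
const json = JSON.stringify(obj);   // the JSON string '{"a":1}'
const back = JSON.parse(json);      // a new object equal to { a: 1 }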
In the same vein, many people use variable names like json to refer to JavaScript objects derived from JSON. For example:
$.getJSON('foo.php').then(function(json) { ...
In the above case, the variable name json is ill-advised. The actual payload returned from the server is a JSON string, but internally $.getJSON has already transformed that into a plain old JavaScript object, which is what is being passed to the then handler. Therefore, it would be preferable to use the variable name data, for example.
If a library uses the term "json" to refer to things which are not JSON, but actually are JavaScript objects, it is a mark of poor design, and I'd suggest looking around for a different library.
I want to display a formatted JSON string in an NSTextView.
If I set it directly it displays as one continuous string, but I need it formatted like this:
{
"0":"N",
"J":5
}
Is there any built-in method or third-party open-source library to format JSON?
A lot of JSON parsing frameworks have the option of pretty printing.
For instance, SBJsonWriter: it has a humanReadable property.
If set to YES, generates human-readable JSON with linebreaks after
each array value and dictionary key/value pair, indented two spaces
per nesting level.
If you just want the indentation logic, you can dig in, use only that part, and strip the unnecessary SBJson files from your project.
EDIT:
As of OS X 10.7, NSJSONSerialization is available natively in Cocoa.
You can pass NSJSONWritingPrettyPrinted as an option to the
+dataWithJSONObject:options:error: class method of NSJSONSerialization to achieve what you want.
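A short sketch of that route; rawJSON and textView are placeholders for your own string and NSTextView:

// Sketch only: rawJSON and textView stand in for your own variables.
NSData *input = [rawJSON dataUsingEncoding:NSUTF8StringEncoding];
NSError *error = nil;
id object = [NSJSONSerialization JSONObjectWithData:input options:0 error:&error];
if (object) {
    NSData *pretty = [NSJSONSerialization dataWithJSONObject:object
                                                     options:NSJSONWritingPrettyPrinted
                                                       error:&error];
    [textView setString:[[NSString alloc] initWithData:pretty
                                              encoding:NSUTF8StringEncoding]];
}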