I'm writing a function that generates a JSON string; it is intended to replace an old one, so I need to make sure the JSON my function outputs is identical to the JSON from the old function. Is there a utility to check whether two JSON trees are identical?
I've used JSON Diff before; just compare the output from the old JSON function and your new one to see if they match. Make sure to test with more complex data structures too.
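If you'd rather check it in code, here is a minimal sketch (not the JSON Diff tool itself): parsing both strings and comparing the resulting trees ignores formatting and key order. The sample strings are stand-ins for illustration.

import json

old_output = '{"a": 1, "b": [2, 3]}'
new_output = '{"b": [2, 3], "a": 1}'

# Parsed trees compare structurally, so whitespace and key order don't matter.
assert json.loads(old_output) == json.loads(new_output)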
Suppose we are developing an application that pulls Avro records from a source
stream (e.g., Kafka, Kinesis, etc.), parses them into JSON, then further processes that
JSON with additional transformations. Further assume these records can have a
varying schema (which we can look up and fetch from a registry).
We would like to use Spark's built-in from_avro function, but it is pretty clear that
from_avro wants you to hard-code a fixed schema into your code. It doesn't seem
to allow the schema to vary from row to row.
That sort of makes sense if you are parsing the Avro into Spark's internal row format; one would need
a consistent structure for the DataFrame. But what if we wanted something like
from_avro that grabbed the bytes from one column in the row, grabbed the string
representation of the Avro schema from another column in the same row, and then parsed that Avro
into a JSON string?
Does such a built-in method exist? Or is such functionality available in a third-party library?
Thanks!
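To illustrate what I mean, something like this per-row approach with a plain Python UDF instead of from_avro; the column names and the use of fastavro here are just for illustration, not a confirmed built-in:

import io
import json

import fastavro
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

@udf(returnType=StringType())
def avro_to_json(payload, schema_str):
    # Parse the schema carried in the row, decode the schemaless Avro
    # bytes with it, then re-serialize the record as a JSON string.
    schema = fastavro.parse_schema(json.loads(schema_str))
    record = fastavro.schemaless_reader(io.BytesIO(payload), schema)
    return json.dumps(record)

# Hypothetical columns: "avro_bytes" holds the payload, "schema_str" the schema.
df = df.withColumn("json", avro_to_json("avro_bytes", "schema_str"))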
I was wondering if there is a way to parse a Lua table into a JavaScript object, without using any libraries (i.e., require("json")). I haven't seen one yet, but if someone knows how, please answer.
If you want to know how to parse Lua tables into JSON strings, take a look at the source code of any of the many JSON libraries available for Lua.
http://lua-users.org/wiki/JsonModules
For example:
https://github.com/rxi/json.lua/blob/master/json.lua
or
https://github.com/LuaDist/dkjson/blob/master/dkjson.lua
If you do not want to use any library and want to do it in pure Lua, the most convenient way for me is to use the table.concat function:
local result = {}
for key, value in pairs(tableWithData) do
    -- prepare JSON key-value pairs and save them in a separate table
    -- (note: string values would still need quoting/escaping)
    table.insert(result, string.format("\"%s\":%s", key, value))
end
-- get a simple JSON string
result = "{" .. table.concat(result, ",") .. "}"
If your table has nested tables you can do this recursively.
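A minimal recursive sketch of that idea (it treats every table as an object, assumes simple keys, and relies on %q for string quoting, so it is not a complete JSON encoder):

local function encode(tbl)
    local parts = {}
    for key, value in pairs(tbl) do
        if type(value) == "table" then
            value = encode(value)               -- recurse into nested tables
        elseif type(value) == "string" then
            value = string.format("%q", value)  -- quote string values
        end
        table.insert(parts, string.format("\"%s\":%s", key, value))
    end
    return "{" .. table.concat(parts, ",") .. "}"
end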
There are a lot of pure-Lua JSON libraries.
I even have one myself.
How to include a pure-Lua module in your script without using require():
Download the Lua JSON module (for example, go to my json.lua, right-click on Raw and select Save Link As in the context menu)
Delete the last line, return json, from this file
Insert the whole file at the beginning of your script
Now you can use local json_as_string = json.encode(your_Lua_table) in your script.
I have a large JSON string that I want to convert into an Erlang record.
I found the jiffy library, but it doesn't convert all the way to a record.
For example:
jiffy:decode(<<"{\"foo\":\"bar\"}">>).
gives
{[{<<"foo">>,<<"bar">>}]}
but I want the following output:
{ok,{obj,[{"foo",<<"bar">>}]},[]}
Is there any library that can produce the desired output?
Or is there any library that can be used in combination with jiffy to further modify its output?
Keep in mind that the JSON string is large, and I want the output in minimal time.
Take a look at ejson, from the documentation:
JSON library for Erlang on top of jsx. It gives a declarative interface for jsx by which we need to specify conversion rules and ejson will convert tuples according to the rules.
I made this library to ease not just the encoding but also the decoding of JSON to Erlang records...
In order for ejson to take effect, the source files need to be compiled with the parse_transform ejson_trans. Any record that has a -json attribute can be converted to JSON later.
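If a declarative library feels like overkill, jiffy's output can also be mapped to a record by hand, along the lines the question suggests; a minimal sketch (the record name and field are hypothetical):

-record(my_rec, {foo}).

decode(Bin) ->
    %% jiffy decodes an object as {Proplist}; pull fields out by key.
    {Props} = jiffy:decode(Bin),
    #my_rec{foo = proplists:get_value(<<"foo">>, Props)}.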
Just to check: the default JSON view, which converts Python objects to JSON, seems to include whitespace between the values, i.e.,
"field": [[110468, "Octopus_vulgaris", "common octopus"...
rather than
"field":[[110468,"Octopus_vulgaris","common octopus"...
Is that right? If so, is there an easy way to output the JSON without the extra spaces, and is this for any reason (other than readability) a bad idea?
I'm trying to make some API calls return the fastest and most concise JSON representation, so any other tips are gratefully accepted. For example, I see the view calls from gluon.serializers import json: does that get re-imported every time the view is used, or is Python clever enough to import it only once? I'm hoping the latter.
The generic.json view calls gluon.serializers.json, which ultimately calls json.dumps from the Python standard library. By default, json.dumps inserts spaces after separators. If you want no spaces, you will not be able to use the generic.json view as is. You can instead do:
import json
output = json.dumps(input, separators=(',', ':'))
If input includes some data that are not JSON serializable and you want to take advantage of the special data type conversions implemented in gluon.serializers.json (i.e., datetime objects and various web2py specific objects), you can do the following:
import json
from gluon.serializers import custom_json
output = json.dumps(input, separators=(',', ':'), default=custom_json)
Using the above, you can either edit the generic.json view, create your own custom JSON view, or simply return the JSON directly from the controller.
Also, no need to worry about re-importing modules in Python -- the interpreter only loads the module once.
I'm trying to format the output of a Lambda function as JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead","sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it into JSON using these mapping templates.
I have checked the documentation, and it only covers converting one JSON format to another.
Without seeing the code you're specifically using, it's difficult to give you a definitively correct answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, use a DictCursor:
cursor = connection.cursor(pymysql.cursors.DictCursor)
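Putting it together, a hedged sketch of a handler along those lines (connection details, table, and column names are hypothetical):

import json
import pymysql

def lambda_handler(event, context):
    connection = pymysql.connect(host="<host>", user="<user>",
                                 password="<password>", db="<db>")
    with connection.cursor(pymysql.cursors.DictCursor) as cursor:
        cursor.execute("SELECT name, age, town, postcode FROM people")
        rows = cursor.fetchall()
    # rows is a list of dicts, e.g.
    # [{"name": "James", "age": 23, "town": "Maidenhead", "postcode": "sl72qw"}]
    return {"statusCode": 200, "body": json.dumps(rows)}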