How to parse a Lua table object into JSON?

I was wondering if there is a way to serialize a Lua table into a JSON string without using any libraries, i.e. require("json"). I haven't seen one yet, but if someone knows how, please answer.

If you want to know how to serialize Lua tables to JSON strings, take a look at the source code of any of the many JSON libraries available for Lua:
http://lua-users.org/wiki/JsonModules
For example:
https://github.com/rxi/json.lua/blob/master/json.lua
or
https://github.com/LuaDist/dkjson/blob/master/dkjson.lua

If you do not want to use any library and want to do it in pure Lua, the most convenient approach for me is the table.concat function:
local result = {}
for key, value in pairs(tableWithData) do
  -- prepare JSON key-value pairs and save them in a separate table
  -- (string values would additionally need their own surrounding quotes)
  table.insert(result, string.format("\"%s\":%s", key, value))
end
-- get simple json string
result = "{" .. table.concat(result, ",") .. "}"
If your table has nested tables you can do this recursively.
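For illustration, the same join-based idea can be sketched recursively in Python (this is a sketch of the technique, not the Lua answer itself; the stdlib json module is used only to escape scalar values):

```python
import json  # used only to quote/escape individual scalar values


def to_json(value):
    # Recursively serialize nested dicts/lists by joining
    # comma-separated pieces, mirroring the table.concat idea.
    if isinstance(value, dict):
        parts = ['%s:%s' % (json.dumps(str(k)), to_json(v))
                 for k, v in value.items()]
        return '{' + ','.join(parts) + '}'
    if isinstance(value, list):
        return '[' + ','.join(to_json(v) for v in value) + ']'
    return json.dumps(value)  # strings, numbers, booleans, None


print(to_json({"name": "widget", "tags": ["a", "b"], "meta": {"n": 2}}))
# -> {"name":"widget","tags":["a","b"],"meta":{"n":2}}
```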

There are a lot of pure-Lua JSON libraries.
I even have one.
How to include a pure-Lua module in your script without using require():
Download the Lua JSON module (for example, go to my json.lua, right-click on Raw and select Save Link as in context menu)
Delete the last line, return json, from this file
Insert the whole file at the beginning of your script
Now you can use local json_as_string = json.encode(your_Lua_table) in your script.

Related

Reading the Json string which is put together in one field

I have a JSON string in a text file. I have to parse the string below and write the result to an external file.
Please let me know how this can be handled with Informatica PowerCenter, Unix, or Python.
{"CONTACTID":"3b2a25b2","ANI":"+16146748702","DNIS":"+18006081123","START_TIME":"01/22/2023 03:31:42","MODULE":[{"Name":"MainIVR","Time":"01/22/2023 03:31:42",Dialog":[{"name":"offer_Spanish","dialogeresult":"(|raw:7|R|7|1.0|nm=0|ni=0|2023/22/21 03:02:01)"}],"backend":[{"Time":"01/22/2023)"}],"END_STATE":"XC"}
In the above sample string, the special characters should be removed and the values assigned to the corresponding columns, as in the two output formats below.
Output:
CONTACTID, ANI, DNIS, START_TIME, MODULE, Time,Dialog,dialogeresult,END_STATE
3b2a25b2,+16146748702 +18006081123 01/22/2023 03:31:42,Name:MainIVR,
or
Output:
CONTACTID : 3b2a25b2
ANI:16146748702
DNI :+18006081123
I tried reading this through Informatica PowerCenter using Expression transformations, but nothing worked; I tried Python too.
For a start, your JSON is invalid: the opening double quote for Dialog is missing, and the document is not properly terminated (neither the MODULE array nor the root object is closed). Here's the fixed JSON:
{"CONTACTID":"3b2a25b2","ANI":"+16146748702","DNIS":"+18006081123","START_TIME":"01/22/2023 03:31:42","MODULE":[{"Name":"MainIVR","Time":"01/22/2023 03:31:42","Dialog":[{"name":"offer_Spanish","dialogeresult":"(|raw:7|R|7|1.0|nm=0|ni=0|2023/22/21 03:02:01)"}],"backend":[{"Time":"01/22/2023)"}],"END_STATE":"XC"}]}
Use some JSON validation tool, like this one - it helps a lot.
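You can also validate programmatically; a minimal sketch using only the stdlib json module (the helper name is mine):

```python
import json


def is_valid_json(text):
    """Return (True, parsed_value) on success, (False, error_message) otherwise."""
    try:
        return True, json.loads(text)
    except json.JSONDecodeError as e:
        return False, str(e)


ok, result = is_valid_json('{"Dialog": [}')  # deliberately broken
print(ok, result)  # False, plus a message pointing at the bad character
```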
Next, here's some starter code you may use to achieve the required result:
import json
# some JSON:
x = '{"CONTACTID":"3b2a25b2","ANI":"+16146748702","DNIS":"+18006081123","START_TIME":"01/22/2023 03:31:42","MODULE":[{"Name":"MainIVR","Time":"01/22/2023 03:31:42","Dialog":[{"name":"offer_Spanish","dialogeresult":"(|raw:7|R|7|1.0|nm=0|ni=0|2023/22/21 03:02:01)"}],"backend":[{"Time":"01/22/2023)"}],"END_STATE":"XC"}]}'
# parse x:
y = json.loads(x)
# the result is a Python dictionary:
print(y.keys())
You may test it on Replit
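Once parsed, the nested values can be pulled out into the second output format shown in the question; a sketch, assuming the corrected JSON above:

```python
import json

x = ('{"CONTACTID":"3b2a25b2","ANI":"+16146748702","DNIS":"+18006081123",'
     '"START_TIME":"01/22/2023 03:31:42","MODULE":[{"Name":"MainIVR",'
     '"Time":"01/22/2023 03:31:42","Dialog":[{"name":"offer_Spanish",'
     '"dialogeresult":"(|raw:7|R|7|1.0|nm=0|ni=0|2023/22/21 03:02:01)"}],'
     '"backend":[{"Time":"01/22/2023)"}],"END_STATE":"XC"}]}')
y = json.loads(x)

# Top-level fields come straight off the dictionary
for key in ("CONTACTID", "ANI", "DNIS", "START_TIME"):
    print(f"{key} : {y[key]}")

# Nested fields live inside the MODULE list
module = y["MODULE"][0]
print(f"Name : {module['Name']}")
print(f"END_STATE : {module['END_STATE']}")
```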
Finally, regarding Informatica PowerCenter: it is a terrible choice for complex string processing. You would need a Hierarchy Parser Transformation. Long story short: it's very tedious, but possible. I would highly recommend picking a different approach if this is not a regular data-loading process you will need to build.

Do web2py JSON returns have extraneous whitespace, and if so, how to remove it?

Just to check, the default JSON view which changes python objects to JSON seems to include whitespace between the variables, i.e.
"field": [[110468, "Octopus_vulgaris", "common octopus"...
rather than
"field":[[110468,"Octopus_vulgaris","common octopus"...
Is that right? If so, is there an easy way to output the JSON without the extra spaces, and is this (for any reason other than readability) a bad idea?
I'm trying to make some API calls return the fastest and most concise JSON representation, so any other tips are gratefully accepted. For example, I see the view calls from gluon.serializers import json -- does that get re-imported every time the view is used, or is Python clever enough to import it only once? I'm hoping the latter.
The generic.json view calls gluon.serializers.json, which ultimately calls json.dumps from the Python standard library. By default, json.dumps inserts spaces after separators. If you want no spaces, you will not be able to use the generic.json view as is. You can instead do:
import json
output = json.dumps(input, separators=(',', ':'))
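To illustrate the difference, here is the question's own data serialized with the default and with compact separators:

```python
import json

data = {"field": [[110468, "Octopus_vulgaris", "common octopus"]]}
default = json.dumps(data)                           # spaces after ':' and ','
compact = json.dumps(data, separators=(',', ':'))    # no extra spaces
print(default)   # {"field": [[110468, "Octopus_vulgaris", "common octopus"]]}
print(compact)   # {"field":[[110468,"Octopus_vulgaris","common octopus"]]}
```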
If input includes some data that are not JSON serializable and you want to take advantage of the special data type conversions implemented in gluon.serializers.json (i.e., datetime objects and various web2py specific objects), you can do the following:
import json
from gluon.serializers import custom_json
output = json.dumps(input, separators=(',', ':'), default=custom_json)
Using the above, you can either edit the generic.json view, create your own custom JSON view, or simply return the JSON directly from the controller.
Also, no need to worry about re-importing modules in Python -- the interpreter only loads the module once.
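Python caches imported modules in sys.modules, so repeated imports just look up the cached module object; a quick demonstration:

```python
import sys
import json        # first import executes the module body once
import json as j2  # later imports fetch the cached module

print(j2 is json)             # True: both names refer to one module object
print('json' in sys.modules)  # True: the cache lives in sys.modules
```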

Parse JSON in to Strings with escape characters for GWT Test Case

I've run into a question about JSON files.
So, we're building a test case for a GWT application. The data it feeds from is in JSON files generated from a SQL database.
When testing the methods that work with data, we do it from sources held as Strings, so to keep integrity with the original data we just clone the original JSON values into a String with escape sequences.
The result of this being that if a JSON entry shows like this:
{"country":"India","study_no":87}
The parsed result will come up like this in order for our tools to recognise them:
"[" + "{\"country\":\"India\",\"study_no\":87}" + "]"
The way we do it now is to paste each JSON object between "" in IntelliJ, which automatically converts the inner double quotes into escape sequences. This is OK if we only want a few objects, but what if we wanted a whole dataset?
So my question is, does anyone know or has created an opensource script to automate this tedious task?
One thing you could do is to wrap window.escape() using JsInterop or JSNI. For example:
@JsType(isNative = true, name = "window")
public class Window {
    public native String escape(String toBeEscaped);
}
and then apply to your results.
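Alternatively, if the goal is just to turn each JSON object into a Java string literal offline, a short script can automate the escaping entirely outside the IDE. A minimal sketch in Python (the helper name is mine):

```python
def to_java_literal(json_text):
    # Escape backslashes first, then double quotes, then wrap in quotes
    # so the result can be pasted into Java source as a string literal.
    escaped = json_text.replace('\\', '\\\\').replace('"', '\\"')
    return '"' + escaped + '"'


print(to_java_literal('{"country":"India","study_no":87}'))
# -> "{\"country\":\"India\",\"study_no\":87}"
```

Run over a file of JSON objects (one per line), this produces the whole dataset as escaped literals in one pass.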

Is there a utility to compare two JSON strings?

I'm writing a function that generates a JSON string; it is intended to replace an old one. I need to make sure that the JSON my function outputs is identical to the JSON of the old function. Is there a utility to check whether two JSON trees are identical?
I've used JSON Diff before, just compare the output from the old JSON function and your new one to see if they match up. Make sure to test with more complex data structures too.
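If a quick programmatic check is enough, parsing both strings and comparing the resulting structures ignores purely cosmetic differences such as key order and whitespace; a sketch in Python:

```python
import json

old_output = '{"id": 1, "tags": ["a", "b"]}'
new_output = '{"tags":["a","b"],"id":1}'  # different key order and spacing

# Parsed objects compare by structure, not by string formatting
same = json.loads(old_output) == json.loads(new_output)
print(same)  # True
```

Note that list order still matters in the comparison, which is usually what you want for JSON arrays.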

Read a Text File into R

I apologize if this has been asked previously, but I haven't been able to find an example online or elsewhere.
I have a very dirty data file in a text file (it may be JSON). I want to analyze the data in R, and since I am still new to the language, I want to read in the raw data and manipulate it as needed from there.
How would I go about reading in JSON from a text file on my machine? Additionally, if it isn't JSON, how can I read in the raw data as is (not parsed into columns, etc.) so I can go ahead and figure out how to parse it as needed?
Thanks in advance!
Use the rjson package. In particular, look at the fromJSON function in the documentation.
If you want further pointers, then search for rjson at the R Bloggers website.
If you want to use the packages related to JSON in R, there are a number of other posts on SO answering this. I presume you already searched for JSON [r] on this site; there is plenty of info there.
If you just want to read in the text file line by line and process later on, then you can use either scan() or readLines(). They appear to do the same thing, but there's an important difference between them.
scan() lets you define what kind of objects you want to find, how many, and so on. Read the help file for more info. You can use scan to read in every word/number/sign as an element of a vector using, e.g., scan(filename, ""). You can also use specific delimiters to separate the data. See also the examples in the help files.
To read line by line, you use readLines(filename) or scan(filename,"",sep="\n"). It gives you a vector with the lines of the file as elements. This again allows you to do custom processing of the text. Then again, if you really have to do this often, you might want to consider doing this in Perl.
If your file is in JSON format, you may try the jsonlite, RJSONIO, or rjson packages. All three provide a fromJSON function.
To install a package you use the install.packages function. For example:
install.packages("jsonlite")
Once the package is installed, you can load it with the library function.
library(jsonlite)
Line-delimited JSON generally has one object per line, so you need to read the file line by line and collect the objects. For example:
con <- file('myBigJsonFile.json')
open(con)
objects <- list()
index <- 1
while (length(line <- readLines(con, n = 1, warn = FALSE)) > 0) {
  objects[[index]] <- fromJSON(line)
  index <- index + 1
}
close(con)
After that, you have all the data in the objects variable. With that variable you may extract the information you want.
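For comparison, the same line-by-line pattern in Python (a sketch; io.StringIO stands in for a real file handle):

```python
import io
import json

# Simulate a line-delimited JSON file: one object per line
ndjson = io.StringIO('{"id": 1}\n{"id": 2}\n')

objects = []
for line in ndjson:
    line = line.strip()
    if line:                        # skip blank lines
        objects.append(json.loads(line))

print(len(objects))  # 2
```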