Lua: load JSON configuration from file

I'm trying to move an old Lua function, which loaded some JSON content from a file into a global variable, into a "class". But I keep getting the following errors:
attempt to call field 'decode' (a nil value)
attempt to index global 'cjson' (a nil value)
I don't know Lua well, but I've tried almost every combination without result, so can you explain why these errors occur?
The current implementation of the module looks like this:
Config = {}
Config.__index = Config
function Config.create(config_filename)
    local cjson = require("cjson")
    local config = {}
    setmetatable(config, Config)
    local f = io.open(config_filename, "r")
    local content = f:read("*a")
    f:close()
    config = cjson.decode(content)
    return config
end
return Config
As the final result, I want to execute something like this from another file:
local config_class = require("config")
local config = config_class.create("/path/to/file.json")
ngx.say(config:some_configuration_data())

As the error messages tell you, cjson and decode are nil values, which cannot be indexed or called.
require will load a file, run the contained code, and pass the return value through. A Lua script behaves like a function, which returns nil by default. So unless you specify what the script returns, require will return nil.
I don't know what is inside the cjson file that you require, but it obviously does not return the wanted JSON implementation; it returns nil.
So the code in cjson should return a Lua table with a function stored under key "decode".
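To illustrate, here is a minimal sketch of what such a module file looks like (purely illustrative; the real cjson is a compiled C module that already does this, so the file name and stub below are made up):

-- mycjson.lua (hypothetical stand-in for the module you require)
local M = {}

-- a real decoder would parse the string; this stub only shows the expected shape
function M.decode(str)
    return { raw = str }
end

return M  -- without this return statement, require("mycjson") yields nil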

Related

Django FileField saving empty file to database

I have a view that should generate a temporary JSON file and save this TempFile to the database. The content of this file, a dictionary named assets, is created using DRF serializers. This file should be written to the database in a model called CollectionSnapshot.
class CollectionSnapshotCreate(generics.CreateAPIView):
    permission_classes = [MemberPermission, ]

    def create(self, request, *args, **kwargs):
        collection = get_collection(request.data['collection_id'])
        items = Item.objects.filter(collection=collection)
        assets = {
            "collection": CollectionSerializer(collection, many=False).data,
            "items": ItemSerializer(items, many=True).data,
        }
        fp = tempfile.TemporaryFile(mode="w+")
        json.dump(assets, fp)
        fp.flush()
        CollectionSnapshot.objects.create(
            final=False,
            created_by=request.user,
            collection_id=collection.id,
            file=ContentFile(fp.read(), name="assets.json")
        )
        fp.close()
        return JsonResponse({}, status=200)
Printing assets returns the dictionary correctly. So I am getting the dictionary normally.
Following the solution below I do get the file saved to the db, but without any content:
copy file from one model to another
Seems that json.dump(assets, fp) is failing silently, or I am missing something to actually save the content to the temp file prior to sending it to the database.
The question is: why is the file in the db empty?
I found out that fp.read() returns content based on the current pointer position in the file. At least, that is my understanding. So after I dump the assets dict as JSON to the temp file, I have to bring the cursor back to the beginning using fp.seek(0). This way, when I call fp.read() inside file=ContentFile(fp.read(), ...), it actually reads all the content. It was giving me an empty file because there was nothing left to read, since the cursor was at the end of the file.
fp = tempfile.TemporaryFile(mode="w+")
json.dump(assets, fp)
fp.flush()
fp.seek(0)  # important
CollectionSnapshot.objects.create(...)  # stays the same
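A quick standalone sketch (not part of the view) that shows this pointer behavior in isolation; the dictionary used here is just a placeholder:

import json
import tempfile

with tempfile.TemporaryFile(mode="w+") as fp:
    json.dump({"a": 1}, fp)
    fp.flush()
    print(repr(fp.read()))  # '' -- the pointer sits at the end of what was just written
    fp.seek(0)              # rewind to the beginning
    print(repr(fp.read()))  # '{"a": 1}'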

How we can read a JSON file with Go Programming Language?

I'm working on a translation project in my Angular app. I have already created all the different keys for it. I'm now trying to use Go to add some functionality to my translation workflow, so that I can work more quickly afterwards.
I'm trying to write a Go function that reads user input on the command line. I need to read this input file in order to know if there are missing keys inside. The user input must be a JSON file. I have a problem with this function: it blocks at functions.Check(err). To debug my function, I displayed the different variables with fmt.Printf.
I call this function readInput() in my main function.
The readInput() function is the following :
// this function is used to read the user's input on the command line
func readInput() string {
    // we create a reader
    reader := bufio.NewReader(os.Stdin)
    // we read the user's input
    answer, err := reader.ReadString('\n')
    // we check if any errors have occurred while reading
    functions.Check(err)
    // we trim the "\n" from the answer to only keep the string input by the user
    answer = strings.Trim(answer, "\n")
    return answer
}
In my main function I call readInput() for a specific command I created. This command is useful for updating a JSON file and adding missing keys automatically.
My func main is :
func main() {
    if os.Args[1] == "update-json-from-json" {
        fmt.Printf("please enter the name of the json file that will be used to update the json file: ")
        jsonFile := readInput()
        fmt.Printf("please enter the ISO code of the locale for which you want to update the json file: ")
        // we read the user's input
        locale := readInput()
        // we launch the script
        scripts.AddMissingKeysToJsonFromJson(jsonFile, locale)
    }
}
The command line I use for this code is: go run mis-t.go update-json-from-json
Do you know what I'm missing in my code, please?
Presuming that the file contains dynamic and unknown keys and values that you cannot model in your application, you can do something like this:
func main() {
    if os.Args[1] == "update-json-from-json" {
        ...
        jsonFile := readInput()
        // jsonFile is the file name, so read the file's contents before unmarshalling
        data, err := os.ReadFile(jsonFile)
        functions.Check(err)
        var jsonKeys interface{}
        err = json.Unmarshal(data, &jsonKeys)
        functions.Check(err)
        ...
    }
}
to load the contents into the empty interface, and then use the go reflection library (https://golang.org/pkg/reflect/) to iterate over the fields, find their names and values and update them according to your needs.
The alternative is to Unmarshal into a map[string]string, but that won't cope very well with nested JSON, whereas this might (but I haven't tested it).
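As a rough sketch of what the iteration could look like, assuming the top level of the JSON is an object and using a type assertion on the decoded value instead of the reflect package (the sample data below is made up):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    data := []byte(`{"greeting": "hello", "missing": ""}`)

    var jsonKeys interface{}
    if err := json.Unmarshal(data, &jsonKeys); err != nil {
        panic(err)
    }

    // a JSON object decodes into map[string]interface{} when the target is interface{}
    if obj, ok := jsonKeys.(map[string]interface{}); ok {
        for key, value := range obj {
            fmt.Printf("key=%q value=%v\n", key, value)
        }
    }
}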

How to validate if a JSON path exists in JSON

In a given JSON document, how do I validate whether a JSON path exists?
I am using jayway-jsonpath and have the below code
JsonPath.read(jsonDocument, jsonPath)
The above code can potentially throw below exception
com.jayway.jsonpath.PathNotFoundException: No results for path:
$['a.b.c']
In order to mitigate it, I intend to validate if the path exists before trying to read it with JsonPath.read
For reference, I went through the following two pieces of documentation, but couldn't really find what I want:
http://www.baeldung.com/guide-to-jayway-jsonpath
https://github.com/json-path/JsonPath
Whilst it is true that you can catch an exception, as mentioned in the comments, there might be a more elegant way to check whether a path exists without writing try/catch blocks all over the code.
You can use the following configuration option with jayway-jsonpath:
com.jayway.jsonpath.Option.SUPPRESS_EXCEPTIONS
With this option active no exception is thrown. If you use the read method, it simply returns null whenever a path is not found.
Here is an example with JUnit 5 and AssertJ showing how you can use this configuration option, avoiding try / catch blocks just for checking if a json path exists:
@ParameterizedTest
@ArgumentsSource(CustomerProvider.class)
void replaceStructuredPhone(JsonPathReplacementArgument jsonPathReplacementArgument) {
    DocumentContext dc = jsonPathReplacementHelper.replaceStructuredPhone(
            JsonPath.parse(jsonPathReplacementArgument.getCustomerJson(),
                    Configuration.defaultConfiguration().addOptions(Option.SUPPRESS_EXCEPTIONS)),
            "$.cps[5].contactPhoneNumber", jsonPathReplacementArgument.getUnStructuredPhoneNumberType());

    UnStructuredPhoneNumberType unstructRes = dc.read("$.cps[5].contactPhoneNumber.unStructuredPhoneNumber");
    assertThat(unstructRes).isNotNull();

    // this path does not exist, since it should have been deleted
    Object structRes = dc.read("$.cps[5].contactPhoneNumber.structuredPhoneNumber");
    assertThat(structRes).isNull();
}
You can also create a JsonPath object or ReadContext with a Configuration if you have a use case to check multiple paths.
// Suppress errors thrown by JsonPath and instead return null if a path does not exist in a JSON blob.
Configuration suppressExceptionConfiguration = Configuration
        .defaultConfiguration()
        .addOptions(Option.SUPPRESS_EXCEPTIONS);

ReadContext jsonData = JsonPath.using(suppressExceptionConfiguration).parse(jsonString);

for (int i = 0; i < listOfPaths.size(); i++) {
    String pathData = jsonData.read(listOfPaths.get(i));
    if (pathData != null) {
        // do something
    }
}
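If you only need a boolean check, you could wrap this in a small helper. The following is a sketch using the same suppress-exceptions configuration (the class and method names are just illustrative):

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.JsonPath;
import com.jayway.jsonpath.Option;

public final class JsonPathUtil {

    private static final Configuration SUPPRESS_EXCEPTIONS_CONF = Configuration
            .defaultConfiguration()
            .addOptions(Option.SUPPRESS_EXCEPTIONS);

    // Returns true if the path resolves to a non-null value in the given JSON string.
    public static boolean pathExists(String json, String path) {
        Object value = JsonPath.using(SUPPRESS_EXCEPTIONS_CONF).parse(json).read(path);
        return value != null;
    }
}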

PowerShell changes return object's type

I am using PowerShell v3 and the Windows PowerShell ISE. I have the following function that works fine:
function Get-XmlNode([xml]$XmlDocument, [string]$NodePath, [string]$NamespaceURI = "", [string]$NodeSeparatorCharacter = '.')
{
    # If a Namespace URI was not given, use the Xml document's default namespace.
    if ([string]::IsNullOrEmpty($NamespaceURI)) { $NamespaceURI = $XmlDocument.DocumentElement.NamespaceURI }
    # In order for SelectSingleNode() to actually work, we need to use the fully qualified node path along with an Xml Namespace Manager, so set them up.
    [System.Xml.XmlNamespaceManager]$xmlNsManager = New-Object System.Xml.XmlNamespaceManager($XmlDocument.NameTable)
    $xmlNsManager.AddNamespace("ns", $NamespaceURI)
    [string]$fullyQualifiedNodePath = Get-FullyQualifiedXmlNodePath -NodePath $NodePath -NodeSeparatorCharacter $NodeSeparatorCharacter
    # Try and get the node, then return it. Returns $null if the node was not found.
    $node = $XmlDocument.SelectSingleNode($fullyQualifiedNodePath, $xmlNsManager)
    return $node
}
Now, I will be creating a few similar functions, so I want to break the first 3 lines out into a new function so that I don't have to copy-paste them everywhere, so I have done this:
function Get-XmlNamespaceManager([xml]$XmlDocument, [string]$NamespaceURI = "")
{
    # If a Namespace URI was not given, use the Xml document's default namespace.
    if ([string]::IsNullOrEmpty($NamespaceURI)) { $NamespaceURI = $XmlDocument.DocumentElement.NamespaceURI }
    # In order for SelectSingleNode() to actually work, we need to use the fully qualified node path along with an Xml Namespace Manager, so set them up.
    [System.Xml.XmlNamespaceManager]$xmlNsManager = New-Object System.Xml.XmlNamespaceManager($XmlDocument.NameTable)
    $xmlNsManager.AddNamespace("ns", $NamespaceURI)
    return $xmlNsManager
}
function Get-XmlNode([xml]$XmlDocument, [string]$NodePath, [string]$NamespaceURI = "", [string]$NodeSeparatorCharacter = '.')
{
    [System.Xml.XmlNamespaceManager]$xmlNsManager = Get-XmlNamespaceManager -XmlDocument $XmlDocument -NamespaceURI $NamespaceURI
    [string]$fullyQualifiedNodePath = Get-FullyQualifiedXmlNodePath -NodePath $NodePath -NodeSeparatorCharacter $NodeSeparatorCharacter
    # Try and get the node, then return it. Returns $null if the node was not found.
    $node = $XmlDocument.SelectSingleNode($fullyQualifiedNodePath, $xmlNsManager)
    return $node
}
The problem is that when "return $xmlNsManager" executes, the following error is thrown:
Cannot convert the "System.Object[]" value of type "System.Object[]" to type "System.Xml.XmlNamespaceManager".
So even though I have explicitly cast my $xmlNsManager variables to be of type System.Xml.XmlNamespaceManager, when it gets returned from the Get-XmlNamespaceManager function PowerShell is converting it to an Object array.
If I don't explicitly cast the value returned from the Get-XmlNamespaceManager function to System.Xml.XmlNamespaceManager, then the following error is thrown from the .SelectSingleNode() function because the wrong data type is being passed into the function's 2nd parameter.
Cannot find an overload for "SelectSingleNode" and the argument count: "2".
So for some reason PowerShell is not maintaining the data type of the return variable. I would really like to get this working from a function so that I don't have to copy-paste those 3 lines all over the place. Any suggestions are appreciated. Thanks.
What's happening is that PowerShell is converting your namespace manager object to an object array.
I think it has to do with PowerShell's nature of "unrolling" collections when sending objects down the pipeline. I think PowerShell will do this for any type implementing IEnumerable (has a GetEnumerator method).
As a workaround, you can use the comma trick to prevent this behavior and send the object as a whole collection.
function Get-XmlNamespaceManager([xml]$XmlDocument, [string]$NamespaceURI = "")
{
    ...
    $xmlNsManager.AddNamespace("ns", $NamespaceURI)
    return ,$xmlNsManager
}
More specifically, what is happening here is that your coding habit of strongly typing $fullyQualifiedNodePath is trying to turn the result of the Get (which is a list of objects) into a string.
[string]$foo
will constrain the variable $foo to only be a string, no matter what comes back. In this case, your type constraint is what is subtly screwing up the return and making it Object[].
Also, looking at your code, I would personally recommend you use Select-Xml (built into V2 and later), rather than do a lot of hand-coded XML unrolling. You can do namespace queries in Select-Xml with -Namespace @{x="..."}.
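For example, here is a rough sketch of a namespace query with Select-Xml (the namespace URI and node names below are made up):

[xml]$xmlDocument = @"
<root xmlns="http://example.com/ns">
  <settings><name>value</name></settings>
</root>
"@

# Select-Xml resolves the document's default namespace through the prefix defined in -Namespace
$result = Select-Xml -Xml $xmlDocument -XPath "//ns:settings/ns:name" -Namespace @{ ns = "http://example.com/ns" }
$result.Node.InnerText   # value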

How to capture an error message from a third-party library in Lua?

I have adopted LuaJSON to parse JSON. The parse call looks like this:
-- file.lua
local res = json.decode.decode(json_str)
if res == nil then
throw('invalid JSON')
end
...
But if json_str is badly formatted, decode() will stop within LuaJSON and interrupt the execution of file.lua. I want the control flow to return to my function instead, so I can provide a custom error notification.
I have browsed the LuaJSON APIs, and there is no callback-like error handling. I want to know: is there any Lua mechanism that allows me to handle errors occurring within LuaJSON from within file.lua?
The problem here is that the decode function calls error if it encounters an error.
This is Lua's equivalent to an exception handling mechanism. What you want to do is call the decode function in protected mode:
local success, res = pcall(json.decode.decode, json_str);
if success then
    -- res contains a valid json object
    ...
else
    -- res contains the error message
    ...
end
In your example, if you are using Lua CJSON version 2.1.0, there is a new "cjson.safe" module, which returns nil and an error message if any exception occurs during the encode or decode procedure.
local decoder = require("cjson.safe").decode
local decoded_data, err = decoder(data)
if err then
    ngx.log(ngx.ERR, "Invalid request payload:", data)
    ngx.exit(400)
end