I am having problems with the JSON file in my Corona game. Basically, the game awards you trophies (cards) when you reach certain point totals. The card information is then written into a JSON file. When you start the game, it checks whether the file "playerCards.json" already exists, and if not, it creates the file with the following structure:
{"common":[],"uncommon":[],"rare":[]}
Later in the game, the player finally receives one card. The (simplified) code below runs:
local category = "common"
local rdm = math.random(1,20)
array = loadFile("playerCards.json")
array[category][rdm] = collection[category][rdm]
writeFile("playerCards.json", array)
Collection is a preloaded Lua table structured like this: { ["common"] = { "001", "002", "003", ..., "020" } }. For the sake of the question, I've restricted the cards to a single category (common). Let's suppose the player won card number 3, so the code should run like this:
array["common"][3] = collection["common"][3]
And the resulting table array would be:
array = { ["common"] = { nil, nil, "003" } }
When I use the function writeFile("playerCards.json", array) I am encoding the table above into the file playerCards.json. For now, this code works perfectly, and the resulting JSON file is as follows:
{"common":[null,null,"003"],"uncommon":[],"rare":[]}
The problem comes when the player gets a card above 9, for example, 15. When written, the JSON file becomes this:
{"common":{"3":"003","15":"015"},"uncommon":[],"rare":[]}
How can the same code produce such different results? Can you help me solve this problem? If you need it, here is the code for the load and write functions:
local loadFile = function(name)
    local data = nil
    local path = system.pathForFile(name, system.DocumentsDirectory)
    local handle = io.open(path, "r")
    if handle then
        data = json.decode(handle:read("*a"))
        io.close(handle)
    end
    return data
end
local writeFile = function(name, data)
    local path = system.pathForFile(name, system.DocumentsDirectory)
    local handle = io.open(path, "w+")
    if handle then
        handle:write(json.encode(data))
        io.close(handle)
    end
end
The problem is that Lua does not distinguish between list and mapping objects (tables are used for both) while JSON does.
This creates ambiguity when serializing to JSON; the serializer has to determine whether a Lua table should serialize to an array or an object. Most serializers do this by checking whether the keys of the table are roughly sequential integers, and only serialize to an array if so. If your array is too 'sparse', as in your second example, the serializer thinks it's a map with numerical keys instead.
I don't know about Corona, but some Lua JSON libraries I have seen include a method to mark tables explicitly as arrays. Alternatively, you can fill empty slots of the array with a placeholder value like false instead of nil.
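For example, here is a minimal sketch of the placeholder approach (the padCategory helper and the maximum of 20 cards are my assumptions, based on the math.random(1,20) call above):

-- Pad a category so every slot up to maxCards holds a value; the serializer
-- then sees a dense array instead of a sparse map.
local function padCategory(cards, maxCards)
    for i = 1, maxCards do
        if cards[i] == nil then
            cards[i] = false   -- placeholder the game can treat as "no card yet"
        end
    end
end

array[category][rdm] = collection[category][rdm]
padCategory(array[category], 20)
writeFile("playerCards.json", array)
-- encodes to something like {"common":[false,false,"003",false,...],"uncommon":[],"rare":[]}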
Might be a dumb question or a small typo. I'm iterating through a JSON object I loaded in, and my goal is to machine-translate (MTL) some text deep inside. I'm iterating through a list of objects for a specified code, then I translate the text on a correct match. So the flow is: iterate through objects > match code > translate text.
The problem is when I try to alter the object by replacing the text with the translated version, that data isn't changed in the returned data object.
def findMatch(data):
    # Search Events
    for event in data['events']:
        if (event is not None):
            pages = event['pages']
            for page in pages:
                lists = page['list']
                for list in lists:
                    if(list['code'] == 401):
                        for parameter in list['parameters']:
                            parameter = checkLine(parameter)
    return data
checkLine(parameter) will return a string of the text translated.
I'm guessing parameter isn't connected to the data object which is why it's not working but unsure how exactly it should be written out.
Also any suggestions on how to make this faster/more efficient are welcome.
for i, parameter in enumerate(list['parameters']):
    list['parameters'][i] = checkLine(parameter)
Does the trick.
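Dropped back into the original function, that would look roughly like this (a sketch; I've also renamed the loop variable so it doesn't shadow the built-in list):

def findMatch(data):
    # Search events for "show text" commands (code 401) and translate them in place.
    for event in data['events']:
        if event is not None:
            for page in event['pages']:
                for cmd in page['list']:   # renamed from 'list' to avoid shadowing the built-in
                    if cmd['code'] == 401:
                        for i, parameter in enumerate(cmd['parameters']):
                            cmd['parameters'][i] = checkLine(parameter)
    return data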
Appreciate it if someone can point me in the right direction here; I'm a bit new to Python :)
I have a json file that looks like this:
[
  {
    "user":"user5",
    "games":"game1"
  },
  {
    "user":"user6",
    "games":"game2"
  },
  {
    "user":"user5",
    "games":"game3"
  },
  {
    "user":"user6",
    "games":"game4"
  }
]
And I have a small CSV file that looks like this:
module_a,module_b
10,20
15,16
1,11
2,6
I am trying to merge the CSV data into the above-mentioned JSON so it looks like this, keeping the order as it is:
[
  {
    "user":"user5",
    "module_a":"10",
    "games":"game1",
    "module_b":"20"
  },
  {
    "user":"user6",
    "module_a":"15",
    "games":"game2",
    "module_b":"16"
  },
  {
    "user":"user5",
    "module_a":"1",
    "games":"game3",
    "module_b":"11"
  },
  {
    "user":"user6",
    "module_a":"2",
    "games":"game4",
    "module_b":"6"
  }
]
What would be the best approach to achieve this while keeping the output order as it is?
Appreciate any guidance.
The JSON specification doesn't prescribe ordering, and it won't be enforced by any JSON parser (unless it's the default mode of operation of the underlying platform), so going a long way just to keep the order when processing JSON files is usually pointless. To quote:
An object is an unordered collection of zero or more name/value
pairs, where a name is a string and a value is a string, number,
boolean, null, object, or array.
...
JSON parsing libraries have been observed to differ as to whether or
not they make the ordering of object members visible to calling
software. Implementations whose behavior does not depend on member
ordering will be interoperable in the sense that they will not be
affected by these differences.
That being said, if you really insist on order, you can parse your JSON into a collections.OrderedDict (and write it back from it) which will allow you to inject data at specific places while keeping the overall order. So, first load your JSON as:
import json
from collections import OrderedDict
with open("input_file.json", "r") as f: # open the JSON file for reading
json_data = json.load(f, object_pairs_hook=OrderedDict) # read & parse it
Now that you have your JSON, you can go ahead and load up your CSV, and since there's not much else to do with the data you can immediately apply it to the json_data. One caveat, though - since there is no direct map between the CSV and the JSON, one has to assume the index as being the map (i.e. the first CSV row being applied to the first JSON element etc.), so we'll use enumerate() to track the current index. There is also no info on where to insert individual values, so we'll assume that the first column goes after the first JSON object entry, the second goes after the second entry and so on, and since they can have different lengths we'll use itertools.izip_longest() to interleave them. So:
import csv
from itertools import izip_longest  # use zip_longest on Python 3.x

with open("input_file.csv", "rb") as f:  # open the CSV file for reading
    reader = csv.reader(f)  # build a CSV reader
    header = next(reader)  # let's store the header so we can get the key values later
    for index, row in enumerate(reader):  # enumerate and iterate over the rest
        if index >= len(json_data):  # there are more CSV rows than we have elements in JSON
            break
        row = [(header[i], v) for i, v in enumerate(row)]  # turn the row into element tuples
        # since collections.OrderedDict doesn't support random access by index we'll have to
        # rebuild it by mixing in the CSV elements with the existing JSON elements
        # use json_data[index].items() on Python 3.x
        data = (v for p in izip_longest(json_data[index].iteritems(), row) for v in p)
        # then finally overwrite the current element in json_data with a new OrderedDict
        json_data[index] = OrderedDict(data)
And with our CSV data nicely inserted into the json_data, all that's left is to write back the JSON (you may overwrite the original file if you wish):
with open("output_file.json", "w") as f: # open the output JSON file for writing
json.dump(json_data, f, indent=2) # finally, write back the modified JSON
This will produce the result you're after. It even respects the names in the CSV header so you can replace them with bob and fred and it will insert those keys in your JSON. You can even add more of them if you need more elements added to your JSON.
Still, just because it's possible doesn't mean you should rely on JSON ordering. If it's user readability you're after, there are far more suitable formats with optional ordering, like YAML.
I'm using webwrite to post to an API. One of the field names in the JSON object I'm trying to set up for posting is odata.metadata. I'm making a struct that looks like this for the JSON object:
json = struct('odata.metadata', metadata, 'odata.type', type, 'Name', name);
But I get an error
Error using struct
Invalid field name "odata.metadata"
Here's the json object I'm trying to use in Matlab. All strings for simplicity:
{
  "odata.metadata": "https://website.com#Element",
  "odata.type": "Blah.Blah.This.That",
  "Name": "My Object"
}
Is there a way to submit this json object or is it a lost cause?
Field names are not allowed to have dots in them. The reason is that this would be confused with accessing another nested structure within the structure itself.
For example, doing json.odata.metadata would be interpreted as json being a struct with a member whose field name is odata where odata has another member whose field name is metadata. This would not be interpreted as a member with the combined field name as odata.metadata. You're going to have to rename the field to something else or change the convention of your field name slightly.
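You can see the clash at the prompt; roughly, a dotted assignment quietly builds a nested struct rather than a single field named odata.metadata:

>> s = struct();
>> s.odata.metadata = 'https://website.com#Element';  % creates s.odata as a nested struct
>> fieldnames(s)
ans =
    'odata'
>> fieldnames(s.odata)
ans =
    'metadata'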
Usually, the convention is to replace dots with underscores. An automated way to take care of this if you're not willing to manually rename the field names yourself is to use a function called matlab.lang.makeValidName that takes in a string and converts it into a valid field name. This function was introduced in R2014a. For older versions, it's called genvarname.
For example:
>> matlab.lang.makeValidName('odata.metadata')
ans =
odata_metadata
As such, either replace all dots with _ to ensure no ambiguities or use matlab.lang.makeValidName or genvarname to take care of this for you.
I would suggest using a containers.Map instead of a struct to store your data, and then creating your JSON string by iterating over the Map keys and appending them along with the data to your JSON.
Here's a quick demonstration of what I mean:
%// Prepare the Map and the Data:
metadata = 'https://website.com#Element';
type = 'Blah.Blah.This.That';
name = 'My Object';
example_map = containers.Map({'odata.metadata','odata.type','Name'},...
{metadata,type,name});
%// Convert to JSON:
JSONstr = '{'; %// Initialization
map_keys = keys(example_map);
map_vals = values(example_map);
for ind1 = 1:example_map.Count
    JSONstr = [JSONstr '"' map_keys{ind1} '":"' map_vals{ind1} '",'];
end
JSONstr = [JSONstr(1:end-1) '}']; %// Finalization (get rid of the last ',' and close)
Which results in a valid JSON string.
Obviously if your values aren't strings you'll need to convert them using num2str etc.
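For instance, splicing a hypothetical numeric field into the string built above (numbers go in without the surrounding quotes):

score = 42;  %// hypothetical numeric value
JSONstr = [JSONstr(1:end-1) ',"Score":' num2str(score) '}'];  %// appends ,"Score":42 before the closing '}'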
Another alternative you might want to consider is the JSONlab FEX submission. I saw that its savejson.m is able to accept cell arrays - which can hold any string you like.
Other alternatives may include any of the numerous Java or python JSON libraries which you can call from MATLAB.
I probably shouldn't add this as an answer - but you can have '.' in a struct fieldname...
Before I go further - I do not advocate this and it will almost certainly cause bugs and a lot of trouble down the road... @rayryeng's method is a better approach.
If your struct is created by a MEX function which creates a field that contains a ".", then you will get what you're after.
To create your own test, see the MathWorks example and modify accordingly.
(I won't put the full code here, to discourage the practice.)
If you update the char example and compile to test_mex you get:
>> obj = test_mex
obj =
Doublestuff: [1x100 double]
odata.metadata: 'This is my char'
Note: You can only access your custom field in Matlab using dynamic fieldnames:
obj.('odata.metadata')
You need to use a mex capability to update it...
I'm trying to convert a saved Lua table into something I can parse more easily for inclusion on a web page. I'm using Lua for Windows (the luaforwindows distribution on code.google). It includes harningt's luajson for handling this conversion. I've been able to figure out how to load in the contents of the file. I get no errors, but the "json" it produces is invalid: it just encloses the entire thing in quotes and adds \n and \t. The file I'm reading is a .lua file, which follows the format:
MyBorrowedData = {
    ["first"] = {
        ["firstSub"] = {
            ["firstSubSub"] = {
                {
                    ["stuffHere"]="someVal"
                },
                {
                    ["stuffHere2"]="some2Val"
                },
            },
        },
    },
}
Note the , following the final item in each "row" of the table - is that the issue? Is it valid Lua data? Given the output, I feel like Lua is unable to parse the table when I read it in. I believe this even more because when I try to just require the Lua data file, I seem to be unable to iterate through the table manually.
Can anyone tell me if it's a bug in the code or poorly formatted data that's causing the issue?
The Lua lifting is easy:
local json = require("json")
local file = "myBorrowedData.lua"
local jsonOutput = "myBorrowedOutput.json"
r = io.input(file)
t = io.read('*all')
u = io.output(jsonOutput)
s = json.encode(t)
io.write(s)
You're reading the file as plain text, and not loading the Lua code contained inside of it. t becomes a string with the Lua file's contents, which of course serializes to a plain JSON string.
To serialize the data in the Lua file, you need to run it.
local func = loadstring(lua_code_string) -- Load Lua code from string
local data = {} -- Create table to store data
setfenv(func, data) -- Set the loaded function's environment, so any global writes that func does will go to the data table instead.
func() -- Run the loaded function.
local serialized_out = json.encode(data)
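Tied back to your original script, it could look roughly like this (a sketch; loadstring and setfenv are the Lua 5.1 functions available in Lua for Windows):

local json = require("json")

local file = "myBorrowedData.lua"
local jsonOutput = "myBorrowedOutput.json"

-- Read the Lua source as text, then compile and run it in a sandbox table.
local handle = io.open(file, "r")
local lua_code_string = handle:read("*all")
handle:close()

local func = loadstring(lua_code_string)  -- or simply loadfile(file)
local data = {}
setfenv(func, data)  -- globals written by the chunk (MyBorrowedData) land in 'data'
func()

local out = io.open(jsonOutput, "w")
out:write(json.encode(data.MyBorrowedData))
out:close()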
Also, ending the last item of a table with a comma (or semicolon) is perfectly valid syntax. In fact, I recommend it; you don't need to worry about adding a comma to the former last object when adding new items to the table.
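For example, both of these are fine:

local a = { 1, 2, 3 }
local b = { 1, 2, 3, }  -- trailing separator; convenient when you add items later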
I'm trying to pass data from a FSharp.Data.CsvProvider to a Deedle.Frame.
I'm fairly new to F#, and I need to convert some CSV files from culture "it-IT" to "en-US", so I can use the data.
I found Deedle, and I want to learn how to use it, but I was not able to directly load the data from a CSV file into Deedle (at least that is what is printed in F# Interactive).
I noticed that the CsvProvider handles the conversion, but after some days of attempts I am still not able to pass the data along.
I believe that Deedle should be able to deal with CSV files that use non-US culture:
let frame = Frame.ReadCsv("C:\\test.csv", culture="it-IT")
That said, if you want to use the CSV type provider for some reason, you can use:
let cs = new CsvProvider<"C:/data/fb.csv">()
cs.Rows
|> Frame.ofRecords
|> Frame.indexColsWith cs.Headers.Value
This uses Frame.ofRecords, which creates a data frame from any .NET collection and expands the properties of the objects as columns. The CSV provider represents data as tuples, so this does not name the headers correctly - but the Frame.indexColsWith function lets you name the headers explicitly.