JSON not converting long numbers appropriately

I have a simple JSON document in which the numbers are not getting parsed properly.
[
  {
    "orderNumber": 1,
    "customerId": 228930314431312345,
    "shoppingCartId": 22893031443137109,
    "firstName": "jjj"
  }
]
I tried converting it at http://www.utilities-online.info/xmltojson/ and the result was:
<?xml version="1.0" encoding="UTF-8" ?>
<orderNumber>1</orderNumber>
<customerId>228930314431312350</customerId>
<shoppingCartId>22893031443137108</shoppingCartId>
<firstName>jjj</firstName>
As you can see, the XML is different from the JSON. I'm new to JSON. Am I missing something?

This is a JavaScript precision problem.
According to Mozilla Developer Network:
ECMA-262 only requires a precision of up to 21 significant digits. Other implementations may not support precisions higher than required by the standard.
Source: https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Number/toPrecision
I pasted your array into Google Chrome's JavaScript console and the large values came back rounded.
So it looks like JavaScript is rounding the values before they are converted to XML. Since your conversion is done via JavaScript in the browser at http://www.utilities-online.info/xmltojson/, it makes sense that the numbers were changed.
(Note: I tested on Google Chrome version 26.0.1410.43 m using Windows 7 Professional)
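To reproduce what the console shows without a screenshot, you can log the literals yourself; integers above 2^53 are rounded to the nearest representable double before you ever see them:

```javascript
// The literals are rounded at parse time, not during the XML conversion
console.log(228930314431312345);  // 228930314431312350
console.log(22893031443137109);   // 22893031443137108
console.log(Number.isSafeInteger(228930314431312345)); // false
```

Note that the rounded values match the ones in the XML output above, which is consistent with the conversion happening in JavaScript.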
Edit:
Is there any reason why you cannot pass these values to Javascript as strings?
Try this:
[
  {
    "orderNumber": "1",
    "customerId": "228930314431312345",
    "shoppingCartId": "22893031443137109",
    "firstName": "jjj"
  }
]
I was able to do this and save the values successfully. However, you will not be able to run a math calculation on them in JavaScript without first converting them back to numbers, and thus losing precision (unless, of course, you are doing something like multiplying by 0).
This also converted to XML correctly using your reference http://www.utilities-online.info/xmltojson/.

JavaScript represents its numbers as double-precision floats, which limits the largest integer that can be represented exactly to ±9007199254740992 (2^53); beyond that, not every integer has its own representation. Here is the ECMA documentation.
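A quick sketch of that limit in any modern JS engine:

```javascript
// 2^53 is where doubles stop distinguishing adjacent integers
console.log(2 ** 53);                  // 9007199254740992
console.log(2 ** 53 + 1);              // 9007199254740992 (same double!)
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991
```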

Related

Processed data different than raw data on Firefox

I came across this strange issue when using tools to prettify JSON on Firefox 65. This is my object:
{"status": 0, "message": "ok", "data": [466933532930080768, 537281936222191637]}
As expected, the values are correct in the raw response, but when the JSON or Pretty Print view parses them, the last digits are changed.
I tried a bunch of different JSON prettifiers/formatters/validators and my object seems to be correct.
Am I missing something, or did I just discover a bug?
Your problem is a famous one. To fix it, pass your number as a string and then use
BigInt("466933532930080768") // --> 466933532930080768n
to get the correct value back. The trailing n marks the value as a BigInt: JavaScript can do exact integer arithmetic on it, although BigInts cannot be mixed with regular numbers in the same expression.
The BigInt documentation also explains why and where JavaScript introduces the error.
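A minimal sketch of the string-plus-BigInt approach using a JSON.parse reviver (the 16-digit cutoff in the regex is just an assumption for this example; BigInt requires Node 10.4+ or a modern browser):

```javascript
// Ship large ids as strings, then revive them as exact BigInt values
const raw = '{"status": 0, "message": "ok", "data": ["466933532930080768", "537281936222191637"]}';
const obj = JSON.parse(raw, (key, value) =>
  typeof value === "string" && /^\d{16,}$/.test(value) ? BigInt(value) : value
);
console.log(obj.data[0]);       // 466933532930080768n
console.log(obj.data[0] + 1n);  // BigInt arithmetic is exact
```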

Storage Optimisation: JSON vs String with delimiters

The below JSON file costs 163 bytes to store.
{
  "locations": [
    {
      "station": 6,
      "category": 1034,
      "type": 5
    },
    {
      "station": 3,
      "category": 1171,
      "type": 7
    }
  ]
}
But if the values are put together as a string with the delimiters ',' and '_', 6_1034_5,3_1171_7 costs only 17 bytes.
What are the problems with this approach?
Thank you.
The problems that I have seen with this sort of approach are mainly centered around maintainability.
With the delimited approach, the properties of your location items are identified by ordinal position. Since they are all numbers, there is nothing to tell you whether the first segment is the station, the category, or the type; you must know that in advance. Someone new to your code base may not know that and therefore introduce bugs.
Right now all of your data are integers, which are relatively easy to encode and decode and do not risk conflicting with your delimiters. However, if you need to add user-supplied text at some point, you run the risk of that text containing your delimiters. In that case, you will have to invent an escaping/encoding mechanism to ensure that you can reliably detect your delimiters. This may seem simple, but it is more difficult than you may suspect. I've seen it done incorrectly many times.
Using a well-known structured text format like XML or JSON has the advantages that it has fully developed and tested rules for dealing with all types of text, and there are fully developed and tested libraries for reading and writing it.
Depending on your circumstances, this concern over the amount of storage could be a micro-optimization. You might want to try some capacity calculations (e.g., how much actual storage is required for X items) and compare that to the expected number of items vs. the expected amount of storage that will be available.
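To make the trade-offs concrete, here is a hypothetical encoder/decoder pair for the delimited format. Note that the field order exists only as a convention inside the code, nothing in the data documents it, which is exactly the maintainability risk described above:

```javascript
// Pack/unpack "station_category_type" records, comma-separated.
// The field order (station, category, type) is an undocumented convention.
const encode = (locations) =>
  locations.map((l) => [l.station, l.category, l.type].join("_")).join(",");

const decode = (s) =>
  s.split(",").map((part) => {
    const [station, category, type] = part.split("_").map(Number);
    return { station, category, type };
  });

const locations = [
  { station: 6, category: 1034, type: 5 },
  { station: 3, category: 1171, type: 7 },
];
const packed = encode(locations);
console.log(packed);         // "6_1034_5,3_1171_7" (17 bytes)
console.log(decode(packed)); // round-trips back to the objects
```

Any value containing ',' or '_' would silently break this scheme, which is where the escaping problem begins.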

Are you able to subtract in JSON?

Here is my JSON code.
{
  "user_email": "{User.Email}",
  "activity_date": "{Lead.LastAction.Date}",
  "record_id": "{Lead.Id}-{Lead.LastAction.Date}",
  "action_type": "{Lead.LastAction}",
  "milestone": "{Lead.Milestone}",
  "date_added": "{Lead.Date}"
}
Is it possible to add calculations in the code?
For example, can I add a line where the date_added is subtracted from activity_date?
No: JSON is a way to transport JS objects.
You can do the calculation while you build the JSON in your host language (for example in PHP, or JS server-side), basically creating the JSON object with the result of the calculation already in it.
In JSON by itself you cannot do that; it's just a data format, totally passive, like a text file. (If you happen to use JSONP, the story would be a bit different and it might be possible, but using JSONP to do such things steps into 'hack/exploit' territory and it probably should not be used that way. :) )
However, I see you are using not only JSON: there is some extra markup like {User.Email}. This is totally outside the JSON spec, so clearly you are using some form of text-templating engine. These can be quite intelligent at times. Check that path: see which engine you are using and what its features are; maybe you can write a custom function or expression to do the subtraction for you. Maybe, just maybe, it's as easy as
"inactivity_period": "{Lead.LastAction.Date - Lead.Date}"
or
"inactivity_period": "{myFunctionThatIWrote(Lead.LastAction.Date, Lead.Date)}"
but that all depends on the templating engine.
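For illustration, doing the subtraction before the JSON is built might look like this in server-side JavaScript (the field names and dates here are invented for the sketch):

```javascript
// Compute the difference in the host language, then serialize the result
const lead = {
  lastActionDate: new Date("2024-05-01"),
  dateAdded: new Date("2024-04-21"),
};
const payload = {
  // Subtracting Date objects yields milliseconds; convert to whole days
  inactivity_days: (lead.lastActionDate - lead.dateAdded) / 86400000,
};
console.log(JSON.stringify(payload)); // {"inactivity_days":10}
```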

How to Parse JSON Returned in ColdFusion

I'm sure this is a relatively simple question, but I can't seem to find a simple answer anywhere online.
I have a few lines of JSON returned by a cfhttp POST with an image URL that I'd like to parse out and display in my ColdFusion page:
{
  "href": "http://server.arcgisonline.com/arcgis/rest/directories/arcgisoutput/ESRI_StreetMap_World_2D_MapServer/_ags_map734a6ad322dd493e84499d78f027d841.png",
  "width": 854,
  "height": 493,
  "extent": {
    "xmin": -8285407.015562119,
    "ymin": 4944008.4197687358,
    "xmax": -8220129.7934066672,
    "ymax": 4981691.8747132765,
    "spatialReference": {
      "wkid": 102100,
      "latestWkid": 3857
    }
  },
  "scale": 288895.27714399656
}
How can I make "href"'s value a part of a variable in ColdFusion, and/or potentially have a button linked to downloading it?
EDIT: I forgot to mention that I'm using ColdFusion MX (also known as version 6), which is why I cannot use the DeserializeJSON function listed on Adobe's page:
Converts a JSON (JavaScript Object Notation) string data
representation into CFML data, such as a CFML structure or array.
https://wikidocs.adobe.com/wiki/display/coldfusionen/DeserializeJSON
Just parse your cfhttp result with deserializeJSON():
<cfset getResult = deserializeJSON(result_Variable.filecontent)>
and you can get the href value using "#getResult.href#".
I forgot to mention that I'm using ColdFusion MX
Ah, that makes a very big difference! (Unless otherwise stated in the tags, most people will assume a more recent version, like CF9+).
JSON support was not added until CF8. If you search, there are still some older udf/cfc's for handling JSON out there. For example:
JSONDecode at http://www.cflib.org says it works with MX6
JSONUtil.cfc works with MX7+. It might work with MX6 out of the box, or with a few modifications. This thread has a description of how to encode with JSONUtil. Decoding should be equally simple. Just create an instance and invoke deserializeJSON, i.e.:
<!--- not tested --->
<cfset util = createObject("component", "path.to.JSONUtil")>
<cfset result = util.deSerializeJSON(yourJSONString)>
That said, ColdFusion MX is a bit long in the tooth and is no longer supported. You should seriously consider upgrading, or switching to the open-source Railo engine.

How to parse JSON string containing "NaN" in Node.js

Have a node.js app that is receiving JSON data strings that contain the literal NaN, like
"[1, 2, 3, NaN, 5, 6]"
This makes JSON.parse(...) throw in Node.js. I'd like to parse it into an object, if I can.
I know NaN is not part of the JSON spec. Most SO links (sending NaN in json) suggest fixing the output.
Here, though, the data is produced on a server I don't control, by a commercial Java library where I can see the source code. And it's produced by Google's Gson library:
private Gson gson = (new GsonBuilder().serializeSpecialFloatingPointValues().create());
...
gson.toJson(data[i], Vector.class, jsonOut)
So that seems like a legitimate source. And according to the Gson API Javadoc it says I should be able to parse it:
Section 2.4 of JSON specification disallows special double values
(NaN, Infinity, -Infinity). However, Javascript specification (see
section 4.3.20, 4.3.22, 4.3.23) allows these values as valid
Javascript values. Moreover, most JavaScript engines will accept these
special values in JSON without problem. So, at a practical level, it
makes sense to accept these values as valid JSON even though JSON
specification disallows them.
Despite that, this fails in both Node.js and Chrome: JSON.parse('[1,2,3,NaN,"5"]')
Is there a flag to set in JSON.parse()? Or an alternative parser that accepts NaN as a literal?
I've been Googling for a while but can't seem to find a doc on this issue.
Have a node.js app that is receiving JSON data strings that contain the literal NaN, like
Then your Node.js app isn't receiving JSON; it's receiving text that's vaguely JSON-like. NaN is not a valid JSON token.
Three options:
1. Get the source to correctly produce JSON
This is obviously the preferred course. The data is not JSON, that should be fixed, which would fix your problem.
2. Tolerate the NaN in a simple-minded way:
You could replace it with null before parsing it, e.g.:
var result = JSON.parse(yourString.replace(/\bNaN\b/g, "null"));
...and then handle nulls in the result. But that's very simple-minded, it doesn't allow for the possibility that the characters NaN might appear in a string somewhere.
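For example, the naive replacement corrupts any string value that happens to contain the word NaN:

```javascript
// The blind regex replace cannot tell tokens from string contents
const s = '{"name": "NaN Banana", "value": NaN}';
console.log(s.replace(/\bNaN\b/g, "null"));
// {"name": "null Banana", "value": null} — the string was mangled
```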
Alternately, spinning Matt Ball's reviver idea (now deleted), you could change it to a special string (like "***NaN***") and then use a reviver to replace that with the real NaN:
var result = JSON.parse(yourString.replace(/\bNaN\b/g, '"***NaN***"'), function(key, value) {
return value === "***NaN***" ? NaN : value;
});
...but that has the same issue of being a bit simple-minded, assuming the characters NaN never appear in an appropriate place.
3. Use (shudder!) eval
If you know and trust the source of this data and there's NO possibility of it being tampered with in transit, then you could use eval to parse it instead of JSON.parse. Since eval allows full JavaScript syntax, including NaN, that works. Hopefully I made the caveat bold enough for people to understand that I would only recommend this in a very, very, very tiny percentage of situations. But again, remember eval allows arbitrary execution of code, so if there's any possibility of the string having been tampered with, don't use it.
When you deal with just about anything mathematical, or with industry data, NaN is terribly convenient (and the infinities often are too). And it has been an industry standard since IEEE 754.
That's obviously why some libraries, notably Gson, let you include these values in the JSON they produce, losing standard purity and gaining sanity.
Reviver and regex solutions aren't reliably usable in a real project when you exchange complex, dynamic objects.
And eval has problems too, one of them being that it's prone to crash on IE when the JSON string is big, another being the security risk.
That's why I wrote a specific parser (used in production): JSON.parseMore
You can use the JSON5 library. A quote from the project page:
The JSON5 Data Interchange Format (JSON5) is a superset of JSON that aims to alleviate some of the limitations of JSON by expanding its syntax to include some productions from ECMAScript 5.1.
This JavaScript library is the official reference implementation for JSON5 parsing and serialization libraries.
As you would expect, among other things it does support parsing NaNs (compatible with how Python and the like serialize them):
JSON5.parse("[1, 2, 3, NaN, 5, 6]")
> (6) [1, 2, 3, NaN, 5, 6]
The correct solution is to recompile the parser and contribute an "allowNan" boolean flag to the source base. This is the solution other libraries have taken (Python's comes to mind).
Good JSON libraries will permissively parse just about anything vaguely resembling JSON when the right flags are set (Perl's JSON.pm is notably flexible), but when writing a message they produce standard JSON.
I.e.: leave the room cleaner than you found it.
Just a minor addition to TJ Crowder's already comprehensive reply: I'd rather use
var result = JSON.parse(yourString.replace(/\bNaN\b/g, '"NaN"'));
because I actually need to know whether it was a NaN value.
Also, I'd do this inside a fetch or axios GET handler, and only if the default JSON parsing failed and the data came back as a string:
const StringConstructor = "".constructor;
if (data.constructor === StringConstructor) {
  data = JSON.parse(data.replace(/\bNaN\b/g, '"NaN"'));
}