I am trying to implement export-to-Excel functionality for the data in an HTML table (5,000+ rows). I am using json2.js to parse the client-side data into a JSON string called jsonToExport.
The value of this variable is fine for a smaller number of records, and it is decoded fine (I checked in the browser in debug mode).
But for a large dataset of 5,000+ records the JSON parsing/decoding fails. I can see the encoded string, but the decoded value shows:
jsonToExport: unable to decode
I experimented with the data and found that I get this error once the data exceeds a particular size.
For example, increasing the column sizes, or replacing long columns with shorter ones, determines whether the error appears. So in effect it is not an issue with the format of the encoded JSON string missing anything, since every combination of columns works as long as the number of columns is limited.
It is simply unable to decode/parse the JSON string and then pass it in the request once it exceeds a particular size limit.
Is this an issue with json2.js, which (I think) does the parsing?
I also tried json3.min.js and received the same error.
Unless you need to support old browsers such as IE 7, you don't need an antiquated library to parse JSON any longer; it's built in: JSON.parse(jsonString).
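A minimal sketch of the built-in round trip (the tableRows variable is illustrative; in practice it would be collected from the HTML table):

// Illustrative data standing in for the rows read from the HTML table.
var tableRows = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" }
];

// Encode for the export request, then decode to verify the round trip.
var jsonToExport = JSON.stringify(tableRows);
var decoded = JSON.parse(jsonToExport);

console.log(decoded.length); // 2

The built-in methods handle payloads of this size without any extra library.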
I have a dictionary file with 200,000 items in it.
I have a Dictionary model which matches the SQLite db and has the proper methods.
If I try to parse the whole file, it seems to hang. If I do 8,000 items, it finishes quite quickly. Is there a size limit, or could it just be that there is some corrupted data somewhere? This JSON was exported from the SQLite db as pretty-printed JSON, so I would imagine it was done correctly. It also works fine with the first 8,000 items.
String peuJson = await getPeuJson();
List<Dictionary> dicts = (json.decode(peuJson) as List)
    .map((i) => Dictionary.fromJson(i))
    .toList();
JSON is similar to other data formats like XML: if you need to transmit more data, you just send more data. There is no inherent size limitation on a JSON request; any limitation would be set by the server parsing the request.
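As a quick sanity check (a sketch, not taken from the question; the 200,000-item list is generated on the fly), json.decode in Dart handles a list of that size without any fixed limit:

import 'dart:convert';

void main() {
  // Generate a list roughly the size of the dictionary file in the question.
  final items = List.generate(200000, (i) => {'id': i, 'word': 'entry$i'});
  final encoded = json.encode(items);

  final sw = Stopwatch()..start();
  final decoded = json.decode(encoded) as List;
  sw.stop();

  print('decoded ${decoded.length} items in ${sw.elapsedMilliseconds} ms');
}

If a generated list of the same size decodes quickly but the real file still hangs, the file's contents (or what is done with each item afterwards) are a more likely culprit than the decoder itself.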
I'm trying to implement a very simple single GET call, and the response returns some text with a bunch of ids separated by newlines (like a single-column CSV). I want to save each one as a row in a dataset.
I understand that in general the REST connector saves each response as a new row in an Avro file, which works well for JSON responses that can then be parsed in code.
However, in my case I need it to just save the response to a txt or csv file, to which I can then apply a schema, getting each id in its own row. How can I achieve this?
By default, the Data Connection REST connector will place each response from the API as a row in the output dataset. If you know the format type of your response, and it's something that would usually be parsed as one row per newline (CSV, for example), you can try setting outputFileType to the correct format (it is undefined by default).
For example (for more details see the REST API Plugin documentation):
type: rest-source-adapter2
outputFileType: csv
restCalls:
  - type: magritte-rest-call
    method: GET
    path: '/my/endpoint/file.csv'
If you don't know the format, or the above doesn't work regardless, you'll need to parse the response in transforms to split it into separate rows. This can be done by treating the response as if it were a string column; in this case, exploding after splitting on newline (\n) might be useful: F.explode(F.split(F.col("response"), r'\n'))
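A minimal PySpark sketch of that transform logic (the column name "response" and the function name split_ids are assumptions; the Foundry transform decorator and input/output wiring are omitted):

from pyspark.sql import functions as F

def split_ids(df):
    # Split the raw response text on newlines and emit one id per row,
    # dropping any empty strings left over from trailing newlines.
    return (
        df.select(F.explode(F.split(F.col("response"), r"\n")).alias("id"))
          .filter(F.col("id") != "")
    )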
We receive a JSON object from the network along with a hash value of that object. In order to verify the hash we need to turn the JSON into a string and then compute a hash of it, preserving the order of the elements exactly as they appear in the JSON.
Say we have:
[
  {"site1":
    {"url":"https://this.is.site.com/",
     "logoutURL":"",
     "loadStart":[],
     "loadStop":[{"someMore":"smthelse"}],
     "there's_more": ... }
  },
  {"site2":
    ....
  }
]
The Android app is able to get the same hash value, and while debugging it we fed the same simple string into both algorithms and got the same hash out of both.
Whatever difference there is arises because dictionaries are an unordered structure.
While debugging, we can see that just before the string is fed into the hash algorithm it looks like the original JSON, just without the indentation, which means the order of its items is preserved (on Android, that is):
[{"site1":{"url":"https://this.is.site.com/", ...
Despite trying many approaches by now, I'm not able to achieve the same: the string I get is ordered differently and therefore results in a different hash. Is there a way to achieve this?
UPDATE
It appears the problem is slightly different - thanks to @Rob Napier's answer below: I need a hash of only a part of the incoming string (the part that contains the JSON), which means that to get that part I first have to parse it into JSON or a struct, and after that, when getting its string value back, the order of the items is lost.
Using JSONSerialization and JSONDecoder (which uses JSONSerialization), it's not possible to reproduce the input data. But this isn't needed. What you're receiving is a string in the first place (as an NSData). Just don't get rid of it. You can parse the data into JSON without throwing away the data.
It is possible to create JSON parsers from scratch in Swift that maintain round-trip support (I have a sketch of such a thing at RNJSON). JSON isn't really that hard to parse. But what you're describing is a hash of "the thing you received." Not a hash of "the re-serialized JSON."
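A minimal sketch of that idea in Swift, assuming SHA-256 via CryptoKit (the question doesn't name the hash algorithm, and receivedData stands in for the network payload): hash the bytes exactly as they arrived, and parse a separate copy for the app's own use.

import Foundation
import CryptoKit

func handle(receivedData: Data) throws {
    // Hash the original bytes - no re-serialization, so element order is untouched.
    let digest = SHA256.hash(data: receivedData)
    let hashHex = digest.map { String(format: "%02x", $0) }.joined()
    print("hash:", hashHex)

    // Parse the same bytes separately; this copy may reorder keys internally,
    // but that no longer matters because the hash came from the raw data.
    let parsed = try JSONSerialization.jsonObject(with: receivedData)
    _ = parsed
}

If only part of the payload has to be hashed, the same idea applies: take the hash over the raw bytes of that part, extracted before any parse/re-serialize round trip.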
I am trying to load a JSON file into BigQuery using the bq load command
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON project_abd:ds.online_data gs://online_data/file.json
One of the key:value pairs in the JSON file looks like this:
"taxIdentifier":"T"
The bq load fails with the message: Error while reading data, error message: JSON parsing error in row starting at position 713452: Could not convert value to boolean. Field: taxIdentifier; Value: T. (The JSON is really huge, hence I can't paste it here.)
I am really confused as to why autodetect is treating the value T as a boolean. I have tried all combinations of creating the table with a STRING datatype and then loading it, but with autodetect it errors out with "changed type from STRING to BOOLEAN"; if I do not use autodetect, the load succeeds.
I have to use the autodetect feature, since the JSON is the result of an API call and the columns may increase or decrease.
Any idea why the value T is being treated this way, and how to get around it?
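For reference, a sketch of the two load variants described above (the schema file name is illustrative, not from the original setup):

# Succeeds: explicit schema, no autodetect, with taxIdentifier declared as STRING.
bq load --source_format=NEWLINE_DELIMITED_JSON \
    --schema=online_data_schema.json \
    project_abd:ds.online_data gs://online_data/file.json

# Fails for this file: autodetect infers taxIdentifier as BOOLEAN and then
# cannot convert the value "T".
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON \
    project_abd:ds.online_data gs://online_data/file.json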
I have a CSV file which I want to convert to Parquet for further processing. Using
sqlContext.read()
    .format("com.databricks.spark.csv")
    .schema(schema)
    .option("delimiter", ";")
    .(other options...)
    .load(...)
    .write()
    .parquet(...)
works fine when my schema contains only Strings. However, some of the fields are numbers that I'd like to be able to store as numbers.
The problem is that the file arrives not as an actual "csv" but as a semicolon-delimited file, and the numbers are formatted in German notation, i.e. a comma is used as the decimal separator.
For example, what in the US would be 123.01 is stored in this file as 123,01.
Is there a way to force reading the numbers in a different Locale, or some other workaround that would let me convert this file without first converting the CSV to a different format? I looked in the Spark code, and one nasty thing that seems to be causing the issue is in CSVInferSchema.scala, line 268 (Spark 2.1.0): the parser enforces US formatting rather than, e.g., relying on the Locale set for the JVM or allowing this to be configured somehow.
I thought of using a UDT but got nowhere with that - I can't work out how to get it to let me handle the parsing myself (I couldn't really find a good example of using a UDT...).
Any suggestions on a way of achieving this directly, i.e. at the parsing step, or will I be forced to do an intermediate conversion and only then convert to Parquet?
For anybody else who might be looking for an answer - the workaround I went with (in Java) for now is:
JavaRDD<Row> convertedRDD = sqlContext.read()
    .format("com.databricks.spark.csv")
    .schema(stringOnlySchema)
    .option("delimiter", ";")
    .(other options...)
    .load(...)
    .javaRDD()
    .map(this::conversionFunction);

sqlContext.createDataFrame(convertedRDD, schemaWithNumbers).write().parquet(...);
The conversion function takes a Row and needs to return a new Row with its fields converted to numerical values as appropriate (in fact, it could perform any conversion). Rows in Java can be created with RowFactory.create(newFields).
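A sketch of what such a conversion function could look like (assuming, purely for illustration, a row with a string column followed by a German-formatted amount column):

import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;

public class GermanCsvConverter implements java.io.Serializable {

    public Row conversionFunction(Row row) {
        // NumberFormat is not thread-safe, so create it per call (or per partition).
        NumberFormat germanFormat = NumberFormat.getInstance(Locale.GERMANY);

        String name = row.getString(0);        // keep string columns as-is
        String rawAmount = row.getString(1);   // e.g. "123,01"

        Double amount = null;
        if (rawAmount != null && !rawAmount.trim().isEmpty()) {
            try {
                amount = germanFormat.parse(rawAmount.trim()).doubleValue();
            } catch (ParseException e) {
                amount = null; // or rethrow / use a default, depending on requirements
            }
        }
        return RowFactory.create(name, amount);
    }
}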
I'd be happy to hear any other suggestions on how to approach this, but for now this works. :)