Does a JSON file need to be converted to objects before inserting into SQLite? - json

I am making an Android app and I have a big JSON file that I would like to store in SQLite.
Should I convert the JSON file to objects before inserting it into SQLite?
Thanks.

I see that you were able to convert it to a JSON object. After that, you should convert the JSON object to a String and then save the string in a VARCHAR/TEXT column in the DB.
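For example, a minimal sketch of that approach with Android's SQLite API (the table name json_store, the payload column, and the dbHelper instance are assumptions for illustration, not names from the question):

import android.content.ContentValues;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import org.json.JSONObject;

// dbHelper is assumed to be your SQLiteOpenHelper subclass
SQLiteDatabase db = dbHelper.getWritableDatabase();
db.execSQL("CREATE TABLE IF NOT EXISTS json_store (id INTEGER PRIMARY KEY, payload TEXT)");

// serialize the parsed JSONObject back to a string and store it as TEXT
String json = jsonObject.toString();
ContentValues values = new ContentValues();
values.put("payload", json);
db.insert("json_store", null, values);

// later, read the string back and re-parse it
Cursor c = db.rawQuery("SELECT payload FROM json_store LIMIT 1", null);
if (c.moveToFirst()) {
    try {
        JSONObject restored = new JSONObject(c.getString(0));
    } catch (org.json.JSONException e) {
        // handle malformed JSON
    }
}
c.close();

Whether this is a good idea depends on how you query the data: if you only ever read the blob back as a whole, one TEXT column is fine; if you need to filter on fields inside the JSON, mapping it to objects and proper columns first is usually worth it.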

Related

Which data structure can I use to store elements from JSON files without losing the JSON format?

{"messageData":[{"vc":1,"ccid":"0010","hardwarePartNum":"00010","softwarePartNum":"000010","ecuName":"ecu333","ecuAssemblyNum":"4523001","requestId":"1001"},{"vc":2,"ccid":"0020","hardwarePartNum":"00020","softwarePartNum":"000020","ecuName":"ecu222","ecuAssemblyNum":"4523002","requestId":"2002"},{"vc":3,"ccid":"0010","hardwarePartNum":"00010","softwarePartNum":"000010","ecuName":"ecu333","ecuAssemblyNum":"4523001","requestId":"1001"},{"vc":4,"ccid":"0020","hardwarePartNum":"00020","softwarePartNum":"000020","ecuName":"ecu222","ecuAssemblyNum":"4523002","requestId":"2002"}]}
This is my JSON file, which I send to my Kafka consumer.
After parsing it and storing it in an ArrayList, it is now in the form of a list, i.e. like this:
[messageData [vc=1,ccid=0010,hardwarePartNum=00010,softwarePartNum=000010,ecuName=ecu333,ecuAssemblyNum=4523001,requestId=1001]
[messageData [vc=2,ccid=0020,hardwarePartNum=00020,softwarePartNum=000020,ecuName=ecu222,ecuAssemblyNum=4523001,requestId=2002]
[messageData [vc=3,ccid=0010,hardwarePartNum=00010,softwarePartNum=000010,ecuName=ecu333,ecuAssemblyNum=4523001,requestId=1001]
[messageData [vc=4,ccid=0020,hardwarePartNum=00020,softwarePartNum=000020,ecuName=ecu222,ecuAssemblyNum=4523001,requestId=2002]
Which data structure should I use to store it, so that the final stored file is also in JSON format?
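One possibility, sketched below with Jackson (the class names MessageData and MessageEnvelope are assumptions inferred from the printed output, not from the original code): keep a wrapper object holding a List of the parsed records and serialize that wrapper back out, which reproduces the original {"messageData":[...]} structure.

import java.io.File;
import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;

class MessageData {
    public int vc;
    public String ccid;
    public String hardwarePartNum;
    public String softwarePartNum;
    public String ecuName;
    public String ecuAssemblyNum;
    public String requestId;
}

class MessageEnvelope {
    // field name "messageData" preserves the top-level JSON key
    public List<MessageData> messageData;
}

public class JsonRoundTrip {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // parse the incoming JSON into typed objects...
        MessageEnvelope envelope = mapper.readValue(new File("input.json"), MessageEnvelope.class);
        // ...and write the same structure back out as JSON
        mapper.writerWithDefaultPrettyPrinter().writeValue(new File("output.json"), envelope);
    }
}

The square-bracket output shown above is just each object's toString(); the underlying ArrayList is fine as the in-memory structure, as long as you write it out with a JSON serializer rather than toString().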

Is it possible to write JSON data to a file in Azure Blob Storage without converting it to a string?

I'm working on a project in Azure Databricks where I need to write my transformed data, which is in JSON format, to a file (.json) that is then written to a DB.
I've tried the DataFrame and RDD options. Some snippets of the things I've tried:
val processedList = df.collect.map { line =>
  // transformation logic to create the json string
  (field1, field2, json)
}
val dataframe = processedList.toList.toDF("f1", "f2", "json")
dataframe.repartition(1).write.mode("overwrite").json(path)
This code works fine, but the 'json' value, which is JSON data, is treated/written as a String, so the output contains all the escape characters etc. I cannot use a JsonObject directly because DataFrames don't support it.
So is there a way to write to the file without it being converted to a String?
What's the type of the json column? It's probably a string, so Spark treats it as a string literal. Try parsing it back into a struct before writing, for example
df.withColumn("json", from_json(col("json"), schema)).write.json(path)
where schema is a StructType describing the JSON, so the column is written as nested JSON rather than as an escaped string.
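A fuller sketch of that idea (the field names in jsonSchema are placeholders; substitute the actual structure of your JSON):

import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types._

// describe the structure of the JSON held in the string column
val jsonSchema = new StructType()
  .add("field1", StringType)
  .add("field2", IntegerType)

// from_json turns the string column into a struct column,
// so write.json emits real nested JSON instead of an escaped string
val nested = dataframe.withColumn("json", from_json(col("json"), jsonSchema))
nested.repartition(1).write.mode("overwrite").json(path)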

Can an ndjson file be saved using xdmp.save?

I have a requirement to gather certain JSON documents from my database and save them in an outside drive as one file for a downstream consumer.
Using server-side JavaScript I can combine the documents into a JSON object or array. However, they need to be saved into this single file in ndjson format.
Is there any way to do this using xdmp.save in MarkLogic? I thought of saving the documents as a sequence but that throws an error.
xdmp.save() expects a node() for the second parameter.
You could serialize the JSON docs and delimit them with a newline to generate the newline-delimited JSON, and then create a text() node from that string.
// serialize each matching JSON document, join them with newlines,
// and wrap the result in a text() node that xdmp.save() can write
const ndjson = new NodeBuilder()
  .addText(cts.search(cts.collectionQuery("json")).toArray().join("\n"))
  .toNode();
xdmp.save("/temp/ndjson.json", ndjson);

Change datatype of JSON string to Datetime in Spark

I am feeding JSON files to Spark. One value in them is of Datetime type, but it is being inferred as string type. I found a solution that said to rebuild Spark after changing its InferSchema.scala file, but I don't want to do that. Is there any way I can convert it while reading the JSON files? Also, can I convert it using Spark SQL after "jsonFiles.registerTempTable('jsonFiles')"? Any help in this regard will be greatly appreciated.
With the jsonFile function you can also specify the schema at read time, like so:
sqlContext.jsonFile(path, schema) or, in the new API (post 1.4), sqlContext.read.schema(schema).format("json").load(path)
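For instance, a minimal sketch of the post-1.4 API (the field names id and eventTime are placeholders for whatever your JSON actually contains):

import org.apache.spark.sql.types._

// declare the timestamp field explicitly instead of relying on schema inference
val schema = new StructType()
  .add("id", StringType)
  .add("eventTime", TimestampType)

val jsonFiles = sqlContext.read.schema(schema).format("json").load(path)
jsonFiles.registerTempTable("jsonFiles")

// alternatively, cast after the fact with Spark SQL:
// sqlContext.sql("SELECT id, CAST(eventTime AS timestamp) AS eventTime FROM jsonFiles")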

String not valid UTF-8 when inserting JSON file into MongoDB

I am storing a JSON file produced by the Cucumber JSON report into MongoDB, but I get "String not valid UTF-8" when inserting the JSON into MongoDB.
Because the feature file contains 年/月/日, this is represented as "\u5e74/\u6708/\u65e5" in the JSON file.
This is what I am doing to store it in Mongo:
json_string=File.read(file_path)
data = JSON.parse(json_string)
col.insert(data)
I can see that after JSON.parse, the already-encoded string further changes to "\x90\u0013s/\x90\u0013s/\x90\u0013s".
The exception happens at the insert statement.
Any help appreciated.
I tried the following, but it still does not work:
json_string=File.read(file_path,:encoding => 'UTF-8')
data = JSON.parse(json_string.force_encoding("UTF-8"))
Using Ruby 1.9.3
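Not a definitive answer, but one thing worth trying is sanitizing the bytes before parsing, on the assumption that the report is meant to be UTF-8 but contains a few invalid byte sequences (the col variable below stands for whatever Mongo collection object you are inserting into):

require 'json'

# read raw bytes, then re-encode to UTF-8, replacing anything that is not valid
json_string = File.read(file_path, :mode => 'rb')
json_string = json_string.encode('UTF-8', 'binary',
                                 :invalid => :replace, :undef => :replace, :replace => '')

data = JSON.parse(json_string)
col.insert(data)

If the replacement ends up stripping the characters where 年/月/日 should be, the file is probably not UTF-8 at all, and the fix would be to read it with whatever encoding Cucumber actually wrote it in and convert from that.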