Implicit JSON Writes and Reads for Java 8 ZonedDateTime in Play 2.3.x?

The new Play 2.4 has added out-of-the-box support for JSON Writes and Reads for the new Java 8 time classes, but Play 2.3.x still ships with Joda-Time JSON support only. Is there a way to get the Java 8 time JSON support on 2.3.x? What would custom Reads and Writes for ZonedDateTime look like?

You can copy the Play 2.4 Writes and Reads code directly from its source, or read it and adapt your own:
Writes:
https://github.com/playframework/playframework/blob/702e89841fc54f5603a0d981c3488ed9883561fe/framework/src/play-json/src/main/scala/play/api/libs/json/Writes.scala
Reads:
https://github.com/playframework/playframework/blob/cde65d987b6cf3c307dfab8269b87a65c5e84575/framework/src/play-json/src/main/scala/play/api/libs/json/Reads.scala
If you copy the files wholesale and remove the contravariant functor Reads/Writes, they will have no external dependencies beyond Java 8 and Scala.
I'm obviously not advocating this kind of copy & paste in general, but I don't see that it would do any harm here, as it's just a stop-gap until your project migrates to Play 2.4, at which point the copies can be deleted.
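For reference, hand-rolled Reads and Writes for ZonedDateTime could look something like this (a minimal sketch assuming an ISO-8601 string representation; it is not the exact Play 2.4 implementation, which is more configurable):

    import java.time.ZonedDateTime
    import java.time.format.DateTimeFormatter

    import scala.util.Try

    import play.api.libs.json._

    object ZonedDateTimeFormats {
      // ISO-8601 with zone, e.g. "2015-06-30T10:15:30+01:00[Europe/Paris]"
      private val formatter = DateTimeFormatter.ISO_ZONED_DATE_TIME

      // Serialize a ZonedDateTime as a JSON string
      implicit val zonedDateTimeWrites: Writes[ZonedDateTime] =
        Writes(zdt => JsString(zdt.format(formatter)))

      // Parse a JSON string back into a ZonedDateTime, failing gracefully
      implicit val zonedDateTimeReads: Reads[ZonedDateTime] = Reads {
        case JsString(s) =>
          Try(ZonedDateTime.parse(s, formatter))
            .map(JsSuccess(_))
            .getOrElse(JsError(s"Could not parse '$s' as an ISO zoned date-time"))
        case _ => JsError("String value expected")
      }
    }

Bring these implicits into scope wherever you call Json.toJson or validate, and they will be picked up automatically.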

Related

Flink read binary ION records from Kinesis

I had a Kinesis stream containing binary ION records, and I needed to read that stream in Flink.
My solution, 2 years ago, was to write a Base64 SerDe (just about 20 lines of Java code) and use that for the KinesisConsumer.
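For context, it was roughly along these lines (a from-memory sketch, shown in Scala rather than the original Java, and assuming Flink's AbstractDeserializationSchema base class):

    import java.util.Base64

    import org.apache.flink.api.common.serialization.AbstractDeserializationSchema

    // Re-encodes each raw Kinesis record (binary ION) as a Base64 string,
    // so downstream operators can treat the payload as text.
    class Base64DeserializationSchema extends AbstractDeserializationSchema[String] {
      override def deserialize(message: Array[Byte]): String =
        Base64.getEncoder.encodeToString(message)
    }

An instance of this schema was then passed to the FlinkKinesisConsumer constructor in place of the usual SimpleStringSchema.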
Now I have the same requirements, but need to use PyFlink.
I guess I could create the same Java file, compile and package it in a JAR, add that as a dependency for PyFlink, and write a Python wrapper...
That sounds like a lot of effort for a task that seems so simple.
Is there any simpler way? For example, some config in the Kinesis Java SDK that does the Base64 encoding before yielding the record to the SDK user (Flink), which is the same as what the AWS CLI does. Or, even simpler, a way to have the Java SDK convert an ION record from binary to text mode.
Thanks!

Loading json file into titan graph database

I have been given a task to load a JSON file into TitanDB with DynamoDB as the back end. Is there any Java tutorial for this, or if possible could you please share some Java sample code?
Thanks.
Titan is an abstraction layer, so whether you use Cassandra, DynamoDB, HBase, etc., you merely need to find Titan data-loading instructions. They are a bit dated, but you might want to start with these blog posts:
http://thinkaurelius.com/2014/05/29/powers-of-ten-part-i/
http://thinkaurelius.com/2014/06/02/powers-of-ten-part-ii/
The code examples work with an older version of Titan (the schema portion) but the concepts still apply.
You will find that the strategy for data loading with Titan has a lot to do with the size of your graph. You said you are loading "a JSON file," so I imagine you have a smaller graph in the millions of edges. In this case, a simple Groovy script will likely suffice: write a script that parses your JSON and writes the data to Titan.
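To make that concrete, here is a rough sketch of such a loading script (in Scala rather than Groovy; the config path and property mapping are hypothetical, and the Blueprints-era calls like addVertex(null) and setProperty may need adjusting for your Titan version):

    import java.io.File

    import scala.collection.JavaConverters._

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.thinkaurelius.titan.core.TitanFactory

    object LoadJsonIntoTitan {
      def main(args: Array[String]): Unit = {
        // Read the whole file as a JSON array of objects; fine for smaller graphs.
        val mapper = new ObjectMapper()
        val records = mapper.readValue(
          new File(args(0)),
          classOf[java.util.List[java.util.Map[String, Object]]])

        // Hypothetical properties file pointing Titan at the DynamoDB backend.
        val graph = TitanFactory.open("conf/titan-dynamodb.properties")
        for (record <- records.asScala) {
          val v = graph.addVertex(null) // Blueprints-style vertex creation
          record.asScala.foreach { case (key, value) => v.setProperty(key, value) }
        }
        graph.commit()
        graph.shutdown()
      }
    }

Creating edges between the loaded vertices would follow the same pattern with graph.addEdge, keyed on whatever identifiers your JSON carries.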

What is the current status of JSON mesh formats in three.js?

General question: What is a stable JSON format for loading 3d models that is currently widely used?
Extended question:
I'm a programmer doing some WebGL stuff, particularly with the Elm programming language.
I've been looking at different model formats, and it seems like using the three.js JSON format as a kind of standard makes a lot of sense for my project (and potentially for the language in general).
However, on the three.js GitHub it says that version 3 of the model format is soon to be deprecated.
So, I'd like to know the current status of the model format before I convert. Is version 3 stable for the time being? Has version 4 been released? Will there be a version 4 model format, or is it being replaced by the geometry format? Are the upcoming changes so breaking that adopting the format at this point in time is a bad idea?
I've seen the new ObjectLoader classes; how do these relate to the JSON mesh formats? Is it a format that I can convert .obj files to?
To follow up on my github post:
To be honest, I don't think it's safe to say that version 3 was ever all that stable. Version 3 has always had issues, and the SceneLoader class that supports it grew to be kind of unfriendly to maintain. Version 4 is now pretty stable, just lacking in support for textures. It is fine for objects, geometry, and materials, but there is no exporter yet (that I am aware of).
Now what I think you are most curious about is the actual model format, which is this:
https://github.com/mrdoob/three.js/wiki/JSON-Geometry-format-4#example-of-geometry
To be honest, the actual geometry format hasn't really changed much that I can tell. The big change between 3 and 4 (so far) is the scene formatting; geometry is still parsed with the JSONLoader class. In fact, a couple of days ago I committed, to the dev branch, a new example file for msgpack-compressed JSON scenes:
https://github.com/mrdoob/three.js/blob/dev/examples/webgl_loader_msgpack.html
msgpack is just JSON compression, so when it is decoded it is a JSON object. This msgpack file was converted from three.js/blob/dev/examples/scenes/robo_pigeon.js
This scene is a version 4 scene format. Each entry in the "geometries" table is actually an embedded geometry format. This format can also live in an external file. If you compare it to the first link you will see the formats are the same. Geometry files can be individually loaded into a scene with JSONLoader.
Now you asked about converters: glancing at convert_obj_three.py, its documentation says "JSON model version," so I am going to guess it emits a basic geometry model format and not a scene format, which means it may be usable. Even the Blender exporter can still export compatible geometry scenes (leave the "Scene" option checked off). How do I know? Because the geometry I used for robo_pigeon.js came from that exporter; I just had to construct the version 4 scene by hand.
Does this begin to answer your question?
According to mrdoob, he is planning to change the geometry format, but as of this very moment the version 3 model format works fine in a version 4 scene because ObjectLoader passes those geometry (model) definitions to JSONLoader. So until a new format is actually spec'd out and JSONLoader is updated, the version 3 model format is the current one.
One more note: scene loaders (SceneLoader, ObjectLoader) don't natively load geometry; they always dispatch the task to the correct class. I'm not sure if it is supported yet in version 4, but in version 3 you could directly link the scene to OBJ files. And speaking of OBJ files, if you are just starting to poke at three.js and have assets in OBJ, have you considered just working with OBJLoader directly?

JSON library in Scala and Distribution of the computation

I'd like to process very large JSON files (about 400 MB each) in Scala.
My use-case is batch-processing. I can receive several very big files (up to 20 GB, then cut to be processed) at the same moment and I really want to process them quickly as a queue (but it's not the subject of this post!). So it's really about distributed architecture and performance issues.
My JSON file format is an array of objects, where each JSON object contains at least 20 fields. My flow is composed of two major steps: first, mapping each JSON object to a Scala object; second, some transformations I apply to the Scala object's data.
To avoid loading the whole file into memory, I'd like a parsing library that supports incremental parsing. There are so many libraries (Play JSON, Jerkson, Lift-JSON, the built-in scala.util.parsing.json.JSON, Gson), and I cannot figure out which one to take, with the requirement of minimizing dependencies.
Do you have any ideas of a library I can use for high-volume parsing with good performance?
Also, I'm searching for a way to process the mapping of the JSON file and the transformations made on the fields in parallel (across several nodes).
Do you think I can use Apache Spark to do it? Or are there alternative ways to accelerate/distribute the mapping/transformation?
Thanks for any help.
Best regards, Thomas
Considering a scenario without Spark, I would advise streaming the JSON with Jackson Streaming (Java) (see for example there), mapping each JSON object to a Scala case class, and sending them to an Akka router with several routees that do the transformation part in parallel.
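A rough sketch of that pipeline (the Record case class, its field names, and the pool size are placeholders for your real ~20-field objects):

    import java.io.File

    import akka.actor.{Actor, ActorSystem, Props}
    import akka.routing.RoundRobinPool
    import com.fasterxml.jackson.core.JsonToken
    import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}

    // Placeholder for your real case class with ~20 fields.
    case class Record(id: String, value: Double)

    // Each routee applies the transformation step to one record at a time.
    class TransformWorker extends Actor {
      def receive = {
        case r: Record => // ... your transformations here ...
      }
    }

    object StreamingLoader {
      def main(args: Array[String]): Unit = {
        val system = ActorSystem("json-batch")
        val router = system.actorOf(RoundRobinPool(8).props(Props[TransformWorker]), "workers")

        val mapper = new ObjectMapper()
        val parser = mapper.getFactory.createParser(new File(args(0)))

        // The file is one big JSON array: step past START_ARRAY, then read
        // one object at a time so only a single object is in memory at once.
        if (parser.nextToken() == JsonToken.START_ARRAY) {
          while (parser.nextToken() == JsonToken.START_OBJECT) {
            val node: JsonNode = mapper.readTree(parser)
            router ! Record(node.get("id").asText(), node.get("value").asDouble())
          }
        }
        parser.close()
        // Shut down the actor system once the routees have drained their mailboxes.
      }
    }

The streaming parser keeps memory flat regardless of file size, and the router gives you node-local parallelism; scaling out across several machines would mean remote-deployed routees or a different framework such as Spark.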

How can I marshal JSON to/from a POJO for BlackBerry Java?

I'm writing a RIM BlackBerry client app. BlackBerry uses a simplified version of Java (no generics, no annotations, limited collections support, etc.; roughly a Java 1.3 dialect). My client will be speaking JSON to a server. We have a bunch of JAXB-generated POJOs, but they're heavily annotated, and they use various classes that aren't available on this platform (ArrayList, BigDecimal, XMLGregorianCalendar). We also have the XSD used by the JAXB-XJC compiler to generate those source files.
Being the lazy programmer that I am, I'd really rather not manually translate the existing source files to Java 1.3-compatible JSON-marshalling classes. I already tried JAXB 1.0.6 xjc. Unfortunately, it doesn't understand the XSD file well enough to emit proper classes.
Do you know of a tool that will take JAXB 2.0 XSD files and emit Java 1.3 classes? And do you know of a JSON marshalling library that works with old Java?
I think I am doomed because JSON arrived around 2006, and Java 5 was released in late 2004, meaning that people probably wouldn't be writing JSON-parsing code for old versions of Java.
However, it seems that there must be good JSON libraries for J2ME, which is why I'm holding out hope.
For the first part, good luck, but I really don't think you're going to find a better solution than modifying the code yourself. However, there is a good J2ME JSON library; you can find a link to the mirror here.
I ended up using apt (the annotation processing tool) to run over the 1.5 sources and emit new 1.3-friendly source. It actually turned out to be a pretty nice solution!
I still haven't figured out an elegant way to do the actual JSON marshalling, but the apt tool can probably help write the rote code that interfaces with a JSON library like the one Jonathan pointed out.