While working in the Scala REPL with Play JSON support (play.api.libs.json), is it possible to specify that all REPL output of JsValues should be automatically formatted?
e.g. by specifying another formatter via implicits.
I know Json.prettyPrint pretty-prints JSON in Play; I was interested to know whether there is a mechanism, either in play.json, to specify a formatter to be used on toString, or a Scala 2.10 construct I can use to automatically wrap every call, with minimal or preferably no overhead.
In Play 2.1+ with Scala, Json.prettyPrint should work (I can't confirm it right now, as I have no Scala project at hand, but the docs say so).
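For the REPL, a minimal sketch of the implicit-wrapper idea (the PrettyJsValue name, the `pp` method, and the `user` value are made up for illustration; it assumes play.api.libs.json is on the REPL classpath):

import play.api.libs.json._

// Hypothetical enrichment: gives every JsValue a `pp` method that
// prints it through Json.prettyPrint.
implicit class PrettyJsValue(js: JsValue) {
  def pp(): Unit = println(Json.prettyPrint(js))
}

val user = Json.obj("name" -> "example", "roles" -> Json.arr("admin", "dev"))
user.pp()

This doesn't change the REPL's automatic echo of results (which goes through toString), but it makes pretty-printing any JsValue a single call with no overhead elsewhere.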
On the other hand, if the only reason for prettifying the JSON is to check it 'with your own eyes', I'd rather suggest leaving it in compressed form and using a browser plugin to display the prettified version; it also validates the code, allows folding sections, etc. For example this one: https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc
What's most important, you still save transfer, as the source JSON stays compressed ;)
For better or worse, our codebase relies heavily on Newtonsoft.Json. There are various type converters etc. based on this framework, and it is simply not worth the effort to rewrite them using some other JSON framework (if that is even possible).
We use appsettings.json (+ other custom files) to load settings into our applications.
Typically we used to configure this like:
configurationBuilder.AddJsonFile(pathToFile, ...);
Now, we need to use some of the converters we usually rely on when parsing regular JSON for the settings JSON as well. These are usually being automatically picked up by Newtonsoft.Json so we thought the solution would simply be to reference
Microsoft.Extensions.Configuration.NewtonsoftJson and change to:
configurationBuilder.AddNewtonsoftJsonFile(pathToFile, ...);
However, this does not appear to be the case. The converters are not being invoked when we call:
var someSettings = configurationSection.Get<SomeSettings>();
If we paste the same settings into a string and manually parse it the usual Newtonsoft.Json way, it works just fine.
Our conclusion is that either we are doing something wrong (hopefully), or the "binding" part, where a configuration section is transferred into actual properties on the object, is not part of the Newtonsoft.Json configuration extension.
Any suggestions?
Tornado 4.5.2 on Python 3 represents the request body as a bytes object instead of a native dictionary. This presents a problem for methods like RequestHandler.get_body_argument(), which will not access the fields correctly.
My question is how to correctly have Tornado parse these bodies into more usable dictionaries so the standard methods will work. I've looked throughout Tornado's documentation and there's next to nothing on even the existence of this problem.
Am I missing something here or will I need to re-implement those methods myself?
Tornado never automatically parses JSON; it only automatically parses HTML-standard form encoding (the data models of form encoding and JSON are different, so it wouldn't make sense to use the same family of get_argument/get_arguments methods in the less-ambiguous JSON format). If you want to handle JSON requests, it's one line to parse it yourself:
args = tornado.escape.json_decode(self.request.body)  # returns a dict for a JSON object body
My program stack is ReactiveMongo 0.11.0, Scala 2.11.6, Play 2.4.2.
I'm adding PATCH functionality support to my controllers. I want it to be type-safe, so that PATCH cannot mess up the data in Mongo.
My current dirty solution is:
Read the object from Mongo first.
Perform JsObject.deepMerge with the provided patch.
Check that the value can still be deserialized to the target type.
Serialize the merged object back to a JsObject, and check that the patch contains only fields that are present in the merged JSON (so that no trash is added to the stored object).
Call the actual $set on Mongo.
This is obviously not perfect, but it works fine; a sketch of the merge-and-validate steps is below. I would write macros to generate an appropriate format generalization, but that might take too much time, which I currently lack.
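A minimal sketch of steps 2-4 against Play 2.4 JSON (the Account entity and its field names are hypothetical, purely for illustration):

import play.api.libs.json._

// Hypothetical entity with a macro-generated format.
case class Account(id: String, name: String, balance: Long)
implicit val accountFormat = Json.format[Account]

// Merge the patch into the stored JSON, check that the result still
// deserializes, and reject any patch fields the entity doesn't have.
def applyPatch(stored: JsObject, patch: JsObject): JsResult[Account] =
  stored.deepMerge(patch).validate[Account].flatMap { account =>
    val known = Json.toJson(account).as[JsObject].keys
    val unknown = patch.keys -- known
    if (unknown.isEmpty) JsSuccess(account)
    else JsError(s"patch contains unknown fields: ${unknown.mkString(", ")}")
  }

On JsSuccess you would go on to call the actual $set; on JsError you reject the patch.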
Is there a way to use the Play Framework JSON macro-generated format for partial entity validation like this?
Or any other solution that can be easily integrated into the Play Framework, for that matter.
With the help of @julien-richard-foy I made a small library to do exactly what I wanted:
https://github.com/clemble/scala-validator
I need to add some documentation, and then I'll publish it to a repository.
I am loading data from a MongoDB collection into a MySQL table through a Kettle transformation.
First I extract the data using MongoDB Input, and then I use a JSON Input step.
But since the JSON Input step has very low performance, I wanted to replace it with a JavaScript script.
I am a beginner in JavaScript, and even though I tried some things, the Kettle JavaScript step is not recognizing any keywords.
Can anyone give me sample code to convert the JSON data into separate columns using JavaScript?
To solve your problem you need to look at three aspects:
Reading from MongoDB
Reading from JSON
Reading from (probably) String
Reading from MongoDB: Unless you changed the interface, MongoDB returns not JSON but BSON (~binary JSON). You need to check the MongoDB documentation about reading and writing BSON; probably something like BSON.to() and BSON.from(), but I don't know it by heart.
Reading from JSON: Once you have your BSON in JSON-object form, you can write it out using JSON.stringify(), which returns a String.
Reading from (probably) String: If you want to use the capabilities of JSON (why else would you use JSON?), you will also want JSON.parse(), which returns a JSON object.
My experience is that, to send a JSON object from one step to the next, using a String is not a bad idea: at the end of a JavaScript step you write your JSON object to a String, and at the beginning of the next JavaScript step (which can be further down the stream) you parse it back into a JSON object to work with it.
I hope this answers your question.
PS: writing JavaScript steps requires you to learn JavaScript. You don't have to be a master, but the basics are required. There is no way around it.
You could use the JSON Input step to get the values of this JSON and put them into regular rows.
Could someone explain to me why in GWT you cannot convert a client/shared POJO (that implements Serializable) into a JSON object without jumping through a load of hoops, like using the AutoBeanFactory (e.g. GWT (Client) = How to convert Object to JSON and send to Server?) or creating JavaScript overlay objects (which extend JavaScriptObject)?
GWT compiles your client objects into JavaScript objects, so why can't it then simply convert your JavaScript to JSON if you ask it to?
The supplied GWT JSON library only allows you to JSONify Java objects that extend JavaScriptObject.
I am obviously misunderstanding something about GWT, since GWT compiles a simple Java POJO into a JavaScript object, and in JavaScript you can JSON.stringify it into JSON, so why not in GWT?
GWT compiles your app; it doesn't just convert it. It does take advantage of the prototype object in JavaScript to build classes as it needs, usually following your class hierarchy (and any GWT classes you use), but it makes many other changes:
Optimizations:
Tightens up types - if you refer to something as List, but it can only be an ArrayList, it rewrites the type declarations. This by itself doesn't give much, but it lets other steps do better work, such as:
Makes methods static - if nothing ever overrides ArrayList.add, for example, this will turn any calls it can prove are to ArrayList.add into static calls, preventing the need for dynamic dispatch and allowing the 'this' string in the final JS to be replaced with a shorter arg name. This will prevent a JS object from having a method you expect it to have.
Inlines methods - if a method is simple enough, and is called in few enough places, the compiler might remove the method entirely, since it knows all the places where it is called. This will directly affect your use case.
Removes/inlines unreferenced fields - if you read a field but only write it once, the compiler will assume that the original value is a constant; if you don't read it, there is no reason to assign it. Values that the compiler can't prove will ever be used don't need to take up space in the JS and time in the browser. This also directly affects treating GWT'd Java as JS.
After these, among others, the compiler will rename fields, arguments, and types to be as small as possible - rarely will a field or argument be longer than 1 character when this is complete, since those are most frequently used and have the smallest scope, so can be reused the most often by the compiler. This too will affect trying to treat objects as JSON.
The libraries that allow you to export GWT objects as JSON do so by making some other assumption.
JavaScriptObject (JSO) isn't a real Java object, but actually represents a JavaScript instance, so you can cast back and forth at will - the JSNI you write will emerge relatively unoptimized, as the compiler can't tell if you are trying to talk to an external library.
AutoBeans are generated to assume that they should have the ability to write out JSON, so specific methods to encode objects are written in. They will be subject to the same rules as the other Java that is compiled - code that isn't used may be removed, code that is only called one way might be tightened up or inlined.
Libraries that can export JS compile in Java details into the final executable, making it bigger, but giving you the ability to treat these Java objects like JS in some limited way.
One last point, since you are talking about both JSON and JavaScript - some normal JS isn't suitable for writing out as JSON. Date objects don't have a consistent serialization that JSON recognizes. Non-tree object graphs can't be serialized:
var obj = {};
obj.prop = {};
obj.prop.obj = obj; // circular reference: JSON.stringify(obj) throws a TypeError
AutoBeans come with a built-in checker for these circular references, and I would hope the JSO serialization does as well.