For better or worse, our codebase relies heavily on Newtonsoft.Json. There are various type converters and the like built on this framework, and it is simply not worth the effort to rewrite them using some other JSON framework (if that is even possible).
We use appsettings.json (+ other custom files) to load settings into our applications.
Typically, we configured this like:
configurationBuilder.AddJsonFile(pathToFile, ...);
Now we need to use some of the converters we usually rely on when parsing regular JSON for the settings JSON as well. These are usually picked up automatically by Newtonsoft.Json, so we thought the solution would simply be to reference
Microsoft.Extensions.Configuration.NewtonsoftJson and change to:
configurationBuilder.AddNewtonsoftJsonFile(pathToFile, ...);
However, this does not appear to be the case. The converters are not invoked when we call:
var someSettings = configurationSection.Get<SomeSettings>();
If we paste the same settings into a string and parse it manually the usual Newtonsoft.Json way, it works just fine.
Our conclusion is that either we are doing something wrong (hopefully), or the "binding" step, where a configuration section is transferred into actual properties on the object, is not part of the Newtonsoft.Json configuration extension.
Any suggestions?
The setting I am referencing is shown in the snippet below:
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
I don't really understand why the TS language engineers would add a flag like "resolveJsonModule". Either an environment supports resolving JSON as a module via an import statement (or require() call), or it doesn't. Why bother with the extra complexity?
Context
Historically, Node has included a specialized JSON loader (unrelated to ECMA standards) that allows importing JSON data, but only in CommonJS mode.
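For illustration, that CommonJS loader works like this (a minimal sketch; config.json is a hypothetical file next to the script):

// CommonJS only: Node reads and parses the file, returning a plain object
const config = require('./config.json');
console.log(config);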
Standardized importing of anything at all (ES modules) is only a relatively recent phenomenon in ECMAScript. Importing text files containing valid JSON, parsed as native JS data ("importing JSON") is described in a proposal that is still only in stage 3.
However, there has been recent movement in regard to implementing the above-mentioned proposal (a usage sketch follows this list):
V8 implemented it in June (Chrome 91+)
TypeScript v4.5.0 implemented it in November
Deno v1.17.0 implemented it in December
Node LTS v16.14.0 implemented it last Tuesday (behind a CLI flag --experimental-json-modules)
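Under that proposal, a standard JSON import looks like this (a minimal sketch; data.json is hypothetical, and the assert clause is the stage-3 syntax the implementations above use):

// ES modules: the import assertion tells the runtime to parse the file as JSON
// instead of evaluating it as a script
import data from './data.json' assert { type: 'json' };
console.log(data);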
TypeScript
TypeScript is a static type-checker, but also a compiler (technically a transpiler), transforming your TS source code into syntax that is valid JavaScript for the runtime environment you have specified in your TSConfig. Because different runtime environments have different capabilities, the way you configure the compiler affects the transformed JavaScript that is emitted. As for defaults, the compiler uses algorithmic logic to determine settings. (I can't summarize that here: you honestly have to read the entire reference in order to understand it.) Because loading JSON data has been a non-standard, specialized operation until extremely recently, it has not been a default.
Alternatives
All JS runtimes offer alternatives to an import statement for importing textual JSON data (which can then be parsed using JSON.parse), and none of them require configuring the compiler in the ways that you asked about (see the sketch after this list):
Note: the data parsed from JSON strings imported using these methods will not participate in the compiler's "automatic" type-inference capabilities, because the strings aren't part of the compilation module graph: so they'll be typed as any (or possibly unknown in an extremely strict configuration).
Browser and Deno: window.fetch
Deno: Deno.readTextFile
Node: fs.readFile
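As a minimal sketch of the fetch-based alternative (the file name and URL are hypothetical), note how the result lands as unknown rather than an inferred shape:

// Browser/Deno: fetch the raw text, then parse it yourself
const response = await fetch('./data.json');
const data: unknown = JSON.parse(await response.text());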
Additionally, because all JSON (JavaScript Object Notation) is valid JS, you can simply prepend the data in your JSON file with export default , save the file as data.js instead of data.json, and then import it as a standard module: import {default as data} from './data.js';.
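For example, a hypothetical data.json containing {"answer": 42} would become:

// data.js - the former data.json contents, prefixed with `export default`
export default {
  "answer": 42
};

and can then be imported as shown above.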
Final notes about inferred types:
I prefer to audit the JSON I'm importing and use my own manually-written types for the data (written either by myself or someone else and imported from a module/declaration file), rather than relying on the compiler's types inferred from import statements (which I have found to be too narrow on many occasions). I do this by assigning the parsed JSON data to a new variable using a type assertion.
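A minimal sketch of that pattern (the Config shape and rawJsonText are hypothetical stand-ins for your own audited type and loaded text):

// A manually-written, audited type for the data
interface Config {
  name: string;
  retries: number;
}

declare const rawJsonText: string; // loaded via any of the alternatives above

// Assign the parsed data to a new variable using a type assertion
const config = JSON.parse(rawJsonText) as Config;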
My program stack is ReactiveMongo 0.11.0, Scala 2.11.6, Play 2.4.2.
I'm adding PATCH support to my controllers. I want it to be type-safe, so that PATCH cannot corrupt the data in Mongo.
My current (dirty) solution, sketched after this list, is:
Reading the object from Mongo first,
Performing JsObject.deepMerge with the provided patch,
Checking that the value can still be deserialized to the target type,
Serializing the merged object back to a JsObject and checking that the patch contains only fields that are present in the merged JSON (so that no trash is added to the stored object),
Calling the actual $set on Mongo.
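This isn't Scala, but as a language-agnostic illustration of the five steps above, here is a minimal TypeScript sketch. Every helper below is a hypothetical stand-in, not a real Play/ReactiveMongo API:

// Hypothetical helpers standing in for the real Scala/Play/ReactiveMongo code
declare function readFromMongo(id: string): Promise<object>;
declare function deepMerge(current: object, patch: object): object;
declare function deserialize(merged: object): object; // throws if the merge broke the target type
declare function serialize(entity: object): object;
declare function assertPatchIsSubsetOf(patch: object, merged: object): void;
declare function mongoSet(id: string, patch: object): Promise<void>;

async function patchEntity(id: string, patch: object): Promise<void> {
  const current = await readFromMongo(id);          // 1. read the stored object
  const merged = deepMerge(current, patch);         // 2. deep-merge the provided patch
  const entity = deserialize(merged);               // 3. still deserializable to the target type?
  assertPatchIsSubsetOf(patch, serialize(entity));  // 4. the patch adds no trash fields
  await mongoSet(id, patch);                        // 5. the actual $set on Mongo
}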
This is obviously not perfect, but it works fine. I would write macros to generate an appropriate Format generalization, but that might take too much time, which I currently lack.
Is there a way to use a Play Framework JSON macro-generated Format for partial entity validation like this?
Or is there any other solution that can be easily integrated into Play Framework, for that matter?
With the help of @julien-richard-foy I made a small library to do exactly what I wanted:
https://github.com/clemble/scala-validator
I need to add some documentation, and then I'll publish it to a repository.
I'm trying to parse some JSON data, and I'm using the examples from this site: http://answers.oreilly.com/topic/257-how-to-parse-json-in-java
I'm using the data from section 2 and the code from section 3, and json-lib as my JSON library.
But when I tried to run the sample, the JSONSerializer class complained about a missing Apache Commons Lang class. I downloaded that, and it complained about a missing logging class. I downloaded that, and it complained about a missing EZMorph class. I downloaded that, and it complained about a missing Collections class. I then downloaded an unofficial JAR file with all the Apache Commons components, and it complained about a missing SLF4J logger or something. At that point, I gave up and decided to try Google's GSON instead.
This seems much more complicated than it should be. Just how many packages do I need to download? Is there a way to disable logging? Or am I doing something wrong?
I would recommend using another, more commonly used Java JSON library: for example Jackson or GSON. There are lots of tutorials and articles on using both, and I don't think there are many reasons to use json-lib over either one.
You can also have a look at Genson.
It is an all-in-one library: easy to use, with nice features and good performance.
For example, to serialize a full POJO to JSON and then deserialize it back:
Genson genson = new Genson();
String json = genson.serialize(yourPojo);
// and deserialize it
YourPojo pojo = genson.deserialize(json, YourPojo.class);
As Staxman says, there is no reason to use json-lib. I am still surprised by how many people are using it when there are a couple of better libraries around.
Could someone explain to me why in GWT you cannot convert a client/shared POJO (that implements Serializable) into a JSON object without jumping through a load of hoops, like using the AutoBeanFactory (e.g. GWT (Client) = How to convert Object to JSON and send to Server?) or creating JavaScript overlay objects (which extend JavaScriptObject)?
GWT compiles your client objects into JavaScript objects, so why can't it then simply convert your JavaScript to JSON if you ask it to?
The supplied GWT JSON library only allows you to JSONify Java objects that extend JavaScriptObject.
I am obviously misunderstanding something about GWT, since GWT compiles a simple Java POJO into a JavaScript object, and in JavaScript you can JSON.stringify it into JSON, so why not in GWT?
GWT compiles your app; it doesn't just convert it. It does take advantage of the prototype object in JavaScript to build classes as it needs, usually following your class hierarchy (and any GWT classes you use), but it makes many other changes:
Optimizations:
Tightens up types - if you refer to something as a List, but it can only ever be an ArrayList, it rewrites the type declarations. This by itself doesn't gain much, but it lets other steps do better work, such as:
Makes methods static - if nothing ever overrides ArrayList.add, for example, this will turn any calls it can prove are to ArrayList.add into static calls, preventing the need for dynamic dispatch and allowing the 'this' string in the final JS to be replaced with a shorter argument name. This will prevent a JS object from having a method you expect it to have.
Inlines methods - if a method is simple enough, and is called in few enough places, the compiler might remove the method entirely, since it knows all the places where it is called. This will directly affect your use case.
Removes/inlines unreferenced fields - if you read a field that is only written once, it will assume that the original value is a constant; if you never read it, there is no reason to assign it. Values that the compiler can prove will never be used don't need to take up space in the JS and time in the browser. This also directly affects treating GWT'd Java as JS.
After these steps, among others, the compiler renames fields, arguments, and types to be as small as possible - rarely will a field or argument be longer than one character when this is complete, since those are the most frequently used identifiers with the smallest scope, so they can be reused most often by the compiler. This too will affect trying to treat objects as JSON.
The libraries that allow you to export GWT objects as JSON do so by making some other assumption.
JavaScriptObject (JSO) isn't a real Java object, but actually represents a JavaScript instance, so you can cast back and forth at will - the JSNI you write will emerge relatively unoptimized, as the compiler can't tell if you are trying to talk to an external library.
AutoBeans are generated to assume that they should have the ability to write out JSON, so specific methods to encode objects are written in. They will be subject to the same rules as the other Java that is compiled - code that isn't used may be removed, code that is only called one way might be tightened up or inlined.
Libraries that can export JS compile in Java details into the final executable, making it bigger, but giving you the ability to treat these Java objects like JS in some limited way.
One last point, since you are talking about both JSON and JavaScript: some normal JS isn't suitable for writing out as JSON. Date objects don't have a consistent serialization recognized by JSON, and non-tree object graphs can't be serialized at all:
var obj = {};
obj.prop = {};
obj.prop.obj = obj;
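// JSON.stringify(obj) throws here - in V8 the message is
// "TypeError: Converting circular structure to JSON"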
AutoBeans come with a built-in checker for these circular references, and I would hope the JSO serialization does as well.
I'm developing a RESTful interface which is used to provide JSON data for a JavaScript application.
On the server side I use Grails 1.3.7 with GORM domain objects for persistence. I implemented a custom JSON marshaller to support marshalling the nested domain objects.
Here are sample domain objects:
class SampleDomain {
    static mapping = { nest2 cascade: 'all' }
    String someString
    SampleDomainNested nest2
}
and
class SampleDomainNested {
    String someField
}
The SampleDomain resource is published under the URL /rs/sample/, so /rs/sample/1 points to the SampleDomain object with ID 1.
When I render the resource using my custom JSON marshaller (GET on /rs/sample/1), I get the following data:
{
  "someString" : "somevalue1",
  "nest2" : {
    "someField" : "someothervalue"
  }
}
which is exactly what I want.
Now comes the problem: I try to send the same data to the resource /rs/sample/1 via PUT.
To bind the JSON data to the domain object, the controller handling the request calls def domain = SampleDomain.get(id) and then domain.properties = data, where data is the unmarshalled object.
The binding for the "someString" field works just fine, but the nested object is not populated from the nested data, so I get an error that the property "nest2" is null, which is not allowed.
I already tried implementing a custom PropertyEditorSupport as well as a StructuredPropertyEditor and registering the editor for the class.
Strangely, the editor only gets called when I supply non-nested values. So when I send the following to the server via PUT (which doesn't make any sense ;) )
{
  "someString" : "somevalue1",
  "nest2" : "test"
}
at least the property editor gets called.
I looked at the code of the GrailsDataBinder. I found out that setting properties of an association seems to work by specifying the path of the association instead of providing a map, so the following works as well:
{
  "someString" : "somevalue1",
  "nest2.someField" : "someothervalue"
}
but this doesn't help me, since I don't want to implement a custom JavaScript-to-JSON object serializer.
Is it possible to use Grails data binding with nested maps? Or do I really have to implement that by hand for each domain class?
Thanks a lot,
Martin
Since this question got upvoted several times I would like to share what I did in the end:
Since I had some more requirements to implement, like security etc., I implemented a service layer which hides the domain objects from the controllers. I introduced a "dynamic DTO layer" which translates domain objects to Groovy maps, which can be serialized easily using the standard serializers, and which implements the updates manually.

All the semi-automatic/meta-programming/command-pattern/... based solutions I tried to implement failed at some point, mostly resulting in strange GORM errors or a lot of configuration code (and a lot of frustration). The update and serialization methods for the DTOs are fairly straightforward and could be implemented very quickly. This does not introduce much duplicate code either, since you have to specify how your domain objects are serialized anyway if you don't want to publish your internal domain object structure. Maybe it's not the most elegant solution, but it was the only one that really worked for me. It also allows me to implement batch updates, since the update logic is no longer tied to the HTTP requests.
However, I must say that I don't think Grails is the tech stack best suited for this kind of application, since it makes your application very heavyweight and inflexible. My experience is that once you start doing things which are not supported by the framework by default, it starts getting messy. Furthermore, I don't like the fact that the "repository" layer in Grails essentially only exists as a part of the domain objects, which introduced a lot of problems and resulted in several "proxy services" emulating a repository layer.

If you are building an application around a JSON REST interface, I would suggest either going for a very lightweight technology like node.js or, if you want to/have to stick to a Java-based stack, using the standard Spring Framework + Spring MVC + Spring Data with a nice and clean DTO layer (this is what I've migrated to, and it works like a charm). You don't have to write a lot of boilerplate code and you are completely in control of what's actually happening. Furthermore, you get strong typing, which increases developer productivity as well as maintainability and which legitimates the additional LOCs. And of course strong typing means strong tooling!
I started writing a blog entry describing the architecture I came up with (with a sample project of course), however I don't have a lot of time right now to finish it. When it's done I'm going to link to it here for reference.
Hope this can serve as inspiration for people experiencing similar problems.
Cheers!
It requires you to provide the class name:
{ class:"SampleDomain", someString: "abc",
nest2: { class: "SampleDomainNested", someField:"def" }
}
I know, it requires different input than the output it produces.
As I mentioned in the comment earlier, you might be better off using the gson library.
Not sure why you wrote your own JSON marshaller when XStream is around.
See http://x-stream.github.io/json-tutorial.html
We have been very happy with XStream for our back-end (Grails-based) services, and this way you can marshal to XML or JSON, or override the default marshalling for a specific object if you like.
Jettison seems to produce more compact, less human-readable JSON, and you can run into some library-collision issues, but the default internal JSON stream renderer is decent.
If you are going to publish the service to the public, you will want to take the time to return appropriate HTTP protocol responses for errors, etc. ($.02)