I have a situation where I need this case class: Config[F[_]](pattern: String, format: F[String]). Sometimes the format must be present, so I use it as Config[Id], and sometimes it is optional, so I use Config[Option].
The question is how this works with Play JSON or Spray JSON, and what the best practices are for serializing / deserializing such a structure.
I have used this trick a few times before, but never had to serialize it until now, and I wonder what the read and write methods should look like. Also, are there any drawbacks or penalties, performance-wise?
Any thoughts? Thanks, folks!
First of all, as long as derivation in your JSON library is implemented reasonably, you should be able to derive codecs for concrete instantiations such as Config[List] and Config[Option].
In circe it should be like this:
implicit val configOptionCodec: Codec[Config[Option]] = deriveCodec
implicit val configListCodec: Codec[Config[List]] = deriveCodec
This won't give you much of a performance penalty, just a boilerplate penalty. However, you could write a macro annotation along the lines of @JsonCodecsFor(List, Option, Chain) to generate these instances.
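For completeness, here is a minimal self-contained sketch of the above, assuming circe-generic's semiauto derivation and cats.Id (the sample values are made up):

import cats.Id
import io.circe.Codec
import io.circe.generic.semiauto.deriveCodec
import io.circe.syntax._

case class Config[F[_]](pattern: String, format: F[String])

object Config {
  // F = Id: format is a plain string and must always be present
  implicit val configIdCodec: Codec[Config[Id]] = deriveCodec[Config[Id]]
  // F = Option: format may be absent (a missing key decodes to None,
  // and None is encoded as null by default)
  implicit val configOptionCodec: Codec[Config[Option]] = deriveCodec[Config[Option]]
}

// Config[Id]("yyyy-MM-dd", "ISO").asJson
//   produces {"pattern":"yyyy-MM-dd","format":"ISO"}
// Config[Option]("yyyy-MM-dd", None).asJson
//   produces {"pattern":"yyyy-MM-dd","format":null}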
In the past I used this to return any data structure via SAP RFC:
json = /ui2/cl_json=>serialize( data        = <lt_result>
                                pretty_name = /ui2/cl_json=>pretty_mode-low_case ).
This works very well if <lt_result> is small, but for bigger data sets this is slow.
How can I return any data structure via a generic ABAP RFC function module? I use PyRFC, but AFAIK this should not matter much for this question.
This may perform better:
DATA(lo_json_writer) = cl_sxml_string_writer=>create( type = if_sxml=>co_xt_json ).
CALL TRANSFORMATION id
  SOURCE result = <lt_result>
  RESULT XML lo_json_writer.
ev_json_data = lo_json_writer->get_output( ). " your exporting parameter
Taken from the official documentation.
If performance is what matters most to you, then /ui2/cl_json is the wrong choice, even though it is pure ABAP and uses SAP_BASIS 7.00-compatible syntax.
CALL TRANSFORMATION id is better with respect to performance. This is also covered in my blog. BTW: I am the author of /ui2/cl_json.
But when it comes to flexibility, convenience, supported data types, and control over the desired format, there is currently no better solution than /ui2/cl_json.
Potentially, one could get a better, specialized implementation using CALL TRANSFORMATION with one's own XSLT transformation, but it would already be slower than the id transformation and would cost more coding effort.
There is still potential to make /ui2/cl_json faster, by dropping support for lower releases (below 7.40) and using the built-in SXML parser for processing the JSON, but that would be a fair amount of work, and I have neither the time nor an actual request for it.
@Sandra Rossi: I would be happy to apply any performance suggestions for /ui2/cl_json, so if you have concrete examples, please send them to me, here or on the blog. But please take into consideration that, for the moment, I need to stay within SAP_BASIS 7.00 limits.
I have seen some of the other issues involving the infamous "22 fields/parameters" limit, an inherent bug (feature?) of Scala versions before 2.11. See here and here. However, as per this blog post, it appears that the 22-parameter limit on case classes has been fixed, at least as far as the language is concerned.
I have a case class into which I want to load an arbitrary (read: more than 22) number of values, which will later be turned into a JSON object using the Play library.
It looks something like this:
object L {
  import play.api.libs.json.Reads._
  import play.api.libs.functional.syntax._

  implicit val responseRead: Reads[L] = (
    MyField1.jsPath.read[MyField1.t] and
    MyField2.jsPath.read[MyField2.t] and
    ...
    MyField35.jsPath.read[MyField35.t]
  )(L.apply _)
}
case class L(myField1: MyField1.t, myField2: MyField2.t, ... myField35: MyField35.t)
The issue is that on compile, Scala complains that there are more than 22 parameters in the case class. (Specifically, on the last line of the object definition, when the compiler attempts to build, I get: "implementation restricts functions to 22 parameters".) I'm currently using Scala v2.11.6, so I think it's not a language issue. That makes me think that the Play library hasn't updated its implementation of Reads.
If that's the case, then I guess the best bet is to group related fields into Tuples and pass the Tuples in through the JSON API?
As mentioned in the blog post you referenced, the 22-parameter limit is still in effect for functions in Scala 2.11 and later (the FunctionN traits only exist up to Function22), so what you've encountered is a language issue. The function call that hits the limit in this case is:
L.apply _
Restructuring your model is one way to deal with this limit.
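For illustration, here is a minimal sketch of such a restructuring; the field names and groupings are hypothetical, and each sub-model is kept comfortably under the 22-parameter limit:

import play.api.libs.json._
import play.api.libs.functional.syntax._

// Hypothetical grouping of the 35 flat fields into cohesive sub-models.
case class Part1(myField1: String, myField2: String)   // ... up to ~20 fields
case class Part2(myField21: String, myField22: String) // ... the rest
case class L(part1: Part1, part2: Part2)

object L {
  implicit val part1Reads: Reads[Part1] = (
    (JsPath \ "myField1").read[String] and
    (JsPath \ "myField2").read[String]
  )(Part1.apply _)

  implicit val part2Reads: Reads[Part2] = (
    (JsPath \ "myField21").read[String] and
    (JsPath \ "myField22").read[String]
  )(Part2.apply _)

  // If the incoming JSON is flat, both parts can be read from the root
  // path, so the wire format does not have to change.
  implicit val responseRead: Reads[L] = (
    JsPath.read[Part1] and
    JsPath.read[Part2]
  )(L.apply _)
}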
So the answer to this question actually has two parts:
1. Workaround
I'll call this the "workaround" because while it does "work" it usually addresses the symptom and not the problem.
My solution was to use shapeless to provide generic heterogeneous lists of arbitrary length. This solution is already widely discussed and available elsewhere. See, e.g., (1) [SO Post] How to get around the Scala case class limit of 22 fields?; (2) Blog post; (3) Yet another blog post.
2. Solution
As @jeffrey-chung mentions, the solution is to restructure the model to deal with this limit. As many in the industry have noted, having a function with more than 30 arguments is likely a sign that your function is doing too much, or that it should be refactored to ingest a smaller number of arguments. See, e.g., (1) Rule of 30 – When is a method, class or subsystem too big?; (2) Databricks' style guide.
See the answer here:
https://stackoverflow.com/a/57317220/1606452
It seems this handles it all nicely.
play-json-extensions provides a 22+ field case class formatter, and more, for play-json:
https://github.com/xdotai/play-json-extensions
It supports Scala 2.11.x, 2.12.x, and 2.13.x and Play 2.3, 2.4, 2.5, and 2.7, and is referenced in the play-json issue as the preferred solution (but has not yet been merged).
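Usage is essentially a one-liner; this is a sketch based on the library's README (check the README for the exact imports required by your Play version):

import ai.x.play.json.Jsonx
import play.api.libs.json.OFormat

// Derives a JSON format for a case class with more than 22 fields.
implicit val lFormat: OFormat[L] = Jsonx.formatCaseClass[L]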
In immutable.js what is the equivalent of an empty object?
My code is:
let iState = fromJS(state)
iState = iState.setIn(['ui', 'drafts'], {})
return iState.toJS()
But I think I should not use {} when using setIn. Please advise what I should use.
The best 'empty' data structure to use will depend on your use case (and I highly recommend looking at the other data structures and using their advantages), but for the most common interface, expected behavior, and structure, Record is the closest analog to a plain object. That said, I also recommend looking into the Map data structure, as it has a bit more functionality baked in than Record that you may find you need.
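Applied to your snippet, a minimal sketch using Map as the empty structure (Record would require declaring its shape up front):

import { fromJS, Map } from 'immutable';

let iState = fromJS(state);
// An empty Map() plays the role the empty plain object {} played before.
iState = iState.setIn(['ui', 'drafts'], Map());
return iState.toJS();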
I'm working in Python here (which actually passes object references by value, sometimes called call by sharing, I think), but the idea is language-agnostic as long as method parameters behave similarly:
If I have a function like this:
def changefoo(source, destination):
    destination["foo"] = source
    return destination
and call it like so,
some_dict = {"foo": "bar"}
some_var = "a"
new_dict = changefoo(some_var, some_dict)
new_dict will be a modified version of some_dict, but some_dict will also be modified.
Assuming mutable structures like the dict in my example will almost always be similarly small, and that performance is not an issue (in my application, I'm taking abstract objects and turning them into SOAP requests for different services, where the SOAP request will take an order of magnitude longer than reformatting the data for each service), is this okay?
The destination in these functions (there are several; it's not just a utility function like in my example) will always be mutable, but I like to be explicit: the return value of a function represents the outcome of a deterministic computation on the parameters you passed in. I don't like using out parameters, but there's not really a way around this in Python when passing mutable structures to a function. A couple of options I've mulled over:
Copying the parameters that will be mutated, to preserve the original
I'd have to copy the parameters in every function where I mutate them, which seems cumbersome, like I'm just duplicating a lot of work (see the sketch after this list). Plus, I don't think I'll ever actually need the original; it just seems messy to return a reference to the mutated object I already had.
Just use it as an in/out parameter
I don't like this, it's not immediately obvious what the function is doing, and I think it's ugly.
Create a decorator which will automatically copy the parameters
Seems like overkill
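For reference, the copying variant from the first option above would look something like this (a sketch; whether a shallow or deep copy is needed depends on how nested the structure is):

import copy

def changefoo(source, destination):
    # Copy first so the caller's dict is left untouched.
    destination = copy.deepcopy(destination)
    destination["foo"] = source
    return destination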
So is what I'm doing okay? I feel like I'm hiding something, and a future programmer might think the original object is preserved based on the way I'm calling the functions (grabbing the result rather than relying on the fact that they mutate the original). But I also feel like any of the alternatives would be messy. Is there a preferred way? Note that it's not really an option to add a mutator-style method to the class representing the abstract data, due to the way the software works (I would have to add a method to translate that data structure into the corresponding SOAP structure for every service we send that data to; currently the translation logic lives in a separate package for each service).
If you have a lot of functions like this, I think your best bet is to write a little class that wraps the dict and modifies it in-place:
class DictMunger(object):
    def __init__(self, original_dict):
        self.original_dict = original_dict

    def changefoo(self, source):
        self.original_dict['foo'] = source
some_dict = {"foo": "bar"}
some_var = "a"
munger = DictMunger(some_dict)
munger.changefoo(some_var)
# ...
new_dict = munger.original_dict
Objects modifying themselves is generally expected and reads well.
It seems like there might be a way, but I'm not seeing it. I have, in the past, used the valueOf() and toString() methods on Object to make custom objects behave like numbers or strings depending on context, but I'd like to do more.
Basically no. Final is final so they cannot be extended. You could make a class which has all the same methods as the Number class, but it still wouldn't BE a Number as far as the compiler is concerned.
To be honest, there should never be a reason to extend these classes.
As far as proxies go, you could consider making a factory class which returns a pre-formatted string/number, e.g.:
var myString:String = StringFactory.asCurrency("50"); // "$50.00"
As already stated by groady, this is not possible, not even in the scenarios you described. The thing is that at runtime the type detection mechanism is pretty simple: look up the traits object and check whether it matches a class/subclass, or whether it explicitly implements an interface; in any other case, you will get errors. You can use proxies to implement your own array access; however, they will not be arrays, so passing them to a function that expects Array will cause errors. Also, in AS3 you cannot overload operators, so you will really have a hard time. You could create a class for numeric values, but then manipulating it would require methods such as add, subtract, etc. There is, however, a related request on JIRA. Still, this will not solve your problem entirely, because you cannot control the way an object responds to operators. If you compile in ECMA compatibility mode, you will probably be able to bypass the strict runtime type checks, but on the other hand you will lose a lot of speed. The best thing probably really is to create a class that has to be manipulated through methods instead of operators. Not too comfortable, but it's the best AS3 offers.
greetz
back2dos