Rivets.js adapter publish vs setting value - publish

Rivets.js proposes using the adapter.read and adapter.publish functions to get and set properties of a model when defining binders. I have not found any actual benefit of read/publish compared to the standard get/set approach.
Excerpt from documentation:
adapter.read(model, keypath)
adapter.publish(model, keypath, value)
The source code for read and publish from v0.6.10
read: function(obj, keypath) {
  return obj[keypath];
},
publish: function(obj, keypath, value) {
  return obj[keypath] = value;
}
I wonder if anyone knows about the benefits that read and publish may offer?

I finally figured this out. The answer is as simple as abstracting the get and set functionality away from the binder. This has no real benefit if you use Rivets as-is with the one and only dot (.) adapter it ships with, but the approach comes in very handy when you define custom adapters.
A good example, as in my case, is using the rivets-backbone adapter. The model passed to a binder could be a plain old JavaScript object or a Backbone model, and reading and writing properties varies based on its type. By using the read and publish functions, this logic is abstracted away from the binder's implementation.

Accessing regmap RegFields

I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. The TLRegisterNode has methods for generating the JSON through some annotations; these are done in the regmap method by adding them to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" RegFields during or after elaboration?
I cannot just access the regmap as it's not really a val/var since it's a method. I can't quite figure out where this information is being stored. I don't really believe it's actually "storing" any information as much as it is simply creating the hardware to attach the specified logic to the RegisterNode based logic.
The JSON output is actually fine for me as I could just write a post processing script to convert JSON to my required formats, but I'm wondering if I can access this information OR if I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
//in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}
def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function and pass it to the regmap call or to the RegisterRouter:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields,
)
This is just a quick example. I believe what would be better, if something like this were possible, would be to have a Seq of functions that could be added to the RegisterNode and run at the end of the regmap method, similar to how TLRegisterNode currently works. A user could then add an arbitrary number of them while still using the regmap call.
Background (not directly part of question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It generates the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations, and more complex uvm_reg_block definitions, with the ability to describe multiple of the IPs for a subsystem all the way up to an SoC level). It also prints out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different elaboration artefact to grab the info and store it for later.

How to declare Function's bind method for TypeScript

I am trying to use MooTools together with TypeScript. MooTools and some modern browsers support the .bind method, which is polymorphic.
How can I properly declare this feature in a *.d.ts file, to be able to use constructs like [1,2].map(this.foo.bind(this)); ?
I know I can avoid such constructs by using lambdas, but sometimes I do not want to.
Perhaps there is a mootools.d.ts file somewhere which I could download instead of reinventing it myself?
TypeScript's lib.d.ts already defines the bind function's signature in the Function interface as follows:
bind(thisArg: any, ...argArray: any[]): Function;
I don't think there's any better way of doing it until generics get added to the language.
For the time being though, if you want to use bind and the recipient of the resulting function expects a specific signature, you're going to have to cast the function back to that signature:
var bfn : (p: number) => string;
bfn = <(p: number) => string> fn.bind(ctx);
There's a growing list of definition files being tracked here.
As for generating methods pre-bound to their this pointer in TypeScript, I've suggested two ways of doing this: 1) a simple base class I defined at the end of this thread, and 2) a more advanced mixin & attribute system here.

Understanding Json structure generated by an AJAX-enabled WCF Service

Good afternoon
In Visual Studio 2010 I am able to add to my solution a new item called an AJAX-enabled WCF Service. That adds a new .svc file.
Later, I created a method just for debugging purposes:
[ServiceContract(Namespace = "")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class DataAccessService
{
    [WebGet]
    [OperationContract]
    public MyClass DoWork()
    {
        var o = new MyClass
        {
            Id = 1,
            FirstName = "Junior",
            LastName = "Mayhe"
        };
        return o;
    }
}
When debugging, here is the resulting JSON string:
{"d":
{"__type":"MyClass:#MyProject",
"Id":1,
"FirstName":"Junior",
"LastName":"Mayhe"
}
}
The question is, what is this "d"?
Is it some result type code for a Json string, and if so, are there other codes?
thanks in advance
It is only "d", and it is intended as protection against some cross-site scripting attacks.
E.g. consider a method that returns an int array of sensitive data (e.g. bank account balances). It can be returned as:
[10000,12300,15000]
Or:
{"d":[10000,12300,15000]}
The problem is that in the first case, there's a (very advanced and obscure, but nevertheless real) attack whereby another site can steal this data by including a call to the service in a <script> tag and overriding the JavaScript array constructor. The attack is not possible if the JSON looks like the latter case.
There was some talk within Microsoft to extend the format beyond just "d", but I don't think it ever went anywhere.
Your response is simply getting encapsulated with a parent object called "d". It was introduced in ASP.NET 3.5 web services as a security enhancement to prevent JSON hijacking.
The client proxies generated for your service will strip out the "d", so you will never really even know it was there. But since your service isn't really going to be consumed for anything other than AJAX requests, you'll have to access your JSON objects through the ".d" property. I would recommend using JSON2 to parse the response, since not all browsers have native JSON support at the time of this writing.
You can read a little more about the security problem here.

How to separate data validation from my simple domain objects (POCOs)?

This question is language agnostic, but I am a C# guy, so I use the term POCO to mean an object that only performs data storage, usually using getter and setter properties.
I just reworked my Domain Model to be super-duper POCO and am left with a couple of concerns regarding how to ensure that the property values make sense within the domain.
For example, the EndDate of a Service should not exceed the EndDate of the Contract that Service is under. However, it seems like a violation of SOLID to put the check into the Service.EndDate setter, not to mention that as the number of validations that need to be done grows my POCO classes will become cluttered.
I have some solutions (which I will post as answers), but they have their disadvantages, and I am wondering what some favorite approaches to solving this dilemma are.
I think you're starting off with a bad assumption, i.e., that you should have objects that do nothing but store data and have no methods but accessors. The whole point of having objects is to encapsulate data and behaviors. If you have a thing that's just, basically, a struct, what behaviors are you encapsulating?
I always hear people argue for a "Validate" or "IsValid" method.
Personally I think this may work, but with most DDD projects you usually end up with multiple validations that are allowable depending on the specific state of the object.
So I prefer "IsValidForNewContract", "IsValidForTermination" or similar, because I believe most projects end up with multiple such validators/states per class. That also means I get no common interface, but I can write aggregated validators that read very well and reflect the business conditions I am asserting.
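A minimal sketch of what that might look like (the Contract class and its rules here are hypothetical, purely to illustrate one validation method per business state):
public class Contract
{
    public DateTime? StartDate { get; set; }
    public DateTime? EndDate { get; set; }
    public bool IsSigned { get; set; }

    // Only the rules that matter for creating a new contract.
    public bool IsValidForNewContract()
    {
        return StartDate.HasValue && EndDate.HasValue && EndDate > StartDate;
    }

    // A different set of rules applies when terminating.
    public bool IsValidForTermination()
    {
        return IsSigned && EndDate.HasValue;
    }
}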
I really do believe the generic solutions in this case very often take focus away from what's important - what the code is doing - for a very minor gain in technical elegance (the interface, delegate or whatever). Just vote me down for it ;)
A colleague of mine came up with an idea that worked out pretty well. We never came up with a great name for it but we called it Inspector/Judge.
The Inspector would look at an object and tell you all of the rules it violated. The Judge would decide what to do about it. This separation let us do a couple of things. It let us put all the rules in one place (Inspector) but we could have multiple Judges and choose the Judge by the context.
One example of the use of multiple Judges revolves around the rule that said a Customer must have an Address. This was a standard three tier app. In the UI tier the Judge would produce something that the UI could use to indicate the fields that had to be filled in. The UI Judge did not throw exceptions. In the service layer there was another Judge. If it found a Customer without an Address during Save it would throw an exception. At that point you really have to stop things from proceeding.
We also had Judges that were more strict as the state of the objects changed. It was an insurance application and during the Quoting process a Policy was allowed to be saved in an incomplete state. But once that Policy was ready to be made Active a lot of things had to be set. So the Quoting Judge on the service side was not as strict as the Activation Judge. Yet the rules used in the Inspector were still the same so you could still tell what wasn't complete even if you decided not to do anything about it.
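A rough sketch of how that split might look (the class names and the single rule are invented for illustration; assumes the usual System.Collections.Generic and System.Linq usings):
public class Customer
{
    public string Name { get; set; }
    public string Address { get; set; }
}

// The Inspector only reports violations; it never decides what to do about them.
public class CustomerInspector
{
    public IEnumerable<string> Inspect(Customer customer)
    {
        if (string.IsNullOrEmpty(customer.Address))
            yield return "Customer must have an Address";
    }
}

// A lenient Judge for the UI tier: surface the violations so fields can be highlighted.
public class UiJudge
{
    public IEnumerable<string> Judge(Customer customer, CustomerInspector inspector)
    {
        return inspector.Inspect(customer);
    }
}

// A strict Judge for the Save path: refuse to proceed with an invalid Customer.
public class SaveJudge
{
    public void Judge(Customer customer, CustomerInspector inspector)
    {
        var violations = inspector.Inspect(customer).ToList();
        if (violations.Count > 0)
            throw new InvalidOperationException(string.Join("; ", violations.ToArray()));
    }
}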
One solution is to have each object's DataAccessObject take a list of validators. When Save is called, it performs a check against each validator:
public class ServiceEndDateValidator : IValidator<Service> {
    public void Check(Service s) {
        if (s.EndDate > s.Contract.EndDate)
            throw new InvalidOperationException();
    }
}

public class ServiceDao : IDao<Service> {
    private readonly IEnumerable<IValidator<Service>> _validators;

    public ServiceDao(IEnumerable<IValidator<Service>> validators) { _validators = validators; }

    public void Save(Service s) {
        foreach (var v in _validators)
            v.Check(s);
        // Go on to save
    }
}
The benefit is very clear SoC; the disadvantage is that we don't get the check until Save() is called.
In the past I have usually delegated validation to a service of its own, such as a ValidationService. This in principle still adheres to the philosophy of DDD.
Internally this would contain a collection of validators and a very simple set of public methods, such as Validate(), which could return a collection of error objects.
Very simply, something like this in C#
public class ValidationService<T>
{
    private IList<IValidator> _validators;

    public IEnumerable<Error> Validate(T objectToValidate)
    {
        foreach (IValidator validator in _validators)
        {
            yield return validator.Validate(objectToValidate);
        }
    }
}
Validators could either be added within a default constructor or injected via some other class such as a ValidationServiceFactory.
I think that would probably be the best place for the logic, actually, but that's just me. You could have some kind of IsValid method that checks all of the conditions too and returns true/false, maybe some kind of ErrorMessages collection but that's an iffy topic since the error messages aren't really a part of the Domain Model. I'm a little biased as I've done some work with RoR and that's essentially what its models do.
Another possibility is to have each of my classes implement
public interface Validatable<T> {
    event Action<T> RequiresValidation;
}
And have each setter for each class raise the event before setting (maybe I could achieve this via attributes).
The advantage is real-time validation checking. The disadvantages are messier code, and that it is unclear who should be doing the attaching.
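For what it's worth, a sketch of how a setter might raise that event (the Service class here is hypothetical, and the attaching problem mentioned above is left unsolved):
public class Service : Validatable<Service>
{
    public event Action<Service> RequiresValidation;

    private DateTime _endDate;
    public DateTime EndDate
    {
        get { return _endDate; }
        set
        {
            // Let any attached validator inspect (and possibly throw) before the change is applied.
            if (RequiresValidation != null)
                RequiresValidation(this);
            _endDate = value;
        }
    }
}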
Here's another possibility. Validation is done through a proxy or decorator on the Domain object:
public class ServiceValidationProxy : Service {
    public override DateTime EndDate {
        get { return base.EndDate; }
        set {
            if (value > Contract.EndDate)
                throw new InvalidOperationException();
            base.EndDate = value;
        }
    }
}
Advantage: Instant validation. Can easily be configured via an IoC.
Disadvantage: If a proxy, validated properties must be virtual; if a decorator, all domain models must be interface-based. The validation classes will end up a bit heavyweight - proxies have to inherit the class and decorators have to implement all the methods. Naming and organization might get confusing.

Registering derived classes with reflection, good or evil?

As we all know, when we derive a class and use polymorphism, someone, somewhere needs to know what class to instantiate. We can use factories, a big switch statement, if-else-if, etc. I just learnt from Bill K that this is called Dependency Injection.
My Question: Is it good practice to use reflection and attributes as the dependency injection mechanism? That way, the list gets populated dynamically as we add new types.
Here is an example. Please no comment about how loading images can be done other ways, we know.
Suppose we have the following IImageFileFormat interface:
public interface IImageFileFormat
{
    string[] SupportedExtensions { get; }
    Image Load(string fileName);
    void Save(Image image, string fileName);
}
Different classes will implement this interface:
[FileFormat]
public class BmpFileFormat : IImageFileFormat { ... }
[FileFormat]
public class JpegFileFormat : IImageFileFormat { ... }
When a file needs to be loaded or saved, a manager needs to iterate through all known loaders and call Load()/Save() on the appropriate instance depending on its SupportedExtensions.
class ImageLoader
{
    public Image Load(string fileName)
    {
        return FindFormat(fileName).Load(fileName);
    }

    public void Save(Image image, string fileName)
    {
        FindFormat(fileName).Save(image, fileName);
    }

    IImageFileFormat FindFormat(string fileName)
    {
        string extension = Path.GetExtension(fileName);
        return formats.First(f => f.SupportedExtensions.Contains(extension));
    }

    private List<IImageFileFormat> formats;
}
I guess the important point here is whether the list of available loaders (formats) should be populated by hand or using reflection.
By hand:
public ImageLoader()
{
    formats = new List<IImageFileFormat>();
    formats.Add(new BmpFileFormat());
    formats.Add(new JpegFileFormat());
}
By reflection:
public ImageLoader()
{
    formats = new List<IImageFileFormat>();
    foreach (Type type in Assembly.GetExecutingAssembly().GetTypes())
    {
        if (type.GetCustomAttributes(typeof(FileFormatAttribute), false).Length > 0)
        {
            formats.Add((IImageFileFormat)Activator.CreateInstance(type));
        }
    }
}
I sometimes use the latter and it never occurred to me that it could be a very bad idea. Yes, adding new classes is easy, but the mechanism registering those same classes is harder to grasp, and therefore to maintain, than a simple coded-by-hand list.
Please discuss.
My personal preference is neither - when there is a mapping of classes to some arbitrary string, a configuration file is the place to do it IMHO. This way, you never need to modify the code - especially if you use a dynamic loading mechanism to add new dynamic libraries.
In general, I always prefer some method that allows me to write code once as much as possible - both your methods require altering already-written/built/deployed code (since your reflection route makes no provision for adding file format loaders in new DLLs).
Edit by Coincoin:
The reflection approach could be effectively combined with configuration files to locate the implementations to be injected (a sketch follows this list):
The type could be declared explicitly in the config file using canonical names, similar to MSBuild <UsingTask>.
The config could locate the assemblies, but then we have to inject all matching types, a la Microsoft Visual Studio packages.
Any other mechanism to match a value or set of conditions to the needed type.
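To make the first point concrete, here is a sketch of config-driven registration (the "ImageFormats" appSettings key and its semicolon-separated list of assembly-qualified type names are assumptions for illustration; requires a reference to System.Configuration):
public ImageLoader()
{
    formats = new List<IImageFileFormat>();

    // e.g. <add key="ImageFormats" value="MyApp.BmpFileFormat, MyApp; MyApp.JpegFileFormat, MyApp"/>
    string configured = ConfigurationManager.AppSettings["ImageFormats"];
    foreach (string typeName in configured.Split(';'))
    {
        Type type = Type.GetType(typeName.Trim(), true); // true = throw if the type cannot be resolved
        formats.Add((IImageFileFormat)Activator.CreateInstance(type));
    }
}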
My vote is that the reflection method is nicer. With that method, adding a new file format only modifies one part of the code - the place where you define the class to handle the file format. Without reflection, you'll have to remember to modify the other class, ImageLoader, as well.
Isn't this pretty much what the Dependency Injection pattern is all about?
If you can isolate the dependencies then the mechanics will almost certainly be reflection based, but it will be configuration file driven so the messiness of the reflection can be pretty well encapsulated and isolated.
I believe with DI you simply say I need an object of type <interface> with some other parameters, and the DI system returns an object to you that satisfies your conditions.
This goes together with IoC (Inversion of Control) where the object being supplied may need something else, so that other thing is automatically created and installed into your object (being created by DI) before it's returned to the user.
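Hand-rolled, the idea boils down to something like this (a sketch; ThumbnailService is made up, and a real container would do this wiring from configuration rather than having the caller do it by hand):
public class ThumbnailService
{
    private readonly IImageFileFormat _format;

    // The caller (or the DI container) decides which implementation satisfies the interface.
    public ThumbnailService(IImageFileFormat format)
    {
        _format = format;
    }

    public Image LoadForThumbnail(string fileName)
    {
        return _format.Load(fileName);
    }
}
// Somewhere at the composition root:
// var service = new ThumbnailService(new JpegFileFormat());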
I know this borders on the "no comment about loading images other ways", but why not just flip your dependencies -- rather than have ImageLoader depend on ImageFileFormats, have each IImageFileFormat depend on an ImageLoader? You'll gain a few things out of this (a sketch follows the list):
Each time you add a new IImageFileFormat, you won't need to make any changes anywhere else (and you won't have to use reflection, either)
If you take it one step further and abstract ImageLoader, you can mock it in Unit Tests, making testing the concrete implementations of each IImageFileFormat that much easier
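A sketch of the flipped dependency (the Register method and the constructor wiring are assumptions, not existing code; assumes System.Linq, System.IO and System.Drawing):
public class ImageLoader
{
    private readonly List<IImageFileFormat> formats = new List<IImageFileFormat>();

    // Formats add themselves; the loader no longer knows any concrete type.
    public void Register(IImageFileFormat format)
    {
        formats.Add(format);
    }

    public Image Load(string fileName)
    {
        string extension = Path.GetExtension(fileName);
        return formats.First(f => f.SupportedExtensions.Contains(extension)).Load(fileName);
    }
}

public class BmpFileFormat : IImageFileFormat
{
    public BmpFileFormat(ImageLoader loader)
    {
        loader.Register(this);
    }

    public string[] SupportedExtensions { get { return new[] { ".bmp" }; } }
    public Image Load(string fileName) { return Image.FromFile(fileName); }
    public void Save(Image image, string fileName) { image.Save(fileName); }
}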
In VB.NET, if all the image loaders will be in the same assembly, one could use partial classes and events to achieve the desired effect (have a class whose purpose is to fire an event when the image loaders should register themselves; each file containing image loaders can use a partial class to add another event handler to that class). C# doesn't have a direct equivalent to VB.NET's WithEvents syntax, but I suspect partial classes are a limited mechanism for achieving the same thing.