How to handle object freezing for redux

Regarding redux: in development mode I would like to "freeze" the object state after the reducers have done their work and before subscribers get notified...
My approach was to define middleware for "freezing" the state of the object (e.g. with the node module deep-freeze-strict).
But it seems this is not the correct approach, because a subscriber receives the updated state before the freezing takes place...
Does anybody have an idea how to perform such freezing before the subscribers are notified?

My solution for this was to define a function which wraps the reducers:
import deepFreeze from 'deep-freeze-strict';
import { combineReducers, createStore } from 'redux';

// thin wrapper around deepFreeze that tolerates null/undefined state
function freezeState(state: any): void {
  if (state != null) deepFreeze(state);
}

export function freezeStateOfReducer(reducer): any {
  return (state, action) => {
    let newState = reducer(state, action);
    freezeState(newState);
    return newState;
  };
}

export const rootReducer = combineReducers({
  firstReducer: ...
});

createStore(freezeStateOfReducer(rootReducer));

That's an interesting question, actually. In theory, the only state updates should be happening in the reducer. In practice, I've definitely seen cases where people were accidentally mutating things in their mapState functions. You're definitely right that subscribers are notified before the middleware sees the updated state.
This might be a valid use case for a store enhancer, which could ensure that the state is frozen before actually notifying subscribers. Or, a higher-order reducer that wraps around your normal root reducer.
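A minimal sketch of such an enhancer, assuming deep-freeze-strict as in the question (functionally this is your wrapped reducer, just packaged so it applies to whatever reducer the store is created with):

import deepFreeze from 'deep-freeze-strict';

// Store enhancer: wrap the reducer handed to createStore so the new state is
// frozen as soon as the reducer returns, i.e. before Redux notifies subscribers.
export const freezeStateEnhancer = createStore => (reducer, preloadedState, enhancer) => {
  const freezingReducer = (state, action) => {
    const next = reducer(state, action);
    if (next != null) deepFreeze(next);
    return next;
  };
  return createStore(freezingReducer, preloadedState, enhancer);
};

// usage: createStore(rootReducer, freezeStateEnhancer)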
Also, there are several existing "freeze state in development" utilities already out there. See the DevTools#Linting section of my Redux addons catalog for a list of those existing libs.

Related

Working with complex json via HTTP request in Angular project (MEAN STACK)

I'm currently working on a project of my own, and I'm having problems handling "large" JSON files. I get the data from an online MongoDB (Mongo Atlas), then access it through a simple Node.js REST API. Given how complex the JSON is, I'm not sure how to proceed. I normally create a model of the JSON to handle it, but in this case I don't really know how to do it. The schema is this:
[Swagger documentation of the schema]
As you can see it has a lot of nested arrays.
My question is, should I use classes or maybe interfaces? For every array do I need to make a new class?
Currently, this is the model that I was working with:
(this worked in JavaScript, but of course does not work in TypeScript because the object is not properly typed)
export class Match {
  constructor(
    public _id: string,
    public game: Object
  ) {}
}
I know I could import the whole Swagger UI into my own project (I don't really know how to do it; https://www.npmjs.com/package/swagger-ts-generator might work), but I only really need that one schema.
Any help would be appreciated.
To be frank, when a JSON payload exceeds a certain size I don't use models to handle it (on the Java side); in fact, I'm not using them at all in Angular. This has advantages and disadvantages: in a TypeScript context it's not really the recommended approach, but it does the job.
public getData(): Observable<any> {
  return this.http.get(environment.apiBaseUrl + "/data/", this.prepareHeader());
}
Then you just subscribe to the Observable and access the fields you need by their key, e.g.
this.dataAcquireService.getData().subscribe(
  (res) => {
    this.data = res;
    for (let i = 0; i < this.data.length; i++) {
      if (!this.selectGroups[this.data[i].name]) this.selectGroups[this.data[i].name] = [];
      this.selectGroups[this.data[i].name].push(this.data[i]);
    }
  },
  (err) => this.dataAcquireService.handleErrorResponse(err)
);
In the end it's down to taste and expectations, because both ways lead to the desired goal; the one might be considered dirty by dogmatists, the other too tedious.
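For completeness, the "tedious" alternative doesn't have to mean a class per nested array: TypeScript interfaces describe shape only and disappear at runtime. A sketch based on the Match model from the question (the nested field names are invented; model only what you actually read):

export interface Match {
  _id: string;
  game: Game;
}

export interface Game {
  title: string;          // invented field
  rounds: Round[];        // invented nested array
  [key: string]: any;     // everything you don't care about yet
}

export interface Round {
  id: string;
  score: number;
}

Then the service can return Observable<Match[]> instead of Observable<any>, and you still get plain key access for the untyped rest.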

Is there a way to lazily load static json during Angular6 routing?

I've got an Angular6 app that is being built as more of a framework for internal apps. The header / footer / major navigation will be shared, but each app (feature module) should have internal flows separate from each other. For example one feature might be purchasing, another might be onboarding, another might be a sales dashboard. They're very different from each other.
What I'm trying to do is come up with a declarative way to define known configurations for each feature: for example, the minor navigation (page to page within a feature), the top-level header title, and various other context-related data points. My initial thought was to define them as JSON within each feature, which works great except that I now have to import every feature's config regardless of whether a user ever navigates to, or even has access to, that feature.
I've already got the context service set up to check the URL on navigation and set some items, but again, it has to import all possible configs, using this at the top of the service:
import { fooConfig } from '#foo/foo.config';
import { barConfig } from '#bar/bar.config';
import { bazConfig } from '#baz/baz.config';
So the question is: is there a way for me to check the URL on navigation and, within that subscription, pick up the config from the correct file without prematurely importing them all? Or maybe I can use the module to express/declare those options?
Using TypeScript's Dynamic Import Expressions might be a viable option in your case.
let config;
switch (val) {
  case 'foo': config = await import('#foo/foo.config'); break;
  case 'bar': config = await import('#bar/bar.config'); break;
  case 'baz': config = await import('#baz/baz.config'); break;
}
Though, as far as I know, there's no way at the time of writing to use variables for the import string (e.g. await import(path)).
updateConfig(feature: string) {
  import(`../configs/${feature}.config.json`).then(cfg => {
    this._currentConfig$.next(cfg);
  });
}
The above snippet shows what I came up with. It turns out webpack can't digest the # path aliases you normally use in imports, and you also can't use fully variable paths: all possible targets of the dynamic import must share some common part of the path. I ended up moving my configs from #foo/foo.config.ts to #core/configs/foo.config.json, which makes them less modular (core now holds their config) but makes the module lazy.
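Wired into the URL-checking context service the question describes, that might look like this (a sketch; the assumption that the first URL segment names the feature is mine):

import { Injectable } from '@angular/core';
import { Router, NavigationEnd } from '@angular/router';
import { filter } from 'rxjs/operators';
import { BehaviorSubject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ContextService {
  private _currentConfig$ = new BehaviorSubject<any>(null);

  constructor(router: Router) {
    router.events
      .pipe(filter((e): e is NavigationEnd => e instanceof NavigationEnd))
      .subscribe(e => {
        // assumes /feature/... URLs; adapt to your routing scheme
        const feature = e.urlAfterRedirects.split('/')[1];
        this.updateConfig(feature);
      });
  }

  updateConfig(feature: string) {
    import(`../configs/${feature}.config.json`).then(cfg => {
      this._currentConfig$.next(cfg);
    });
  }
}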

is it ok to change the route programmatically inside an epic in redux-observable

I am using redux-observable, and in some of my epics I am changing the route using browserHistory.push. Is this standard/good practice? My gut feeling says no, but I need your opinion. In addition, after upgrading to react-router 4 I can no longer get access to browserHistory; how do I do this?
Imperatively calling browserHistory.push() is totally fine. If it works for your use case and you don't need to know about the route changes in your reducers, carry on!
That said, the "purist" way would be for your epic to emit an action that will cause the route change. This can be done in react-router v4 by using the new react-router-redux, which supersedes the old one with the same name.
Once added, you can use the provided action creators:
import { push } from 'react-router-redux';

export const somethingEpic = action$ =>
  action$.ofType(SOMETHING)
    .switchMap(() =>
      somethingLikeAjaxOrWhatever()
        .mergeMap(response => Observable.of(
          somethingFulfilled(response),
          push('/success-page')
        ))
    );
To be clear, push(), replace(), etc. in react-router-redux are action creators, i.e. action factories. They don't actually cause the route change themselves, or perform any side effects at all. They return an action that needs to be dispatched somehow for the route to change.
The other benefit of using actions to signal route-change intents is testability: you no longer have to mock a history API; you can just assert that the epic emits the expected action. This is known as "effects as data": your code doesn't perform the actual side effect, it just generates the intent.
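For instance, with redux-observable v0.x you can run the epic against a known action stream and assert on what comes out, no history mock needed. A sketch, assuming a Jest-style runner and that somethingLikeAjaxOrWhatever is stubbed to complete synchronously:

import { ActionsObservable } from 'redux-observable';
import { push } from 'react-router-redux';
import 'rxjs/add/operator/toArray';

it('navigates to the success page', done => {
  const action$ = ActionsObservable.of({ type: SOMETHING });

  somethingEpic(action$)
    .toArray() // collect every action the epic emits
    .subscribe(actions => {
      expect(actions).toContainEqual(push('/success-page'));
      done();
    });
});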

as3 loading architecture

I have a large application that needs to ensure that various items are loaded (at different times, not just at startup) before calling other routines that depend on said loaded items. What I find problematic is how my architecture ends up looking to support this: it is either littered with callbacks (and nested callbacks!), or pre-populated with dozens of neat little
private function SaveUser_complete(params:ReturnType):void
{
    continueOnWithTheRoutineIWasIn();
}
and so forth. Right now the codebase is only perhaps 2500 lines, but it is going to grow to probably around 10k. I just can't see any other way around this, but it seems so wrong (and laborious). Also, I've looked into PureMVC and Cairngorm, and these methods seem equally tedious, except with another layer of abstraction. Any suggestions?
Well, asynchronous operations always have this effect on code bases; unfortunately there's not really a lot you can do. If your loading operations form some sort of 'Service', then it would be best to make an IService interface, along with the appropriate MVC-style architecture, and use data tokens. Briefly:
//In your command or whatever
var service:IService = model.getService();
var asyncToken:Token = service.someAsyncOperation(commandParams);

//some messaging is used here, 'sendMessage' would be 'sendNotification' in PureMVC
var autoCallBack:Function = function(event:TokenEvent):void
{
    sendMessage(workOutMessageNameHere(commandParams), event.token.getResult());
    //tidy up listeners and dispose of the token here
}

asyncToken.addEventListener(TokenEvent.RESULT, autoCallBack, false, 0, true);
The 'workOutMessageNameHere()' part is, I assume, what you want to automate. You could either have some sort of huge switch, or a map of commandParams (URLs or whatever) to message names; either way it is best to get this info from a model (in the same command):
private function workOutMessageNameHere(commandParams):String
{
    var model:CallbackModel = frameworkMethodOfRetrievingModels();
    return model.getMessageNameForAsyncCommand(commandParams);
}
This should hopefully just leave you with calling the command, 'callService' or however you are triggering it; you can configure the callbackMap/switch in code, or possibly via parsed XML.
Hope this gets you started and is relevant to your problem.
EDIT:
Hi, just had another read through of the problem you are trying to solve, and I think you are describing a series of finite states, i.e. a state machine.
It seems your sequences are roughly FunctionState -> LoadingState -> ResultState. A state machine might be a better general approach to managing loads of little async 'chains'.
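A minimal sketch of that idea (in TypeScript for brevity; the AS3 version is nearly identical, and all names here are illustrative):

enum LoadState { Idle, Loading, Ready }

class ResourceStateMachine {
    private state = LoadState.Idle;
    private pending: Array<() => void> = [];

    // queue work until the resource is Ready, then run it
    run(task: () => void): void {
        if (this.state === LoadState.Ready) {
            task();
        } else {
            this.pending.push(task);
            if (this.state === LoadState.Idle) this.startLoad();
        }
    }

    private startLoad(): void {
        this.state = LoadState.Loading;
        // kick off the real async load here and call onLoaded() when it completes
    }

    private onLoaded(): void {
        this.state = LoadState.Ready;
        this.pending.forEach(t => t());
        this.pending = [];
    }
}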
Agreeing with enzuguri. You'll need lots of callbacks no matter what, but if you can define a single interface for all of them and shove the code into controller classes or a service manager and have it all in one place, it won't become overwhelming.
I know what you are going through. Unfortunately I have never seen a good solution. Basically asynchronous code just kind of ends up this way.
One solution algorithm:
static var resourcesNeededAreLoaded:Boolean = false;
static var shouldDoItOnLoad:Boolean = false;

function doSomething():void
{
    if (resourcesNeededAreLoaded)
    {
        actuallyDoIt();
    }
    else
    {
        shouldDoItOnLoad = true;
        loadNeededResource();
    }
}

function loadNeededResource():void
{
    startLoadOfResource(callBackWhenResourceLoaded);
}

function callBackWhenResourceLoaded():void
{
    resourcesNeededAreLoaded = true;
    if (shouldDoItOnLoad)
    {
        doSomething();
    }
}
This kind of pattern allows you to do lazy loading, but you can also force a load when necessary. This general pattern can be abstracted and it tends to work alright. Note: an important part is calling doSomething() from the load callback rather than actuallyDoIt(), so that the same guard logic runs and your code doesn't get out of sync.
How you abstract the above pattern depends on your specific use case. You could have a single class that manages all resource loading and acquisition and uses a map to manage what is loaded and what isn't and allows the caller to set a callback if the resource isn't available. e.g.
public class ResourceManager
{
    private var isResourceLoaded:Object = {};
    private var callbackOnLoad:Object = {};
    private var resources:Object = {};

    public function getResource(resourceId:String, callback:Function):void
    {
        if (isResourceLoaded[resourceId])
        {
            callback(resources[resourceId]);
        }
        else
        {
            callbackOnLoad[resourceId] = callback;
            loadResource(resourceId);
        }
    }

    // ... snip the rest since you can work it out ...
}
I would probably use events and not callbacks but that is up to you. Sometimes a central class managing all resources isn't possible in which case you might want to pass a loading proxy to an object that is capable of managing the algorithm.
public class NeedsToLoad
{
    public var asyncLoader:AsyncLoaderClass;

    public function doSomething():void
    {
        asyncLoader.execute(resourceId, actuallyDoIt);
    }

    public function actuallyDoIt():void { }
}

public class AsyncLoaderClass
{
    /* vars like original algorithm */

    public function execute(resourceId:String, callback:Function):void
    {
        if (isResourceLoaded)
        {
            callback();
        }
        else
        {
            // remember the callback, then kick off the load
            callbackOnLoad = callback;
            loadResource(resourceId);
        }
    }

    /* implements the rest of the original algorithm */
}
Again, it isn't hard to change the above from working with callbacks to events (which I would prefer in practice, but it is harder to write short example code for that).
It is important to see how the above two abstract approaches merely encapsulate the original algorithm. That way you can tailor an approach that suits your needs.
The main determinants of your final abstraction will be:
Who knows the state of resources: the calling context or the service abstraction?
Do you need a central place to acquire resources from, and can you stomach the hassle of making this central place available all throughout your program (ugh ... Singletons)?
How complicated are the loading necessities of your program, really? (For example, it is possible to write this abstraction in such a way that a function will not be executed until a list of resources is available; see the sketch below.)
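On that last point, here is what that barrier can look like (a TypeScript sketch; load stands in for whatever your real per-resource loader is):

// Runs task only after every resource in ids has finished loading.
function whenAllLoaded(
    ids: string[],
    load: (id: string, done: () => void) => void,
    task: () => void
): void {
    let remaining = ids.length;
    if (remaining === 0) { task(); return; }
    for (const id of ids) {
        load(id, () => {
            // fire the task when the last outstanding resource arrives
            if (--remaining === 0) task();
        });
    }
}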
In one of my projects I built a custom loader, which was basically a wrapper class. I sent it an Array of elements to load and waited for either a complete or a failed event (later I modified it and added priority as well), so I didn't have to add handlers for every single resource.
You just need to monitor which resources have been downloaded; when all of them are complete, dispatch a custom resourceDownloaded event, or else resourcesFailed.
You can also put a flag on every resource saying whether it is compulsory or not. If it is not compulsory, don't throw the failed event when that resource fails; just continue monitoring the other resources!
With priority, you can have a bunch of files you want to display first, show those, and continue loading the other resources in the background.
You can do the same, and believe me, you'll enjoy using it!
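A sketch of that wrapper (in TypeScript; the per-file loader and all names are illustrative):

interface Resource {
    url: string;
    compulsory: boolean;
    priority: number;
}

// Wraps a per-file loader and reports one aggregate result for the whole batch.
class BatchLoader {
    constructor(
        // stand-in for the real single-file loader; it calls done(ok) when finished
        private loadOne: (url: string, done: (ok: boolean) => void) => void
    ) {}

    load(resources: Resource[], onComplete: () => void, onFailed: () => void): void {
        // high-priority files first, so they can be shown while the rest load
        const queue = [...resources].sort((a, b) => b.priority - a.priority);
        let remaining = queue.length;
        let failed = false;
        for (const r of queue) {
            this.loadOne(r.url, ok => {
                // a non-compulsory file that fails does not fail the batch
                if (!ok && r.compulsory) failed = true;
                if (--remaining === 0) (failed ? onFailed : onComplete)();
            });
        }
    }
}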
You can check the Masapi framework to see if it fulfills your needs.
You can also investigate the source code to learn how they approached the problem.
http://code.google.com/p/masapi/
It's well written and maintained. I used it successfully in a desktop RSS client I developed with AIR.
It worked very well, provided you pay attention to the overhead of loading too many resources in parallel.

How to separate data validation from my simple domain objects (POCOs)?

This question is language agnostic, but I am a C# guy, so I use the term POCO to mean an object that only performs data storage, usually via getter and setter fields.
I just reworked my Domain Model to be super-duper POCO and am left with a couple of concerns regarding how to ensure that the property values make sense within the domain.
For example, the EndDate of a Service should not exceed the EndDate of the Contract that Service is under. However, it seems like a violation of SOLID to put the check into the Service.EndDate setter, not to mention that as the number of validations that need to be done grows my POCO classes will become cluttered.
I have some solutions (which I will post in answers), but they have their disadvantages, and I am wondering: what are some favorite approaches to solving this dilemma?
I think you're starting off with a bad assumption, i.e., that you should have objects that do nothing but store data, and have no methods but accessors. The whole point of having objects is to encapsulate data and behaviors. If you have a thing that's just, basically, a struct, what behaviors are you encapsulating?
I always hear people argue for a "Validate" or "IsValid" method.
Personally I think this may work, but with most DDD projects you usually end up with multiple validations that are allowable depending on the specific state of the object.
So I prefer "IsValidForNewContract", "IsValidForTermination" or similar, because I believe most projects end up with multiple such validators/states per class. That also means I get no interface, but I can write aggregated validators that read very well and reflect the business conditions I am asserting.
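In code the idea is roughly this (a TypeScript sketch, since the question is language agnostic; the conditions are invented):

interface Contract { endDate: Date; termsSignedOn?: Date; }
interface Service { endDate: Date; contract: Contract; }

// one small check per business condition...
const hasSignedTerms = (c: Contract) => c.termsSignedOn != null;
const endsWithinContract = (s: Service) =>
    s.endDate.getTime() <= s.contract.endDate.getTime();

// ...aggregated into state-specific validators that read like the business rule
function isValidForNewContract(s: Service): boolean {
    return hasSignedTerms(s.contract) && endsWithinContract(s);
}

function isValidForTermination(s: Service): boolean {
    return s.endDate.getTime() <= Date.now(); // invented condition
}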
I really do believe the generic solutions in this case very often take focus away from what's important - what the code is doing - for a very minor gain in technical elegance (the interface, delegate or whatever). Just vote me down for it ;)
A colleague of mine came up with an idea that worked out pretty well. We never came up with a great name for it but we called it Inspector/Judge.
The Inspector would look at an object and tell you all of the rules it violated. The Judge would decide what to do about it. This separation let us do a couple of things. It let us put all the rules in one place (Inspector) but we could have multiple Judges and choose the Judge by the context.
One example of the use of multiple Judges revolves around the rule that said a Customer must have an Address. This was a standard three tier app. In the UI tier the Judge would produce something that the UI could use to indicate the fields that had to be filled in. The UI Judge did not throw exceptions. In the service layer there was another Judge. If it found a Customer without an Address during Save it would throw an exception. At that point you really have to stop things from proceeding.
We also had Judges that were more strict as the state of the objects changed. It was an insurance application and during the Quoting process a Policy was allowed to be saved in an incomplete state. But once that Policy was ready to be made Active a lot of things had to be set. So the Quoting Judge on the service side was not as strict as the Activation Judge. Yet the rules used in the Inspector were still the same so you could still tell what wasn't complete even if you decided not to do anything about it.
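Since the question is language agnostic, here is a small TypeScript sketch of the split, using the Customer/Address rule from above (everything else is illustrative):

interface Customer { name: string; address?: string; }

// Inspector: the single place that knows all the rules.
function inspect(c: Customer): string[] {
    const violations: string[] = [];
    if (!c.address) violations.push('Customer must have an Address');
    return violations;
}

// UI-tier Judge: reports violations so the form can highlight fields; never throws.
function uiJudge(c: Customer): string[] {
    return inspect(c);
}

// Service-tier Judge: same rules, stricter verdict.
function saveJudge(c: Customer): void {
    const violations = inspect(c);
    if (violations.length > 0) throw new Error(violations.join('; '));
}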
One solution is to have each object's DataAccessObject take a list of Validators. When Save is called, it performs a check against each validator:
public class ServiceEndDateValidator : IValidator<Service> {
    public void Check(Service s) {
        if (s.EndDate > s.Contract.EndDate)
            throw new InvalidOperationException();
    }
}

public class ServiceDao : IDao<Service> {
    IEnumerable<IValidator<Service>> _validators;

    public ServiceDao(IEnumerable<IValidator<Service>> validators) { _validators = validators; }

    public void Save(Service s) {
        foreach (var v in _validators)
            v.Check(s);
        // Go on to save
    }
}
The benefit is very clear SoC; the disadvantage is that we don't get the check until Save() is called.
In the past I have usually delegated validation to a service of its own, such as a ValidationService. In principle this still adheres to the philosophy of DDD.
Internally this would contain a collection of Validators and a very simple set of public methods, such as Validate(), which could return a collection of error objects.
Very simply, something like this in C#
public class ValidationService<T>
{
    private IList<IValidator> _validators;

    // yield return requires an iterator, so this returns IEnumerable<Error>;
    // each IValidator.Validate is assumed to return a single Error
    public IEnumerable<Error> Validate(T objectToValidate)
    {
        foreach (IValidator validator in _validators)
        {
            yield return validator.Validate(objectToValidate);
        }
    }
}
Validators could either be added within a default constructor or injected via some other class such as a ValidationServiceFactory.
I think that would probably be the best place for the logic, actually, but that's just me. You could have some kind of IsValid method that checks all of the conditions too and returns true/false, maybe some kind of ErrorMessages collection but that's an iffy topic since the error messages aren't really a part of the Domain Model. I'm a little biased as I've done some work with RoR and that's essentially what its models do.
Another possibility is to have each of my classes implement
public interface Validatable<T> {
    event Action<T> RequiresValidation;
}
And have each setter for each class raise the event before setting (maybe I could achieve this via attributes).
The advantage is real-time validation checking. But messier code and it is unclear who should be doing the attaching.
Here's another possibility. Validation is done through a proxy or decorator on the Domain object:
public class ServiceValidationProxy : Service {
    public override DateTime EndDate {
        get { return base.EndDate; }
        set {
            if (value > Contract.EndDate)
                throw new InvalidOperationException();
            base.EndDate = value;
        }
    }
}
Advantage: Instant validation. Can easily be configured via an IoC.
Disadvantage: If a proxy, validated properties must be virtual; if a decorator, all domain models must be interface-based. The validation classes will end up a bit heavyweight: proxies have to inherit the class and decorators have to implement all the methods. Naming and organization might get confusing.