Term for manipulating zero, one or more items as a single object - terminology

Let's say we have a tree structure, DOM for instance, and we wish to make a function for finding all nodes meeting certain criteria and returning them in some way.
The naive way to do this is to return the found nodes in an array, list, or similar collection. However, doing so requires the use of loops or higher-order functions (such as map) to process all the entries.
jQuery and some other DOM traversal frameworks I've seen partially abstract the individual nodes away, instead working with specialized sets that can contain any number of nodes (or none at all). These objects have all the methods you'd expect the nodes to have, except that they apply each call to all the nodes in the set (where applicable), ignoring the call if the set is empty.
As an example, jQuery allows you to use $('img').css('opacity', 0.5) to find all images on a page and make them all partially transparent. The traversal of the matched nodes and application of the call to the individual images happens behind the scenes.
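For illustration, here is a minimal sketch of how such a set-like wrapper might work (this is not jQuery's actual implementation; the NodeSet class and the $$ helper are hypothetical names):

```typescript
// Hypothetical sketch of a jQuery-like wrapper: a NodeSet holds zero or
// more elements and applies each operation to every element it contains.
class NodeSet {
  constructor(private readonly nodes: HTMLElement[]) {}

  // Applies the style to every matched node; a no-op on an empty set.
  css(property: string, value: string): NodeSet {
    for (const node of this.nodes) {
      node.style.setProperty(property, value);
    }
    return this; // returning the set enables chaining, as in jQuery
  }
}

// Hypothetical $-like entry point built on the standard DOM API.
function $$(selector: string): NodeSet {
  return new NodeSet(Array.from(document.querySelectorAll<HTMLElement>(selector)));
}

// Usage: dims all images, whether the page has many, one, or none.
$$("img").css("opacity", "0.5");
```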
Is there a term for this way of ignoring the plurality of elements and operating on zero, one or more as if they were a single object? More specifically, do these objects themselves have a name?

Related

Distinguish different 'types' of nodes in the InstanceTree of the viewer

We are running some processing on the contents of the InstanceTree, where the goal is to collect only the nodes which have a direct (geometric element) counterpart in the model, meaning they are directly selectable by clicking on a model element in the viewer.
At first, it seemed like this could be solved by focusing on the leaf nodes of the tree, traversing it recursively via enumNodeChildren(node, callback, recursive) and storing a node only if getChildCount(dbId) was 0, indicating we had reached a leaf.
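For reference, the traversal described above looked roughly like this (a sketch using the InstanceTree methods named above, plus getRootId(); not our exact code):

```typescript
// Sketch of the leaf-collection approach: recursively enumerate all
// children of the root and keep only nodes with no children of their own.
function collectLeafNodes(instanceTree: any): number[] {
  const leaves: number[] = [];
  const rootId = instanceTree.getRootId();

  instanceTree.enumNodeChildren(rootId, (dbId: number) => {
    // A node with no children is a leaf.
    if (instanceTree.getChildCount(dbId) === 0) {
      leaves.push(dbId);
    }
  }, true /* recursive */);

  return leaves;
}
```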
However, there seem to be configurations where geometry is attached to non-leaf nodes as well as to their children. This appears to be the case where these nodes represent certain Revit Family Types with independent geometry.
We then tried to find a way to distinguish nodes with directly attached geometry from nodes which only act as "grouping" for real geometry nodes. But none of the API methods under https://forge.autodesk.com/en/docs/viewer/v7/reference/Private/InstanceTree/ seem to help in this case (not even the promising getNodeType(dbId), which just returns 0 for all nodes involved).
A rather dirty fix for now is to check for an id suffix in the node's name, which only seems to be present when there is directly related geometry. But presumably this only works if the viewable originates from a Revit file. As an example of the structure in question: a parent node with no geometry, an intermediate node with geometry, and several leaf nodes with geometry.
Is there a better way to solve this problem?

Om Next Multiple Instances of the Same Component with Different Query Parameters

I'm developing a tree menu using Om Next by nesting multiple instances of the same component, (defui Tree ...). I can recursively build the tree by passing different properties, so the initial rendering is fine.
However, re-rendering items after a state change is problematic since all the instances share the same query and params. If I could give different query parameters to different component instances, each would be served the appropriate properties.
My understanding is that the query and the parameters are linked to the component rather than to the individual instances. Therefore, trying to update the parameters using om-next/set-query! didn't work here.
What is the idiomatic way of handling such a scenario?
Can we do a workaround with om/factory?
(Please pardon me if I'm suffering from a misunderstanding of fundamentals here.)

Does a deep copy operation recursively copy subvariables which it doesn't own?

Consider an object that has a variable which it doesn't own; that is, the variable is associated by aggregation rather than composition. Will a deep copy operation copy the variable itself, or only the link to it?
I like the distinction that you are making here between the role of composition and aggregation in the context of a deep copy.
I am going to go against the other answer and say: no, an object should not deep-copy another object that it doesn't own.
One would expect a deep copy of an object to be (at least initially) identical to the original. If a deep copy were made of a reference that the original didn't own, then this leaves open the question of what owns the new copy. If the clone owns it, then it would not be identical to the original object. It would be an object like the original, except it owns the reference to one of its aggregated members. This would surely lead to chaos. If the clone doesn't own it, then who does?
This problem of ownership is especially important in non-garbage-collected languages, but it also creates problems even with a garbage collector. For example, if the clone is made to allow uncommitted changes to an object, are changes to be allowed on this other object that it references? If changes are not allowed, then there was no reason to deep-copy it. If changes are allowed, then how are those changes to be committed, since the object being modified doesn't control this referenced object? Sure, a mechanism for this could be contrived, but it would surely mean that the cloned object is overstepping its responsibilities, and the program would be a maintenance nightmare.
A deep copy operation that includes unowned objects also leads to problems of infinite (or at least excessive) copy operations. Suppose an object is part of a collection, and further suppose the object requires a reference to the collection. A naive deep-copy operation on that object would then create a new copy of the collection and each of its members. Even assuming that we avoid the problem of infinite recursion, and keep all the references consistent among this new set of objects, it is still excessive for most purposes, and for those cases where a new collection is desired, wouldn't it make more sense to deep-copy the collection itself, rather than one of its members, for this purpose?
I think a deep-copy that only includes owned objects, as you suggest, is the only sane approach for most purposes.
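As a sketch of that rule (the classes here are hypothetical, purely for illustration): an owned member is cloned recursively, while an aggregated member is copied only as a reference:

```typescript
// Hypothetical example: an Employee owns its Address (composition) but
// merely references its Department (aggregation).
class Address {
  constructor(public street: string, public city: string) {}
  clone(): Address {
    return new Address(this.street, this.city);
  }
}

class Department {
  constructor(public name: string) {}
}

class Employee {
  constructor(
    public name: string,
    private address: Address,      // owned: composed
    private department: Department // not owned: aggregated
  ) {}

  deepCopy(): Employee {
    return new Employee(
      this.name,
      this.address.clone(), // owned member: deep-copy it
      this.department       // unowned member: copy only the reference
    );
  }
}
```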
A deep copy, in contrast to a shallow one, should copy the whole object recursively, all the way down, producing a completely new copy of the object and all contained objects.
So yes, it should copy the variables, not only the links to them.

DDD Repository Awareness of Other Repositories

Is it generally acceptable that one repository can access another repository? Specifically in this case, I have one aggregate root that uses another aggregate root to determine what entities to add. It falls along the lines of an Item/Item Type relationship. The reason that the Item Type is an aggregate root is that they are separately maintainable within a management tool outside of the scope of any single Item.
In case it matters, I am only creating my repository instances through a repository factory implementation, so I am not directly instantiating them by concrete class name. At no time is the aggregate aware of the repository.
Edit - More information:
The specific implementation is that we can attach images to a document. Not only can we manage the images on the document, but there are different types of images (a type being defined by how the image is implemented, as opposed to, say, a file extension). The document aggregate is one of a few types of objects in the system that use these images, and they do not all use the same types. While we do attach rules in the domain services, this question is more specifically geared towards building the document aggregate. When building the aggregate we have five images of one specific type, and one each of two other types. We pull these individually because they are stored in separate lists in the aggregate. The validation is not the issue; the issue is limiting which types of images are evaluated when assembling the document.
I guess it boils down to what you're trying to do. If it's a sort of validation step (e.g. remove all items whose item types have expired) you could argue it belongs in a service layer or a specification. From the language you use (i.e. "determine what entities to add") it seems to suggest the latter, though it's hard to say without more details.
From a certain point of view there's no real reason why you can't (I'm by no means a DDD purist), especially since an Item and its type could be viewed as a single aggregate root, and it's only the implementation detail that you need to provide a management console that prevents this.
From another point of view it does seem to suggest that there's a blurring between your aggregate roots that could suggest two different contexts are at work. For instance, one could argue that a management tool forms a separate bounded context to your main application and therefore the case for the Item type being an aggregate root does not really apply. e.g. The management tool might only ever be concerned with Item Types (and never items) while your main application might view item types as more of a value object than an entity.
Update
Since you mention assembling the document, this seems like the responsibility of a factory class that can correctly assemble a valid entity (the factory can use the image type repository). A repository should (in my eyes) expose querying and adding operations, not the logic for configuring entities (except perhaps rehydrating them from persistence).
It does not follow the principles of DDD to have one repository using another repository. If you need data from one aggregate root to perform some action on another aggregate root, you can retrieve the first root (in the application service layer or possibly in a domain service) and then pass that data into the public api of the dependent aggregate root, or in the case of aggregate root creation where a separate factory is being used, into the factory's public api.
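As a sketch of what that could look like (all the names here are hypothetical): the application service loads each aggregate root through its own repository and passes the retrieved data into the dependent aggregate's public API, so neither repository knows about the other:

```typescript
// Hypothetical domain and repository contracts.
interface ImageType { readonly id: string; }
interface Document { attachImage(type: ImageType, data: Uint8Array): void; }
interface DocumentRepository {
  findById(id: string): Promise<Document>;
  save(document: Document): Promise<void>;
}
interface ImageTypeRepository {
  findById(id: string): Promise<ImageType>;
}

// The application service coordinates both repositories; neither
// repository references the other.
class AttachImageService {
  constructor(
    private documents: DocumentRepository,
    private imageTypes: ImageTypeRepository
  ) {}

  async attachImage(documentId: string, imageTypeId: string, data: Uint8Array): Promise<void> {
    // Retrieve each aggregate root through its own repository.
    const document = await this.documents.findById(documentId);
    const imageType = await this.imageTypes.findById(imageTypeId);

    // Pass the retrieved data into the aggregate's public API; the
    // Document aggregate itself never touches a repository.
    document.attachImage(imageType, data);

    await this.documents.save(document);
  }
}
```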
An aggregate root defines a transactional boundary, within which data should remain transactionally consistent. This expectation of transactional consistency should not extend to anything outside that boundary. If you have a repository dependent on another repository, then you have the state of one aggregate being transactionally dependent on the state of another aggregate, which means neither of these 'aggregate roots' is actually an aggregate root.
Furthermore, if you need changes to the state of one aggregate to affect the state of another aggregate, you should use domain events for this, which enforce eventual consistency between the roots.

Undo/Redo with immutable objects

I read the following in an article
Immutable objects are particularly handy for implementing certain common idioms such as undo/redo and abortable transactions. Take undo for example. A common technique for implementing undo is to keep a stack of objects that somehow know how to run each command in reverse (the so-called "Command Pattern"). However, figuring out how to run a command in reverse can be tricky. A simpler technique is to maintain a stack of immutable objects representing the state of the system between successive commands. Then, to undo a command, you simply revert back to the previous system state (and probably store the current state on the redo stack).
However, the article does not show a good practical example of how immutable objects could be used to implement "undo" operations. For example... deleting 10 emails from a gmail inbox. Once you do that, it has an undo option. How would an immutable object help in this regard?
The immutable objects would hold the entire state of the system, so in this case you'd have object A that contains the original inbox, and then object B that contains the inbox with ten e-mails deleted, and (in effect) a pointer back from B to A indicating that, if you do one "undo", then you stop using B as the state of the system and start using A instead.
However, Gmail inboxes are far too large to use this technique. You'd use it on documents that can actually be stored in a fairly small amount of memory, so that you can keep many of them around for multi-level undo.
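For a document small enough to snapshot, a minimal sketch of the idea might look like this (hypothetical types, not from the article):

```typescript
// Undo/redo over immutable snapshots: each edit pushes the previous
// state onto the undo stack instead of mutating it in place.
class History<State> {
  private undoStack: State[] = [];
  private redoStack: State[] = [];

  constructor(private current: State) {}

  // Apply a command that produces a *new* state from the old one.
  apply(command: (s: State) => State): void {
    this.undoStack.push(this.current);
    this.current = command(this.current);
    this.redoStack = []; // a new action invalidates the redo stack
  }

  undo(): void {
    const previous = this.undoStack.pop();
    if (previous !== undefined) {
      this.redoStack.push(this.current);
      this.current = previous;
    }
  }

  redo(): void {
    const next = this.redoStack.pop();
    if (next !== undefined) {
      this.undoStack.push(this.current);
      this.current = next;
    }
  }

  get state(): State {
    return this.current;
  }
}

// Usage with an immutable "inbox": deleting a mail returns a new array.
const history = new History<readonly string[]>(["mail1", "mail2", "mail3"]);
history.apply(inbox => inbox.filter(m => m !== "mail2")); // delete
history.undo(); // state is the original three-mail inbox again
```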
If you want to keep ten levels of undo, you can potentially save memory by only keeping two immutable objects - one that is current, and one that is from ten "undos" ago - and a list of Commands that were applied between them.
To do an "undo", you re-execute all but the last Command against the older snapshot, use the result as the new current object, and erase the last Command (or save it as a "redo" object). Every time you do a new action, you update the current object and add the associated Command to the list; then, if the list is more than ten Commands long, you apply the oldest Command to the snapshot at the start of the undo window and drop that Command from the list.
You can build various other checkpointing systems as well, involving a variable number of complete representations of the system and a variable number of Commands between them. But this moves further and further from the original idea that you cited and becomes more and more like a typical mutable system. It does, however, avoid the problem of making Commands consistently reversible; you only ever apply Commands to an object forward, never in reverse.
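A sketch of that checkpoint-and-replay variant (again with hypothetical names, assuming commands are pure functions applied forward only):

```typescript
// Bounded undo with one old checkpoint plus the commands applied since
// it, as described above. Commands only ever run forward.
type Command<State> = (s: State) => State;

class CheckpointHistory<State> {
  private checkpoint: State;
  private current: State;
  private commands: Command<State>[] = [];

  constructor(initial: State, private readonly maxDepth = 10) {
    this.checkpoint = initial;
    this.current = initial;
  }

  apply(command: Command<State>): void {
    this.current = command(this.current);
    this.commands.push(command);
    // Keep at most maxDepth commands: fold the oldest into the checkpoint.
    if (this.commands.length > this.maxDepth) {
      const oldest = this.commands.shift()!;
      this.checkpoint = oldest(this.checkpoint);
    }
  }

  undo(): void {
    if (this.commands.length === 0) return;
    this.commands.pop(); // discard (or save as a redo) the last command
    // Re-execute the remaining commands from the checkpoint forward.
    this.current = this.commands.reduce((s, cmd) => cmd(s), this.checkpoint);
  }

  get state(): State {
    return this.current;
  }
}
```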
SVN and other version control systems are effectively a disk- or network-based form of undo-and-redo.