In Gulp 4 we have a wonderful parallel(...tasks) API, but the docs don't specify how it works under the hood. Does it use multithreading? If so, I guess people should be careful not to introduce race conditions among the tasks (say, reading and writing the same global variable).
I'm documenting the current state of a JavaScript package which comprises several modules consisting predominantly of standalone functions. As a result of using callbacks extensively, the package includes nested calls between standalone functions from multiple modules.
With this in mind, does anyone know the best way to represent calls between standalone functions in a sequence diagram?
Are the details of the standalone functions worth it?
Common wisdom recommends avoiding the trap of using UML as a graphical programming language. Things that are more easily expressed in code and easy for readers to understand are better left as code. Prefer UML for giving the big picture and explaining complex relationships that are harder to spot in the code.
Automate obvious documentation?
Manually modelling a very precise sequence diagram is time-consuming. Moreover, such a diagram quickly becomes outdated with the next version of the code.
Therefore, if your goal is to give an overview of how the functions relate to each other, you may prefer to provide a visual overview using a simpler call graph. The reader can grasp the overall structure easily and look to the code for more detail.
The advantage is that this task can be automated, using one of the many call graph generators available on the market (just google for "javascript call graph generator" to find some). By the way, there's an excellent book on further automating documentation that I can only recommend with enthusiasm: "Living Documentation: Continuous Knowledge Sharing by Design, First Edition".
If you need to focus on the detailed chronological sequencing of the calls, however, a call graph would not be sufficient. In that case, sequence diagrams may indeed be more relevant.
UML sequence diagrams with standalone functions?
A sequence diagram shows interactions between lifelines within an enclosing classifier. Usually a lifeline is used to represent an object, i.e. an instance of a class, but its definition is flexible enough to accommodate any participant in an interaction.
Individual standalone functions can moreover be considered as individual objects that instantiate a more general class of functions (that's the concept behind a functor, like C++'s std::function). This is particularly relevant in JavaScript, where functions can be assigned to variables or passed as parameters. So you may simply use a lifeline that makes this explicit. It's up to you to decide how to name the call message (e.g. operator()(a,b,c), or its real name for readability).
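To make the functor analogy concrete, here is a minimal C++ sketch (the function add is purely illustrative): a standalone function is held in a std::function object, and calling it goes through operator(), which is the call message you could show on the lifeline.

    #include <functional>
    #include <iostream>

    // A standalone function...
    int add(int a, int b) { return a + b; }

    int main() {
        // ...treated as an object: an instance of the general
        // "callable taking (int, int) and returning int" type.
        std::function<int(int, int)> f = add;

        // Invoking it through operator(): the message you would
        // draw on the function's lifeline.
        std::cout << f(2, 3) << '\n';   // prints 5
        return 0;
    }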
You can also group a bunch of related standalone functions into a pseudo-class that would represent, in your model, a module, compilation unit or namespace. Although a module is not stricto sensu an object, you may in your modelling treat it as if it were a class with only one (anonymous) instance (i.e. its state would be the global variables defined in the module scope, and the related standalone functions could be seen as operations of this imaginary class). The lifeline would then correspond to a module, and function calls would be represented as synchronous messages, either to another module or to the module itself, with nested activations to visualise the nested calls.
Which of the two is preferable? For example, when sorting arrays, would it be more practical to use a pre-defined sorting function than to create your own?
What are the advantages and disadvantages of using a pre-defined function versus a user-defined function?
Typically, pre-defined functions are better, if they exist. They are usually optimized to run in the least amount of time, no matter what the input (they optimize based on input type and size). Really, the only reason to define your own is if the functionality of the pre-defined code does not meet some requirement that you have. For example, there may be a pre-defined search function that returns a boolean, but you need to know the index of the found item.
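As a concrete (and purely illustrative) C++ sketch of that last point: the pre-defined std::binary_search only reports whether a value is present, so if you need its position you either reach for another pre-defined function (std::lower_bound) or write your own:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Hand-written binary search returning the *index* of key, or -1.
    // Only worth writing because the pre-defined std::binary_search
    // answers "is it there?" rather than "where is it?".
    long index_of(const std::vector<int>& v, int key) {
        long lo = 0, hi = static_cast<long>(v.size()) - 1;
        while (lo <= hi) {
            long mid = lo + (hi - lo) / 2;
            if (v[mid] == key) return mid;
            if (v[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  // not found
    }

    int main() {
        std::vector<int> v{2, 3, 5, 7, 11};
        std::cout << std::binary_search(v.begin(), v.end(), 7) << '\n';  // 1 (found)
        std::cout << index_of(v, 7) << '\n';                             // 3
    }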
Long story short: it's often best to use pre-defined if it's defined.
Only use user-defined functions if, for a "very good" reason, the pre-defined function doesn't meet your needs. It's never good to reinvent the wheel.
A diligent programmer always tries to know how built-in functions are implemented, because they have to choose among multiple solutions every time and make the choice that best fits their needs, and because they have to know whether coding their own feature is worthwhile compared to the existing ones.
Most of the time built-in functions are well optimized, but sometimes you will need a more accurate or faster implementation and will have to write your own version.
Example: if you need to compute the intersection of two std::set containers of integers (C++ STL), you will get very poor performance on large sets. If speed is your priority, you are better off writing your own representation of a set. Here is a sample case where I had to do exactly that.
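For illustration only (this is a generic sketch, not the linked sample case): one common alternative is to represent each set as a sorted std::vector and intersect with a single linear merge, which avoids the pointer-chasing of a node-based std::set:

    #include <algorithm>
    #include <iterator>
    #include <vector>

    // Both inputs must be sorted. Contiguous storage and one linear
    // merge are typically much faster on large inputs than walking
    // two node-based std::set structures.
    std::vector<int> intersect(const std::vector<int>& a,
                               const std::vector<int>& b) {
        std::vector<int> out;
        out.reserve(std::min(a.size(), b.size()));
        std::set_intersection(a.begin(), a.end(),
                              b.begin(), b.end(),
                              std::back_inserter(out));
        return out;
    }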
Note: as mentioned by TGH, it is never good to reinvent the wheel. So before implementing your own feature, you should also try to find out whether a quality third-party library has already been written (taking care of the license requirements, of course), so that you can use it directly or draw inspiration from it.
A built-in function is a predefined function, statement or operator that is supplied along with the compiler used in a C program.
A user-defined function is a self-contained building block of statements written by the user to compute a value or perform a task; it can be called by the main function as required.
Non-trivial native extensions will require per-interpreter data structures that are dynamically allocated. I am currently using Tcl_SetAssocData, with a key corresponding to the extension's name and an appropriate deletion routine, to prevent this memory from leaking away.

However, Tcl_PkgProvideEx also allows one to record such information, which can later be retrieved by Tcl_PkgRequireEx. Associating the extension's data structures with its package seems more natural than the "grab-bag" AssocData; yet the Pkg*Ex routines do not provide an automatically invoked deletion routine, so I think I need to stay with the AssocData approach. For which situations were the Pkg*Ex routines designed?

Additionally, the Tcl library allows one to install ExitHandlers and ThreadExitHandlers. Paraphrasing the manual, these are for flushing buffers to disk, etc. Are there any other situations requiring the use of ExitHandlers? When Tcl calls exit, are Tcl_PackageUnloadProcs called?
The whole-extension ClientData is intended for extensions that want to publish their own stub table (i.e., an organized list of functions that represent an exact ABI) that other extensions can build against. This is a very rare thing to want to do; leave at NULL if you don't want it (and contact the Tcl core developers' mailing list directly if you do; we've got quite a bit of experience in this area). Since it is for an ABI structure, it is strongly expected to be purely static data and so doesn't need deletion. Dynamic data should be sent through a different mechanism (e.g., via the Tcl interpreter or through calling functions via the ABI).
Exit handlers (which can be registered at multiple levels) are things that you use when you have to delete some resource at an appropriate time. The typical points of interest are when an interpreter (a Tcl_Interp structure) is being deleted, when a thread is being deleted, and when the whole process is going away. What resources need to be specially deleted? Well, it's usually obvious: file handles, database handles, that sort of thing. It's awkward to answer in general as the details matter very much: ask a more specific question to get tailored advice.
However, package unload callbacks are only called in response to the unload command. Like package load callbacks, they use “special function symbol” registration, and if they are absent then the unload command will refuse to unload the package. Most packages do not use them. The use case is where there are very long-lived processes that need to have extra upgradeable functionality added to them.
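To tie the two mechanisms together, here is a minimal sketch of the AssocData-plus-exit-handler pattern discussed above (the names Myext_Init, ExtState and "myext" are hypothetical; stubs initialisation and error handling are omitted):

    #include <tcl.h>

    /* Hypothetical per-interpreter state for the extension. */
    typedef struct ExtState {
        Tcl_HashTable cache;
    } ExtState;

    /* Invoked automatically when the owning interpreter is deleted. */
    static void ExtStateDelete(ClientData cd, Tcl_Interp *interp) {
        ExtState *state = (ExtState *) cd;
        Tcl_DeleteHashTable(&state->cache);
        ckfree((char *) state);
    }

    /* Invoked once when the whole process shuts down: flush buffers,
     * close OS handles, and so on. */
    static void ExtProcessExit(ClientData cd) {
        /* ... */
    }

    int Myext_Init(Tcl_Interp *interp) {
        ExtState *state = (ExtState *) ckalloc(sizeof(ExtState));
        Tcl_InitHashTable(&state->cache, TCL_STRING_KEYS);

        /* Per-interpreter data with an automatically invoked deletion routine. */
        Tcl_SetAssocData(interp, "myext", ExtStateDelete, state);

        /* Process-wide cleanup, independent of any interpreter. */
        Tcl_CreateExitHandler(ExtProcessExit, NULL);

        /* Whole-extension ClientData left NULL, as recommended above. */
        return Tcl_PkgProvideEx(interp, "myext", "1.0", NULL);
    }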
I have been reading over design-by-contract posts and examples, and there is something that I cannot seem to wrap my head around. In all of the examples I have seen, DbC is used on a trivial class testing its own state in the post-conditions (e.g. lots of Bank Accounts).
It seems to me that most of the time, when you call a method of a class, it does much of its work by delegating method calls to its external dependencies. I understand how to check for this in a unit test for specific scenarios, using dependency inversion and mock objects that focus on the external behavior of the method, but how does this work with DbC and post-conditions?
My second question has to deal with understanding complex post-conditions. It seems to me that to write out a post-condition for many functions, you basically have to re-write the body of the function for your post-condition to know what the new state is going to be. What is the point of that?
I really do like the notion of DbC and I think that it has great promise, particularly if I can figure out how to reproduce some failure state once I find a violated contract. Over the past couple of hours I have been reading some neat stuff about automatic test generation in Eiffel. I am currently trying to improve my processes in C++ development, but I am open to learning something new if I can figure out how to not lose all of the ground I have made in my current projects. Thanks.
but how does this work with DbC and post-conditions?
Every function is basically one of these:
A sequence of statements
A conditional statement
A loop
The idea is that you should check any postconditions about the results of the function that go beyond the union of the postconditions of all the functions called.
that you basically have to re-write the body of the function for your post-condition to know what the new state is going to be
Think about it the other way round. What made you write the function in the first place? What were you pursuing? Can that be expressed in a postcondition which is more simple than the function body itself? A postcondition will typically use queries (what in C++ are const functions), while the body usually combines commands and queries (methods that modify the object and methods which only get information from it).
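A small C++ sketch of that idea (assert is used here as a stand-in for a real contract mechanism, and the Stack class is purely illustrative):

    #include <cassert>
    #include <vector>

    class Stack {
        std::vector<int> items;
    public:
        // Queries (const functions): safe to use in contracts.
        int size() const { return static_cast<int>(items.size()); }
        int top() const  { return items.back(); }

        // Command: modifies the object.
        void push(int x) {
            const int old_size = size();   // capture the "old" state
            items.push_back(x);            // the body itself
            // Post-condition written only with queries: it states *what*
            // push achieves, which is simpler than *how* the body does it.
            assert(size() == old_size + 1);
            assert(top() == x);
        }
    };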
In some cases, yes, you will find out that you can really add little value with postconditions. In these cases, writing a bunch of tests will typically be enough.
See also:
Bertrand Meyer, Contract Driven Development
Related questions 1, 2
Delegation at the contract level
most of the time, when you call a method of a class, it does much of its work by delegating method calls to its external dependencies
As for this first question: the implementation of a function/method may call many other functions/methods, but if the designer of the code had a clear mind, this does not imply that the specification of the caller is the concatenation of the specifications of the callees. For a method that calls many others, the size of the specification can remain contained if the method accomplishes a precise and well-defined task. Which it should, if the whole system was well designed.
You are clearly asking your question from the point of view of run-time assertion checking. In this context, the above would perhaps be expressed as "you don't need to re-check in the post-condition of the caller that all the callees have respected their respective contracts. These checks will already be made on each call. In the post-condition of the caller, only check the functionally visible result of the caller."
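For example (a hedged sketch, and yes, yet another bank account): a caller that delegates to two callees restates only its own visible effect in its post-condition; the callees' contracts are checked inside their own calls:

    #include <cassert>

    class Account {
        long balance_ = 0;
    public:
        long balance() const { return balance_; }
        void deposit(long amount)  { balance_ += amount; }   // has its own contract
        void withdraw(long amount) { balance_ -= amount; }   // has its own contract
    };

    void transfer(Account& from, Account& to, long amount) {
        const long old_from = from.balance();
        const long old_to   = to.balance();

        from.withdraw(amount);   // callee contracts are checked on these calls
        to.deposit(amount);

        // Caller's post-condition: only the functionally visible result,
        // not a re-check of everything the callees already guarantee.
        assert(from.balance() == old_from - amount);
        assert(to.balance() == old_to + amount);
    }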
Understanding complex post-conditions
You may find this "ACSL by example" document interesting (although probably different from what you're used to). It contains many examples of formal contracts for C functions. The language of the contracts is intended for static verification instead of run-time checking, with all the advantages and the drawbacks that it entails. They are a little more sophisticated than the "Bank Accounts" that you mention — these functions implement real algorithms, although simple ones. The document keeps the contracts short and readable by introducing well-thought-out auxiliary predicates (which would be called queries in Eiffel, as Daniel points out in his answer).
I often hear around here from test-driven development people that having a function get large amounts of information implicitly is a bad thing. I can see where this would be bad from a testing perspective, but isn't it sometimes necessary from an encapsulation perspective? The following question comes to mind:
Is using Random and OrderBy a good shuffle algorithm?
Basically, someone wanted to create a function in C# to randomly shuffle an array. Several people told him that the random number generator should be passed in as a parameter. This seems like an egregious violation of encapsulation to me, even if it does make testing easier. Isn't the fact that an array-shuffling algorithm requires any state at all, other than the array it's shuffling, an implementation detail that the caller should not have to care about? Wouldn't the correct approach be to get this information implicitly, possibly from a thread-local singleton?
I don't think it breaks encapsulation. The only state in the array is the data itself - and "a source of randomness" is essentially a service. Why should an array naturally have an associated source of randomness? Why should that have to be a singleton? What about different situations which have different requirements - e.g. speed vs cryptographically secure randomness? There's a reason why java.util.Random has a SecureRandom subclass :) Perhaps it doesn't matter whether the shuffle's results are predictable with a lot of effort and observation - or perhaps it does. That will depend on the context, and that's information that the shuffle algorithm shouldn't care about.
Once you start thinking of it as a service, it makes sense that it's passed in as a dependency.
Yes, you could get it from a thread-local singleton (and indeed I'm going to blog about exactly that in the next few days) but I would generally code it so that the caller gets to make that decision.
One benefit of the "randomness as a service" concept is that it makes for repeatability - if you've got a test which fails, you can pass in a Random with a specific seed and know you'll always get the same results, which makes debugging easier.
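As a cross-language aside (not part of the original answer): C++'s std::shuffle takes the generator as an explicit parameter for exactly this reason, and seeding it makes the result reproducible:

    #include <algorithm>
    #include <random>
    #include <vector>

    int main() {
        std::vector<int> cards{1, 2, 3, 4, 5};

        // The generator is an explicit dependency of the shuffle.
        std::mt19937 rng(42);                        // fixed seed
        std::shuffle(cards.begin(), cards.end(), rng);

        // Re-seeding with the same value replays the same permutation
        // (on the same standard-library implementation), which is what
        // makes a failing test easy to reproduce.
        std::vector<int> again{1, 2, 3, 4, 5};
        std::mt19937 replay(42);
        std::shuffle(again.begin(), again.end(), replay);
        // cards == again
    }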
Of course, there's always the option of making the Random optional - use a thread-local singleton as a default if the caller doesn't provide their own.
Yes, that does break encapsulation. As with most software design decisions, this is a trade-off between two opposing forces. If you encapsulate the RNG then you make it difficult to change for a unit test. If you make it a parameter then you make it easy for a user to change the RNG (and potentially get it wrong).
My personal preference is to make it easy to test, then provide a default implementation (a default constructor that creates its own RNG, in this particular case) and good documentation for the end user. Adding a method with the signature
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source)
that creates a Random using the current system time as its seed would take care of most normal use cases of this method. The original method
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
could be used for testing (pass in a Random object with a known seed) and also in those rare cases where a user decides they need a cryptographically secure RNG. The one-parameter implementation should call this method.
I don't think this violates encapsulation.
Your Example
I would say that being able to provide an RNG is a feature of the class. I would obviously provide a method that doesn't require it, but I can see times where it may be useful to be able to duplicate the randomization.
What if the array shuffler were part of a game that used the RNG for level generation? If a user wanted to save the level and play it again later, it might be more efficient to store the RNG seed.
General Case
Simple classes that have a single task like this typically don't need to worry about divulging their inner workings. What they encapsulate is the logic of the task, not the elements required by that logic.