Julia: Enforce constraints on objects in a container?

I am rather new to Julia; my programming is typically in C++, Python, or sometimes Fortran for numerics. My understanding is that Julia lacks something analogous to C++'s private variables (or even Python's "I suggest you treat this as private" convention of using a leading underscore). If I have a container, is there a way to enforce constraints on the objects that I add to the container?
Consider an example: Let's say I want an array of integers, and my constraint is that all integers in the array must share a greatest common factor greater than one. So if I put 12 into the array, any number that's a multiple of 2 or 3 may be added. So I next add 21, and the greatest common factor must now be 3. If I try to add 26, I will get an error because it violates the constraint. But had I added 12 then 26, that would be legal with a greatest common factor of 2.
I realize it's a bit of a contrived example, but it should have all the salient features of what I hope to do, and requires less explanation.

True enforcement is only possible for immutable types, where you can check any desired constraints in the inner constructor(s). Outside the type definition there is no way to add new inner constructors, and if a type defines any inner constructor you cannot create an instance without going through one of them.
However, while the convention in Python is that fields beginning with _ are private, the general convention in Julia is that all fields are private (unless they are explicitly documented). It's considered bad style to access fields directly outside the implementation of a type; you should generally provide accessor functions instead.
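For illustration, here is a minimal sketch of that approach (the type name CommonFactorInts, the add function, and the choice to store a tuple are my own, not from the question): an immutable struct whose only constructor checks the greatest-common-factor rule, so every instance is valid by construction, and "adding" builds a new container instead of mutating one.

struct CommonFactorInts
    values::Tuple{Vararg{Int}}
    # Inner constructor: the only way to build an instance, so the
    # constraint cannot be bypassed from outside the type definition.
    function CommonFactorInts(values::Int...)
        length(values) < 2 || gcd(values...) > 1 ||
            throw(ArgumentError("all elements must share a common factor > 1"))
        return new(values)
    end
end

# "Adding" derives a new container; the constructor re-checks the constraint.
add(c::CommonFactorInts, x::Int) = CommonFactorInts(c.values..., x)

c = CommonFactorInts(12)   # ok
c = add(c, 21)             # ok: gcd(12, 21) == 3
# add(c, 26)               # throws ArgumentError, since gcd(12, 21, 26) == 1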

Related

What is relvar - relational variable?

I am reading an introductory book on database systems and the author introduced the term: relational variable - relvar.
It says that the relvar is a container for the actual relation.
What is meant by container? Is this a physical concept, like a place on disk? Or is it more of a logical concept, so that container is just an umbrella term for metadata and relation?
A relation variable can be contrasted with a relation value. These concepts are analogous to simple algebraic variables like x, and values like 5.
A relation variable is a symbol that can reference different values at different times - hence the term variable, since its value can vary. For example, I might have a relation Employee which holds information about the people working for me at any given time.
A relation value is a particular state. Values don't vary. When we say the value of a variable changes, we actually mean that the variable is assigned a new value, which may be derived from the old value.
These are logical concepts. Container is an informal term which is accessible to a lay audience. However, it shouldn't be taken too literally. Variables and values can be implemented or represented in a variety of ways in physical systems.
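To make the variable/value distinction concrete, here is a small illustrative sketch (written in Julia, the language used elsewhere on this page, with the relation modelled informally as a set of named tuples):

# A relation *value*: an immutable snapshot of facts.
employees_v1 = Set([(name = "Alice", dept = "IT"), (name = "Bob", dept = "HR")])

# `employees` is the relation *variable*: a name that currently refers to that value...
employees = employees_v1

# ...and can later be assigned a different value, possibly derived from the old one.
# The old value itself never changes; the variable is simply rebound.
employees = union(employees, Set([(name = "Carol", dept = "IT")]))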

DDD: The conundrum of Side-Effect-Free functions

I apologize for so many questions, but I felt that they make the most sense only when treated as a unit.
Note - all quotes are from DDD: Tackling Complexity in the Heart of Software (pages 250 and 251).
1)
Operations can be broadly divided into two categories, commands and
queries.
...
Operations that return results without producing side effects are
called functions. A function can be called multiple times and return
the same value each time.
...
Obviously, you can't avoid commands in most software systems, but the
problem can be mitigated in two ways. First, you can keep the commands
and queries strictly segregated in different operations. Ensure that
the methods that cause changes do not return domain data and are kept
as simple as possible. Perform all queries and calculations in methods
that cause no observable side effects
a) The author implies that a query is a function since it doesn't produce side effects. He also notes that a function will always return the same value, by which I assume he means that for the same input we will always get the same output?
b) Assume we have a method QandC(int entityId) which queries for a specific domain entity, extracts certain values from it, uses those values to initialize a new Value Object, and returns this VO to the caller. Isn't QandC, according to the above quote, a function, since it doesn't change any state?
c) But the author also argues that for the same input a function will always produce the same output, which isn't the case with QandC: several calls to QandC may produce different results if, between the calls, the entity was modified or even deleted. As such, how can we claim QandC is a function?
d)
Ensure that the methods that cause changes do not return domain data
...
The reason being that the state of a returned non-VO may be changed by some future operation, and as such the side effects of such methods are unpredictable?
e)
Ensure that the methods that cause changes do not return domain data
...
Is a query method that returns an entity still considered a function, even if it doesn't change any state?
2)
VALUE OBJECTS are immutable, which implies that, apart from
initializers called only during creation, all their operations are
functions.
...
An operation that mixes logic or calculations with state change
should be refactored into two separate operations. But by definition,
this segregation of side effects into simple command methods only
applies to ENTITIES. After completing the refactoring to separate
modification from querying, consider a second refactoring to move the
responsibility for the complex calculations into a VALUE OBJECT. The
side effect often can be completely eliminated by deriving a VALUE
OBJECT instead of changing existing state, or by moving the entire
responsibility into a VALUE OBJECT.
a)
VALUE OBJECTS are immutable, which implies that, apart from
initializers called only during creation, all their operations are
functions ... But by definition, this segregation of side effects into
simple command methods only applies to ENTITIES.
I think the author is saying that all methods defined on VOs are functions, which doesn't make sense, since even though a method defined on a VO can't change its own state, it can still change the state of other, non-VO objects?!
b) Assuming a method defined on an entity doesn't change any state, do we consider it a function, even though it is defined on an entity?
c)
... consider a second refactoring to move the responsibility for the
complex calculations into a VALUE OBJECT.
Why is the author suggesting we should refactor out of entities only those functions that perform complex calculations? Why shouldn't we also refactor simpler functions?
d)
... consider a second refactoring to move the responsibility for the
complex calculations into a VALUE OBJECT.
In any case, why is the author suggesting we should refactor functions out of entities and place them inside VOs? Just because it makes it more apparent to the client that this operation MAY be a function?
e)
The side effect often can be completely eliminated by deriving a VALUE
OBJECT instead of changing existing state, or by moving the entire
responsibility into a VALUE OBJECT.
This doesn't make sense, since it appears the author is arguing that if we move a command (i.e., an operation which changes state) into a VO, then we will in essence eliminate any side effects, even though the command still changes state. Any ideas what the author was actually trying to say?
UPDATE:
1b)
It depends on the perspective. A database query does not change state
and thus has no side effects, however it isn't deterministic by
nature, since as you point out the data can change. In the book, the
author is referring to functions associated with value object and
entities, which don't themselves make external calls. Therefore, the
rules don't apply to QandC.
So the author was describing only functions that don't make external calls, and as such QandC isn't the type of function the author was describing?
1c)
QandC does not itself change state - there are no side effects. The
underlying state may be changed out of band however. Due to this, it
is not a pure function.
But it also isn't a side-effect-free function in the sense the author defined them?
1d)
Again, this is based on CQS.
I know I'm repeating myself, but I assume the discussion in the book is based on CQS, and CQS doesn't consider QandC a side-effect-free function because the entity returned by QandC may have its state modified (by some other operation) sometime in the future?
1e)
It is considered a query from the CQRS perspective, but it cannot be
called a function in the sense that a pure function on a VO is a
function due to lack of determinism.
I don't quite understand what you were trying to say. Perhaps that while QandC is considered a query, it is not considered a function because it returns an entity, so its side effects are unpredictable, which makes QandC non-deterministic by nature?
So the author is only making those statements (see the quote in 1e) under the implicit assumption that no operation defined on a VO will ever try to change the state of non-VO objects?
2d)
Given that VOs are immutable, they are a fitting place to house pure
functions. This is another step towards freeing domain knowledge from
technical constraints.
I don't understand why moving a function from an entity to a VO would help free domain knowledge from technical constraints (I'm also not really sure what you mean by technical: technical as in technology-related, or...)?
I assume another reason for putting a function in a VO is that it becomes that much more obvious (to the client) that it is a function?
2e)
I view this as a hint towards event-sourcing. Instead of changing
existing state, you add a new event which represents the change. There
is still a net side effect, however existing state remains stable.
I must confess I know nothing about event sourcing, since I'd like to first wrap my head around DDD. Anyway, so the author didn't imply that just moving a command to a VO would automatically eliminate side effects, but rather that some additional actions would have to be taken (such as implementing event sourcing), only he "forgot" to mention that part?
SECOND UPDATE:
2d)
One of the defining characteristics of an entity is its identity ....
By placing business logic into VOs you can consider it outside of the
context of an entity's identity. This makes it easier to test this
logic, among other things.
I somewhat understand the point you're making (when thinking about the concept from a distance), but on the other hand I really don't. Why would a function within an entity be influenced by the identity of that entity (assuming this function is a pure function, in other words it doesn't change state and is deterministic)?
2e)
Yes that is my understanding of it - there is still a net "side
effect". However, there are different ways to attain a side effect.
One way is to mutate existing state. Another way is to make the state
change explicit with an object representing that change.
I - Just to be sure... From your answer I gather that the author didn't imply that side effects would be eliminated simply by moving a command into a VO?
II - Ok, if I understand you correctly, we can move a command into a VO (even though VOs shouldn't change the state of anything and as such shouldn't cause any side effects), and this command inside the VO is still allowed to produce some sort of side effect, but this side effect is somehow more acceptable (or more controllable) because the state change is made explicit (which I interpret as: the thing that changed is returned to the caller as a VO)?
3) I must say that I still don't quite understand why a state-changing method SC shouldn't return domain objects. Perhaps because a non-VO may be changed by some future operation, and as such the side effects of SC are very unpredictable?
THIRD UPDATE:
Delegating the management of state to the entity and the
implementation of behavior to VOs creates certain advantages. One is
basic partitioning of responsibilities.
a) You're saying that even though a method describes a behavior of an entity (and thus the entity containing this method adheres to the SRP) and as such belongs in the entity, it may still be a good idea to move it into a VO? Thus, in essence, we would partition a responsibility of an entity into two even smaller responsibilities?
b) But won't moving behavior into a VO basically turn this entity into a mere data container (I understand that the entity will still manage its state, but still...)?
thank you
1a) Yes. The discourse on separating queries from commands is based on the Command-query separation principle.
1b) It depends on the perspective. A database query does not change state and thus has no side effects, however it isn't deterministic by nature, since as you point out the data can change. In the book, the author is referring to functions associated with value object and entities, which don't themselves make external calls. Therefore, the rules don't apply to QandC. Determinism could be fabricated however, offering degrees of "pureness". For instance, a serializable transaction could be created which can ensure that data doesn't change for its duration.
1c) QandC does not itself change state - there are no side effects. The underlying state may be changed out of band however. Due to this, it is not a pure function. However, the restriction that QandC doesn't change state is still valuable. The value is fittingly demonstrated by CQRS which is the application of CQS in distributed scenarios.
1d) Again, this is based on CQS. Another take on this is the Tell-Don't-Ask principle. Given an understanding of these principles however, the rule can be bent IMO. A side-effecting method could return a VO representing the result for instance. However, in certain scenarios such as CQRS + Event Sourcing it could be desirable for commands to return void.
1e) It is considered a query from the CQRS perspective, but it cannot be called a function in the sense that a pure function on a VO is a function due to lack of determinism.
2a) No, a VO function shouldn't change the state of anything; it should instead return a new object (see the sketch after this list).
2b) Yes.
2c) Because functional purity tends to become more important in more complex scenarios. However, as you point out, this isn't a clear and definitive rule. It shouldn't be based on complexity as much as on the domain at hand.
2d) Given that VOs are immutable, they are a fitting place to house pure functions. This is another step towards freeing domain knowledge from technical constraints.
2e) I view this as a hint towards event-sourcing. Instead of changing existing state, you add a new event which represents the change. There is still a net side effect, however existing state remains stable.
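To illustrate 2a, here is a minimal sketch (the Money type is invented for illustration; it is not from the book): operations on an immutable value object derive new values instead of mutating anything, which is what makes them functions in the book's sense.

# Immutable value object: all "operations" derive new values.
struct Money
    amount::Int        # cents, to keep the sketch simple
    currency::Symbol
end

# A function in the book's sense: no side effects, same inputs -> same output.
function add(a::Money, b::Money)
    a.currency == b.currency || throw(ArgumentError("currency mismatch"))
    return Money(a.amount + b.amount, a.currency)   # a new value; a and b are untouched
end

price    = Money(1200, :EUR)
shipping = Money(495, :EUR)
total    = add(price, shipping)   # Money(1695, :EUR)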
UPDATE
1b) Yes.
1c) It is a side-effect free function, however it is not a deterministic function because it cannot be thought to always return the same value given the same input. For example, the function that returns the current time is a side-effect free function, but it certainly does not return the same value in subsequent calls.
1d) QandC can be thought of as side-effect free, but not pure. Another way to look at functional purity is as referential transparency - the ability to replace a function call with its value without changing program behavior. In other words, asking the question does not change the answer. QandC can guarantee that, but only within a context such as a transaction. So QandC can be thought of as a function, but only in a specific context. (A small sketch of the side-effect-free vs. pure distinction follows after this list.)
1e) I think the confusing part is that the author is talking specifically about functions on VOs and entities - not database queries, where as we are talking about both. My statement extends the discussion to database queries and CQRS given certain restrictions, ie an ambient transaction.
2d) I can see how what I said was a bit vague, I was getting lazy. One of the defining characteristics of an entity is its identity. It maintains its identity throughout its life-cycle while its state may change. By placing business logic into VOs you can consider it outside of the context of an entity's identity. This makes it easier to test this logic, among other things.
2e) Yes that is my understanding of it - there is still a net "side effect". However, there are different ways to attain a side effect. One way is to mutate existing state. Another way is to make the state change explicit with an object representing that change.
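A small sketch of the distinction made in 1c/1d (the names and the dictionary standing in for a database are invented for illustration): both functions below are side-effect free, but only the first is pure, i.e. referentially transparent.

# Side-effect free *and* pure: the output depends only on the input.
area(width, height) = width * height

# Side-effect free but *not* pure: it reads mutable external state,
# so the same argument can yield different results over time.
const PRICES = Dict("apple" => 3, "pear" => 4)   # stands in for a database table
price_of(item) = PRICES[item]                    # a "QandC"-style query

price_of("apple")      # 3
PRICES["apple"] = 5    # the underlying state changes "out of band"
price_of("apple")      # 5 -- same input, different answer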
UPDATE 2
2d) This particular point can be argued, or can be a matter of preference. One perspective is that the idea is based on the single-responsibility principle (SRP). The responsibility of an entity is the association of an identity with behavior and state. Behavior combines input with existing state to produce state transitions. Delegating the management of state to the entity and the implementation of behavior to VOs creates certain advantages. One is basic partitioning of responsibilities. Another is more subtle and perhaps more arguable: it is the idea that logic can be considered in a stateless manner. This makes thinking about such logic easier, more like thinking about a mathematical equation where all changes are explicit - no hidden state.
2e.1) Yes, eliminating a net side effect would alter behavior, which is not the goal.
2e.2) Yes.
3) Commands returning void have several advantages. One is that they become naturally more adept in async scenarios - no need to wait for a result. Another is that it allows you to represent the operation as a single command object - again, because there is no return value. This applies in CQRS and also event sourcing. In these cases, any command output is dispatched as an event instead of a result. But again, if these requirements don't apply returning a result object can be appropriate.
UPDATE 3
a) Yes, and this is a specific type of partitioning.
b) The responsibility of the entity is to coordinate behavior by delegating to VOs and applying the resulting state changes.
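As a rough sketch of this partitioning (Order and LineItem are invented examples, not from the book): the entity owns identity and state and exposes a command and a query, while the calculation itself lives in an immutable VO.

# Value object: immutable, houses the calculation (a pure function).
struct LineItem
    unit_price::Int   # cents
    quantity::Int
end
total(li::LineItem) = li.unit_price * li.quantity

# Entity: has identity and mutable state; it coordinates by delegating to VOs.
mutable struct Order
    id::Int                   # identity, stable for the entity's lifetime
    items::Vector{LineItem}   # state, which may change over time
end

add_item!(o::Order, li::LineItem) = (push!(o.items, li); nothing)   # command: returns no domain data
order_total(o::Order) = sum(total, o.items; init = 0)               # query: no side effects

o = Order(1, LineItem[])
add_item!(o, LineItem(995, 3))
order_total(o)   # 2985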

Does any programming language support defining constraints on primitive data types?

Last night I was thinking that programming languages could have a feature that lets us constrain the values assigned to primitive data types.
For example, I should be able to say my variable of type int can only have a value between 0 and 100:
int<0, 100> progress;
This would then act as a normal integer in all scenarios except the fact that you won't be able to specify values out of the range defined in constraint. The compiler will not compile the code progress=200.
This constraint can be carried over with type information.
Is this possible? Is it done in any programming language? If yes then which language has it and what is this technique called?
It is generally not possible. It makes little sense to use integers without any arithmetic operators. With arithmetic operators you have this:
int<0,100> x, u, v;
...
x = u + v; // is it in range?
If you're willing to do checks at run-time, then yes, several mainstream languages support it, starting with Pascal.
I believe Pascal (and Delphi) offers something similar with subrange types.
I think this is not possible at all in Java and in Ruby (well, in Ruby probably it is possible, but requires some effort). I have no idea about other languages, though.
Ada allows something like what you describe with ranges:
type My_Int is range 1..100;
So if you try to assign a value to a My_Int that's less than 1 or greater than 100, Ada will raise the exception Constraint_Error.
Note that I've never used Ada. I've only read about this feature, so do your research before you plunge in.
It is certainly possible. There are many different techniques to do that, but 'dependent types' is the most popular.
The constraints can even be checked statically at compile time by the compiler. See, for example, Agda2 and ATS (ats-lang.org).
Weaker forms of your 'range types' are possible without full dependent types, I think.
Some keywords to search for research papers:
- Guarded types
- Refinement types
- Subrange types
Certainly! In case you missed it: C. Do you C? You don't C? You don't count short as a constraint on Integer? Ok, so C only gives you pre-packaged constrained types.
BTW: It seems the answer that Pascal has subrange types misses the point of them. In Pascal, array bounds violations are not possible. This is because the array index must be of the same type as the array was declared with. In turn this means that to use an integer index you must coerce it down to the subrange, and that is where the run-time check is done, not when accessing the array.
This is a very important idea because it means a for loop over an array index type may access the array components safely without any run time checking.
Pascal has subranges. Ada extended that a bit, so you can do something like a subrange, or you can create an entirely new type with characteristics of the existing type, but not compatible with it (e.g., even if it was in the right range, you wouldn't be able to assign an Integer to your new type based off of Integer).
C++ doesn't support the idea directly, but is flexible enough that you can implement it if you want to. If you decide to support all the compound assignment operators (+=, -=, *=, etc.) this can be a lot of work though.
Other languages that support operator overloading (e.g., ML and company) can probably support it in much the same way as C++.
Also note that there are a few non-trivial decisions involved in the design. In particular, if the type is used in a way that could/does result in an intermediate result that overflows the specified range, but produces a final result that's within the specified range, what do you want to happen? Depending on your situation, that might be an error, or it might be entirely acceptable, and you'll have to decide which.
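For what it's worth, the same wrap-and-overload idea can be sketched in Julia (the language of the question at the top of this page); the Bounded type, encoding the bounds as type parameters, and re-checking after every operation are illustrative choices, not a standard library feature.

# A bounded integer enforced at run time via an inner constructor.
struct Bounded{LO,HI}
    value::Int
    function Bounded{LO,HI}(v::Integer) where {LO,HI}
        LO <= v <= HI || throw(DomainError(v, "must lie in $LO..$HI"))
        return new{LO,HI}(v)
    end
end

# Overload arithmetic so results are re-checked against the same bounds.
import Base: +, *
+(a::Bounded{LO,HI}, b::Bounded{LO,HI}) where {LO,HI} = Bounded{LO,HI}(a.value + b.value)
*(a::Bounded{LO,HI}, b::Bounded{LO,HI}) where {LO,HI} = Bounded{LO,HI}(a.value * b.value)

const Progress = Bounded{0,100}
Progress(60) + Progress(30)    # ok: Progress(90)
# Progress(60) + Progress(70)  # throws DomainError, the "x = u + v" case above

Whether an out-of-range intermediate result should throw, saturate, or be allowed as long as the final result is in range is exactly the design decision mentioned above.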
I really doubt that you can do that. After all, these are primitive datatypes, with emphasis on primitive!
Adding a constraint will make the type a subclass of its primitive state, thus extending it.
from wikipedia:
a basic type is a data type provided by a programming language as a basic building block. Most languages allow more complicated composite types to be recursively constructed starting from basic types.
a built-in type is a data type for which the programming language provides built-in support.
So personally, even if it is possible I wouldn't do it, since it's bad practice. Instead, just create an object that wraps this type and enforces the constraints (which I am sure you have already thought of).
SQL has domains, which consist of a base type together with a dynamically-checked constraint. For example, you might define the domain telephone_number as a character string with an appropriate number of digits, valid area code, etc.

Can a variable like 'int' be considered a primitive/fundamental data structure?

A rough definition of a data structure is that it allows you to store data and apply a set of operations on that data while preserving consistency of data before and after the operation.
However, some people insist that a primitive variable like 'int' can also be considered a data structure. I get the part where it allows you to store data, but I guess the operation part is missing. Primitive variables don't have operations attached to them. So I feel that unless you have a set of operations defined and attached to it, you cannot call it a data structure. 'int' doesn't have any operations attached to it; it can be operated upon with a set of generic operators.
Please advise if I got something wrong here.
To say that something is structured implies that there is a form or formatting that defines HOW the data is structured. Note this has nothing to do with how the data is actually stored. You could for example create a data structure that exists entirely within a single Integer, yet represents a number of different values.
A data structure is an arbitrary construct used to describe how to store data in a system. It may be as simple as a single primitive, or as complex as a class. So the answer is largely subjective. It's "yes" if you decide to use a primitive as such: a simple primitive may be considered a primitive data structure, because it describes HOW you wish to store an element of data. The answer is also "no", because it describes an element of a structure and not necessarily the whole structure in itself.
As for how this relates to operations, strictly speaking a data structure has nothing to do with behaviour; it is simply a storage mechanism. Preserving consistency of data is really a behavioural thing. Yes, your compiler probably spits out errors if you try to shoe-horn a 32-bit value into a Byte, but that's symptomatic of the behaviour of the system (i.e., compilation) acting on the data structure of your application, of which your primitives are an element.
I don't think your definition of data structure is correct.
It seems to me that a struct (with no methods) is a valid data structure, but it has no real 'operations'. And that's not important. It's holding data.
To that end, an int holds data, an Object holds data. They are data structures (technically).
That said, I don't ever find myself saying "What datastructure shall I use? I know! an int!".
I would say you need to re-evaluate the meaning of "data structure".
Your definition of a data structure isn't quite correct. A data structure doesn't necessarily have any attached behaviors or operations. An ADT or Abstract Data Type is what you are describing as a data structure. An ADT includes the data and the behaviors or operations that work on that data. An int by itself is not an ADT, but I suppose you could call it a data structure. If you encapsulate an int and its operations then you have an ADT which is what I think you are trying to describe as a data structure. Classes provide a mechanism for implementation of ADTs in modern languages.
wikipedia has a good description of abstract data types.
I would argue that "int" is a data structure - it has a defined representation and meaning. That is, depending on your system, it has a specific length, a specific set of operators available to it, and a specified representation (be it two's complement or something else). It is designed to hold "integer numbers".
Practically, the distinction isn't particularly relevant.
Primitives do have operations attached to them; however, they may not be in the format of methods as you would expect in an object-oriented paradigm.
Assignment =, addition +, subtraction -, comparison ==, etc are all operations. Especially if you consider that you can explicitly define, override, or overload these operations for arbitrary classes (i.e.: data structures) in some languages (e.g. C++), then the primitive int, char, or what have you, are not very different.
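As a concrete illustration (in Julia, the language of the first question on this page; the Celsius type is invented): the operators on a primitive integer are ordinary functions with methods for that type, and the same mechanism lets you attach operators to your own types.

using InteractiveUtils   # for @which; loaded automatically in the REPL

2 + 3          # 5 -- sugar for the function call +(2, 3)
@which 2 + 3   # shows the Base method that implements + for machine integers

# The same mechanism works for a user-defined type:
struct Celsius
    degrees::Float64
end
Base.:+(a::Celsius, b::Celsius) = Celsius(a.degrees + b.degrees)

Celsius(20.0) + Celsius(1.5)   # Celsius(21.5)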
Primitive variables don't have operations attached to them. So I feel that unless you have a set of operations defined and attached to it you cannot call it a data structure.
'int' doesn't have any operation attached to it, it can be operated upon with a set of generic operators.
Generic? Then how come 2+2 works, but "ninja" + List<float> doesn't? If the operator was generic, it'd work on anything. It doesn't. It only works on a few predefined types, such as integers.
Ints certainly have a set of operations defined on them. Arithmetic operations such as addition, subtraction, multiplication or division, for example. Most languages also have some kind of ToString()-like functionality defined on integers. But you can't do just anything with an int. For example, you can't pass an int to a function expecting a string. ints have a very specific set of operations defined on them. Those operations just aren't member methods. They come in the form of operators and non-member functions or member methods of other classes. But they are still operations that work on integers.
I don't think (ref: the Wikipedia entry) that a data structure includes the definition of permissible operations (or operators). Of course, we could cite the C++ class as a counter-example, in which we can define overloaded operators. At the same time, we define a structure as just a composite/user-defined datatype and do not declare any permissible operations on it. We allow the compiler to figure that out.
'int' doesn't have any operation attached to it, it can be operated upon with a set of generic operators.
Operations are intrinsically linked with the things that they operate on; there's no such thing as generic operations.
This is true in the mathematical sense (< works in the set of integers, but has no meaning for complex numbers), and also in the computer-scientific sense (evaluating a + b requires that a and b are, or can be converted to, compatible types on which the + operation is defined).
Of course, it depends what you mean by "data structure." Others have focused on whether your definition is correct and raise good questions. But what if we say, "Let's ignore the term for now, and focus on what you described"? In other words, what if we look at
A piece of data that has a designated interpretation of its value
A set of operations on that data
Then certainly, int qualifies. (If there were no operations on int, we'd all be stuck!)
For a more mathematical approach to programming that begins with these questions, and takes them to what some have called "an algebra of computation," see Elements of Programming by Alex Stepanov and Paul McJones.

What's the best way to model an unordered list (i.e., a set)?

What's the most natural way to model a group of objects that form a set? For example, you might have a bunch of user objects who are all subscribers to a mailing list.
Obviously you could model this as an array, but then you have to order the elements and whoever is using your interface might be confused as to why you're encoding arbitrary ordering data.
You can use a hash where the members are keys that map to "1" or "true", but in most languages there are restrictions on what data types a hash key can be.
What's the standard way to do this in modern languages (PHP, Perl, Ruby, Python, etc)?
In Python, you would use the set datatype. A set can contain any hashable object, so if you have a custom class you need to store in a set and the default hashing behaviour is not appropriate, you can implement __hash__ to get the behaviour you want.
C# has the HashSet<T> generic collection.
public class EmailAddress // probably needs to override GetHashCode()
{
...
}
var addresses = new HashSet<EmailAddress>();
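For comparison, a similar sketch in Julia (the language of the first question on this page; the EmailAddress type mirrors the C# example above, and the case-insensitive rule is an invented assumption): a Set of a custom immutable type, with == and hash defined so equivalent addresses count as a single element.

struct EmailAddress
    address::String
end

# Customise equality and hashing (here: case-insensitive); whatever
# == treats as equal must also hash equally, or the Set misbehaves.
Base.:(==)(a::EmailAddress, b::EmailAddress) = lowercase(a.address) == lowercase(b.address)
Base.hash(e::EmailAddress, h::UInt) = hash(lowercase(e.address), h)

addresses = Set{EmailAddress}()
push!(addresses, EmailAddress("alice@example.com"))
push!(addresses, EmailAddress("Alice@Example.com"))   # a duplicate under ==, so not added again
length(addresses)                               # 1
EmailAddress("alice@example.com") in addresses  # true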
Most modern languages are going to have some form of Set data structure. Java has HashSet, which implements the Set interface.
In PHP you can use an array to store your data. Either search the array before you add a new element, or use array_unique to remove duplicates after inserting all elements.
In C, as a stand-in for understanding the machine directly:
For small, discrete and well defined ranges: use a bitwise array to indicate the presence of each possible item (set for present, unset for absent).
Use a hash-table for all other cases.
Write functions to implement adding and removing items, testing for presence or absence, testing for sub-sets, etc as needed.
As the other answers note, however, if you just want the functionality, use a language feature or third-party library that is already well debugged.
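A rough Julia rendering of the bit-array idea above (the day-of-year universe and the helper names are invented for illustration): one bit per possible item, set for present, unset for absent.

# A set over a small, well-defined universe (here: days of a year),
# stored as one bit per possible element.
const N_DAYS = 365
days = falses(N_DAYS)   # BitVector: everything absent initially

add!(s, i)    = (s[i] = true;  s)
remove!(s, i) = (s[i] = false; s)
has(s, i)     = s[i]
issub(a, b)   = all(.!a .| b)   # every member of a is also in b

add!(days, 42)
has(days, 42)   # true
remove!(days, 42)
has(days, 42)   # false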
A lot of the time hash-based sets are the correct thing to use, but if you don't need to do key-based lookups and don't worry about enforcing unique values, a vector or list is fine. There is overhead to a hash table, after all.
You seem to be concerned that people will think that the order in the vector is important, but I think that it is a common enough usage that, with documentation, you shouldn't confuse people.
It really depends on how you want to access and use the data.
An array is usually the simplest way to store data when there are no other requirements. Usually other data types are used for specific reasons (you want to append data, you want to search data in constant time, you need quick set union/intersection, etc.). If your only concern is the abstraction, you could wrap it in some kind of unordered facade.
In Perl I would use a hash, definitely. In other languages I would lament the lack of a hash.