For a class type T, the following special members can be generated by the compiler, depending on the class:
default constructor: T::T()
copy constructor: T::T(const T&)
move constructor: T::T(T&&)
copy assignment operator: T& T::operator=(const T&)
move assignment operator: T& T::operator=(T&&)
In C++14, and in C++17, what are the rules that lead to the generation of constexpr versions of these functions by the compiler?
The rule is simple: if the generated definition satisfies the requirements of a constexpr function, then it will be a constexpr function. For example, from C++17, [class.ctor]/7:
If that user-written default constructor would satisfy the requirements of a constexpr constructor (10.1.5), the implicitly-defined default constructor is constexpr.
The wording around implicit default constructors is described in terms of what a "user-written default constructor" would look like. So "that user-written default constructor" means "what the compiler generates".
Similar wording exists for the copy/move constructors.
The wording is slightly more complex for the assignment operators, but it boils down to the same thing. The type must be a literal type and the assignment operators selected to do the copy/move for each subobject (non-static data member and base class) must be constexpr.
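For illustration, here's a minimal C++14 sketch (the Pair type is invented for the example). All of Pair's special members are compiler-generated, and since Pair is a literal type whose subobject operations are all constexpr-friendly, the generated members come out constexpr:

struct Pair {
    int a = 1;
    int b = 2;
    // No user-declared special members: default/copy/move constructors and
    // copy/move assignment are all generated by the compiler.
};

constexpr Pair make() {
    Pair p;      // implicitly-defined default constructor, constexpr
    Pair q = p;  // implicitly-defined copy constructor, constexpr
    q = p;       // implicitly-defined copy assignment, constexpr (C++14)
    return q;
}

static_assert(make().a == 1, "generated members are usable in constant expressions");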
Documentation:
Argument-type declarations normally have no impact on performance:
regardless of what argument types (if any) are declared, Julia
compiles a specialized version of the function for the actual argument
types passed by the caller. For example, calling fib(1) will trigger
the compilation of a specialized version of fib optimized specifically
for Int arguments, which is then re-used if fib(7) or fib(15) are
called.
The provided example:
fib(n::Integer) = n ≤ 2 ? one(n) : fib(n-1) + fib(n-2)
but then in the Performance Tips:
# This will not specialize:
function f_type(t)  # or t::Type
    x = ones(t, 10)
    return sum(map(sin, x))
end
From my understanding of the Performance Tips, with or without the type annotation, fib(n::Integer) or fib(n) shouldn't specialize specifically for Int arguments.
It seems with or without the type annotation no specialization will happen, but why does the manual seem to indicate that it will specialize?
The example in the Performance Tips page is about the specific case where you pass the type itself as the argument to the function. As the beginning of that section says:
Julia avoids automatically specializing on argument type parameters in three specific cases: Type, Function, and Vararg
f_type in that example is a function that accepts a Type t. A call to that function will look like, e.g., f_type(Int). The section says that type specialization isn't done based on the Int type in this case, where Int is passed as the value of the parameter itself (and then passed through to another function). (Specifically, the Type isn't specialized into a concrete parametrized type like Type{Int}, as it normally would be if the argument weren't one of the three cases mentioned above.)
This doesn't apply to the fib method (or most methods we normally define). fib accepts a value whose type is (a subtype of) Integer, not the type itself. In such cases, a call like fib(1) or fib(UInt8(2)) does create type-specialized compiled versions of the function.
In layman's terms, Julia will always specialize on function arguments, with or without annotations, except in the three cases above (Type, Function, and Vararg). The reasoning is that such arguments are typically just passed through to another function: specializing twice would slow compilation down, and the value is specialized on at the inner function call anyway, so no performance hit occurs.
So fib(n::Int) and fib(n) behave alike for an Int argument: both use a compiled version of the function specialized for Int64. If n has some other type, the unannotated method will specialize for that type instead.
Some languages have ways of tracking properties like purity or the presence/absence of exceptions at compile time. The fact that they do this sort of resembles an effect system.
There seem to be two broad categories of these effect-like systems, one in which any value can be "wrapped in an effect" like the IO Monad in Haskell and one in which only functions can be annotated (like noexcept in C++ or checked exceptions in Java). constexpr in C++ is a weird case that I don't know how to think about and am intentionally ignoring, since it means very different things when applied to a function and a non-function value.
I'm wondering what you call the Haskell-IO-Monad style of effect tracking vs the checked-exception style. It seems like the only reason you would use the latter is backwards compatibility with a language that doesn't track the effect you're interested in.
More explicitly, Haskell tracks effects at the type level, via the IO Monad.
a -> b is the type of a pure function with no side effects.
Impure functions return an IO value, which has a special status within the runtime / semantics of Haskell.
The nearest equivalent to an impure function in other languages would be something like
a -> IO b or, said differently, a computation producing a b parameterized by a. The type constructor -> still has the same meaning of pure function that it did before.
C++ has a distinction between functions that can potentially throw exceptions (the default) and functions that can't.
int add(int x, int y) noexcept {
    return x + y;
}
However it isn't possible to mark a non-function as noexcept.
// BAD!
int x noexcept = <expr>;
The noexcept applies to the function itself, in effect giving you a different type constructor than ->.
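To make the "different type constructor" point concrete, here is a small check; it assumes a C++17 compiler, where the exception specification became part of the function type:

#include <type_traits>

void may_throw();
void no_throw() noexcept;

// Since C++17 the noexcept specification participates in the function's
// type, so these two function types are distinct:
static_assert(!std::is_same_v<decltype(may_throw), decltype(no_throw)>,
              "noexcept is part of the type");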
Sequence containers need to have fill constructors and range constructors, i.e. these must both work, assuming MyContainer models a sequence container whose value_type is int and size_type is std::size_t:
// (1) Constructs a MyContainer containing the number '42' 4 times.
MyContainer<int> c = MyContainer<int>(4, 42);
// (2) Constructs a MyContainer containing the elements in the range [array.begin(), array.end())
std::array<int, 4> array = {1, 2, 3, 4};
MyContainer<int> c2 = MyContainer<int>(array.begin(), array.end());
Trouble is, I'm not sure how to implement these two constructors. These signatures don't work:
template<typename T>
MyContainer<T>::MyContainer(const MyContainer::size_type n, const MyContainer::value_type& val);
template<typename T>
template<typename OtherIterator>
MyContainer<T>::MyContainer(OtherIterator i, OtherIterator j);
In this case, an instantiation like in example 1 above selects the range constructor instead of fill constructor, since 4 is an int, not a size_type. It works if I pass in 4u, but if I understand the requirements correctly, any positive integer should work.
If I template the fill constructor on the size type to allow other integers, the call is ambiguous when value_type is the same as the integer type used.
I had a look at the Visual C++ implementation for std::vector and they use some special magic to only enable the range constructor when the template argument is an iterator (_Is_iterator<_Iter>). I can't find any way of implementing this with standard C++.
So my question is... how do I make it work?
Note: I am not using a C++11 compiler, and boost is not an option.
I think you've got the solution space right: Either disambiguate the call by passing in only explicitly size_t-typed ns, or use SFINAE to only apply the range constructor to actual iterators. I'll note, however, that there's nothing "magic" (that is, nothing based on implementation-specific extensions) about MSVC's _Is_iterator. The source is available, and it's basically just a static test that the type isn't an integral type. There's a whole lot of boilerplate code backing it up, but it's all standard C++.
A third option, of course, would be to add another fill constructor overload which takes a signed size.
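For illustration, here's a minimal C++98-compatible sketch of the SFINAE approach; MyEnableIf and MyIsIntegral are hand-rolled stand-ins for C++11's std::enable_if and std::is_integral:

#include <cstddef>

template<bool B, typename T = void> struct MyEnableIf {};
template<typename T> struct MyEnableIf<true, T> { typedef T type; };

template<typename T> struct MyIsIntegral     { static const bool value = false; };
template<> struct MyIsIntegral<int>          { static const bool value = true; };
template<> struct MyIsIntegral<unsigned int> { static const bool value = true; };
template<> struct MyIsIntegral<long>         { static const bool value = true; };
// ... specializations for the remaining integral types omitted

template<typename T>
class MyContainer {
public:
    typedef T value_type;
    typedef std::size_t size_type;

    // Fill constructor.
    MyContainer(size_type n, const value_type& val) { /* ... */ }

    // Range constructor: the defaulted third parameter is ill-formed when
    // OtherIterator is an integral type, so SFINAE removes this overload
    // and MyContainer<int>(4, 42) falls through to the fill constructor.
    template<typename OtherIterator>
    MyContainer(OtherIterator first, OtherIterator last,
                typename MyEnableIf<!MyIsIntegral<OtherIterator>::value>::type* = 0)
    { /* ... */ }
};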
The concept of lambdas (anonymous functions) is very clear to me. And I'm aware of polymorphism in terms of classes, with runtime/dynamic dispatch used to call the appropriate method based on the instance's most derived type. But how exactly can a lambda be polymorphic? I'm yet another Java programmer trying to learn more about functional programming.
You will observe that I don't talk about lambdas much in the following answer. Remember that in functional languages, any function is simply a lambda bound to a name, so what I say about functions translates to lambdas.
Polymorphism
Note that polymorphism doesn't really require the kind of "dispatch" that OO languages implement through derived classes overriding virtual methods. That's just one particular kind of polymorphism, subtyping.
Polymorphism itself simply means a function allows not just for one particular type of argument, but is able to act accordingly for any of the allowed types. The simplest example: you don't care for the type at all, but simply hand on whatever is passed in. Or, to make it not quite so trivial, wrap it in a single-element container. You could implement such a function in, say, C++:
template<typename T> std::vector<T> wrap1elem( T val ) {
    return std::vector<T>(1, val);
}
but you couldn't implement it as a lambda, because C++ (time of writing: C++11) doesn't support polymorphic lambdas.
Untyped values
...At least not in this way, that is. C++ templates implement polymorphism in rather an unusual way: the compiler actually generates a monomorphic function for every type that anybody passes to the function, in all the code it encounters. This is necessary because of C++'s value semantics: when a value is passed in, the compiler needs to know the exact type (its size in memory, possible child-nodes etc.) in order to make a copy of it.
In most newer languages, almost everything is just a reference to some value, and when you call a function it doesn't get a copy of the argument objects but just a reference to the already-existing ones. Older languages require you to explicitly mark arguments as reference / pointer types.
A big advantage of reference semantics is that polymorphism becomes much easier: pointers always have the same size, so the same machine code can deal with references to any type at all. That makes, very uglily¹, a polymorphic container-wrapper possible even in C:
#include <stdlib.h>

typedef struct {
    void** contents;
    int size;
} vector;

vector wrap1elem_by_voidptr(void* ptr) {
    vector v;
    v.contents = malloc(sizeof(void*));
    v.contents[0] = ptr;
    v.size = 1;
    return v;
}

#define wrap1elem(val) wrap1elem_by_voidptr(&(val))
Here, void* is just a pointer to any unknown type. The obvious problem thus arising: vector doesn't know what type(s) of elements it "contains"! So you can't really do anything useful with those objects. Except if you do know what type it is!
int sum_contents_int(vector v) {
    int acc = 0, i;
    for(i = 0; i < v.size; ++i) {
        acc += * (int*) (v.contents[i]);
    }
    return acc;
}
Obviously, this is extremely laborious. What if the type is double? What if we want the product, not the sum? Of course, we could write each case by hand. Not a nice solution.
What would be better is if we had a generic function that takes, as an extra argument, the instruction for what to do! C has function pointers:
int accum_contents_int(vector v, void (*combine)(int*, int), int acc) {
    int i;
    for(i = 0; i < v.size; ++i) {
        combine(&acc, * (int*) (v.contents[i]));
    }
    return acc;
}
That could then be used like
#include <stdio.h>

void multon(int* acc, int x) {
    *acc *= x;
}

int main() {
    int a = 3, b = 5;
    vector v = wrap2elems(a, b);  /* wrap2elems: two-element analogue of wrap1elem, definition omitted */
    printf("%i\n", accum_contents_int(v, multon, 1));  /* start the product at 1 */
    return 0;
}
Apart from still being cumbersome, all the above C code has one huge problem: it's completely unchecked whether the container elements actually have the right type! The casts from void* will happily fire on any type, but in doubt the result will be complete garbage².
Classes & Inheritance
That problem is one of the main issues which OO languages solve by trying to bundle all operations you might perform right together with the data, in the object, as methods. While compiling your class, the types are monomorphic so the compiler can check the operations make sense. When you try to use the values, it's enough if the compiler knows how to find the method. In particular, if you make a derived class, the compiler knows "aha, it's ok to call that method from the base class even on a derived object".
Unfortunately, that would mean all you achieve by polymorphism is equivalent to composing data and simply calling the (monomorphic) methods on a single field. To actually get different behaviour (but in a controlled way!) for different types, OO languages need virtual methods. What this amounts to is basically that the class has extra fields with pointers to the method implementations, much like the pointer to the combine function I used in the C example, with the difference that you can only implement an overriding method by adding a derived class, for which the compiler again knows the type of all the data fields etc., and you're safe and all.
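To make that concrete, here is a tiny sketch (the names are invented for the example) of what a virtual method boils down to: a function-pointer field, written out by hand:

#include <iostream>

// Roughly what the compiler's vtable machinery does for you: the
// "method" is a pointer stored alongside the data it operates on.
struct Combiner {
    void (*combine)(int* acc, int x);  // the "virtual method" slot
};

void addon(int* acc, int x)  { *acc += x; }
void multon(int* acc, int x) { *acc *= x; }

int main() {
    Combiner adder      = { addon };
    Combiner multiplier = { multon };
    int a = 1;
    adder.combine(&a, 5);        // dispatches through the pointer: a == 6
    multiplier.combine(&a, 7);   // a == 42
    std::cout << a << '\n';
}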
Sophisticated type systems, checked parametric polymorphism
While inheritance-based polymorphism obviously works, I can't help saying it's just crazy stupid³, er, a bit limiting. If you want to use just one particular operation that happens not to be implemented as a class method, you need to make an entire derived class. Even if you just want to vary an operation in some way, you need to derive and override a slightly different version of the method.
Let's revisit our C code. On the face of it, we notice it should be perfectly possible to make it type-safe, without any method-bundling nonsense. We just need to make sure no type information is lost – not during compile-time, at least. Imagine (read ∀T as "for all types T"):
∀T: {
    typedef struct {
        T** contents;
        int size;
    } vector<T>;
}

∀T: {
    vector<T> wrap1elem(T* elem) {
        vector<T> v;
        v.contents = malloc(sizeof(T*));
        v.contents[0] = elem;
        v.size = 1;
        return v;
    }
}

∀T: {
    void accum_contents(vector<T> v, void (*combine)(T*, const T*), T* acc) {
        int i;
        for(i = 0; i < v.size; ++i) {
            combine(acc, v.contents[i]);
        }
    }
}
Observe how, even though the signatures look a lot like the C++ template thing at the top of this post (which, as I said, really is just auto-generated monomorphic code), the implementation actually is pretty much just plain C. There are no T values in there, just pointers to them. No need to compile multiple versions of the code: at runtime, the type information isn't needed, we just handle generic pointers. At compile time, we do know the types and can use the function head to make sure they match. I.e., if you wrote
void evil_sumon (int* acc, double* x) { *acc += *x; }
and tried to do
vector<float> v; char acc;
accum_contents(v, evil_sumon, &acc);
the compiler would complain because the types don't match: in the declaration of accum_contents it says the type may vary, but all occurrences of T do need to resolve to the same type.
And that is exactly how parametric polymorphism works in languages of the ML family as well as Haskell: the functions really don't know anything about the polymorphic data they're dealing with. But they are given the specialised operators which have this knowledge, as arguments.
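(In C++ terms this corresponds to passing the operation in as an argument, the way std::accumulate does; a small sketch:)

#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> xs = {1, 4, 6};
    // The generic algorithm knows nothing about the element type; that
    // knowledge lives in the operation handed in as an argument.
    int sum  = std::accumulate(xs.begin(), xs.end(), 0);
    int prod = std::accumulate(xs.begin(), xs.end(), 1, std::multiplies<int>());
    std::cout << sum << " " << prod << '\n';  // prints "11 24"
}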
In a language like Java (prior to lambdas), parametric polymorphism doesn't gain you much: since the compiler makes it deliberately hard to define "just a simple helper function" in favour of having only class methods, you can simply go the derive-from-class way right away. But in functional languages, defining small helper functions is the easiest thing imaginable: lambdas!
And so you can write incredibly terse code in Haskell:
Prelude> foldr (+) 0 [1,4,6]
11
Prelude> foldr (\x y -> x+y+1) 0 [1,4,6]
14
Prelude> let f start = foldr (\_ (xl,xr) -> (xr, xl)) start
Prelude> :t f
f :: (t, t) -> [a] -> (t, t)
Prelude> f ("left", "right") [1]
("right","left")
Prelude> f ("left", "right") [1, 2]
("left","right")
Note how in the lambda I defined as a helper for f, I didn't have any clue about the type of xl and xr; I merely wanted to swap a tuple of these elements, which requires the types to be the same. So that would be a polymorphic lambda, with the type
\_ (xl, xr) -> (xr, xl) :: ∀ a t. a -> (t,t) -> (t,t)
¹Apart from the weird explicit malloc stuff, type safety etc.: code like that is extremely hard to work with in languages without a garbage collector, because somebody always needs to clean up memory once it's not needed anymore, while making sure that nobody still holds a reference to the data and might in fact still need it. That's nothing you have to worry about in Java, Lisp, Haskell...
²There is a completely different approach to this: the one dynamic languages choose. In those languages, every operation needs to make sure it works with any type (or, if that's not possible, raise a well-defined error). Then you can arbitrarily compose polymorphic operations, which is on one hand "nicely trouble-free" (not as trouble-free as with a really clever type system like Haskell's, though) but OTOH incurs quite a heavy overhead, since even primitive operations need type-decisions and safeguards around them.
³I'm of course being unfair here. The OO paradigm has more to it than just type-safe polymorphism; it enables many things that e.g. old ML with its Hindley-Milner type system couldn't do (ad-hoc polymorphism: Haskell has type classes for that, SML has modules), and even some things that are pretty hard in Haskell (mainly, storing values of different types in a variable-size container). But the more you get accustomed to functional programming, the less need you will feel for such stuff.
In C++, a polymorphic (or generic) lambda, available starting from C++14, is a lambda that can take any type as an argument. Basically, it's a lambda that has an auto parameter type:
auto lambda = [](auto){};
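For example, a minimal usage sketch:

#include <iostream>
#include <string>

int main() {
    // One lambda; the compiler instantiates its call operator separately
    // for int and for std::string:
    auto twice = [](auto x) { return x + x; };
    std::cout << twice(21) << '\n';                 // 42
    std::cout << twice(std::string("ab")) << '\n';  // abab
}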
Is there a context that you've heard the term "polymorphic lambda"? We might be able to be more specific.
The simplest way that a lambda can be polymorphic is to accept arguments whose type is (partly-)irrelevant to the final result.
e.g. the lambda
\(head:tail) -> tail
has the type [a] -> [a], i.e. it's fully polymorphic in the element type of the list.
Other simple examples are the likes of
\_ -> 5 :: Num n => a -> n
\x f -> f x :: a -> (a -> b) -> b
\n -> n + 1 :: Num n => n -> n
etc.
(Notice the Num n examples which involve typeclass dispatch)
I am trying to decipher a Fortran code. It passes a pointer to a function as an actual argument, while the corresponding dummy (formal) argument is instead a target. It defines and allocates a pointer of type globalDATA in the main program, then calls a function, passing that pointer:
module dataGLOBAL
    type globalDATA
        type(gl_1),   pointer :: gl1
        type(gd_2),   pointer :: gd2
        type(gdt_ok), pointer :: gdtok
        ...
        ...
    end type globalDATA
end module dataGLOBAL

Program main
    ....
    ....
    use dataGLOBAL
    ...
    type(globalDATA), pointer :: GD
    allocate(GD)
    returnvalue = INIT(GD)
    ....
    ....
end
The function reads:
integer function INIT(GD) result(returnvalue)
    ....
    ....
    use dataGLOBAL
    type(globalDATA), target :: GD
    allocate(GD%gl1)
    allocate(GD%gd2)
    allocate(GD%gdtok)
    ....
    ....
end function INIT
What is the meaning of doing this? And why do both the pointer in the main program and the single components of the target structure have to be allocated?
A few things may come into play...
When you provide a pointer as an actual argument to a procedure where the corresponding dummy argument does NOT have the POINTER attribute (the case here), the thing that is associated with the dummy argument is the target of the actual argument pointer. So in this case, the thing being passed is the object that GD (in the main program) points to - the thing that was allocated by the allocate statement. (When both the actual and dummy arguments have the POINTER attribute, the pointer itself is "passed" - you can change what the pointer points to, and that change is reflected back in the calling scope.)
Because the GD dummy argument inside the function has the target attribute, pointers inside the function can be pointed at the dummy argument. You don't show any declarations for such pointers, but perhaps they are in elided code. If nothing is ever pointed at the GD dummy argument (including inside any procedures that might be called by the INIT function), then the TARGET attribute is superfluous, but harmless apart from inhibiting some optimisations.
Things that have the pointer attribute also (automatically by language rules) have the TARGET attribute - so GD in the main program has the TARGET attribute. The fact that GD in the main program and in the function BOTH have the target attribute may be relevant because...
When the dummy argument has the TARGET attribute and the thing passed as the actual argument has the TARGET attribute, pointers associated with the dummy argument inside the procedure are also "usually" associated with the corresponding actual argument (there are exceptions and processor dependencies for coindexed things, non-contiguous arrays and vector-subscripted sections, too complicated for me to remember). If a pointer is not a local variable (perhaps it is a pointer declared in a module), then this association survives past the end of the procedure. Perhaps that's relevant in the elided code. (Alternatively, if the actual argument does not have the TARGET attribute, then any pointers associated with the dummy argument become undefined when the procedure ends.)
The components of the globalDATA type are themselves pointers. Consequently, GD in the main program is a pointer to something (that something being allocated by the single ALLOCATE statement in the main program) that itself contains pointers to other things (those other things being allocated by the numerous ALLOCATE statements in the function). You have two levels of pointer, hence two levels of ALLOCATE.
Before Fortran 2003 (or Fortran 95 with the "allocatable TR") you couldn't have ALLOCATABLE components in derived types, and you couldn't have ALLOCATABLE dummy arguments - when the need for dynamic allocation collided with these former restrictions, you had to use pointers instead, even if you were only using the pointers as values. I strongly suspect your code dates from this era (support for the allocatable TR became widespread about a decade ago). In very "modern" Fortran, pointers are (or should be) only used when you might want variables that point at other things (where "other things" includes "no thing").
With a pointer variable that is a user-defined type that itself contains pointers, you have to allocate (i.e., create the storage) both the overall variable and the component pointers. The components aren't automatically allocated when the overall variable is. Someone made a design choice to allocate the overall variable in the main program and the components in a subroutine. Maybe they thought that allocating the overall variable was simple but allocating all of the components was getting complicated and wanted to relegate that to a subroutine.
Because the pointer attribute is not specified for the dummy argument, the entire derived type GD is passed from the main code (not the pointer to it). On the subroutine side, you could explicitly write
integer function INIT(GD) result(returnvalue)
    ...
    use dataGLOBAL
    type(globalDATA), intent(inout), target :: GD
to make it more clear. The target attribute of the dummy argument only ensures that you can point to that argument inside the subroutine via pointer assignment.
As long as you are only manipulating the fields of the derived type, and not the derived type as a whole (e.g. by allocating or deallocating it), it should not make a difference whether you call the INIT routine by passing a pointer or the derived type itself.
As noted already in other answers, the purpose of the program seems to be to separate the allocation of the derived type and of its components from each other. One possible advantage of this strategy is the possibility of passing both pointers and statically allocated derived types to the INIT routine.