When can "creating" and "declaring" be used synonymously? - language-agnostic

Once in a while you stumble over a technical article where "creating" and "declaring" are used synonymously.
E.g.
declares an array of ints
creates an array of ints
But aren't declaring and creating two different things? Or does it depend on the context?

The two are clearly separate, at least under some circumstances. Just for example, in C or C++, there are declarations that just declare things, and there are definitions that both declare and create them, and (only in C++) there are new expressions that create objects without declaring them (and, arguably, a malloc sort of does the same in C).
Likewise, in a language that supports lambda expressions, creation and declaration are separate -- a lambda expression creates something (e.g., a function) but doesn't (by itself) declare it (bind a name to it).
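To make the distinction concrete, here is a minimal C++ sketch (the names counter, p and square are invented for illustration):
#include <iostream>

extern int counter;               // declaration only: no object is created here
int counter = 0;                  // definition: declares the name and creates the object

int main() {
    int* p = new int(42);         // creation without declaration: the int itself has no name
    auto square = [](int x) { return x * x; };  // the lambda expression creates a closure;
                                                // only the assignment binds a name to it
    std::cout << square(*p) + counter << '\n';
    delete p;
}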

In C and C++, defining a variable both declares it and creates the object, so no separate declaration is needed:
int main() {
    int a[10]; // definition: declares a and creates the array
    a[3] = 42; // use a - no separate declaration required
}

Related

'Auxiliary' function in Haskell

My lecturer at the moment has a strange habit I've not seen before, I'm wondering if this is a Haskell standard or a quirk of his programming style.
Basically, he'll often do things such as this:
functionEx :: String -> Int
functionEx s = functionExA s 0
functionExA :: String -> Int -> Int
functionExA s n = --function code
He calls these 'auxiliary' functions, and for the most part the only advantage I can see to these is to make a function callable with fewer supplied arguments. But most of these are hidden away in code anyway and in my view adding the argument to the original call is much more readable.
As I said, I'm not suggesting my view is correct, I've just not seen it done like this before and would like to know if it's common in Haskell.
Yes, this is commonplace, and not only in functional programming. It's good practice to separate the interface of your code (in this case, that means the function signature: what arguments you have to pass) from the details of the implementation (the need to have a counter or similar in recursive code).
In real-world programming, one manifestation of this is having default arguments or multiple overloads of one function. Another common way of doing this is returning or taking an instance of an interface instead of a particular class that implements that interface. In Java, this might mean returning a List from a method instead of ArrayList, even when you know that the code actually uses an ArrayList (where ArrayList implements the List interface). In Haskell, typeclasses often serve the same function.
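As a rough Java sketch of that idea (the class and method names here are invented), a method can advertise only the List interface while using an ArrayList internally:
import java.util.ArrayList;
import java.util.List;

class Defaults {
    // Callers only see the List interface; the ArrayList is an implementation detail.
    static List<String> names() {
        List<String> result = new ArrayList<>();
        result.add("alice");
        result.add("bob");
        return result;
    }

    public static void main(String[] args) {
        System.out.println(names()); // prints [alice, bob]
    }
}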
The "one argument which always should be zero at the start" pattern happens occasionally in the real world, but it's especially common in functional programming teaching, because you want to show how to write the same function in a recursive style vs. tail-recursive. The wrapper function is also important to demonstrate that both the implementations actually have the same result.
In Haskell, it's more common to use where as follows:
functionEx :: String -> Int
functionEx s = functionExA s 0 where
  functionExA s n = --function code
This way, even the existence of the "real" function is hidden from the external interface. There's no reason to expose the fact that this function is (say) tail-recursive with a count argument.
If the special case definition is used frequently, it can be an advantage to do this. For example, the sum function is just a special case of the fold function. So why don't we just use foldr (+) 0 [1, 2, 3] each time instead of sum [1,2,3]? Because sum is much more readable.
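Tying the two points together, a minimal Haskell sketch of a tail-recursive sum hidden behind a where clause might look like this (sumList and go are invented names; the real sum already exists in the Prelude):
sumList :: [Int] -> Int
sumList xs = go xs 0 where
  go []     acc = acc
  go (y:ys) acc = go ys (acc + y)  -- the accumulator never shows up in the interface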

Go functions accessed through variables [duplicate]

This question already has an answer here:
Do Go built-ins use generics?
I am just starting to study Go and some things caught my attention.
Functions like:
delete(map, "Answer") // for maps
append(slice, 0) // for slices
len(slice), cap(slice) // again for slices
and so on. As someone coming from C-like languages, I am wondering:
1) Can these functions be called through the variable itself (as in map.delete("Answer"))?
2) Is this a common practice (defining a generic function and letting it figure out the type and what it should do), or is it just for the built-in types? For example, if I were to define my own type, like MyCoolLinkedList, should I define the len and append functions inside the type and have them called like
list := new(MyCoolLinkedList)
list.len()
or should I define a function that receives the list, like:
len(list)
1 - No, you can't call the builtin functions as if they were methods "attached" to types or values, e.g.
m := map[int]string{1: "one"}
m.delete(1)
is a compile time error which you can verify easily.
2 - Go (prior to Go 1.18) doesn't have generics. But to ease the "pain", it provides several builtin functions which can accept values of different types. They are builtin because, as mentioned, due to the lack of generics they need the help of the compiler to be able to accept values of different types. Some also accept a type instead of an expression as the first argument (e.g. make([]int, 1)), which is also something you can't write yourself. Builtin functions do not have standard Go types; they can only appear in call expressions.
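For instance, the claim that builtins can only appear in call expressions is easy to check (a minimal sketch; the variable names are arbitrary):
package main

import "fmt"

func main() {
    s := []int{1, 2, 3}
    fmt.Println(len(s)) // fine: len appears in a call expression
    // f := len         // compile-time error: len (built-in) must be called
    // _ = f
}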
You can't create such functions that accept values of different types. Having said that, when you create your own type and you create a "function" for it, it is advisable to declare it as a method instead of a helper function; since the "function" operates on your concrete type, you couldn't use it for other types anyway.
So it makes sense to declare it as a method, and then you can call it more "elegantly" like
value.Method()
Being a method also "counts" toward the method set of the type, should you need to implement an interface, e.g. in case of your MyCoolLinkedList it would make sense to implement sort.Interface to be able to sort the list, which requires a Len() int method for example.
Choosing a method over a helper function also has the advantage that your method will be available via reflection. You can list and call methods of a type using the reflect package, but you can't do the same with "just" functions.
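A minimal sketch of that advice (using a slice internally purely for brevity; a real MyCoolLinkedList would hold nodes):
package main

import "fmt"

type MyCoolLinkedList struct {
    items []int // placeholder storage for the sketch
}

// Declared as a method, so it counts toward the type's method set
// (e.g. toward satisfying sort.Interface's Len() int requirement).
func (l *MyCoolLinkedList) Len() int {
    return len(l.items)
}

func (l *MyCoolLinkedList) Append(v int) {
    l.items = append(l.items, v)
}

func main() {
    list := new(MyCoolLinkedList)
    list.Append(42)
    fmt.Println(list.Len()) // called "elegantly" on the value; prints 1
}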

Is vala a "pass by reference" or "pass by value"?

Or do pointers and references exist, like in C?
I'm trying to get started with Vala, but it is good to know whether Vala is "pass by reference" or "pass by value".
First of all you should understand that the default Vala compiler valac compiles to C (as an intermediate language). The code is then compiled using a C compiler (usually gcc).
valac -C example.vala will compile to example.c
So you can inspect the produced C code yourself.
Now to the real question:
Vala supports both call-by-value and call-by-reference. It is even a bit more fine grained than that.
Let's take an example using a plain C data type (int).
Call-by-value:
public void my_func (int value) {
// ...
}
The value will be copied into the function, no matter what you do with value inside my_func it won't affect the caller.
Call-by-reference using ref:
public void my_func (ref int value) {
// ...
}
The address will be copied into the function. Everything you do with value inside my_func will be reflected on the caller side as well.
Call-by-reference using out:
public void my_func (out int value) {
// ...
}
Basically the same as ref, but the value doesn't have to be initialized before calling my_func.
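At the call site the modifier has to be repeated. A small, hedged Vala sketch (change and produce are invented names):
void change (ref int value) {
    value = 2;
}

void produce (out int value) {
    value = 3;
}

int main () {
    int a = 1;
    change (ref a);   // a must be initialized before the call; it is now 2
    int b;
    produce (out b);  // b does not need to be initialized; it is now 3
    stdout.printf ("%d %d\n", a, b);
    return 0;
}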
For GObject based data types (non-static classes) it gets more complicated, because you have to take memory management into account.
Since those are always managed using pointers (implicitly), the ref and out modifiers now reflect how the (implicit) pointer is passed.
It adds one more level of indirection so to speak.
string and array data types are also internally managed using pointers and automatic reference counting (ARC).
Though discouraged, Vala also does support pointers, so you can have an int * or MyClass * just like in C.
Technically, it is pass by value, since the underlying code is converted to C. Simple types (numeric types, booleans, enums, flags) are passed by value. Strings are passed by reference, but since they are immutable, they might as well be passed by value.
However, arrays, objects, and structs are all passed using pointers in C, so they are effectively pass by reference. There are also the ref and out modifiers for function parameters that force those parameters to be passed by reference.

What is the advantage of having this/self pointer mandatory explicit?

What is the advantage of having this/self/me pointer mandatory explicit?
According to OOP theory a method is supposed to operate mainly (only?) on member variables and the method's arguments. Following this, it should be easier to refer to member variables than to external (from the object's point of view) variables... Explicit this makes it more verbose, and thus makes it harder to refer to member variables than to external ones. This seems counter-intuitive to me.
In addition to member variables and method parameters you also have local variables. One of the most important things about the object is its internal state. Explicit member variable dereferencing makes it very clear where you are referencing that state and where you are modifying that state.
For instance, if you have code like:
someMethod(some, parameters) {
... a segment of code
foo = 42;
... another segment of code
}
when quickly browsing through it, you have to have a mental model of the variables defined in the preceding segment to know whether foo is just a temporary variable or whether the assignment mutates the object's state. Whereas this.foo = 42 makes it obvious that the object's state is mutated. And if explicit dereferencing is used exclusively, you can be sure that the variable is temporary in the opposite case.
Shorter, well-factored methods make it a bit less important, but still, long-term understandability trumps a little convenience while writing the code.
You need it to pass the pointer/reference to the current object elsewhere or to protect against self-assignment in an assignment operator.
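For example, a hedged C++ sketch showing both uses in an assignment operator (the Buffer class is invented; constructors and destructor are omitted for brevity):
#include <cstddef>
#include <cstring>

class Buffer {
    char* data = nullptr;
    std::size_t size = 0;
public:
    Buffer& operator=(const Buffer& other) {
        if (this != &other) {          // protect against self-assignment
            delete[] data;
            size = other.size;
            data = new char[size];
            std::memcpy(data, other.data, size);
        }
        return *this;                  // hand out a reference to the current object
    }
};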
What if the arguments to a method have the same name as the member variables? Then you can use this.x = x, for example, where this.x is the member variable and x is the method argument.
That's just one (trivial) example.
I generally use this (in C++) only when I am writing the assignment operator or the copy constructor, as it helps in clearly identifying the variables. Another place where I can think of using it is when your function parameter names are the same as your member variable names, or when I want to destroy my object using delete this.
An example would be where member names are the same as those passed to the method:
public void SetScreenTemplate(long screenTemplateID, string screenTemplateName, bool isDefault)
{
    this.screenTemplateID = screenTemplateID;
    this.screenTemplateName = screenTemplateName;
    this.isDefault = isDefault;
}

What is Type-safe?

What does "type-safe" mean?
Type safety means that the compiler will validate types while compiling, and throw an error if you try to assign the wrong type to a variable.
Some simple examples:
// Fails, Trying to put an integer in a string
String one = 1;
// Also fails.
int foo = "bar";
This also applies to method arguments, since you are passing explicit types to them:
int AddTwoNumbers(int a, int b)
{
    return a + b;
}
If I tried to call that using:
int Sum = AddTwoNumbers(5, "5");
The compiler would throw an error, because I am passing a string ("5"), and it is expecting an integer.
In a loosely typed language, such as javascript, I can do the following:
function AddTwoNumbers(a, b)
{
    return a + b;
}
if I call it like this:
Sum = AddTwoNumbers(5, "5");
JavaScript automatically converts the 5 to a string, and returns "55". This is due to JavaScript using the + sign for string concatenation. To make it type-aware, you would need to do something like:
function AddTwoNumbers(a, b)
{
    return Number(a) + Number(b);
}
Or, possibly:
function AddOnlyTwoNumbers(a, b)
{
    if (isNaN(a) || isNaN(b))
        return false;
    return Number(a) + Number(b);
}
if I call it like this:
Sum = AddTwoNumbers(5, " dogs");
Javascript automatically converts the 5 to a string, and appends them, to return "5 dogs".
Not all dynamic languages are as forgiving as JavaScript (in fact, a dynamic language does not necessarily imply a loosely typed language; see Python); some of them will actually give you a runtime error on invalid type casting.
While it's convenient, it opens you up to a lot of errors that can be easily missed and only identified by testing the running program. Personally, I prefer to have my compiler tell me if I made that mistake.
Now, back to C#...
C# supports subtype polymorphism (often discussed alongside covariance): you can substitute a derived (child) type wherever its base type is expected and not cause an error, for example:
public class Foo : Bar
{
}
Here, I created a new class (Foo) that subclasses Bar. I can now create a method:
void DoSomething(Bar myBar)
And call it using either a Foo, or a Bar as an argument, both will work without causing an error. This works because C# knows that any child class of Bar will implement the interface of Bar.
However, you cannot do the inverse:
void DoSomething(Foo myFoo)
In this situation, I cannot pass Bar to this method, because the compiler does not know that Bar implements Foo's interface. This is because a child class can (and usually will) be much different than the parent class.
Of course, now I've gone way off the deep end and beyond the scope of the original question, but its all good stuff to know :)
Type-safety should not be confused with static / dynamic typing or strong / weak typing.
A type-safe language is one where the only operations that one can execute on data are the ones that are condoned by the data's type. That is, if your data is of type X and X doesn't support operation y, then the language will not allow you to execute y(X).
This definition doesn't set rules on when this is checked. It can be at compile time (static typing) or at runtime (dynamic typing), typically through exceptions. It can be a bit of both: some statically typed languages allow you to cast data from one type to another, and the validity of casts must be checked at runtime (imagine that you're trying to cast an Object to a Consumer - the compiler has no way of knowing whether it's acceptable or not).
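A hedged Java sketch of exactly that situation (class name invented):
import java.util.function.Consumer;

class CastDemo {
    public static void main(String[] args) {
        Object o = "just a string";
        // Compiles (with an unchecked warning): the compiler cannot know what o holds.
        // At runtime this line throws ClassCastException, because a String is not a Consumer.
        Consumer<String> c = (Consumer<String>) o;
        c.accept("boom"); // never reached
    }
}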
Type-safety does not necessarily mean strongly typed, either - some languages are notoriously weakly typed, but still arguably type safe. Take Javascript, for example: its type system is as weak as they come, but still strictly defined. It allows automatic casting of data (say, strings to ints), but within well defined rules. There is to my knowledge no case where a Javascript program will behave in an undefined fashion, and if you're clever enough (I'm not), you should be able to predict what will happen when reading Javascript code.
An example of a type-unsafe programming language is C: reading or writing an array value outside of the array's bounds is undefined behaviour by specification. It's impossible to predict what will happen. C is a language that has a type system, but it is not type safe.
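For instance, a minimal C sketch of that undefined behaviour:
#include <stdio.h>

int main(void) {
    int a[4] = {1, 2, 3, 4};
    /* This compiles, but reading a[10] is undefined behaviour by the standard:
       it may print garbage, crash, or appear to work. */
    printf("%d\n", a[10]);
    return 0;
}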
Type safety is not just a compile time constraint, but a run time constraint. I feel even after all this time, we can add further clarity to this.
There are 2 main issues related to type safety. Memory** and data type (with its corresponding operations).
Memory**
A char typically requires 1 byte per character, or 8 bits (depends on language, Java and C# store unicode chars which require 16 bits).
An int requires 4 bytes, or 32 bits (usually).
Visually:
char: |-|-|-|-|-|-|-|-|
int : |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
A type safe language does not allow an int to be inserted into a char at run-time (this should throw some kind of class cast or out of memory exception). However, in a type unsafe language, you would overwrite existing data in 3 more adjacent bytes of memory.
int >> char:
|-|-|-|-|-|-|-|-| |?|?|?|?|?|?|?|?| |?|?|?|?|?|?|?|?| |?|?|?|?|?|?|?|?|
In the above case, the 3 bytes to the right are overwritten, so any pointers to that memory (say 3 consecutive chars) which expect to get a predictable char value will now have garbage. This causes undefined behavior in your program (or worse, possibly in other programs depending on how the OS allocates memory - very unlikely these days).
** While this first issue is not technically about data type, type safe languages address it inherently and it visually describes the issue to those unaware of how memory allocation "looks".
Data Type
The more subtle and direct type issue is where two data types use the same memory allocation. Take an int vs. an unsigned int. Both are 32 bits. (It could just as easily be a char[4] and an int, but the more common issue is uint vs. int.)
|-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
|-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
A type unsafe language allows the programmer to reference a properly allocated span of 32 bits, but when the value of an unsigned int is read into the space of an int (or vice versa), we again have undefined behavior. Imagine the problems this could cause in a banking program:
"Dude! I overdrafted $30 and now I have $65,506 left!!"
...'course, banking programs use much larger data types. ;) LOL!
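A short C sketch of the banking joke, using 16-bit values so the numbers line up (strictly speaking this particular conversion is well defined in C, but it shows how the same bits read as -30 or 65506 depending on the declared type):
#include <stdio.h>

int main(void) {
    short balance = -30;                          /* overdrafted by $30 */
    unsigned short as_unsigned = (unsigned short)balance;
    printf("signed: %d  unsigned: %u\n", balance, (unsigned)as_unsigned);
    /* prints: signed: -30  unsigned: 65506 */
    return 0;
}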
As others have already pointed out, the next issue is computational operations on types. That has already been sufficiently covered.
Speed vs Safety
Most programmers today never need to worry about such things unless they are using something like C or C++. Both of these languages allow programmers to easily violate type safety at run time (direct memory referencing) despite the compilers' best efforts to minimize the risk. HOWEVER, this is not all bad.
One reason these languages are so computationally fast is they are not burdened by verifying type compatibility during run time operations like, for example, Java. They assume the developer is a good rational being who won't add a string and an int together and for that, the developer is rewarded with speed/efficiency.
Many answers here conflate type safety with static vs. dynamic typing. A dynamically typed language (like Smalltalk) can be type-safe as well.
A short answer: a language is considered type-safe if no operation leads to undefined behavior. Many consider the requirement of explicit type conversions necessary for a language to be strictly typed, as automatic conversions can sometimes lead to well-defined but unexpected/unintuitive behaviors.
For a programming language, being 'type-safe' means the following things:
You can't read from uninitialized variables
You can't index arrays beyond their bounds
You can't perform unchecked type casts
An explanation from a liberal arts major, not a comp sci major:
When people say that a language or language feature is type safe, they mean that the language will help prevent you from, for example, passing something that isn't an integer to some logic that expects an integer.
For example, in C#, I define a function as:
void foo(int arg)
The compiler will then stop me from doing this:
// call foo
foo("hello world")
In other languages, the compiler would not stop me (or there is no compiler...), so the string would be passed to the logic and then probably something bad will happen.
Type safe languages try to catch more at "compile time".
On the down side, with type safe languages, when you have a string like "123" and you want to operate on it like an int, you have to write more code to convert the string to an int, or when you have an int like 123 and want to use it in a message like, "The answer is 123", you have to write more code to convert/cast it to a string.
To get a better understanding, watch the video below, which demonstrates code in a type-safe language (C#) and a non-type-safe language (JavaScript).
http://www.youtube.com/watch?v=Rlw_njQhkxw
Now for the long text.
Type safety means preventing type errors. A type error occurs when a value of one data type is assigned to a variable of another type unknowingly, and we get undesirable results.
For instance, JavaScript is NOT a type-safe language. In the code below, "num" is a numeric variable and "str" is a string. JavaScript allows me to do "num + str"; now guess whether it will do arithmetic or concatenation.
For the code below the result is "55", but the important point is the confusion about what kind of operation it will perform.
This is happening because JavaScript is not a type-safe language: it allows one type of data to be assigned to another type without restrictions.
<script>
    var num = 5;       // numeric
    var str = "5";     // string
    var z = num + str; // arithmetic or concat ????
    alert(z);          // displays "55"
</script>
C# is a type-safe language. It does not allow a value of one data type to be assigned to a variable of another data type without an explicit conversion; for example, assigning the string "5" to an int variable is a compile-time error.
Concept:
To put it very simply, type safety, as the name suggests, makes sure that the type of a variable is safe, meaning:
no wrong data types: e.g. you can't save or initialize a variable of string type with an integer
out-of-bound indexes are not accessible
only the specific memory location is accessible
so it is all about the safety of the types of your storage in terms of variables.
Type-safe means that programmatically, the type of data for a variable, return value, or argument must fit within a certain criteria.
In practice, this means that 7 (an integer type) is different from "7" (a quoted character of string type).
PHP, JavaScript and other dynamic scripting languages are usually weakly typed, in that they will convert a (string) "7" to an (integer) 7 if you try to add "7" + 3 (PHP does exactly this), although sometimes you have to do the conversion explicitly (and JavaScript instead uses "+" for concatenation, producing "73").
C and C++ will not do what you intend there ("7" + 3 is actually pointer arithmetic), while Java will concatenate it into "73" instead. Type safety prevents these kinds of bugs by making the type requirement explicit.
Type safety is very useful. The solution to the above "7" + 3 is to convert the string to an integer explicitly (so that the sum equals 10), for example with atoi("7") + 3 in C, rather than relying on an implicit conversion.
Try this explanation on...
TypeSafe means that variables are statically checked for appropriate assignment at compile time. For example, consider a string or an integer. These two different data types cannot be cross-assigned (i.e., you can't assign an integer to a string, nor can you assign a string to an integer).
For non-typesafe behavior, consider this:
object x = 89;
int y;
if you attempt to do this:
y = x;
the compiler throws an error that says it can't convert a System.Object to an Integer. You need to do that explicitly. One way would be:
y = Convert.ToInt32( x );
The assignment above is not typesafe. A typesafe assignment is one where the types can be directly assigned to each other.
Non-typesafe collections abound in ASP.NET (e.g., the application, session, and viewstate collections). The good news about these collections is that (setting aside multiple-server state management considerations) you can put pretty much any data type in any of the three collections. The bad news: because these collections aren't typesafe, you'll need to cast the values appropriately when you fetch them back out.
For example:
Session[ "x" ] = 34;
works fine. But to assign the integer value back, you'll need to:
int i = Convert.ToInt32( Session[ "x" ] );
Read about generics for ways that facility helps you easily implement typesafe collections.
C# is a typesafe language but watch for articles about C# 4.0; interesting dynamic possibilities loom (is it a good thing that C# is essentially getting Option Strict: Off... we'll see).
Type-safe code is code that accesses only the memory locations it is authorized to access, and only in well-defined, allowable ways.
Type-safe code cannot perform an operation on an object that is invalid for that object. The C# and VB.NET language compilers always produce type-safe code, which is verified to be type-safe during JIT compilation.
Type-safe means that the set of values that may be assigned to a program variable must fit well-defined and testable criteria. Type-safe variables lead to more robust programs because the algorithms that manipulate the variables can trust that the variable will only take one of a well-defined set of values. Keeping this trust ensures the integrity and quality of the data and the program.
For many variables, the set of values that may be assigned to a variable is defined at the time the program is written. For example, a variable called "colour" may be allowed to take on the values "red", "green", or "blue" and never any other values. For other variables those criteria may change at run-time. For example, a variable called "colour" may only be allowed to take on values in the "name" column of a "Colours" table in a relational database, where "red, "green", and "blue", are three values for "name" in the "Colours" table, but some other part of the computer program may be able to add to that list while the program is running, and the variable can take on the new values after they are added to the Colours table.
Many type-safe languages give the illusion of "type-safety" by insisting on strictly defining types for variables and only allowing a variable to be assigned values of the same "type". There are a couple of problems with this approach. For example, a program may have a variable "yearOfBirth" which is the year a person was born, and it is tempting to type-cast it as a short integer. However, it is not a short integer. This year, it is a number that is less than 2009 and greater than -10000. However, this set grows by 1 every year as the program runs. Making this a "short int" is not adequate. What is needed to make this variable type-safe is a run-time validation function that ensures that the number is always greater than -10000 and less than the next calendar year. There is no compiler that can enforce such criteria because these criteria are always unique characteristics of the problem domain.
Languages that use dynamic typing (or duck-typing, or manifest typing) such as Perl, Python, Ruby, SQLite, and Lua don't have the notion of typed variables. This forces the programmer to write a run-time validation routine for every variable to ensure that it is correct, or endure the consequences of unexplained run-time exceptions. In my experience, programmers in statically typed languages such as C, C++, Java, and C# are often lulled into thinking that statically defined types is all they need to do to get the benefits of type-safety. This is simply not true for many useful computer programs, and it is hard to predict if it is true for any particular computer program.
The long & the short.... Do you want type-safety? If so, then write run-time functions to ensure that when a variable is assigned a value, it conforms to well-defined criteria. The down-side is that it makes domain analysis really difficult for most computer programs because you have to explicitly define the criteria for each program variable.
Type Safety
In modern C++, type safety is very important. Type safety means that you use the types correctly and, therefore, avoid unsafe casts and unions. Every object in C++ is used according to its type and an object needs to be initialized before its use.
Safe Initialization: {}
The compiler protects from information loss during type conversion. For example,
int a{7};    The initialization is OK.
int b{7.5};  The compiler shows an ERROR because of information loss.
Unsafe Initialization: = or ()
The compiler doesn't protect from information loss during type conversion.
int a = 7;    The initialization is OK.
int a = 7.5;  The initialization is OK, but information loss occurs: the actual value of a becomes 7.
int c(7);     The initialization is OK.
int c(7.5);   The initialization is OK, but information loss occurs: the actual value of c becomes 7.