What is cbc_param when we use pkcs11.C_EncryptInit - hsm

We are moving from PTK to Luna, and now in code I need to pass cbc_param; before, the mechanism was the only thing inside that object.
Can somebody explain what is this about?
How does PTK manage cbc_param?
What is the difference with and without cbc_param?
var cbc_param = pkcs11.C_GenerateRandom(session, Buffer.alloc(16));
pkcs11.C_EncryptInit(
    session,
    {
        mechanism: pkcs11js.CKM_AES_CBC,
        parameter: cbc_param
    },
    secretKey
);

According to the PKCS#11 documentation, CBC mode takes a 16-byte initialization vector (IV) as its mechanism parameter. This parameter is mandatory: it is 16 random bytes that you must use both to encrypt and to decrypt in CBC mode. It is fine to store the IV alongside the ciphertext; an IV does not have to be secret, but it must be random.
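For a concrete picture, here is a minimal sketch in pkcs11js of generating an IV, encrypting, and decrypting with that same IV (assumptions: an open session and an AES secretKey handle already exist, CKM_AES_CBC_PAD is used so the token handles padding, and the output buffer sizes are illustrative):

// generate a fresh IV for every encryption and keep it next to the ciphertext
var iv = pkcs11.C_GenerateRandom(session, Buffer.alloc(16));

pkcs11.C_EncryptInit(session, { mechanism: pkcs11js.CKM_AES_CBC_PAD, parameter: iv }, secretKey);
var ciphertext = pkcs11.C_Encrypt(session, plaintext, Buffer.alloc(plaintext.length + 16));

// later, possibly in another process: supply the very same IV to decrypt
pkcs11.C_DecryptInit(session, { mechanism: pkcs11js.CKM_AES_CBC_PAD, parameter: iv }, secretKey);
var decrypted = pkcs11.C_Decrypt(session, ciphertext, Buffer.alloc(ciphertext.length));

With plain CKM_AES_CBC (no padding), the only practical difference is that the plaintext length must be a multiple of the 16-byte block size.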


Type of field 7 in auth response message of google cast protocol v2

The Google Cast protocol v2 has widely been reverse-engineered and is therefore already well-known. A good example of this is the Cast v2 Node library repository on GitHub which includes a detailed description of the cast v2 protocol.
However, whilst writing my own implementation of the protocol in Java using Netty, I realized that the auth response message is way more complex than described in the linked repository.
According to the repository, the message should look like:
message AuthResponse {
    required bytes signature = 1;
    required bytes client_auth_certificate = 2;
    repeated bytes client_ca = 3;
}
However, the client sends 3 more fields. They have the indices 4, 6 and 7.
Field 4 is of wiretype VARINT and stands, as far as I know, for the SignatureAlgorithm the Cast-enabled device (Chromecast Gen2 and Chromecast Audio) has been challenged with.
Field 6 is also of type VARINT, but I have no idea what it stands for. During testing, it always had the value 0. (Maybe it stands for the client_ca certificate used for signing the client_auth_certificate?)
Field 7 is of wiretype LENGTH_DELIMITED. It is definitely not a UTF-8 encoded string, since printing it out results in an unreadable mess. However, the printed sequence contains the complete address that is also used in the client_ca and client_auth_certificate, so I believe it has something to do with them. I've already tested whether this might be a certificate or an RSA key, but both tests were negative. A file containing the raw byte sequence can be found here.
This brings me finally to my question:
Do you know what fields 6 and 7 stand for? Guesses based on the file's structure are also highly appreciated.
As I've found out, the protocol is practically open-source since the Chromium project includes the corresponding .proto-files in order to support streaming on Cast-enabled devices.
The complete protocol can be found here: https://github.com/chromium/chromium/blob/master/components/cast_channel/proto/cast_channel.proto
The structure of the AuthResponse message is therefore
message AuthResponse {
    required bytes signature = 1;
    required bytes client_auth_certificate = 2;
    repeated bytes intermediate_certificate = 3;
    optional SignatureAlgorithm signature_algorithm = 4 [default = RSASSA_PKCS1v15];
    optional bytes sender_nonce = 5;
    optional HashAlgorithm hash_algorithm = 6 [default = SHA1];
    optional bytes crl = 7;
}
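So field 6 is the hash algorithm used for the signature, and field 7 is a certificate revocation list. If you want to inspect captured messages yourself, here is a sketch using protobufjs in Node.js (the .proto text is pared down from cast_channel.proto, the enum values are copied from the Chromium proto, and rawPayload stands for a Buffer holding the AuthResponse bytes you captured):

const protobuf = require('protobufjs');

const { root } = protobuf.parse(`
    syntax = "proto2";
    enum SignatureAlgorithm { UNSPECIFIED = 0; RSASSA_PKCS1v15 = 1; RSASSA_PSS = 2; }
    enum HashAlgorithm { SHA1 = 0; SHA256 = 1; }
    message AuthResponse {
        required bytes signature = 1;
        required bytes client_auth_certificate = 2;
        repeated bytes intermediate_certificate = 3;
        optional SignatureAlgorithm signature_algorithm = 4 [default = RSASSA_PKCS1v15];
        optional bytes sender_nonce = 5;
        optional HashAlgorithm hash_algorithm = 6 [default = SHA1];
        optional bytes crl = 7;
    }`);

const AuthResponse = root.lookupType('AuthResponse');
const msg = AuthResponse.decode(rawPayload);
console.log(msg.hashAlgorithm, msg.crl.length); // fields 6 and 7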

Geocortex "Required parameter is null or empty"

I am currently creating a workflow in which the user captures a point geometry by clicking on the map (this works), then the map zooms to the point's extent (this also works), and then buffers the point (this does not work).
My BufferTask activity gives me this: "Unhandled exception: 'Required parameter is null or empty. Parameter name: Geometry.SpatialReference in activity '1.3: BufferTask'"
This doesn't make sense to me since I have indeed entered a value for this parameter.
sidenote: Geocortex's documentation is virtually non-existent. My inner-cynic is telling me that this is on purpose so that you keep paying them to do things for you.
My 2 best guesses would be either that the buffer spatial reference is null (the spatial reference used to actually create the buffer itself), or that the selectedLocation spatial reference is null for some reason.
For the first, see if there's a value entered for "Buffer Spatial Reference" which you'll see in the properties panel on the right side of the designer when you've got the Buffer Task selected. Note that using Web Mercator as the Buffer Spatial Reference will likely give you an inaccurate buffer due to distance distortion in that projection.
For the second, you could explicitly assign a SpatialReference to selectedLocation.SpatialReference using an assign activity (making sure that the assigned SpatialReference matches the actual spatial reference of your location).
Check to make sure you specified a correct spatial reference, so the workflow module can assess the "length" of the buffer.
In addition, isolate the problem.
Create a clean variable of type SpatialReference and assign it a new SpatialReference(wkid). Then use that variable.

How can I decrypt CakePHP 3 encrypted data right from MySQL?

I have a very specific requirement where some columns need to be encrypted using aes_encrypt / aes_decrypt. We need to encrypt the information at the SQL level using AES so it can be read by another app, or directly from MySQL using a query with aes_encrypt / aes_decrypt.
Our app was developed using CakePHP 3 and database is MySQL 5.6.25.
I found and carefully followed the instructions in this accepted answer: Encyption/Decryption of Form Fields in CakePHP 3
Now the data is being saved encrypted in the database... the problem is that we still need to be able to use aes_decrypt in MySQL to decrypt the information, and it returns NULL.
On CakePHP 3, config/app.php:
'Security' => ['salt' => '1234567890']
Then encrypted using:
Security::encrypt($value, Security::salt());
Data is saved on MySQL but aes_decrypt() returns NULL
SELECT AES_DECRYPT(address_enc, '1234567890') FROM address;
How can I setup CakePHP 3 to correctly encrypt information so I can later decrypt it on MySQL using aes_decrypt() ?
[EDIT]
My MYSQL table:
CREATE TABLE IF NOT EXISTS `address` (
    `id` int(11) NOT NULL,
    `address` varchar(255) DEFAULT NULL,
    `address_enc` blob,
    `comment` varchar(255) DEFAULT NULL,
    `comment_enc` blob
) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
Note: address and comment are just for testings.
Then, on CakePHP, I created a custom database type:
src/Database/Type/CryptedType.php
<?php
namespace App\Database\Type;

use Cake\Database\Driver;
use Cake\Database\Type;
use Cake\Utility\Security;

class CryptedType extends Type
{
    public function toDatabase($value, Driver $driver)
    {
        return Security::encrypt($value, Security::salt());
    }

    public function toPHP($value, Driver $driver)
    {
        if ($value === null) {
            return null;
        }
        return Security::decrypt($value, Security::salt());
    }
}
src/config/bootstrap.php
Register the custom type.
use Cake\Database\Type;
Type::map('crypted', 'App\Database\Type\CryptedType');
src/Model/Table/AddressTable.php
Finally, map the encrypted columns to the registered type, and that's it; from now on everything is handled automatically.
use Cake\Database\Schema\Table as Schema;

class AddressTable extends Table
{
    // ...

    protected function _initializeSchema(Schema $table)
    {
        $table->columnType('address_enc', 'crypted');
        $table->columnType('comment_enc', 'crypted');
        return $table;
    }

    // ...
}
Do you really need to do that?
I'm not going to argue about the pros and cons of storing encrypted data in databases, but whether trying to decrypt at the SQL level is a good idea is a question that should be asked.
So ask yourself whether you really need to do that. Maybe it would be better to implement the decryption at the application level instead; that would make it much easier to replicate exactly what Security::decrypt() does, which is not only decrypting, but also integrity checking.
Just take a look at what Security::decrypt() does internally.
https://github.com/cakephp/cakephp/blob/3.1.7/src/Utility/Security.php#L201
https://github.com/cakephp/cakephp/blob/3.1.7/src/Utility/Crypto/OpenSsl.php#L77
https://github.com/cakephp/cakephp/blob/3.1.7/src/Utility/Crypto/Mcrypt.php#L89
It should be pretty easy to re-implement that in your other application.
Watch out, you may be about to burn your fingers!
I am by no means an encryption expert, so consider the following as just a basic example to get things started, and inform yourself about possible conceptual, and security related problems in particular!
Handling encryption/decryption of data without knowing exactly what you are doing is a very bad idea - I can't stress that enough!
Decrypting data at SQL level
That being said: using the example code from the answer you've linked to, i.e. using Security::encrypt() with Security::salt() as the encryption key, will by default leave you with a value that has been encrypted in AES-256-CBC mode, using an encryption key derived from the salt concatenated with itself (the first 32 characters of its SHA-256 hex representation).
But that's not all: additionally, an HMAC hash and the initialization vector are prepended to the encrypted value, so you do not end up with "plain" encrypted data that you could pass directly to AES_DECRYPT().
So if you wanted to decrypt this at the MySQL level (for whatever reason), then you'd first of all have to set the proper block encryption mode
SET block_encryption_mode = 'aes-256-cbc';
strip off the HMAC hash (the first 64 bytes) and the initialization vector (the following 16 bytes)
SUBSTRING(`column` FROM 81)
and use the first 32 characters of hash('sha256', Security::salt() . Security::salt()) as the encryption key, with the initialization vector taken from the encrypted value for decryption
SUBSTRING(`column`, 65, 16)
So in the end you'd be left with something like
SET block_encryption_mode = 'aes-256-cbc';
SELECT
    AES_DECRYPT(
        SUBSTRING(`column` FROM 81),   -- the actual encrypted data
        'the-encryption-key-goes-here',
        SUBSTRING(`column`, 65, 16)    -- the initialization vector
    )
FROM table;
Finally, you may also want to cast the value (CAST(AES_DECRYPT(...) AS CHAR)), and remove possible zero padding (I'm not sure whether AES_DECRYPT() does that automatically).
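To produce the value for 'the-encryption-key-goes-here' outside of PHP, the key derivation can be sketched in Node.js (assuming the default setup described above, with the salt '1234567890' from the question):

const crypto = require('crypto');

const salt = '1234567890';
// the first 32 hex characters of SHA-256(salt . salt) are used verbatim as the AES-256 key
const key = crypto.createHash('sha256').update(salt + salt).digest('hex').slice(0, 32);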
Data integrity checks
It should be noted that the HMAC hash that is prepended to the encrypted value has a specific purpose: it is used to ensure integrity, so by just dropping it, you'll lose that. In order to keep it, you'd have to implement a (timing-attack safe) HMAC-SHA256 generation/comparison at the SQL level too. This leads us back to the initial question: do you really need to decrypt at the SQL level?
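Putting the pieces together, re-implementing Security::decrypt() in another application might look roughly like this Node.js sketch (it assumes the default AES-256-CBC format and key derivation described above; treat it as a starting point, not vetted crypto code):

const crypto = require('crypto');

function cakeDecrypt(blob, salt) {
    // key derivation: first 32 hex characters of SHA-256(salt . salt)
    const key = crypto.createHash('sha256').update(salt + salt).digest('hex').slice(0, 32);

    const hmac = blob.slice(0, 64);    // hex-encoded HMAC-SHA256 of everything that follows
    const rest = blob.slice(64);       // 16-byte IV followed by the ciphertext
    const expected = Buffer.from(crypto.createHmac('sha256', key).update(rest).digest('hex'));
    if (!crypto.timingSafeEqual(hmac, expected)) {
        return null;                   // integrity check failed, like Security::decrypt()
    }

    const iv = rest.slice(0, 16);
    const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
    return Buffer.concat([decipher.update(rest.slice(16)), decipher.final()]).toString('utf8');
}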
[Solution] The solution for this particular requirement (we need to encrypt the information at the SQL level using AES so it can be read by another app or directly from MySQL using a query with aes_encrypt / aes_decrypt) was to create a custom database type in CakePHP; then, instead of using CakePHP's encryption method, we implemented PHP's Mcrypt.
Now the information is saved to the database from our CakePHP 3 app, and the data can be read at the MySQL/phpMyAdmin level using aes_decrypt and aes_encrypt.
FOR ANYONE STRUGGLING TO DECRYPT WITH MYSQL: This generally applies to anyone using symmetric AES encryption/decryption, specifically when trying to decrypt with AES_DECRYPT.
For instance, if you are using aes-128-ecb and your encrypted data is 16 bytes long with no padding, you need to append padding bytes to your encrypted data before trying to decrypt, because MySQL expects PKCS7 padding. With PKCS7, a full block of padding is 16 bytes of 0x10, i.e. 0x10101010101010101010101010101010. We take the left 16 bytes of the encrypted pad bytes because encrypting them (with padding enabled) produces 32 bytes, and we only need the first 16.
aes_decrypt(concat(<ENCRYPTED_BYTES>, left(aes_encrypt(<PAD BYTES>, <KEY>), 16)), <KEY>)
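A sketch of that padding trick in Node.js, for a 16-byte ciphertext that was produced without padding (the key and variable names are examples only):

const crypto = require('crypto');

const key = Buffer.from('0123456789abcdef'); // 16-byte AES-128 key, example only

// rawCiphertext: a 16-byte block that was encrypted elsewhere without padding
function padForMySQL(rawCiphertext) {
    const cipher = crypto.createCipheriv('aes-128-ecb', key, null);
    // encrypting one full block of 0x10 bytes yields 32 bytes (the block plus a pad block);
    // the first 16 bytes are exactly the encrypted PKCS7 pad block MySQL expects to see
    const padBlock = Buffer.concat([cipher.update(Buffer.alloc(16, 0x10)), cipher.final()]).slice(0, 16);
    return Buffer.concat([rawCiphertext, padBlock]);
}

AES_DECRYPT can then strip the padding itself and return the original 16 bytes.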

Is currying just a way to avoid inheritance?

So my understanding of currying (based on SO questions) is that it lets you partially set parameters of a function and return a "truncated" function as a result.
If you have a big hairy function takes 10 parameters and looks like
function (location, type, gender, jumpShot%, SSN, vegetarian, salary) {
//weird stuff
}
and you want a "subset" function that will let you deal with presets for all but the jumpShot%, shouldn't you just break out a class that inherits from the original function?
I suppose what I'm looking for is a use case for this pattern. Thanks!
Currying has many uses. From simply specifying default parameters for functions you use often to returning specialized functions that serve a specific purpose.
But let me give you this example:
function log_message(log_level, message){}
log_error = curry(log_message, ERROR)
log_warning = curry(log_message, WARNING)
log_message(WARNING, 'This would be a warning')
log_warning('This would also be a warning')
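A minimal curry helper to make the logging example above runnable in JavaScript (the helper, the WARNING constant and the console output are assumptions; only the call shapes come from the pseudo-code):

// partially apply the leading arguments of fn
function curry(fn, ...preset) {
    return (...rest) => fn(...preset, ...rest);
}

const WARNING = 'WARNING';

function log_message(log_level, message) {
    console.log('[' + log_level + '] ' + message);
}

const log_warning = curry(log_message, WARNING);
log_warning('This would also be a warning'); // prints: [WARNING] This would also be a warning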
In JavaScript I use currying on callback functions, because they cannot be passed any parameters at the point where they are called (by the caller).
So something like:
...
var test = "something specifically set in this function";
onSuccess: this.returnCallback.curry(test).bind(this),
// This will fail (because this would pass the var, and it could change
// should the function be run elsewhere)
onSuccess: this.returnCallback.bind(this, test),
...
// this has 2 params, but in the ajax callback only 'ajaxResponse' is passed,
// so I have to use curry
returnCallback: function(thePassedVar, ajaxResponse){
    // now in here I can use 'thePassedVar'
}
I'm not sure if that was detailed or coherent enough... but currying basically lets you 'prefill' parameters and return a bare function reference that already has data filled in (instead of requiring you to fill in that info at some other point).
When programming in a functional style, you often bind arguments to generate new functions (in this example, predicates) from old. Pseudo-code:
filter(bind_second(greater_than, 5), some_list)
might be equivalent to:
filter({x : x > 5}, some_list)
where {x : x > 5} is an anonymous function definition. That is, it constructs a list of all values from some_list which are greater than 5.
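In JavaScript the same pseudo-code can be spelled out like this (bind_second is an assumed helper, not a standard function):

const greater_than = (x, y) => x > y;
const bind_second = (fn, y) => (x) => fn(x, y);

[3, 5, 7, 9].filter(bind_second(greater_than, 5)); // [7, 9]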
In many cases, the parameters to be omitted will not be known at compile time, but rather at run time. Further, there's no limit to the number of curried delegates that may exist for a given function. The following is adapted from a real-world program.
I have a system in which I send out command packets to a remote machine and receive back response packets. Every command packet has an index number, and each reply bears the index number of the command to which it is a response. A typical command, translated into English, might be "give me 128 bytes starting at address 0x12300". A typical response might be "Successful." along with 128 bytes of data.
To handle communication, I have a routine which accepts a number of command packets, each with a delegate. As each response is received, the corresponding delegate will be run on the received data. The delegate associated with the command above would be something like "Confirm that I got a 'success' with 128 bytes of data, and if so, store them into my buffer at address 0x12300". Note that multiple packets may be outstanding at any given time; the curried address parameter is necessary for the routine to know where the incoming data should go. Even if I wanted to write a "store data to buffer" routine which didn't require an address parameter, it would have no way of knowing where the incoming data should go.
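A rough JavaScript sketch of that pattern (sendCommand and the packet/response shapes are hypothetical, purely to illustrate the curried address):

const buffer = Buffer.alloc(0x20000); // local mirror of the remote memory

// curry the address/length into a per-command completion handler
function makeReadHandler(address, length) {
    return (response) => {
        if (response.status === 'success' && response.data.length === length) {
            response.data.copy(buffer, address); // each outstanding command knows its own target
        }
    };
}

sendCommand({ op: 'read', address: 0x12300, length: 128 },
            makeReadHandler(0x12300, 128));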

What is Type-safe?

What does "type-safe" mean?
Type safety means that the compiler will validate types while compiling, and throw an error if you try to assign the wrong type to a variable.
Some simple examples:
// Fails, Trying to put an integer in a string
String one = 1;
// Also fails.
int foo = "bar";
This also applies to method arguments, since you are passing explicit types to them:
int AddTwoNumbers(int a, int b)
{
    return a + b;
}
If I tried to call that using:
int Sum = AddTwoNumbers(5, "5");
The compiler would throw an error, because I am passing a string ("5"), and it is expecting an integer.
In a loosely typed language, such as javascript, I can do the following:
function AddTwoNumbers(a, b)
{
    return a + b;
}
if I call it like this:
Sum = AddTwoNumbers(5, "5");
JavaScript automatically converts the 5 to a string and returns "55". This is due to JavaScript using the + sign for string concatenation. To make it type-aware, you would need to do something like:
function AddTwoNumbers(a, b)
{
    return Number(a) + Number(b);
}
Or, possibly:
function AddOnlyTwoNumbers(a, b)
{
    if (isNaN(a) || isNaN(b))
        return false;
    return Number(a) + Number(b);
}
if I call it like this:
Sum = AddTwoNumbers(5, " dogs");
JavaScript automatically converts the 5 to a string and appends them, returning "5 dogs".
Not all dynamic languages are as forgiving as JavaScript (in fact, a dynamic language does not implicitly imply a loosely typed language; see Python); some of them will actually give you a runtime error on invalid type casting.
While it's convenient, it opens you up to a lot of errors that can be easily missed and only identified by testing the running program. Personally, I prefer to have my compiler tell me if I made that mistake.
Now, back to C#...
C# supports a language feature called covariance; basically this means that you can substitute a derived type where a base type is expected without causing an error, for example:
public class Foo : Bar
{
}
Here, I created a new class (Foo) that subclasses Bar. I can now create a method:
void DoSomething(Bar myBar)
And call it using either a Foo or a Bar as the argument; both will work without causing an error. This works because C# knows that any child class of Bar will implement the interface of Bar.
However, you cannot do the inverse:
void DoSomething(Foo myFoo)
In this situation, I cannot pass Bar to this method, because the compiler does not know that Bar implements Foo's interface. This is because a child class can (and usually will) be much different than the parent class.
Of course, now I've gone way off the deep end and beyond the scope of the original question, but it's all good stuff to know :)
Type-safety should not be confused with static / dynamic typing or strong / weak typing.
A type-safe language is one where the only operations that one can execute on data are the ones that are condoned by the data's type. That is, if your data is of type X and X doesn't support operation y, then the language will not allow you to execute y(X).
This definition doesn't set rules on when this is checked. It can be at compile time (static typing) or at runtime (dynamic typing), typically through exceptions. It can be a bit of both: some statically typed languages allow you to cast data from one type to another, and the validity of casts must be checked at runtime (imagine that you're trying to cast an Object to a Consumer - the compiler has no way of knowing whether it's acceptable or not).
Type-safety does not necessarily mean strongly typed, either - some languages are notoriously weakly typed, but still arguably type safe. Take Javascript, for example: its type system is as weak as they come, but still strictly defined. It allows automatic casting of data (say, strings to ints), but within well defined rules. There is to my knowledge no case where a Javascript program will behave in an undefined fashion, and if you're clever enough (I'm not), you should be able to predict what will happen when reading Javascript code.
An example of a type-unsafe programming language is C: reading / writing an array value outside of the array's bounds has an undefined behaviour by specification. It's impossible to predict what will happen. C is a language that has a type system, but is not type safe.
Type safety is not just a compile time constraint, but a run time constraint. I feel even after all this time, we can add further clarity to this.
There are 2 main issues related to type safety. Memory** and data type (with its corresponding operations).
Memory**
A char typically requires 1 byte per character, or 8 bits (it depends on the language; Java and C# store Unicode chars, which require 16 bits).
An int requires 4 bytes, or 32 bits (usually).
Visually:
char: |-|-|-|-|-|-|-|-|
int : |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
A type safe language does not allow an int to be inserted into a char at run-time (this should throw some kind of class cast or out of memory exception). However, in a type unsafe language, you would overwrite existing data in 3 more adjacent bytes of memory.
int >> char:
|-|-|-|-|-|-|-|-| |?|?|?|?|?|?|?|?| |?|?|?|?|?|?|?|?| |?|?|?|?|?|?|?|?|
In the above case, the 3 bytes to the right are overwritten, so any pointers to that memory (say 3 consecutive chars) which expect to get a predictable char value will now have garbage. This causes undefined behavior in your program (or worse, possibly in other programs depending on how the OS allocates memory - very unlikely these days).
** While this first issue is not technically about data type, type safe languages address it inherently and it visually describes the issue to those unaware of how memory allocation "looks".
Data Type
The more subtle and direct type issue is where two data types use the same memory allocation. Take an int vs. an unsigned int. Both are 32 bits. (It could just as easily be a char[4] and an int, but the more common issue is uint vs. int.)
|-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
|-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-| |-|-|-|-|-|-|-|-|
A type-unsafe language allows the programmer to reference a properly allocated span of 32 bits, but when the value of an unsigned int is read into the space of an int (or vice versa), we again have undefined behavior. Imagine the problems this could cause in a banking program:
"Dude! I overdrafted $30 and now I have $65,506 left!!"
...'course, banking programs use much larger data types. ;) LOL!
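JavaScript typed arrays make it easy to demonstrate that exact reinterpretation safely, since they let two views of different types share the same bits (the $65,506 above is 16-bit; the numbers below match it):

const buf = new ArrayBuffer(2);       // one 16-bit slot
new Int16Array(buf)[0] = -30;         // write the balance as a signed int
console.log(new Uint16Array(buf)[0]); // 65506: the same bits read as unsigned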
As others have already pointed out, the next issue is computational operations on types. That has already been sufficiently covered.
Speed vs Safety
Most programmers today never need to worry about such things unless they are using something like C or C++. Both of these languages allow programmers to easily violate type safety at run time (direct memory referencing) despite the compilers' best efforts to minimize the risk. HOWEVER, this is not all bad.
One reason these languages are so computationally fast is they are not burdened by verifying type compatibility during run time operations like, for example, Java. They assume the developer is a good rational being who won't add a string and an int together and for that, the developer is rewarded with speed/efficiency.
Many answers here conflate type safety with static typing and dynamic typing. A dynamically typed language (like Smalltalk) can be type-safe as well.
A short answer: a language is considered type-safe if no operation leads to undefined behavior. Many consider the requirement of explicit type conversions necessary for a language to be strictly typed, as automatic conversions can sometimes lead to well-defined but unexpected/unintuitive behaviors.
A programming language that is 'type-safe' means the following things:
You can't read from uninitialized variables
You can't index arrays beyond their bounds
You can't perform unchecked type casts
An explanation from a liberal arts major, not a comp sci major:
When people say that a language or language feature is type safe, they mean that the language will help prevent you from, for example, passing something that isn't an integer to some logic that expects an integer.
For example, in C#, I define a function as:
void foo(int arg)
The compiler will then stop me from doing this:
// call foo
foo("hello world")
In other languages, the compiler would not stop me (or there is no compiler...), so the string would be passed to the logic and then probably something bad will happen.
Type safe languages try to catch more at "compile time".
On the down side, with type-safe languages, when you have a string like "123" and you want to operate on it like an int, you have to write more code to convert the string to an int; and when you have an int like 123 and want to use it in a message like "The answer is 123", you have to write more code to convert/cast it to a string.
To get a better understanding do watch the below video which demonstrates code in type safe language (C#) and NOT type safe language ( javascript).
http://www.youtube.com/watch?v=Rlw_njQhkxw
Now for the long text.
Type safety means preventing type errors. A type error occurs when a value of one data type is assigned to another type unknowingly and we get undesirable results.
For instance, JavaScript is NOT a type-safe language. In the code below, "num" is a numeric variable and "str" is a string. JavaScript allows me to do "num + str"; now guess: will it do arithmetic or concatenation?
For the code below the result is "55", but the important point is the confusion about what kind of operation it will perform.
This is happening because JavaScript is not a type-safe language. It allows setting one type of data to another type without restrictions.
<script>
    var num = 5;       // numeric
    var str = "5";     // string
    var z = num + str; // arithmetic or concat ????
    alert(z);          // displays "55"
</script>
C# is a type-safe language. It does not allow one data type to be assigned to another data type without an explicit conversion, and the compiler rejects the "+" operator on incompatible data types.
Concept:
To put it very simply, type safety, as the words suggest, makes sure that the type of a variable is safe:
no wrong data types, e.g. you can't save or initialize a variable of string type with an integer
out-of-bound indexes are not accessible
only the specific memory location is allowed
so it is all about the safety of the types of your storage in terms of variables.
Type-safe means that programmatically, the type of data for a variable, return value, or argument must fit within a certain criteria.
In practice, this means that 7 (an integer type) is different from "7" (a quoted character of string type).
PHP, JavaScript and other dynamic scripting languages are usually weakly typed, in that they will convert a (string) "7" to an (integer) 7 if you try to add "7" + 3, although sometimes you have to do this explicitly (and JavaScript uses the "+" character for concatenation).
C/C++/Java will not understand that, or will concatenate the result into "73" instead. Type safety prevents these types of bugs in code by making the type requirement explicit.
Type safety is very useful. The solution to the above "7" + 3 would be to type-cast: (int)"7" + 3 (equals 10).
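In JavaScript terms, matching the examples earlier in this thread, the explicit conversion looks like:

"7" + 3;         // "73"  (string concatenation)
Number("7") + 3; // 10    (arithmetic after an explicit conversion)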
Try this explanation on...
TypeSafe means that variables are statically checked for appropriate assignment at compile time. For example, consider a string or an integer. These two different data types cannot be cross-assigned (i.e., you can't assign an integer to a string, nor can you assign a string to an integer).
For non-typesafe behavior, consider this:
object x = 89;
int y;
if you attempt to do this:
y = x;
the compiler throws an error that says it can't convert a System.Object to an Integer. You need to do that explicitly. One way would be:
y = Convert.ToInt32( x );
The assignment above is not typesafe. A typesafe assignment is where the types can be directly assigned to each other.
Non typesafe collections abound in ASP.NET (eg, the application, session, and viewstate collections). The good news about these collections is that (minimizing multiple server state management considerations) you can put pretty much any data type in any of the three collections. The bad news: because these collections aren't typesafe, you'll need to cast the values appropriately when you fetch them back out.
For example:
Session[ "x" ] = 34;
works fine. But to assign the integer value back, you'll need to:
int i = Convert.ToInt32( Session[ "x" ] );
Read about generics for ways that facility helps you easily implement typesafe collections.
C# is a typesafe language but watch for articles about C# 4.0; interesting dynamic possibilities loom (is it a good thing that C# is essentially getting Option Strict: Off... we'll see).
Type-Safe is code that accesses only the memory locations it is authorized to access, and only in well-defined, allowable ways.
Type-safe code cannot perform an operation on an object that is invalid for that object. The C# and VB.NET language compilers always produce type-safe code, which is verified to be type-safe during JIT compilation.
Type-safe means that the set of values that may be assigned to a program variable must fit well-defined and testable criteria. Type-safe variables lead to more robust programs because the algorithms that manipulate the variables can trust that the variable will only take one of a well-defined set of values. Keeping this trust ensures the integrity and quality of the data and the program.
For many variables, the set of values that may be assigned to a variable is defined at the time the program is written. For example, a variable called "colour" may be allowed to take on the values "red", "green", or "blue" and never any other values. For other variables those criteria may change at run-time. For example, a variable called "colour" may only be allowed to take on values in the "name" column of a "Colours" table in a relational database, where "red", "green", and "blue" are three values for "name" in the "Colours" table, but some other part of the computer program may be able to add to that list while the program is running, and the variable can take on the new values after they are added to the Colours table.
Many type-safe languages give the illusion of "type-safety" by insisting on strictly defining types for variables and only allowing a variable to be assigned values of the same "type". There are a couple of problems with this approach. For example, a program may have a variable "yearOfBirth" which is the year a person was born, and it is tempting to type-cast it as a short integer. However, it is not a short integer. This year, it is a number that is less than 2009 and greater than -10000. However, this set grows by 1 every year as the program runs. Making this a "short int" is not adequate. What is needed to make this variable type-safe is a run-time validation function that ensures that the number is always greater than -10000 and less than the next calendar year. There is no compiler that can enforce such criteria because these criteria are always unique characteristics of the problem domain.
Languages that use dynamic typing (or duck-typing, or manifest typing) such as Perl, Python, Ruby, SQLite, and Lua don't have the notion of typed variables. This forces the programmer to write a run-time validation routine for every variable to ensure that it is correct, or endure the consequences of unexplained run-time exceptions. In my experience, programmers in statically typed languages such as C, C++, Java, and C# are often lulled into thinking that statically defined types is all they need to do to get the benefits of type-safety. This is simply not true for many useful computer programs, and it is hard to predict if it is true for any particular computer program.
The long & the short.... Do you want type-safety? If so, then write run-time functions to ensure that when a variable is assigned a value, it conforms to well-defined criteria. The down-side is that it makes domain analysis really difficult for most computer programs because you have to explicitly define the criteria for each program variable.
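For example, a run-time check for the yearOfBirth variable described above might be sketched like this in JavaScript (the function name and error type are illustrative):

function validateYearOfBirth(value) {
    const nextYear = new Date().getFullYear() + 1;
    // the well-defined, testable criteria: greater than -10000 and less than the next calendar year
    if (!Number.isInteger(value) || value <= -10000 || value >= nextYear) {
        throw new RangeError('yearOfBirth must be an integer in (-10000, ' + nextYear + ')');
    }
    return value;
}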
Type Safety
In modern C++, type safety is very important. Type safety means that you use the types correctly and, therefore, avoid unsafe casts and unions. Every object in C++ is used according to its type and an object needs to be initialized before its use.
Safe Initialization: {}
The compiler protects you from information loss during type conversion. For example:
int a{7};    // the initialization is OK
int b{7.5};  // compiler ERROR because of information loss
Unsafe Initialization: = or ()
The compiler doesn't protect you from information loss during type conversion.
int a = 7;    // the initialization is OK
int a = 7.5;  // OK, but information loss occurs; the actual value of a becomes 7
int c(7);     // the initialization is OK
int c(7.5);   // OK, but information loss occurs; the actual value of c becomes 7