Does anyone have a pros-and-cons comparison of these encryption algorithms?
Use AES.
In more detail:
DES is the old "data encryption standard" from the seventies. Its key size is too short for proper security (56 effective bits; this can be brute-forced, as has been demonstrated more than ten years ago). Also, DES uses 64-bit blocks, which raises some potential issues when encrypting several gigabytes of data with the same key (a gigabyte is not that big nowadays).
3DES is a trick to reuse DES implementations, by cascading three instances of DES (with distinct keys). 3DES is believed to be secure up to at least 2^112 security (which is quite a lot, and quite far in the realm of "not breakable with today's technology"). But it is slow, especially in software (DES was designed for efficient hardware implementation, but it sucks in software; and 3DES sucks three times as much).
Blowfish is a block cipher proposed by Bruce Schneier, and deployed in some software. Blowfish can use huge keys and is believed secure, except with regard to its block size, which is 64 bits, just like DES and 3DES. Blowfish is efficient in software, at least on some software platforms (it uses key-dependent lookup tables, hence performance depends on how the platform handles memory and caches).
AES is the successor of DES as the standard symmetric encryption algorithm for US federal organizations (and as the standard for pretty much everybody else, too). AES accepts keys of 128, 192 or 256 bits (128 bits are already unbreakable in practice), uses 128-bit blocks (so no issue there), and is efficient in both software and hardware. It was selected through an open competition involving hundreds of cryptographers over several years. Basically, you cannot do better than that.
So, when in doubt, use AES.
Note that a block cipher is a box which encrypts "blocks" (128-bit chunks of data with AES). When encrypting a "message" which may be longer than 128 bits, the message must be split into blocks, and the actual way you do the split is called the mode of operation or "chaining". The naive mode (simple split) is called ECB and has issues. Using a block cipher properly is not easy, and it is more important than selecting between, e.g., AES or 3DES.
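For illustration, here is a minimal sketch of encrypting with AES in an authenticated mode (AES-GCM) using the Python "cryptography" package; the library choice is my assumption, and the point is simply "use a proper mode, never raw ECB":

    # Minimal sketch: AES-256-GCM with the Python "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # random 256-bit AES key
    nonce = os.urandom(12)                      # 96-bit nonce; must never repeat for the same key
    plaintext = b"attack at dawn"

    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)     # None = no associated data
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext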
All of these schemes, except AES and Blowfish, have known vulnerabilities and should not be used.
However, Blowfish has since been succeeded by Twofish.
The encryption methods described are symmetric key block ciphers.
Data Encryption Standard (DES) is the predecessor, encrypting data in 64-bit blocks using a 56-bit key. In its simplest mode (ECB), each block is encrypted in isolation, which is a security vulnerability.
Triple DES extends the key length of DES by applying three DES operations to each block: an encryption with key 1, a decryption with key 2, and an encryption with key 3 (the "EDE" construction). The keys may be independent, or key 1 and key 3 may be identical (two-key 3DES).
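To make the EDE construction concrete, here is a minimal sketch built from single-DES block operations; PyCryptodome is assumed as the library here:

    # Sketch of the 3DES "EDE" construction on a single 64-bit block,
    # composed from three single-DES operations (PyCryptodome assumed).
    from Crypto.Cipher import DES
    from Crypto.Random import get_random_bytes

    k1, k2, k3 = (get_random_bytes(8) for _ in range(3))
    block = b"8 bytes!"                                   # exactly one 64-bit block

    def ede_encrypt(block, k1, k2, k3):
        # encrypt with k1, decrypt with k2, encrypt with k3
        step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
        step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
        return DES.new(k3, DES.MODE_ECB).encrypt(step2)

    def ede_decrypt(block, k1, k2, k3):
        # the inverse: decrypt with k3, encrypt with k2, decrypt with k1
        step1 = DES.new(k3, DES.MODE_ECB).decrypt(block)
        step2 = DES.new(k2, DES.MODE_ECB).encrypt(step1)
        return DES.new(k1, DES.MODE_ECB).decrypt(step2)

    ct = ede_encrypt(block, k1, k2, k3)
    assert ede_decrypt(ct, k1, k2, k3) == block
    # With k1 == k2 == k3 the middle steps cancel out and this degenerates to
    # single DES, which is what keeps 3DES backward compatible.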
DES and 3DES are usually encountered when interfacing with legacy commercial products and services.
AES is considered the successor and modern standard. http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
I believe the use of Blowfish is discouraged.
It is highly recommended that you do not attempt to implement your own cryptography and instead use a high-level implementation such as GPG for data at rest or SSL/TLS for data in transit. Here is an excellent and sobering video on encryption vulnerabilities http://rdist.root.org/2009/08/06/google-tech-talk-on-common-crypto-flaws/
AES is a symmetric cryptographic algorithm, while RSA is an asymmetric (public-key) cryptographic algorithm. Encryption and decryption are done with a single key in AES, while RSA uses separate keys (a public and a private key). The strength of a 128-bit AES key is roughly comparable to that of a 3072-bit RSA key by NIST's estimate (some published estimates put the figure closer to 2600 bits).
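To make the single-key vs. key-pair distinction concrete, here is a minimal sketch with the Python "cryptography" package (the library choice is an assumption; the 3072-bit figure follows the NIST equivalence mentioned above):

    # One shared key for AES vs. a public/private key pair for RSA.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa

    aes_key = os.urandom(16)                 # a single 128-bit key, shared by both parties

    rsa_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    rsa_public = rsa_private.public_key()    # published freely; the private half stays secret

    print(len(aes_key) * 8)                  # 128 bits
    print(rsa_private.key_size)              # 3072 bits, for roughly comparable strength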
TripleDESCryptoServiceProvider is a safe method, but it is too slow. MSDN itself advises you to use AES rather than TripleDES. Please check the link below:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.tripledescryptoserviceprovider.aspx
You will see this note in the Remarks section:
Note: A newer symmetric encryption algorithm, Advanced Encryption Standard (AES), is available. Consider using the AesCryptoServiceProvider class instead of the TripleDESCryptoServiceProvider class. Use TripleDESCryptoServiceProvider only for compatibility with legacy applications and data.
Good luck
In reply to "DES is the old 'data encryption standard' from the seventies" and "All of these schemes, except AES and Blowfish, have known vulnerabilities and should not be used":
All of them can actually be securely used if wrapped.
Here is an example of AES wrapping.
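For reference, a minimal sketch of AES key wrap (RFC 3394) with the Python "cryptography" package; interpreting "wrapping" as key wrapping, and the library choice, are assumptions on my part:

    # AES key wrap (RFC 3394): protect one key under another key.
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kek = os.urandom(32)         # key-encryption key (AES-256)
    data_key = os.urandom(16)    # the key being protected, e.g. a data-encryption key

    wrapped = aes_key_wrap(kek, data_key)
    assert aes_key_unwrap(kek, wrapped) == data_key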
                DES                      AES
Developed       1977                     2000
Key Length      56 bits                  128, 192, or 256 bits
Cipher Type     Symmetric                Symmetric
Block Size      64 bits                  128 bits
Security        Inadequate               Secure
Performance     Fast                     Slow

(In practice, though, AES is usually faster than DES in software, and much faster on CPUs with dedicated AES instructions, so take the Performance row with a grain of salt.)
AES is the currently accepted standard algorithm to use (hence the name Advanced Encryption Standard).
The rest are not.
Related
I notice there is a macro uint4korr in the MySQL/MariaDB source code.
include/byte_order_generic.h
I understand only that this macro is related to byte order, but I looked for comments about it and found nothing. I don't know the meaning of the suffix korr. What does the abbreviation stand for?
I also want to know why the code is implemented like this, and what the effects are on different platforms.
"korr" is an abbreviation for "Korrekt" of the phonic and meaning equivalent of the English word "Correct".
The purpose of the code is to provide a uniform byte order for the storage and communication components, so that storage files are portable between architectures of different endianness without conversion, and so that neither side of the client/server communication needs to know the other's endianness.
I believe that the related Swedish verb is korrigera, to correct. uint4korr() is kind of the opposite of ntohl(), because it will swap the bytes on a big-endian architecture and not little-endian.
Somewhat related to this, the InnoDB storage engine stores its data in big-endian byte order, so that a simple memcmp() can be used for comparing keys. (It also inverts the sign bit of signed integers due to this.) The InnoDB function mach_read_from_4() is basically ntohl() combined with a 32-bit load via an unaligned pointer. Recent versions of GCC and clang impress me by translating that into the IA-32 or AMD64 instructions mov and bswap or simply movbe.
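A small illustration of both points (Python used here only for demonstration):

    import struct

    raw = bytes([0x78, 0x56, 0x34, 0x12])

    # uint4korr-style: read 4 bytes as a little-endian uint32, regardless of
    # the host CPU's endianness.
    print(hex(struct.unpack("<I", raw)[0]))          # 0x12345678 on every platform

    # InnoDB-style: store integers big-endian so that plain byte comparison
    # (memcmp) orders them the same way as numeric comparison.
    a, b = 0x0100, 0x00FF
    assert (struct.pack(">I", a) > struct.pack(">I", b)) == (a > b)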
I am modeling an algorithm-to-hardware mapping with Gecode, and the standard Gecode::Int::Limits is too small, not least because I want to target systems with more than 2^32 bytes of memory.
Is there a way to use arbitrary-precision arithmetic, or at least 64-bit integers, with Gecode?
I know that Gecode can be built with MPIR or GMP support, but it seems those are only used for trigonometric operations?
If I understand the Gecode documentation properly:
The totally available number of bits for all variable implementation types used by Gecode is 32
So it seems there is no way to model with values bigger than 2147483646, but I still think I'm fundamentally missing something, since the ability to handle larger values seems almost obligatory for a modeling toolkit/library. In particular, Wikipedia says that:
ECLiPSe interfaces to external solvers, in particular ... and the Gecode solver library
but the ECLiPSe tutorial states that:
Numbers in ECLiPSe come in several flavors:
Integers can be as large as fits into memory, e.g.:
123 0 -27 393423874981724
I cannot understand how a mere interface can support numbers bigger than the underlying library does.
I've been looking for the answer on Google and can't seem to find it. Binary is represented in bytes/octets, 8 bits. So the character a (I think) is 01100001, and the word hey is
01101000
01100101
01111001
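(As an aside, these bit patterns are easy to verify, e.g. with a quick Python check:)

    # Print the 8-bit ASCII patterns for a few characters.
    for ch in "ahey":
        print(ch, format(ord(ch), "08b"))
    # a 01100001
    # h 01101000
    # e 01100101
    # y 01111001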
So my question is, why 8? Is it just a good number for computers to work with? And I've noticed that 32-bit / 64-bit computers are all multiples of eight... so does this all have to do with how the first computers were made?
Sorry if this question doesn't meet the Q/A standards... it's not code related, but I can't think of anywhere else to ask it.
The answer is really "historical reasons".
Computer memory must be addressable at some level. When you ask your RAM for information, you need to specify which information you want - and it will return that to you. In theory, one could produce bit-addressable memory: you ask for one bit, you get one bit back.
But that wouldn't be very efficient, since the interface connecting the processor to the memory needs to be able to convey enough information to specify which address it wants. The smaller the granularity of access, the more wires you need (or the more pushes along the same number of wires) before you've given an accurate enough address for retrieval. Also, returning one bit multiple times is less efficient than returning multiple bits one time (side note: true in general. This is a serial-vs-parallel debate, and due to reduced system complexity and physics, serial interfaces can generally run faster. But overall, more bits at once is more efficient).
Secondly, the total amount of memory in the system is limited in part by the size of the smallest addressable block, since unless you used variably-sized memory addresses, you only have a finite number of addresses to work with - but each address represents a number of bits which you get to choose. So a system with logically byte-addressable memory can hold eight times the RAM of one with logically bit-addressable memory.
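As a rough worked example of that trade-off (the 32-bit address width here is just an illustrative assumption):

    # With a fixed 32-bit address, capacity depends on what one address refers to.
    addresses = 2 ** 32

    byte_addressable_bytes = addresses          # one byte per address
    bit_addressable_bytes = addresses // 8      # one bit per address

    print(byte_addressable_bytes / 2**30)       # 4.0  -> 4 GiB of addressable RAM
    print(bit_addressable_bytes / 2**30)        # 0.5  -> 512 MiB of addressable RAM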
So, we use memory logically addressable at less fine levels (although physically no RAM chip will return just one byte). Only powers of two really make sense for this, and historically the level of access has been a byte. It could just as easily be a nibble or a two-byte word, and in fact older systems did have smaller chunks than eight bits.
Now, of course, modern processors mostly eat memory in cache-line-sized increments, but our means of expressing groupings and dividing the now-virtual address space remained, and the smallest amount of memory which a CPU instruction can access directly is still an eight-bit chunk. The machine code for the CPU instructions (and/or the paths going into the processor) would have to grow the same way the number of wires connecting to the memory controller would in order for the registers to be addressable - it's the same problem as with the system memory accessibility I was talking about earlier.
"In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines. These used the 8-bit µ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. The use of 8-bit codes for digital telephony also caused 8-bit data octets to be adopted as the basic data unit of the early Internet"
http://en.wikipedia.org/wiki/Byte
Not sure how true that is. It seems that that's just the symbol and style adopted by the IEEE, though.
One reason why we use 8-bit bytes is that the complexity of the world around us has a definite structure. On a human scale, the observed physical world has a finite number of distinctive states and patterns. Our innately limited ability to classify information, to distinguish order from chaos, and the finite amount of memory in our brains are all reasons why we consider [2^8...2^64] states enough to satisfy our everyday computational needs.
I'm studying for a test and I still don't get why public-key algorithms are so much slower than symmetric algorithms.
Public-key cryptography is a form of asymmetric cryptography, the difference being the use of a second cryptographic key.
Symmetric algorithms use a "shared secret" in which two systems each use a single cryptographic key to encrypt and decrypt communications.
Public-key cryptography does not use a single shared key; instead it uses mathematical key pairs: a public and a private key. In this system, communications are encrypted with the public key and decrypted with the private key. Here is a better explanation from Wikipedia:
The distinguishing technique used in public key cryptography is the use of asymmetric key algorithms, where the key used to encrypt a message is not the same as the key used to decrypt it. Each user has a pair of cryptographic keys: a public encryption key and a private decryption key. The publicly available encrypting-key is widely distributed, while the private decrypting-key is known only to the recipient. Messages are encrypted with the recipient's public key and can only be decrypted with the corresponding private key. The keys are related mathematically, but the private key cannot feasibly (i.e. in actual or projected practice) be derived from the public key. The discovery of algorithms that could produce public/private key pairs revolutionized the practice of cryptography beginning in the middle 1970s.
The computational overhead is then quite obvious: the public key is available to any system it is exposed to (a public-key system on the internet, for example, exposes the public key to the entire internet). To compensate, both the public and private keys have to be quite large to ensure a strong level of encryption. The result, however, is a much stronger system, as the private decryption key (so far) cannot be reverse-engineered from the public encryption key.
There is more that can affect the "speed" of a public-key infrastructure (PKI). Since one of the issues with this system is trust, most implementations involve a certificate authority (CA), which are entities that are trusted to delegate key pairs and validate the keys' "identity".
So, to summarize: larger cryptographic key sizes, two cryptographic keys instead of one, and, with the introduction of a certificate authority, extra DNS look-ups and server response times.
It's because of this extra overhead that most implementations benefit from a hybrid scheme, where the public and private keys are used to exchange a session key (much like a shared secret in symmetric algorithms) to get the best of both worlds.
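A minimal sketch of that hybrid pattern with the Python "cryptography" package (the library choice and parameters are assumptions; real protocols such as TLS do this with considerably more care):

    # Hybrid encryption: RSA protects a small random session key,
    # AES-GCM protects the bulk data.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient's key pair (generated once; the public key is published).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: bulk data goes through fast AES with a fresh session key...
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    message = b"a long message ..."
    ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

    # ...and only the small session key goes through slow RSA.
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Recipient: unwrap the session key with the private key, then decrypt fast.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message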
Public key algorithms rely on "trapdoor" calculations: ones that are computationally expensive to encrypt and computationally intractable to decrypt without the secret key. If the first step is too easy (which correlates with speed), the second step becomes less hard (more breakable). Consequently, public key algorithms tend to be resource intensive.
Private key algorithms already have the secret during the encryption phase, so they don't have to do as much work as an algorithm whose encryption key is public.
The above is an over-generalization but should give you a feel for the reasons behind the relative speed differences. That being said, a private key algorithm can be slow and a public key algorithm may have an efficient implementation. The devil is in the details :-)
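As a crude way to see the relative cost for yourself, here is a small timing sketch with the Python "cryptography" package (library choice is an assumption; the absolute numbers vary by machine, but the gap is typically orders of magnitude):

    # Time many small RSA round-trips vs. many small AES-GCM round-trips.
    import os, time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    msg = os.urandom(32)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pub = priv.public_key()
    aes = AESGCM(AESGCM.generate_key(bit_length=128))

    t0 = time.perf_counter()
    for _ in range(200):
        priv.decrypt(pub.encrypt(msg, oaep), oaep)               # RSA encrypt + decrypt
    t1 = time.perf_counter()
    for _ in range(200):
        nonce = os.urandom(12)
        aes.decrypt(nonce, aes.encrypt(nonce, msg, None), None)  # AES encrypt + decrypt
    t2 = time.perf_counter()

    print(f"RSA-2048 round-trips: {t1 - t0:.3f}s, AES-128-GCM round-trips: {t2 - t1:.3f}s")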
Encryption and keying methods are a very deep and complex topic that only the smartest mathematical minds in the world can fully understand, but there are top-level views that most people can understand.
The primary difference is that symmetric algorithms require a much, much smaller key than asymmetric (PKI) methods. Because symmetric algorithms work on a "shared secret" (such as abcd1234), which is transferred over a trusted channel (for example, I call you on the telephone and ask you for the shared secret), the keys don't need to be as long, since they rely on other methods of security (i.e., I trust you not to tell anyone).
A PK infrastructure involves sending that "key" over the internet, through untrusted space, and involves using huge prime numbers and massive keys (1024-bit or 2048-bit rather than 128-bit or 256-bit, for example).
A general rule of thumb is that PKI methods are approximately 1,000 times slower than symmetric-key methods.
I'm considering using MySQL's built-in aes_encrypt. I normally use Blowfish, but MySQL doesn't seem to support it natively. How do the two compare? Is one stronger than the other?
AES has a higher design strength than Blowfish - in particular it uses 128 bit blocks, in contrast with Blowfish's 64 bit block size. It's also just much newer - it has the advantage of incorporating several more years of advances in the cryptographic art.
It may interest you to know that the designers behind Blowfish went on to design an improved algorithm called Twofish, which was an entrant (and finalist) in the AES competition.
If you are only looking at security, these two algorithms rank more or less the same. There are some implementation differences, so unless you want to use an external function, just go with the built-in AES function. If you are going to do it yourself, you might want to use a newer encryption algorithm than Blowfish.
You may be interested in the best public cryptanalysis for both algorithms:
For AES, there exists a related-key attack on the 192-bit and 256-bit versions, discovered by Alex Biryukov and Dmitry Khovratovich, which exploits AES's key schedule and runs in 2^99.5 operations. This is faster than brute force, but still far from practical. 128-bit AES is not affected by this attack.
For Blowfish, four of its rounds are susceptible to a second-order differential attack (Rijmen, 1997). It can also be distinguished (as in, "Hey, this box is using Blowfish") for a class of weak keys. However, there is no effective cryptanalysis on the full-round version of Blowfish at this moment.
This is pretty subjective, but I'd say AES is more widely used than Blowfish and has been proven secure over the years. So, why not?