Is AES_256 stronger than Blowfish? (MySQL)

I'm considering using MySQL's built-in AES_ENCRYPT(). I normally use Blowfish, but MySQL doesn't seem to support it natively. How do the two compare? Is one stronger than the other?

AES has a higher design strength than Blowfish - in particular, it uses 128-bit blocks, in contrast with Blowfish's 64-bit block size. It's also just much newer - it has the advantage of incorporating several more years of advances in the cryptographic art.
It may interest you to know that the designer of Blowfish went on to co-design an improved algorithm called Twofish, which was an entrant (and finalist) in the AES competition.

If you are only looking at security, these two algorithms rank more or less the same. There are some implementation differences, so unless you want to use an external function, just go with the built-in AES function. If you are going to do it yourself, you might want to use a newer encryption algorithm than Blowfish.
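If you go the built-in route, usage is straightforward. Below is a minimal sketch of calling MySQL's AES_ENCRYPT()/AES_DECRYPT() from Python with the mysql-connector-python package; the table, column, connection details, and key are all made up for illustration, and a real key should come from configuration or a secrets manager, not from source code.

```python
# Hedged sketch: MySQL's built-in AES_ENCRYPT/AES_DECRYPT via mysql-connector-python.
# Assumes a hypothetical table:
#   CREATE TABLE secrets (label VARCHAR(32), payload VARBINARY(256));
# AES_ENCRYPT returns binary data, hence the VARBINARY column.
import mysql.connector

conn = mysql.connector.connect(user="app", password="app_pw", database="demo")
cur = conn.cursor()
key = "correct horse battery staple"  # illustrative only

cur.execute(
    "INSERT INTO secrets (label, payload) VALUES (%s, AES_ENCRYPT(%s, %s))",
    ("card", "4111-1111-1111-1111", key),
)
conn.commit()

cur.execute(
    "SELECT AES_DECRYPT(payload, %s) FROM secrets WHERE label = %s",
    (key, "card"),
)
print(cur.fetchone()[0])  # b'4111-1111-1111-1111'
conn.close()
```

Note that passing the key in the query means it can show up in server logs; treat this purely as a syntax demonstration.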

You may be interested in the best public cryptanalysis for both algorithms:
For AES, there is a related-key attack on the 192-bit and 256-bit versions, discovered by Alex Biryukov and Dmitry Khovratovich, which exploits AES's key schedule; against AES-256 it takes about 2^99.5 operations. This is faster than brute force, but still entirely infeasible. 128-bit AES is not affected by this attack.
For Blowfish, four of its rounds are susceptible to a second-order differential attack (Rijmen, 1997). It can also be distinguished (as in, "Hey, this box is using Blowfish") for a class of weak keys. However, there is no effective cryptanalysis of the full-round version of Blowfish to date.

This is pretty subjective, but I'd say AES is more widely used than Blowfish and has withstood years of public scrutiny. So, why not?

Related

Can you make use of arbitrary-precision arithmetic?

I am modeling algorithm-to-hardware mapping with Gecode, and the standard Gecode::Int::Limits is too small, not least because I want to target systems with more than 2^32 bytes of memory.
Is there a way to make use of arbitrary-precision arithmetic with Gecode, or at least 64-bit integers?
I know that Gecode can be built with MPIR or GMP support, but it seems those are used only for trigonometric operations?
If I understand the Gecode documentation properly:
The totally available number of bits for all variable implementation types used by Gecode is 32
So it seems there is no way to model with values bigger than 2147483646, but I still think I'm fundamentally wrong about something, since it's almost obligatory for a modeling toolkit/library to be able to model with values bigger than that. In particular, Wikipedia says that:
ECLiPSe interfaces to external solvers, in particular ... and the Gecode solver library
but the ECLiPSe tutorial states that
Numbers in ECLiPSe come in several flavors:
Integers can be as large as fits into memory, e.g.:
123 0 -27 393423874981724
I cannot understand how a mere interface can support numbers bigger than the underlying library does.

MySQL: AES_ENCRYPT vs DES_ENCRYPT [duplicate]

Does anyone have a list of pros and cons comparing these encryption algorithms?
Use AES.
In more detail:
DES is the old "data encryption standard" from the seventies. Its key size is too short for proper security (56 effective bits; this can be brute-forced, as was demonstrated more than ten years ago). Also, DES uses 64-bit blocks, which raises some potential issues when encrypting several gigabytes of data with the same key (a gigabyte is not that big nowadays).
3DES is a trick to reuse DES implementations, by cascading three instances of DES (with distinct keys). 3DES is believed to be secure up to at least 2^112 security (which is quite a lot, and quite far in the realm of "not breakable with today's technology"). But it is slow, especially in software (DES was designed for efficient hardware implementation, but it sucks in software; and 3DES sucks three times as much).
Blowfish is a block cipher proposed by Bruce Schneier and deployed in some software. Blowfish can use huge keys and is believed secure, except with regard to its block size, which is 64 bits, just like DES and 3DES. Blowfish is efficient in software, at least on some software platforms (it uses key-dependent lookup tables, hence performance depends on how the platform handles memory and caches).
AES is the successor of DES as the standard symmetric encryption algorithm for US federal organizations (and as the standard for pretty much everybody else, too). AES accepts keys of 128, 192 or 256 bits (128 bits is already more than sufficient), uses 128-bit blocks (so no issue there), and is efficient in both software and hardware. It was selected through an open competition involving hundreds of cryptographers over several years. Basically, you cannot do better than that.
So, when in doubt, use AES.
Note that a block cipher is a box which encrypts "blocks" (128-bit chunks of data, with AES). When encrypting a "message" which may be longer than 128 bits, the message must be split into blocks, and the actual way you do the split is called the mode of operation or "chaining". The naive mode (a simple split) is called ECB and has issues: identical plaintext blocks produce identical ciphertext blocks, which leaks patterns. Using a block cipher properly is not easy, and it is more important than selecting between, e.g., AES or 3DES.
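To make the mode-of-operation point concrete, here is a minimal Python sketch using the third-party cryptography package's AES-GCM, an authenticated mode that avoids ECB's pitfalls; the key and message are invented for illustration, and this is one reasonable choice among several, not the only proper approach.

```python
# Minimal sketch: AES in an authenticated mode (GCM) instead of naive ECB.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # a 128-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per (key, message)

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```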
All of these schemes, except AES and Blowfish, have known vulnerabilities and should not be used.
However, Blowfish has been superseded by Twofish.
The encryption methods described are symmetric-key block ciphers.
Data Encryption Standard (DES) is the predecessor, encrypting data in 64-bit blocks using a 56-bit key. Each block is encrypted in isolation, which is a security vulnerability.
Triple DES extends the key length of DES by applying three DES operations to each block: an encryption with key 0, a decryption with key 1, and an encryption with key 2. These keys may be related.
DES and 3DES are usually encountered when interfacing with legacy commercial products and services.
AES is considered the successor and modern standard. http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
I believe the use of Blowfish is discouraged.
It is highly recommended that you do not attempt to implement your own cryptography and instead use a high-level implementation such as GPG for data at rest or SSL/TLS for data in transit. Here is an excellent and sobering video on encryption vulnerabilities: http://rdist.root.org/2009/08/06/google-tech-talk-on-common-crypto-flaws/
AES is a symmetric cryptographic algorithm, while RSA is an asymmetric (or public-key) cryptographic algorithm. Encryption and decryption are done with a single key in AES, while you use separate keys (public and private) in RSA. The strength of a 128-bit AES key is roughly equivalent to that of a 2600-bit RSA key.
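As an illustration of how the two are usually combined in practice (RSA transports a small random key; AES encrypts the bulk data), here is a hedged Python sketch using the cryptography package; the key sizes and padding parameters are common defaults, not anything mandated by the discussion above.

```python
# Hybrid-encryption sketch: RSA (asymmetric) wraps a random AES (symmetric) key.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
aes_key = os.urandom(16)  # a fresh 128-bit symmetric key

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
wrapped = private_key.public_key().encrypt(aes_key, oaep)  # anyone with the public key can wrap
unwrapped = private_key.decrypt(wrapped, oaep)             # only the private-key holder unwraps
assert unwrapped == aes_key
```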
Although TripleDESCryptoServiceProvider is a safe and good method, it's too slow. If you refer to MSDN, you will see that it advises you to use AES rather than TripleDES. Please check the link below:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.tripledescryptoserviceprovider.aspx
You will see this note in the Remarks section:
Note: A newer symmetric encryption algorithm, Advanced Encryption Standard (AES), is available. Consider using the AesCryptoServiceProvider class instead of the TripleDESCryptoServiceProvider class. Use TripleDESCryptoServiceProvider only for compatibility with legacy applications and data.
Good luck
DES is the old "data encryption standard" from the seventies.
All of these schemes, except AES and Blowfish, have known vulnerabilities and should not be used.
All of them can actually be securely used if wrapped.
Here is an example of AES wrapping.
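In case that link goes stale, here is a minimal sketch of AES key wrapping (RFC 3394) using Python's cryptography package; both keys are generated on the spot purely for illustration.

```python
# Minimal AES key-wrap (RFC 3394) sketch with the "cryptography" package.
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(16)       # key-encryption key
data_key = os.urandom(16)  # the key being protected

wrapped = aes_key_wrap(kek, data_key)
assert aes_key_unwrap(kek, wrapped) == data_key
```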
             DES         AES
Developed    1977        2000
Key Length   56 bits     128, 192, or 256 bits
Cipher Type  Symmetric   Symmetric
Block Size   64 bits     128 bits
Security     Inadequate  Secure
Performance  Fast        Slow
AES is the currently accepted standard algorithm to use (hence the name Advanced Encryption Standard).
The rest are not.

Does anyone know what "Quantum Computing" is?

In physics, it's the ability of particles to exist in multiple/parallel dynamic states at a particular point in time. In computing, would it be the ability of a data bit to equal 1 or 0 at the same time, a third value like NULL [unknown], or multiple values? How can this technology be applied to computer processors, programming, security, etc.? Has anyone built a practical quantum computer or developed a quantum programming language where, for example, the program code dynamically changes or is autonomous?
I have done research in quantum computing, and here is what I hope is an informed answer.
It is often said that qubits as you see them in a quantum computer can exist in a "superposition" of 0 and 1. This is true, but in a more subtle way than you might first guess. Even with a classical computer with randomness, a bit can exist in a superposition of 0 and 1, in the sense that it is 0 with some probability and 1 with some probability. Just as when you roll a die and don't look at the outcome, or receive e-mail that you haven't yet read, you can view its state as a superposition of the possibilities. Now, this may sound like just flim-flam, but the fact is that this type of superposition is a kind of parallelism and that algorithms that make use of it can be faster than other algorithms. It is called randomized computation, and instead of superposition you can say that the bit is in a probabilistic state.
The difference between that and a qubit is that a qubit can have a fat set of possible superpositions with more properties. The set of probabilistic states of an ordinary bit is a line segment, because all there is to specify is the probability of being 0 or 1. The set of states of a qubit is a round 3-dimensional ball. Now, probabilistic bit strings are more complicated and more interesting than just individual probabilistic bits, and the same is true of strings of qubits. If you can make qubits like this, then actually some computational tasks wouldn't be any easier than before, just as randomized algorithms don't help with all problems. But some computational problems, for example factoring numbers, have new quantum algorithms that are much faster than any known classical algorithm. It is not a matter of clock speed or Moore's law, because the first useful qubits could be fairly slow and expensive. It is only sort-of parallel computation, just as an algorithm that makes random choices is only in a weak sense making all choices in parallel. But it is "randomized algorithms on steroids"; that's my favorite summary for outsiders.
Now the bad news. In order for a classical bit to be in a superposition, it has to be a random choice that is secret from you. Once you look at a flipped coin, the coin "collapses" to either heads for sure or tails for sure. The difference between that and a qubit is that in order for a qubit to work as one, its state has to be secret from the rest of the physical universe, not just from you. It has to be secret from wisps of air, from nearby atoms, etc. On the other hand, for qubits to be useful for a quantum computer, there has to be a way to manipulate them while keeping their state a secret. Otherwise their quantum randomness or quantum coherence is wrecked. Making qubits at all isn't easy, but it is done routinely. Making qubits that you can manipulate with quantum gates, without revealing what is in them to the physical environment, is incredibly difficult.
People don't know how to do that except in very limited toy demonstrations. But if they could do it well enough to make quantum computers, then some hard computational problems would be much easier for those computers. Others wouldn't be easier at all, and a great deal is unknown about which ones can be accelerated and by how much. It would definitely have various effects on cryptography; it would break the widely used forms of public-key cryptography. But other kinds of public-key cryptography have been proposed that could be okay. Moreover, quantum computing is related to the quantum key distribution technique, which looks very safe, and secret-key cryptography would almost certainly still be fairly safe.
The other context where the term "quantum" computing is used involves an "entangled pair". Essentially, if you can create an entangled pair of particles which have a physical "spin", quantum physics dictates that the spins of the two particles will always be opposite.
If you could create an entangled pair and then separate the particles, you could use the device to transmit data without interception by changing the spin on one of them. You could then create a signal modulated by the particles' information that is theoretically unbreakable, as you cannot learn what spin the particles had at any given time by intercepting the information between the two signal points.
A whole lot of very interested organisations are researching this technique for secure communications.
Yes, there is quantum encryption, whereby if someone tries to spy on your communication, it destroys the data stream such that neither they nor you can read it.
However, the real power of quantum computing lies in the fact that a qubit can be in a superposition of 0 and 1. Big deal, you might say. But if you have, say, eight qubits, you can now represent a superposition of all integers from 0 to 255. This lets you do some rather interesting things in polynomial instead of exponential time. Factorization of large numbers (i.e., breaking RSA, etc.) is one of them.
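A toy numerical illustration of that claim, using nothing but numpy (the state-vector picture is standard; the specific numbers are made up for illustration):

```python
# Toy state-vector picture: n qubits are described by 2**n complex amplitudes,
# so 8 qubits carry an amplitude for every integer from 0 to 255 simultaneously.
import numpy as np

n = 8
# Uniform superposition over all 2**n basis states (what n Hadamard gates produce):
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

probabilities = np.abs(state) ** 2  # Born rule: probability of measuring each integer
assert np.isclose(probabilities.sum(), 1.0)
print(len(state))                   # 256 amplitudes from just 8 qubits
```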
There are a number of applications of quantum computing.
One huge one is ~~the ability to solve NP-hard problems in P-time, by using the indeterminacy of qubits to essentially brute-force the problem in parallel~~.
(The struck-out sentence is false. Quantum computers do not work by brute-forcing all solutions in parallel, and they are not believed to be able to solve NP-complete problems in polynomial time. See e.g. here.)
Just an update on the quantum computing industry, based on Greg Kuperberg's answer:
The D-Wave Two system uses quantum annealing.
A superposition of quantum states collapses to a single state when an observation happens. Current quantum annealing technology applies a physical force to pairs of qubits; the force adds constraints to the qubits, so that when an observation happens, each qubit has a higher probability of collapsing to a result that we want to see.
Reference:
How does a quantum machine work
I monitor recent non-peer-reviewed articles on the subject; this is what I extrapolate from what I have read. In addition to what has been said above, namely that qubits can hold values in superposition, they can also hold multiple bits: for example spin up/spin down and horizontal/vertical polarization (abbreviated +H, -H, +V, -V, LH, LV). Not all of the combinations are valid, and there are additional values that can be placed on a qubit depending on its type.
Different physical implementations are used somewhat like RAM vs. ROM, etc.: a photon with a wavelength, an electron with a charge, a photon with a charge, a photon with a spin; you get the idea. Some combinations are not valid, and some require additional algorithms in order to pass the argument to the next variable (the location where data is stored) or qubit (the location of the superposition of values to be returned), in part because the use of wires is by necessity limited due to size and space. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. In November 2011, researchers factorized 143 using 4 qubits. That same year, D-Wave Systems announced the first commercial quantum annealer on the market, by the name D-Wave One; the company claims this system uses a 128-qubit processor chipset. In May 2013, Google announced that it was launching a Quantum AI Lab, hopefully to boost AI. I really do hope I didn't waste anyone's time with things they already knew. If you learned something, please vote up.
As I cannot yet comment: how many states a qubit has really depends on what type of qubit you are working with, for example the UNSW silicon qubit vs. a diamond nitrogen-vacancy qubit, or solid-state NMR on phosphorus in silicon vs. liquid NMR on the same.

Is GPGPU a hack?

I started working on GPGPU some days ago and successfully implemented Cholesky factorization with good performance. Then I attended a conference on high-performance computing where some people said that "GPGPU is a hack".
I am still confused about what they meant and why they called it a hack. One person said it is a hack because you are converting your problem into a matrix and doing operations on it. But I am still confused: do people really think it is a hack, and if so, why?
Can anyone help me understand why they called it a hack, when I found nothing wrong with it?
One possible reason for such an opinion is that the GPU was not originally intended for general-purpose computation. Also, programming a GPU is less traditional and more hardcore, and therefore more likely to be perceived as a hack.
The point that "you convert the problem into a matrix" is not reasonable at all. Whatever task you solve by writing code, you choose reasonable data structures. In the case of the GPU, matrices are likely the most reasonable data structures, and using them is not a hack but a natural choice.
However, I suppose it's only a matter of time before GPGPU becomes widespread. People just have to get used to the idea. After all, who cares which unit of the computer runs the program?
On the GPU, efficient memory access is paramount to achieving optimal performance. This often involves restructuring, or even choosing entirely new, algorithms and data structures. This is one reason why GPU programming can be perceived as a hack.
Secondly, adapting an existing algorithm to run on the GPU is not in and of itself science. The relatively low scientific contribution of some GPU algorithm-related papers has led to a negative perception of GPU programming as strictly "engineering".
Obviously, only the person who said that can say for certain why he said it, but, here's my take:
A "Hack" is not a bad thing.
It forces people to learn new programming languages and concepts. For people who are just trying to model the weather or protein folding or drug reactions, this is an unwelcome annoyance. They didn't really want to learn FORTRAN (or whatever) in the first place, and now they have to learn another programming system.
The programming tools are NOT very mature yet.
The hardware isn't as reliable as CPUs (yet), so all of the calculations have to be done twice to make sure you've got the right answer. One reason for this is that GPUs don't come with error-correcting memory yet, so if you're trying to build a supercomputer with thousands of processors, the probability of a cosmic ray flipping a bit in your numbers approaches certainty.
As for the comment "you are converting your problem into a matrix and doing operations on it", I think that shows a lot of ignorance. Virtually ALL of high-performance computing fits that description!
One of the major problems with GPGPU for the past few years, and probably for the next few, is that programming GPUs for arbitrary tasks is not very easy. Up until DX10 there was no integer support on GPUs, and branching is still very poor. This is very much a situation where, in order to get maximum benefit, you have to write your code in a very awkward manner to extract all sorts of efficiency gains from the GPU. This is because you're running on hardware that is still dedicated to processing polygons and textures, rather than abstract parallel tasks.
Obviously, that's my take on it, and YMMV.
GPGPU harks back to the days of the math co-processor. A hack is a shortcut to solving a long-winded problem. GPGPU is a hack, just like NAT on top of IPv4 is a hack. Computational problems, just like networks, are getting bigger as we try to do more, and GPGPU is a useful interim solution. Whether it stays outside the core CPU chip with a separate, cranky API, or gets absorbed into the CPU via its API or its manufacture, is up to the pathfinders.
I suppose he meant that using GPGPU forces you to restructure your implementation so that it fits the hardware, not the problem domain. An elegant implementation should fit the latter.
Note that the word "hack" may have several different meanings:
http://www.urbandictionary.com/define.php?term=hack

ANN for decompiler?

Have there ever been any attempts at utilizing artificial neural networks in decompilation? It would be nice if it were possible to provide the trimmed semantics of the source along with the code to a neural network, so it could learn the connection between the two. I assume this would likely lose its effectiveness when optimizations are applied, and it would probably work better for high-level languages, but I'm interested in hearing about any attempts anyone has made at this.
I added this as a comment, but I think I will go ahead and post it as an answer as well. It looks like in the 11 years since this question was posted, there has been work done in this direction. Here is a link:
https://www.groundai.com/project/a-neural-based-program-decompiler/1
And here is the abstract:
A Neural-based Program Decompiler
Reverse engineering of binary executables is a critical problem in the computer security domain. On the one hand, malicious parties may recover interpretable source codes from the software products to gain commercial advantages. On the other hand, binary decompilation can be leveraged for code vulnerability analysis and malware detection. However, efficient binary decompilation is challenging. Conventional decompilers have the following major limitations: (i) they are only applicable to specific source-target language pair, hence incurs undesired development cost for new language tasks; (ii) their output high-level code cannot effectively preserve the correct functionality of the input binary; (iii) their output program does not capture the semantics of the input and the reversed program is hard to interpret. To address the above problems, we propose Coda (Coda is the abbreviation for CodeAttack), the first end-to-end neural-based framework for code decompilation. Coda decomposes the decompilation task into two key phases: First, Coda employs an instruction type-aware encoder and a tree decoder for generating an abstract syntax tree (AST) with attention feeding during the code sketch generation stage. Second, Coda then updates the code sketch using an iterative error correction machine guided by an ensembled neural error predictor. By finding a good approximate candidate and then fixing it towards perfect, Coda achieves superior performance compared to baseline approaches. We assess Coda's performance with extensive experiments on various benchmarks. Evaluation results show that Coda achieves an average of 82% program recovery accuracy on unseen binary samples, where the state-of-the-art decompilers yield 0% accuracy. Furthermore, Coda outperforms the sequence-to-sequence model with attention by a margin of 70% program accuracy. Our work reveals the vulnerability of binary executables and imposes a new threat to the protection of Intellectual Property (IP) for software development.
I'm assuming you mean decompilation to human-readable C/C++ as compared to assembly, then.
Given the input size (optimized/compiled code), the output size of succinct source code, and the multi-line, stateful nature of the decompilation process, I would have thought this is a larger problem than an ANN could ever handle.