I am trying to port a WP8 application to a Windows 8.1 Store app. The app uses password-based AES-256 encryption. The confusion I am having is that I am able to decrypt text from WP8 that was encrypted with a 256-bit key, yet in WinRT I appear to be using only a 32-bit key.
Below is the Windows Phone 8 code. Note that Rfc2898DeriveBytes wants the length in bytes, so the key used is 256 bits.
private static string Decrypt(string dataToDecrypt, string password, string salt)
{
AesManaged aes = null;
MemoryStream memoryStream = null;
try
{
//Generate a Key based on a Password and HMACSHA1 pseudo-random number generator
Rfc2898DeriveBytes rfc2898 = new Rfc2898DeriveBytes(password, Encoding.UTF8.GetBytes(salt), 10000);
//Create AES algorithm
aes = new AesManaged();
aes.KeySize = aes.LegalKeySizes[0].MaxSize; // returns 256
aes.BlockSize = aes.LegalBlockSizes[0].MaxSize; // returns 128
aes.Key = rfc2898.GetBytes(aes.KeySize);
aes.IV = rfc2898.GetBytes(aes.BlockSize);
//Create Memory and Crypto Streams
memoryStream = new MemoryStream();
CryptoStream cryptoStream = new CryptoStream(memoryStream, aes.CreateDecryptor(), CryptoStreamMode.Write);
//Decrypt Data
byte[] data = Convert.FromBase64String(dataToDecrypt);
cryptoStream.Write(data, 0, data.Length);
cryptoStream.FlushFinalBlock();
//Return Decrypted String
byte[] decryptBytes = memoryStream.ToArray();
//Dispose
if (cryptoStream != null)
cryptoStream.Dispose();
//Retval
return Encoding.UTF8.GetString(decryptBytes, 0, decryptBytes.Length);
}
catch (Exception ex)
{
MasterData.CryptographyExceptionOccured = true;
Debug.WriteLine(ex.Message);
Debug.WriteLine(ex.StackTrace);
return "";
}
finally
{
if (memoryStream != null)
memoryStream.Dispose();
if (aes != null)
aes.Clear();
}
}
Below is the WinRT (Windows 8.1) code. Note that the CryptographicEngine.DeriveKeyMaterial method requires the key size in bits, yet I am able to decrypt the 256-bit WP8 encryption using the WinRT code below, which seems to use only 32 bits. (The password and salt used are the same in WP8 and WinRT.)
private static void GenerateKeyMaterial(string password, string salt, uint iterationCount, out IBuffer keyMaterial, out IBuffer iv)
{
// Setup KDF parameters for the desired salt and iteration count
IBuffer saltBuffer = CryptographicBuffer.ConvertStringToBinary(salt, BinaryStringEncoding.Utf8);
KeyDerivationParameters kdfParameters = KeyDerivationParameters.BuildForPbkdf2(saltBuffer, iterationCount);
// Get a KDF provider for PBKDF2, and store the source password in a Cryptographic Key
KeyDerivationAlgorithmProvider kdf = KeyDerivationAlgorithmProvider.OpenAlgorithm(KeyDerivationAlgorithmNames.Pbkdf2Sha256);
// KeyDerivationAlgorithmProvider kdf = KeyDerivationAlgorithmProvider.OpenAlgorithm(KeyDerivationAlgorithmNames.Pbkdf2Sha1);
IBuffer passwordBuffer = CryptographicBuffer.ConvertStringToBinary(password, BinaryStringEncoding.Utf8);
CryptographicKey passwordSourceKey = kdf.CreateKey(passwordBuffer);
// Generate key material from the source password, salt, and iteration count. Only call DeriveKeyMaterial once,
// since calling it twice will generate the same data for the key and IV.
int keySize = 256 / 8;
int ivSize = 128 / 8;
uint totalDataNeeded = (uint)(keySize + ivSize);
IBuffer keyAndIv = CryptographicEngine.DeriveKeyMaterial(passwordSourceKey, kdfParameters, totalDataNeeded);
// Split the derived bytes into a separate key and IV
byte[] keyMaterialBytes = keyAndIv.ToArray();
keyMaterial = WindowsRuntimeBuffer.Create(keyMaterialBytes, 0, keySize, keySize);
iv = WindowsRuntimeBuffer.Create(keyMaterialBytes, keySize, ivSize, ivSize);
}
So to use 256-bit encryption in WinRT, what should the key size be in the above code? And how is it possible to decrypt a 256-bit encrypted string using a 32-bit key? See the WP8 and WinRT code above.
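For what it's worth, the mismatch here is most likely bytes versus bits rather than two different key sizes: Rfc2898DeriveBytes.GetBytes takes a count of bytes, and the desiredKeySize argument of CryptographicEngine.DeriveKeyMaterial is also a byte count, so the 32 in the WinRT code already means 32 bytes, i.e. a 256-bit key. Note too that Rfc2898DeriveBytes is hard-coded to SHA-1, so Pbkdf2Sha1 is the WinRT algorithm that matches it. A minimal sketch of deriving a 256-bit key plus a 128-bit IV in WinRT under those assumptions (variable names are illustrative):
// Lengths below are BYTE counts: 32 bytes = 256-bit key, 16 bytes = 128-bit IV.
IBuffer saltBuffer = CryptographicBuffer.ConvertStringToBinary(salt, BinaryStringEncoding.Utf8);
IBuffer passwordBuffer = CryptographicBuffer.ConvertStringToBinary(password, BinaryStringEncoding.Utf8);
// Pbkdf2Sha1 matches Rfc2898DeriveBytes, which always uses SHA-1.
KeyDerivationAlgorithmProvider kdf = KeyDerivationAlgorithmProvider.OpenAlgorithm(KeyDerivationAlgorithmNames.Pbkdf2Sha1);
CryptographicKey passwordKey = kdf.CreateKey(passwordBuffer);
KeyDerivationParameters kdfParameters = KeyDerivationParameters.BuildForPbkdf2(saltBuffer, 10000);
// One derivation call for key + IV, then split the 48 bytes.
IBuffer keyAndIv = CryptographicEngine.DeriveKeyMaterial(passwordKey, kdfParameters, 32 + 16);
byte[] derived = keyAndIv.ToArray();
IBuffer key = WindowsRuntimeBuffer.Create(derived, 0, 32, 32); // 256-bit AES key
IBuffer iv = WindowsRuntimeBuffer.Create(derived, 32, 16, 16); // 128-bit IV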
Related
I hit a blocking issue when I tried to migrate my project from Windows Phone Silverlight 8.1 to Windows Runtime.
In Windows Runtime, the AES-encrypted string is not the same as the one previously produced on Silverlight.
In Silverlight:
public static string EncryptAES(string encryptString)
{
AesManaged aes = null;
MemoryStream ms = null;
CryptoStream cs = null;
string encryptKey = "testtest123";
string salt = "abcabcabcd";
try
{
Rfc2898DeriveBytes rfc2898 = new Rfc2898DeriveBytes(encryptKey, Encoding.UTF8.GetBytes(salt));
aes = new AesManaged();
aes.Key = rfc2898.GetBytes(aes.KeySize / 8);
aes.IV = rfc2898.GetBytes(aes.BlockSize / 8);
ms = new MemoryStream();
cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write);
byte[] data = Encoding.UTF8.GetBytes(encryptString);
cs.Write(data, 0, data.Length);
cs.FlushFinalBlock();
return Convert.ToBase64String(ms.ToArray());
}
catch
{
return encryptString;
}
finally
{
if (cs != null)
cs.Close();
if (ms != null)
ms.Close();
if (aes != null)
aes.Clear();
}
}
In Windows Runtime:
public static string EncryptAES(string plainText)
{
string pw = "testtest123";
string salt = "abcabcabcd";
IBuffer plainBuffer = CryptographicBuffer.ConvertStringToBinary(plainText, BinaryStringEncoding.Utf8);
IBuffer saltBuffer = CryptographicBuffer.ConvertStringToBinary(salt, BinaryStringEncoding.Utf8);
IBuffer pwBuffer = CryptographicBuffer.ConvertStringToBinary(pw, BinaryStringEncoding.Utf8);
KeyDerivationAlgorithmProvider keyDerivationProvider = Windows.Security.Cryptography.Core.KeyDerivationAlgorithmProvider.OpenAlgorithm(KeyDerivationAlgorithmNames.Pbkdf2Sha256);
// using salt and 1000 iterations
KeyDerivationParameters pbkdf2Parms = KeyDerivationParameters.BuildForPbkdf2(saltBuffer, 1000);
// create a key based on original key and derivation parameters
CryptographicKey keyOriginal = keyDerivationProvider.CreateKey(pwBuffer);
IBuffer keyMaterial = CryptographicEngine.DeriveKeyMaterial(keyOriginal, pbkdf2Parms, 32);
CryptographicKey derivedPwKey = keyDerivationProvider.CreateKey(pwBuffer);
// derive buffer to be used for encryption salt from derived password key
IBuffer saltMaterial = CryptographicEngine.DeriveKeyMaterial(derivedPwKey, pbkdf2Parms, 16);
// display the buffers - because KeyDerivationProvider always gets cleared after each use, they are very similar, unfortunately
string keyMaterialString = CryptographicBuffer.EncodeToBase64String(keyMaterial);
string saltMaterialString = CryptographicBuffer.EncodeToBase64String(saltMaterial);
//AES_CBC_PKCS7
SymmetricKeyAlgorithmProvider symProvider = SymmetricKeyAlgorithmProvider.OpenAlgorithm("AES_CBC_PKCS7");
// create symmetric key from derived password key
CryptographicKey symmKey = symProvider.CreateSymmetricKey(keyMaterial);
// encrypt data buffer using symmetric key and derived salt material
IBuffer resultBuffer = CryptographicEngine.Encrypt(symmKey, plainBuffer, saltMaterial);
string result = CryptographicBuffer.EncodeToBase64String(resultBuffer);
return result;
}
In Silverlight Project, string "123456" encrypted by AES is "4UfdhC/0MFQlMhl7N7gqLg==";
While in Windows Runtime Project, the AES-encrypted string is "jxsR5EuhPXgRsHLs4N3EGQ=="
So how can I get the same string on WinRT as the one on Silverlight?
The AES classes default to a random IV on the Microsoft runtimes. To get the same ciphertext you would need to use a static IV; that is, however, not secure. Instead you should just check that you get the same key bytes and let the ciphertext differ for each run. Otherwise identical plaintexts can clearly be distinguished.
You also seem to be using the wrong hash algorithm: Rfc2898DeriveBytes uses SHA-1, not SHA-256, as the underlying hash function.
I've created a class which works for me as a replacement for System.Security.Cryptography.Rfc2898DeriveBytes:
public class Rfc2898DeriveBytes
{
private readonly string _password;
private readonly byte[] _salt;
public Rfc2898DeriveBytes(string password, byte[] salt)
{
_password = password;
_salt = salt;
IterationCount = 1000;
}
public uint IterationCount { get; set; }
public byte[] GetBytes(uint cb)
{
var provider = KeyDerivationAlgorithmProvider.OpenAlgorithm(KeyDerivationAlgorithmNames.Pbkdf2Sha1);
var buffSecret = CryptographicBuffer.ConvertStringToBinary(_password, BinaryStringEncoding.Utf8);
var buffsalt = CryptographicBuffer.CreateFromByteArray(_salt);
var keyOriginal = provider.CreateKey(buffSecret);
var par = KeyDerivationParameters.BuildForPbkdf2(buffsalt, IterationCount);
var keyMaterial = CryptographicEngine.DeriveKeyMaterial(keyOriginal, par, cb);
byte[] result;
CryptographicBuffer.CopyToByteArray(keyMaterial, out result);
return result;
}
}
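A hypothetical usage sketch of the class above, reusing the password and salt literals from the question and assuming the usual System.Text and Windows.Security.Cryptography usings. One caveat: unlike System.Security.Cryptography.Rfc2898DeriveBytes, each GetBytes call in this replacement restarts the derivation, so calling it a second time for the IV would return bytes that overlap the key; if you need both a key and an IV, derive them in a single call and split the result, as below. With the same key, IV, iteration count (1000 is the default in both), mode (CBC) and padding (PKCS7), the ciphertext should then match the Silverlight output.
var rfc2898 = new Rfc2898DeriveBytes("testtest123", Encoding.UTF8.GetBytes("abcabcabcd")); // 1000 iterations by default, same as Silverlight
// 48 bytes in one call: the first 32 are the 256-bit key, the next 16 the IV,
// exactly what GetBytes(32) followed by GetBytes(16) yields on Silverlight.
byte[] keyAndIv = rfc2898.GetBytes(48);
byte[] keyBytes = new byte[32];
byte[] ivBytes = new byte[16];
Array.Copy(keyAndIv, 0, keyBytes, 0, 32);
Array.Copy(keyAndIv, 32, ivBytes, 0, 16);
var aesProvider = SymmetricKeyAlgorithmProvider.OpenAlgorithm("AES_CBC_PKCS7");
CryptographicKey aesKey = aesProvider.CreateSymmetricKey(CryptographicBuffer.CreateFromByteArray(keyBytes));
IBuffer plain = CryptographicBuffer.ConvertStringToBinary("123456", BinaryStringEncoding.Utf8);
IBuffer cipher = CryptographicEngine.Encrypt(aesKey, plain, CryptographicBuffer.CreateFromByteArray(ivBytes));
string base64 = CryptographicBuffer.EncodeToBase64String(cipher);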
Is it possible in C# to tell which internet connection is being used?
This is what I use to check for connectivity:
Ping myPing = new Ping();
String host = "google.com";
byte[] buffer = new byte[32];
int timeout = 1000;
PingOptions pingOptions = new PingOptions();
PingReply reply = myPing.Send(host, timeout, buffer, pingOptions);
if (reply.Status == IPStatus.Success) {
// presumably online
}
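The ping only tells you whether anything is reachable at all. If the question is which connection (adapter) is actually being used, one option would be to enumerate the network interfaces via System.Net.NetworkInformation; a rough sketch, assuming that namespace is available on your target platform:
using System.Net.NetworkInformation;
foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
{
// Skip adapters that are down or purely virtual (loopback/tunnel).
if (nic.OperationalStatus != OperationalStatus.Up ||
nic.NetworkInterfaceType == NetworkInterfaceType.Loopback ||
nic.NetworkInterfaceType == NetworkInterfaceType.Tunnel)
continue;
Console.WriteLine(nic.Name + " (" + nic.NetworkInterfaceType + "): up");
}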
I'm working on a student project and trying to send JSON data (based on the Twitter hashtag '#tune') from Processing to Arduino, but the method 'myPort.write(status);' isn't usable with JSON. I've looked around online but am not sure what command to use. Am I on the right track? Here's the code:
Processing:
import processing.serial.*; //Serial connection for Arduino
import java.lang.reflect.Method;
import com.temboo.core.*; // Temboo library
import com.temboo.Library.Twitter.Search.*; // Temboo Twitter search library
// Create a session using your Temboo account application details
TembooSession session = new TembooSession("MYUSERNAME", "MYTEMBOOAPPNAME", "MYTEMBOOCODE");
// Setup objects
Serial myPort; // Create object from Serial class
int portNo = 7; // Define portNo as int
int baudRate = 9600; // Define baudRate as int
void setup() {
// Run the Tweets Choreo function
runTweetsChoreo();
String portName = Serial.list()[portNo]; // Setup String for port ([7] is the port number for my machine)
myPort = new Serial(this, portName, baudRate); // Setting up serial port
}
void runTweetsChoreo() {
// Create the Choreo object using your Temboo session
Tweets tweetsChoreo = new Tweets(session);
// Set credential
tweetsChoreo.setCredential("ArduinoUkulele");
// Set inputs
// Run the Choreo and store the results
TweetsResultSet tweetsResults = tweetsChoreo.run();
// retrieve the results as JSON
JSONObject results = parseJSONObject(tweetsResults.getResponse());
// retrieve the statuses from the results
JSONArray statuses = results.getJSONArray("statuses");
// loop through the statuses
for (int i = 0; i < statuses.size(); i++){
JSONObject status = statuses.getJSONObject(i);
println(status.getString("text"));
println(" -- -- -- -- -- -- -- -- -- -- -- -- -- -- ");
myPort.write(status); // THIS IS THE CODE NOT WORKING WITH JSON
}
}
Arduino:
char val; // Data received from the serial port
int ledPin = 13; // Set the pin to digital I/O 13
void setup(){
pinMode(ledPin, OUTPUT); // Set pin as OUTPUT
Serial.begin(9600); // Start serial communication at 9600 bps
}
void loop(){
if (Serial.available()) { // If data is available to read,
val = Serial.read(); // read it and store it in val
Serial.println(val);
delay(10); // Wait 10 milliseconds for next reading
}
}
I'm sure I'm just looking for a certain command - once I've received the data I'm just looking to turn the LED on when a new hashtag is received. Any help would be greatly appreciated!
Cheers
Arthur
The write() method cannot take a JSONObject as input, only byte[] or String. But you already have everything you need: just use the getString() method as you do two lines above (with the appropriate key string, which you should know from the API you are using on top of JSON):
myPort.write(status.getString("text"));
I have a MySQL database with hashed passwords that I cannot abandon. I need to duplicate the ENCRYPT() function of MySQL so that I can be consistent in my hash creation for login in an iOS app I'm creating. (I'm using the first 2 characters of the password as the salt for the encrypt function.)
Has anyone done this before? I tried to add the following category to NSString based on code I found elsewhere, but the resulting string isn't even close. (I have a Base64 category on NSData, and yes, I'm new to the CCCrypt call.)
-(NSString*) encryptWithSalt:(NSString *)salt {
NSString *token = self;
const void *vplainText;
size_t plainTextBufferSize = [token length];
vplainText = (const void *) [token UTF8String];
CCCryptorStatus ccStatus;
uint8_t *bufferPtr = NULL;
size_t bufferPtrSize = 0;
size_t movedBytes = 0;
bufferPtrSize = (plainTextBufferSize + kCCBlockSizeDES) & ~(kCCBlockSizeDES - 1);
bufferPtr = malloc( bufferPtrSize * sizeof(uint8_t));
memset((void *)bufferPtr, 0x0, bufferPtrSize);
uint8_t iv[kCCBlockSizeDES];
memset((void *) iv, 0x0, (size_t) sizeof(iv)); // zero out iv
const void *vkey = (const void *) [salt UTF8String];
ccStatus = CCCrypt(kCCEncrypt,
kCCAlgorithmDES,
kCCOptionPKCS7Padding | kCCModeCBC,
vkey,
kCCKeySizeDES,
iv,
vplainText,
plainTextBufferSize,
(void *)bufferPtr,
bufferPtrSize,
&movedBytes);
NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
NSString* hash;
if (ccStatus == kCCSuccess) {
hash = [myData base64EncodedString];
}
return hash;
}
The crypt() function cannot be emulated using CCCrypt, as the salt is used to alter the DES E-box. This makes the cipher in use no longer really DES.
However, it's still just the crypt() function. Call it directly:
- (NSString*) encryptWithSalt:(NSString *)salt {
return [NSString stringWithUTF8String:crypt([self UTF8String], [salt UTF8String])];
}
Incidentally, using the first two characters of the password as the salt is a gigantic security hole, as it narrows the password search space down to six characters (because the first two are given away by the salt).
I get "Length of the data to decrypt is invalid." exception when i try to decrypt a memory stream. I am beginner, cant figure out whats wrong. whats wrong?
public bool EncryptStream()
{
string password = @"myKey123"; // Your Key Here
UnicodeEncoding UE = new UnicodeEncoding();
byte[] key = UE.GetBytes(password);
s_EncryptedStream = new MemoryStream();
int NoOfBytes;
byte[] b_Buffer = new byte[8192];
s_MemoryStream.Seek(0, SeekOrigin.Begin);
RijndaelManaged RMCrypto = new RijndaelManaged();
s_CrytpoStream = new CryptoStream(s_EncryptedStream,
RMCrypto.CreateEncryptor(key, key),
CryptoStreamMode.Write);
while (s_MemoryStream.Length < s_MemoryStream.Position)
{
NoOfBytes = s_MemoryStream.Read(b_Buffer, 0, 8192);
s_CrytpoStream.Write(b_Buffer, 0, NoOfBytes);
}
s_MemoryStream.Seek(0, SeekOrigin.Begin);
while (s_EncryptedStream.Position < s_EncryptedStream.Length)
{
NoOfBytes = s_EncryptedStream.Read(b_Buffer, 0, 8192);
s_MemoryStream.Write(b_Buffer, 0, NoOfBytes);
}
s_CrytpoStream.Flush();
s_CrytpoStream.Close();
return true;
}
public bool DecryptStream()
{
string password = @"myKey123"; // Your Key Here
UnicodeEncoding UE = new UnicodeEncoding();
byte[] key = UE.GetBytes(password);
int NoOfBytes;
byte[] b_Buffer = new byte[8192];
s_DecryptedStream = new MemoryStream();
RijndaelManaged RMCrypto = new RijndaelManaged();
s_CrytpoStream = new CryptoStream(s_MemoryStream,
RMCrypto.CreateDecryptor(key, key),
CryptoStreamMode.Read);
s_MemoryStream.Seek(0, SeekOrigin.Begin);
while (s_MemoryStream.Length > s_MemoryStream.Position)
{
NoOfBytes = s_CrytpoStream.Read(b_Buffer, 0, 8192);
s_DecryptedStream.Write(b_Buffer, 0, NoOfBytes);
}
s_DecryptedStream.Seek(0, SeekOrigin.Begin);
s_MemoryStream.Seek(0, SeekOrigin.Begin);
while (s_DecryptedStream.Position < s_DecryptedStream.Length)
{
NoOfBytes = s_DecryptedStream.Read(b_Buffer, 0, 8192);
s_MemoryStream.Write(b_Buffer, 0, NoOfBytes);
}
s_CrytpoStream.Flush();
s_CrytpoStream.Close();
return true;
}
For a start, this while loop condition is never right:
while (s_MemoryStream.Length < s_MemoryStream.Position)
How can the position be beyond the length?
Rather than using the length of a stream, the usual pattern to copy a stream is to read repeatedly until the value returned isn't positive. As you're doing that twice in this code anyway, you might as well encapsulate it:
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[8192];
int read;
while ( (read = input.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, read);
}
}
It's also best to use using statements to clean up streams. Additionally, the Encoding.Unicode property means you don't have to create a new UnicodeEncoding yourself. Also, I generally find that setting the Position property is more readable than using Seek. Finally, there's no point in a method returning a value if it's always going to be true. So, your code would become:
public void EncryptStream()
{
string password = @"myKey123"; // Your Key Here
byte[] key = Encoding.Unicode.GetBytes(password);
s_EncryptedStream = new MemoryStream();
s_MemoryStream.Position = 0;
RijndaelManaged RMCrypto = new RijndaelManaged();
using (Stream cryptoStream = new CryptoStream(s_EncryptedStream,
RMCrypto.CreateEncryptor(key, key),
CryptoStreamMode.Write))
{
CopyStream(s_MemoryStream, cryptoStream);
}
s_MemoryStream.Position = 0;
s_EncryptedStream.Position = 0;
CopyStream(s_EncryptedStream, s_MemoryStream);
}
public void DecryptStream()
{
string password = @"myKey123"; // Your Key Here
byte[] key = Encoding.Unicode.GetBytes(password);
s_DecryptedStream = new MemoryStream();
s_MemoryStream.Position = 0;
RijndaelManaged RMCrypto = new RijndaelManaged();
using (Stream cryptoStream = new CryptoStream(s_MemoryStream,
RMCrypto.CreateDecryptor(key, key),
CryptoStreamMode.Read))
{
CopyStream(cryptoStream, s_DecryptedStream);
}
s_DecryptedStream.Position = 0;
s_MemoryStream.Position = 0;
CopyStream(s_DecryptedStream, s_MemoryStream);
}
Even after amending this code, it feels like you've got way too many non-local variables here. I can't see why any of this should be in instance variables. Make the stream to be encrypted or decrypted a parameter (along with the password) and return a memory stream containing the encrypted/decrypted data, or just a byte array.
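A rough sketch of that refactoring, reusing the CopyStream helper and the key handling from the code above and assuming the usual System.IO, System.Text and System.Security.Cryptography usings; names are illustrative and this is not meant as a hardened design:
public static MemoryStream EncryptStream(Stream input, string password)
{
byte[] key = Encoding.Unicode.GetBytes(password); // the 8-char password above yields 16 bytes = a 128-bit key and IV
MemoryStream output = new MemoryStream();
using (RijndaelManaged rijndael = new RijndaelManaged())
using (CryptoStream cryptoStream = new CryptoStream(output, rijndael.CreateEncryptor(key, key), CryptoStreamMode.Write))
{
CopyStream(input, cryptoStream);
} // disposing the CryptoStream flushes the final block and closes 'output'
return new MemoryStream(output.ToArray()); // ToArray still works on a closed MemoryStream
}
public static MemoryStream DecryptStream(Stream input, string password)
{
byte[] key = Encoding.Unicode.GetBytes(password);
MemoryStream output = new MemoryStream();
using (RijndaelManaged rijndael = new RijndaelManaged())
using (CryptoStream cryptoStream = new CryptoStream(input, rijndael.CreateDecryptor(key, key), CryptoStreamMode.Read))
{
CopyStream(cryptoStream, output);
} // note: disposing the CryptoStream also closes 'input'
output.Position = 0;
return output;
}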
You may need to call the FlushFinalBlock method on the CryptoStream after you have finished copying the input data (i.e. cryptoStream.FlushFinalBlock() after CopyStream).
I have come up with a solution, and I have posted it on my blog:
constotech.blogspot.com/2009/05/net-encryption-using-symmetricalgorithm.html