I am currently playing around with encryption in the Windows Runtime. When using certain encryption algorithms I get either a NotImplementedException (AesCcm, AesGcm) or an ArgumentException (AesEcb, AesEcbPkcs7, DesEcb, DesEcbPkcs7, Rc2Ecb, Rc2EcbPkcs7, Rc4, TripleDesEcb, TripleDesEcbPkcs7).
I use the correct key length for each algorithm (I figured out that a wrong key length triggers an ArgumentException). For RC4 I use a key of size 1024, since the key length is variable. When using the variants without padding I pad the data to the block length myself. I can accept that AES with CCM and GCM is simply not implemented on Windows 8 (64-bit), but the ArgumentException for the ECB cipher mode variants and for RC4 is strange.
Here is some sample code:
SymmetricKeyAlgorithmProvider symmetricKeyAlgorithmProvider =
SymmetricKeyAlgorithmProvider.OpenAlgorithm(SymmetricAlgorithmNames.AesEcbPkcs7);
byte[] plainText = {1, 2, 3, 4, 5, 6, 7, 9, 9, 0};
const uint keySize = 256;
byte[] key = CryptographicBuffer.GenerateRandom(keySize).ToArray();
uint blockLength = symmetricKeyAlgorithmProvider.BlockLength;
byte[] initializationVector =
CryptographicBuffer.GenerateRandom(blockLength).ToArray();
CryptographicKey cryptographicKey =
symmetricKeyAlgorithmProvider.CreateSymmetricKey(key.AsBuffer());
// This line throws an ArgumentException. The exception gives no hint what
// argument is meant and why the value is invalid.
byte[] cipherText = CryptographicEngine.Encrypt(cryptographicKey,
plainText.AsBuffer(), initializationVector.AsBuffer()).ToArray();
By the way: I know that ECB is not considered secure, but Microsoft included ECB for certain algorithms, so there must be a reason for it (parallelization, perhaps).
The very same code works with AesCbcPkcs7, for example. Similar code for .NET using AES with ECB and PKCS7, a key length of 256, and an IV the size of the block length also works on the same machine (see the sketch below).
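For comparison, the .NET code referred to above might look roughly like this (a hedged sketch using the classic System.Security.Cryptography API and reusing plainText from the sample above):

// Hedged sketch: .NET AES in ECB mode with PKCS7 padding accepts (and ignores) an IV.
using (Aes aes = Aes.Create())
{
    aes.KeySize = 256;
    aes.Mode = CipherMode.ECB;
    aes.Padding = PaddingMode.PKCS7;
    aes.GenerateKey();
    aes.GenerateIV(); // ignored by ECB, but no exception is thrown
    using (ICryptoTransform encryptor = aes.CreateEncryptor())
    {
        byte[] dotNetCipherText = encryptor.TransformFinalBlock(plainText, 0, plainText.Length);
    }
}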
What could the ArgumentException mean?
I found the answer to the ArgumentException myself: I passed an initialization vector even for algorithms that do not make use of one (such as the ECB cipher modes and RC4). These algorithms require that the initialization vector be passed as null.
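As a minimal sketch, reusing the provider, key and plainText from the question's code above, the ECB call succeeds once the IV argument is null:

// Hedged sketch: same setup as in the question's sample code.
// ECB modes and RC4 do not use an IV, so pass null instead of a buffer.
byte[] cipherText = CryptographicEngine.Encrypt(
    cryptographicKey,
    plainText.AsBuffer(),
    null).ToArray();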
I've been using strings to represent decoded JSON integers larger than 32 bits. It seems that string_of_int is capable of dealing with large integer inputs. So a decoder, written in the Json.Decode namespace:
id: json |> field("id", int) |> string_of_int, /* 'id' is string */
is successfully dealing with integers of at least 37 bits.
Encoding, on the other hand, is proving troublesome for me. The remote server won't accept a string representation, and is expecting an int64. Is it possible to make bs-json support the int64 type? I was hoping something like this could be made to work:
type myData = { id: int64 };
let encodeMyData = (data: myData) => Json.Encode.(object_([("id", int64(data.id))]));
Having to roll my own encoder is not nearly as formidable as a decoder, but ... I'd rather not.
You don't say exactly what problem you have with encoding. The int encoder does literally nothing except change the type, trusting that the int value is actually valid, so I would assume it's the int_of_string operation that causes problems. But that raises the question: if you can successfully decode it as an int, why are you then converting it to a string?
The underlying problem here is that JavaScript doesn't have 64-bit integers. The max safe integer is 2^53 - 1. JavaScript doesn't actually have integers at all, only floats, which can represent a certain range of integers but can't efficiently do integer arithmetic unless they're converted to either 32-bit or 64-bit ints. And so, for whatever reason, probably consistent overflow handling, it was decided in the EcmaScript specification that binary bitwise operations should operate on 32-bit integers. That opened the possibility for an internal 32-bit representation, a notation for creating 32-bit integers, and the possibility of optimized integer arithmetic on those.
So to your question:
Would it be "safe" to just add external int64 : int64 -> Js.Json.t = "%identity" to the encoder files?
No. Because there's no 64-bit integer representation in JavaScript, int64 values are represented as an array of two numbers, I believe, but that is an internal implementation detail that's subject to change. Just casting it to Js.Json.t will not yield the result you expect.
So what can you do then?
I would recommend using float. In most respects it will behave exactly like JavaScript numbers, giving you access to their full range.
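For example, the encoder sketched in the question could then be written against bs-json's Json.Encode.float (a minimal sketch, assuming the id always fits within the float-safe integer range of 2^53 - 1):

/* Hedged sketch: store the id as a float rather than an int64. */
type myData = { id: float };
let encodeMyData = (data: myData) =>
  Json.Encode.(object_([("id", float(data.id))]));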
Alternatively you can use nativeint, which should behave like floats except for division, where the result is truncated to a 32-bit integer.
Lastly, you could also implement your own int_of_string to create an int that is technically out of range by using a couple of lightweight JavaScript functions directly, though I wouldn't really recommend doing this:
let bad_int_of_string = str =>
str |> Js.Float.fromString |> Js.Math.floor_int;
Given a to-be-checked string s, an independently verified "salt" t, an append operator +, and the arbitrarily sized cyclic redundancy check functions crc2X() and crcX() (producing 2X-bit and X-bit checks, respectively), is it the case that crcX(s)+crcX(t+s) has the same data degradation detection capability as crc2X(s) for the string s?
No.
crcX(t+s) can be calculated from crcX(t), crcX(s), and the length of s. Therefore you have added exactly zero error detection information about s by appending crcX(t+s). All you have added is error detection information about t.
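To make this concrete, here is a hedged Python sketch for X = 32 using zlib (crc32_combine below is just an illustrative helper, not zlib's C function): because a CRC is an affine function of its input, crc32(t+s) can be reconstructed from crc32(t), crc32(s), and the length of s alone.

import os
import zlib

def crc32_combine(crc_t, crc_s, len_s):
    # Push crc32(t) through len(s) zero bytes to account for s's length,
    # cancel the zero bytes' own contribution, then XOR in crc32(s).
    zeros = b"\x00" * len_s
    return zlib.crc32(zeros, crc_t) ^ zlib.crc32(zeros) ^ crc_s

t = os.urandom(16)   # the independently verified "salt"
s = os.urandom(100)  # the string to be checked
assert crc32_combine(zlib.crc32(t), zlib.crc32(s), len(s)) == zlib.crc32(t + s)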
crc2X(s), for a properly chosen CRC polynomial, will have better error detection capability than crcX(s), simply because it has more bits.
I have an issue converting a hex string to a Number in AS3.
I have a value
str = "24421bff100317"; decimal value = 10205787172373271
but when I parseInt it I get
parseInt(str, 16) = 10205787172373272
Can anyone please tell me what I am doing wrong here?
Looks like adding one ("24421bff100318") works fine, so I have to assume this is a case of precision error.
Because only a finite amount of numbers can be represented with the available memory, there are times when the computer has to approximate. This is common when working with decimals and very large numbers. It's visible, for example, in this snippet, where the computer apparently can't add basic decimals:
for (var i:Number = 0; i < 3; i += 0.2) {
    trace(i);
}
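The same thing happens with whole numbers once they grow past 2^53: Number is an IEEE-754 double, so 9007199254740992 is the last point at which every integer is still exactly representable. Your value is above that limit, so it snaps to the nearest representable double, which is exactly what parseInt is returning. A hedged sketch (trace output shown as I would expect it):

var limit:Number = 9007199254740992;  // 2^53, still exact
trace(limit + 1);                     // 9007199254740992 (the +1 is lost)
var big:Number = 10205787172373271;   // the literal itself rounds to the nearest double
trace(big);                           // 10205787172373272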
There are a few workarounds if accuracy at this level is critical, namely using data types that store more information ("long" instead of "int" in Java; I believe "Number" might work in AS3, but I have not tested it for your scenario) or, if that fails, breaking the numbers down into smaller parts and adding them together (one way to split the value is sketched below).
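A hedged sketch of the splitting idea: parse the upper and lower halves separately so that each part stays exact. Recombining them into a single Number is only exact while the total stays below 2^53, so for a value like this one you may want to keep or compare the two parts as a pair instead.

var str:String = "24421bff100317";
var high:Number = parseInt(str.substr(0, str.length - 8), 16); // upper bytes, exact
var low:Number = parseInt(str.substr(str.length - 8), 16);     // lower 32 bits, exact
trace(high + ", " + low); // 2376219, 4279239447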
For further reading to understand this topic (since I do think it's fascinating), look up "precision errors" and "data types".
I'm writing a decoder-handler for a binary message format that can be decoded up to the last complete element of the message in the available bytes. So if there's an array of int64 values, you read the highest multiple of 8 bytes that is less than or equal to ByteBuf.readableBytes(). I'm using Netty 5.0.0.Alpha2.
My problem is that it looks as though Netty is discarding the unread bytes left in the ByteBuf, rather than appending new network bytes to them; this means that when I try to resume decoding, it fails because bytes are missing and the stream is corrupt.
Are there ChannelHandlerContext or ByteBuf or Channel methods which I should be invoking to preserve those unread bytes? Or is the current/only solution to save them in a scratch space within the handler myself? I suspect that buffer-pooling is the reason why a different buffer is being used for the subsequent read.
Thanks
Michael
PS: I'm not keen on using the ReplayingDecoder or ByteToMessageDecoder classes as fitting my decoder library around them would be too intrusive (IMHO).
Are there ChannelHandlerContext or ByteBuf or Channel methods which I should be invoking to preserve those unread bytes? Or is the current/only solution to save them in a scratch space within the handler myself? I suspect that buffer-pooling is the reason why a different buffer is being used for the subsequent read.
That's exactly what ByteToMessageDecoder does: it keeps whatever you leave unread in its cumulation buffer and appends the next network read to it before calling decode() again. If I understood correctly, you want your buffer to always contain n * 8 bytes.
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

public class Int64ListDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        // Forward the largest 8-byte-aligned slice; the remainder stays in the
        // decoder's cumulation buffer until more bytes arrive.
        final int outLen = in.readableBytes() / 8 * 8;
        if (outLen > 0) {
            out.add(in.readSlice(outLen).retain());
        }
    }
}
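The decoder is then installed in the pipeline in front of whatever handler consumes the aligned slices. A hedged wiring sketch (MyInt64ListHandler is a hypothetical placeholder for your own handler):

// In your ChannelInitializer:
@Override
protected void initChannel(SocketChannel ch) {
    ch.pipeline().addLast(new Int64ListDecoder());
    ch.pipeline().addLast(new MyInt64ListHandler()); // hypothetical consumer
}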
I'm trying to read a CSV file. The issue is that it is too large, so I have had to use an error handler. Within the error handler, I have to call csv.field_size_limit(), which does not work even by itself, as I keep receiving a 'limit must be an integer' error. From further research, I found that this is probably an install error. I've installed all third-party tools using the package manager, so I am not sure what could be going wrong. Any ideas about how to correct this issue?
import sys
import csv

maxInt = sys.maxsize
decrement = True

while decrement:
    decrement = False
    try:
        csv.field_size_limit(maxInt)
    except OverflowError:
        maxInt = int(maxInt / 10)
        decrement = True

with open("Data.csv", 'rb') as textfile:
    text = csv.reader(textfile, delimiter=" ", quotechar='|')
    for line in text:
        print ' '.join(line)
Short answer: I am guessing that you are on 64-bit Windows. If so, then try using sys.maxint instead of sys.maxsize. Actually, you will probably still run into problems because I think that csv.field_size_limit() is going to try to preallocate memory of that size. You really want to estimate the actual field size that you need and maybe double it. Both sys.maxint and sys.maxsize are much too big for this.
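In code, that short answer might look like this (a hedged sketch; the 16 MB figure is just an assumed upper bound, so substitute your own estimate):

import csv
import sys

# sys.maxint fits in a C long, so this call no longer raises the error ...
csv.field_size_limit(sys.maxint)

# ... but an explicit limit based on the data is the better choice, e.g. fields
# assumed to be at most ~8 MB, doubled for headroom.
csv.field_size_limit(16 * 1024 * 1024)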
Long explanation: Python int objects store C long integers. On all relevant 32-bit platforms, both the size of a pointer or memory offset and a C long are 32 bits. On most UNIXy 64-bit platforms, both the size of a pointer or memory offset and a C long are 64 bits. However, 64-bit Windows decided to keep C long integers at 32 bits while bumping the pointer size up to 64 bits. sys.maxint represents the biggest Python int (and thus C long), while sys.maxsize is the biggest memory offset. Consequently, on 64-bit Windows, sys.maxsize is a Python long integer because the Python int type cannot hold a number of that size. I suspect that csv.field_size_limit() actually requires a number that fits into a bona fide Python int object. That's why you get the OverflowError and the 'limit must be an integer' errors.