I'm working on storing a JSON snippet in a version 31 QR code so that I can scan it with a smartphone and parse the JSON.
I'm running into a few challenges.
A version 31 QR code is the "densest" (for lack of a better word) code that I can get my Android device to reliably scan. It can store 2,677 alphanumeric characters at the lowest (roughly 7%) error-correction level.
What are my options for compressing my optimized/minified JSON object and encoding a QR code with it? Conceivably, how much more data can I store? Or am I even barking up the right tree?
It all depends, really.
Is Wi-Fi available? If so, put your JSON snippets on a web server and encode their URLs in the QR codes. Problem solved.
If this is for general consumption, then you need to be aware that some phones are better than others. Mine really struggled to scan a version 25 QR code. I'd consider anything higher than version 20 as unreliable.
There's little benefit in using the alphanumeric mode. It only stores uppercase letters, digits 0-9 and a handful of punctuation marks. At 5½ bits per character (11 bits per pair), its storage capacity is almost identical to the corresponding binary mode (8 bits per character).
In a quick test, gzip -n -9 reduced a 545-byte JSON file to 219 bytes (40% of the original size). You could do a lot better than this if you stored your data in a compact binary format instead of a verbose tagged format.
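As a rough sketch of the same experiment in code (assuming zlib is available; the JSON literal below is just a made-up example), the snippet runs DEFLATE, the algorithm behind gzip, over a minified JSON string via zlib's compress2() and prints the size before and after:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Made-up minified JSON payload; substitute your real data. */
    const char *json = "{\"id\":42,\"name\":\"example\",\"values\":[1,2,3]}";
    unsigned char out[1024];
    uLongf out_len = sizeof out;

    /* Z_BEST_COMPRESSION corresponds to gzip -9; compress2() emits the
     * zlib container rather than the gzip one, so the byte count differs
     * from the gzip command by a few bytes of header/trailer. */
    if (compress2(out, &out_len, (const Bytef *)json, strlen(json),
                  Z_BEST_COMPRESSION) != Z_OK)
        return 1;

    printf("%zu bytes -> %lu bytes\n", strlen(json), (unsigned long)out_len);
    return 0;
}

Compile with -lz. For payloads this small the ratio won't be as good as for the 545-byte file above, since DEFLATE needs some input before its dictionary pays off.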
If you're putting these QR codes out in public, you'll want some sort of integrity check or authentication mechanism (a 32-bit checksum at minimum, or a keyed MAC if you need to resist deliberate tampering) to guard against malicious payloads and other tomfoolery.
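As a minimal sketch of the checksum idea (again assuming zlib, whose crc32() is convenient for this), you could append the four CRC bytes to the payload before generating the QR code and verify them after scanning:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    const char *payload = "{\"id\":42}";        /* example payload */

    /* crc32() starts from an initial value and can be fed incrementally. */
    uLong crc = crc32(0L, Z_NULL, 0);
    crc = crc32(crc, (const Bytef *)payload, strlen(payload));

    /* Note: a plain CRC only catches accidental corruption - anyone can
     * recompute it - so use an HMAC over a shared secret if the threat
     * model includes deliberate tampering. */
    printf("CRC-32: %08lx\n", crc);
    return 0;
}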
I notice there is a macro uint4korr in the MySQL/MariaDB source code.
include/byte_order_generic.h
I understand only that this macro is related to byte order, but I looked for comments about it and found nothing. I don't know the meaning of the suffix "korr". What does the abbreviation stand for?
I'd also like to know why the code is implemented this way. What are the effects on different platforms?
"korr" is an abbreviation for "Korrekt" of the phonic and meaning equivalent of the English word "Correct".
The purpose of the code is to provide a uniform byte order of storage and communication components so the storage files are portable between different endian architectures without conversion, and the client/server communication doesn't need to know which endian the other architecture is.
I believe that the related Swedish verb is korrigera, to correct. uint4korr() is kind of the opposite of ntohl(), because it will swap the bytes on a big-endian architecture and not on a little-endian one.
Somewhat related to this, the InnoDB storage engine stores its data in big-endian byte order, so that a simple memcmp() can be used for comparing keys. (It also inverts the sign bit of signed integers due to this.) The InnoDB function mach_read_from_4() is basically ntohl() combined with a 32-bit load via an unaligned pointer. Recent versions of GCC and clang impress me by translating that into the IA-32 or AMD64 instructions mov and bswap or simply movbe.
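For illustration, the idea behind uint4korr() (this is a sketch of the concept, not the exact MySQL macro) can be expressed as a little-endian read that behaves identically on any host:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Read a 32-bit value stored in little-endian byte order from a possibly
 * unaligned pointer. The shifts make the result independent of the host
 * CPU's endianness: on little-endian machines the compiler reduces this
 * to a plain load, on big-endian ones it effectively byte-swaps. */
static uint32_t read_u32_le(const unsigned char *p)
{
    return  (uint32_t)p[0]
          | ((uint32_t)p[1] << 8)
          | ((uint32_t)p[2] << 16)
          | ((uint32_t)p[3] << 24);
}

int main(void)
{
    const unsigned char buf[4] = { 0x2a, 0x00, 0x00, 0x00 };  /* 42, little-endian */
    printf("%" PRIu32 "\n", read_u32_le(buf));  /* prints 42 on any architecture */
    return 0;
}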
I've noticed a lot of payload data being encoded as Base64 before transmission in many IoT use cases, particularly in LPWAN (LoRa, LTE-M, NB-IoT, Sigfox, etc.).
For simplicity's sake, sending JSON payloads makes a lot of sense. It's also my understanding that Base64 encoding adds extra weight to the payload, which seems counter-intuitive for low-bandwidth use cases.
Could someone explain the benefits of using Base64 in IoT (or otherwise) applications?
Thanks!
Well, base64 is generally used to encode binary formats. Since binary is the native representation of data in a computer, it is obviously the easiest format for resource-constrained embedded devices to handle. It's also reasonably compact.
As an algorithm, base64 is fairly simple conceptually and requires very few resources to implement, so it's a good compromise for squeezing binary data through a text-only channel. Building a JSON record, on the other hand, typically requires a JSON library which consumes RAM and code space - not horribly much, but still more than base64.
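To illustrate how little is involved, here is a minimal sketch of a base64 encoder in C (my own toy version, not any particular library's): a 64-entry lookup table, a few shifts, and no dynamic allocation, which is why it fits comfortably on small microcontrollers:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static const char B64[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode len bytes from 'in' into 'out'; 'out' must have room for
 * 4 * ((len + 2) / 3) + 1 characters including the terminating NUL. */
static void base64_encode(const uint8_t *in, size_t len, char *out)
{
    size_t i, o = 0;
    for (i = 0; i + 2 < len; i += 3) {                 /* full 3-byte groups */
        uint32_t n = ((uint32_t)in[i] << 16) | ((uint32_t)in[i + 1] << 8) | in[i + 2];
        out[o++] = B64[(n >> 18) & 63];
        out[o++] = B64[(n >> 12) & 63];
        out[o++] = B64[(n >> 6) & 63];
        out[o++] = B64[n & 63];
    }
    if (len - i == 1) {                                /* 1 trailing byte */
        uint32_t n = (uint32_t)in[i] << 16;
        out[o++] = B64[(n >> 18) & 63];
        out[o++] = B64[(n >> 12) & 63];
        out[o++] = '=';
        out[o++] = '=';
    } else if (len - i == 2) {                         /* 2 trailing bytes */
        uint32_t n = ((uint32_t)in[i] << 16) | ((uint32_t)in[i + 1] << 8);
        out[o++] = B64[(n >> 18) & 63];
        out[o++] = B64[(n >> 12) & 63];
        out[o++] = B64[(n >> 6) & 63];
        out[o++] = '=';
    }
    out[o] = '\0';
}

int main(void)
{
    const uint8_t sample[] = { 0x01, 0x02, 0x03, 0x04 };
    char out[4 * ((sizeof sample + 2) / 3) + 1];
    base64_encode(sample, sizeof sample, out);
    printf("%s\n", out);                               /* prints AQIDBA== */
    return 0;
}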
Not to mention that the data channels you've mentioned are rather starved for bandwidth. E.g. public LoRaWAN deployments are notorious for permitting a device to send a few dozen bytes of data a few dozen times per day.
If I want to encode a data record consisting of, say, a 32-bit timestamp, an 8-bit code specifying the type of data (e.g. temperature, voltage or pressure) and a 32-bit data sample:
struct {
    time_t   time;   /* 32-bit timestamp */
    uint8_t  type;   /* what the sample is: temperature, voltage, pressure, ... */
    uint32_t value;  /* 32-bit data sample */
}
Assuming a 32-bit time_t and a packed layout with no padding, this record uses 9 bytes. It grows to 12 bytes after being encoded with base64 (every 3 input bytes become 4 output characters).
Compare that with a simple JSON record, which is 67 bytes even after leaving out all whitespace:
{
"time": "2012-04-23T18:25:43.511Z",
"type": "temp",
"value": 26.94
}
So 12 B or 67 B - not much of a contest for bandwidth-starved data channels. On a LoRaWAN link that could make the difference between squeezing 5-6 data records into your precious uplink slot or just 1.
Regarding data compression - on a resource constrained embedded device it's much, much more practical to encode data as compact binary instead of transforming it into a verbose format and compressing that.
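For instance, a record like the one above can be serialized by hand into a fixed 9-byte buffer with a few shifts - no parser, no allocator. This is just a sketch; the field order, byte order and example values are arbitrary choices of mine, and both ends simply have to agree on them:

#include <stdint.h>
#include <stdio.h>

/* Pack the record into 9 bytes, little-endian, in the order time/type/value. */
static void pack_record(uint8_t out[9], uint32_t time, uint8_t type, uint32_t value)
{
    out[0] = (uint8_t)(time);
    out[1] = (uint8_t)(time >> 8);
    out[2] = (uint8_t)(time >> 16);
    out[3] = (uint8_t)(time >> 24);
    out[4] = type;
    out[5] = (uint8_t)(value);
    out[6] = (uint8_t)(value >> 8);
    out[7] = (uint8_t)(value >> 16);
    out[8] = (uint8_t)(value >> 24);
}

int main(void)
{
    uint8_t buf[9];
    /* Example values: a Unix timestamp, type 1 = temperature, 2694 = 26.94 C
     * expressed in hundredths of a degree. */
    pack_record(buf, 1335205543u, 1, 2694);
    for (int i = 0; i < 9; i++)
        printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}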
I recently started reading about and employing gRPC in my work. gRPC uses protocol buffers internally as its IDL, and I keep reading everywhere that protocol buffers perform much better and faster than JSON and XML.
What I fail to understand is - how do they do that? What design in protocol-buffers actually makes them perform faster compared to XML and JSON?
String representations of data:
require text encode/decode (which can be cheap, but is still an extra step)
require complex parse code, especially if there are human-friendly rules like "must allow whitespace"
usually involve more bandwidth - so more actual payload to churn - due to embedding of things like names, and (again) having to deal with human-friendly representations (how to tokenize the syntax, for example)
often require lots of intermediate string instances that are used for member lookups, etc.
Both text-based and binary-based serializers can be fast and efficient (or slow and horrible)... it's just that binary serializers have the scales tipped in their favour. This means that a "good" binary serializer will usually be faster than a "good" text-based serializer.
Let's compare a basic example of an integer:
json:
{"id":42}
9 bytes if we assume ASCII or UTF-8 encoding and no whitespace.
xml:
<id>42</id>
11 bytes if we assume ASCII or UTF-8 encoding and no whitespace - and no namespace noise.
protobuf:
0x08 0x2a
2 bytes
Now imagine writing a general purpose xml or json parser, and all the ambiguities and scenarios you need to handle just at the text layer, then you need to map the text token "id" to a member, then you need to do an integer parse on "42". In protobuf, the payload is smaller, plus the math is simple, and the member-lookup is an integer (so: suitable for a very fast switch/jump).
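To make the protobuf side of that concrete, the two bytes can be decoded by hand (illustration only - the real library obviously does a lot more, but the wire format really is this simple for a small varint field):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t payload[] = { 0x08, 0x2a };

    /* The first byte is the field key: field_number = key >> 3,
     * wire_type = key & 7 (0 means varint). The second byte is the
     * varint value 42, which fits in one byte because its top bit is clear. */
    uint8_t  key       = payload[0];
    unsigned field     = key >> 3;    /* 1 -> the "id" field */
    unsigned wire_type = key & 0x7;   /* 0 -> varint */
    unsigned value     = payload[1];  /* 42 */

    printf("field=%u wire_type=%u value=%u\n", field, wire_type, value);
    return 0;
}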
While binary protocols have an advantage in theory, in practice they can lose in performance to JSON or other protocols with a textual representation, depending on the implementation.
Efficient JSON parsers like RapidJSON or jsoniter-scala parse most JSON samples at a speed of 2-8 cycles per byte. They serialize even more efficiently, except for some edge cases like floating-point numbers, where serialization speed can drop to 16-32 cycles per byte.
But for most domains, which don't have a lot of floats or doubles, their speed is quite competitive with the best binary serializers. See the results of benchmarks where jsoniter-scala parses and serializes on par with Java and Scala libraries for ProtoBuf:
https://github.com/dkomanov/scala-serialization/pull/8
I'd argue that binary protocols will typically win in performance over text-based protocols. Ha, you won't find many (or any) video-streaming applications using JSON to represent the frame data. However, any poorly designed data structure will struggle when being parsed. I've worked on many communications projects where the text-based protocols were replaced with "binary protocols".
I'm working on a small Perl script, and I store the data using JSON.
I decode the JSON string using from_json and encode with to_json.
To be more specific:
The data scale could be something like 100,000 items in a hash
The data is stored in a file in the disk.
So to decode it, I'll have to read it from the disk first
And my question is:
There is a huge difference in the speed between the decoding and encoding process.
The encoding process seems to be much faster than the decoding process.
And I wonder what makes that difference?
Parsing is much more computationally expensive than formatting.
from_json has to parse the JSON structures and convert them into Perl data structures; to_json merely has to iterate through the data structure and "print" out each item in a formatted way.
Parsing is a complex topic that is still the focus of CS theory work. At the base level, however, parsing is a two-step operation: you scan the input stream for tokens and then validate the sequence of tokens as a valid statement in the language. Encoding, on the other hand, is a single-step operation: you already know the data is valid, you simply have to convert it to the output representation.
JSON (the module) is not a parser/encoder. It's merely a front-end for JSON::XS (very fast) or JSON::PP (not so much). JSON will use JSON::XS if it's installed, but defaults to JSON::PP if it's not. You might see very different numbers depending on whether you have JSON::XS installed or not.
I could see a pure-Perl parser (like JSON::PP) having very different performance for encoding and decoding, because it's hard to write something optimal with all the interpreter overhead, but the difference should be much smaller with JSON::XS.
It might still be a bit slower to decode with JSON::XS because of all the memory blocks it has to allocate. Allocating memory is a relatively expensive process, and it needs to be done far more often when decoding than when encoding. For example, a Perl string consists of three memory blocks (the scalar head, the scalar body and the string buffer itself). When encoding, memory is only allocated when the output buffer needs to be enlarged.
In attempting to understand the concept of binary, my question is: "What does a stored image or video look like in binary on the hard drive?"
As for how it is physically stored, it depends on the technology of your storage device. For a hard disk drive you can read about it on Wikipedia.
The next layer is how the controller on the storage device sends the data to the motherboard.
Then how the motherboard sends the data to the operating system.
Then how the operating system stores the data on the disk (what file system it uses; NTFS is common in modern Windows installations.)
Finally, what you'll see when reading the data is groups of 8 bits (bytes), which are basically 8 on/off flags that together form 256 possible combinations. That is why most image formats store colours varying from 0-255 for each channel (red, green, blue). Most raw formats are stored linearly, so you can actually try reading them yourself. A raw image where the first pixel is red (assuming it stores the pixels left-to-right, top-to-bottom) would look like this in bits:
11111111 00000000 00000000
red green blue
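As a tiny illustration (the file name is made up), this C snippet writes exactly those three bytes to disk as a 1x1 "raw" RGB image, which you can then inspect with a hex viewer:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint8_t pixel[3] = { 255, 0, 0 };      /* red, green, blue */

    FILE *f = fopen("red_pixel.rgb", "wb");      /* raw file: no header at all */
    if (!f)
        return 1;
    fwrite(pixel, 1, sizeof pixel, f);
    fclose(f);
    return 0;
}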
For more information, you'll have to be more specific.
Every file on disk is basically a number of bits in a row.
The difference between "binary" and "something else" (often called ASCII, or text, or...) is that non-binary is basically human-readable when opened in a text editor. In other words, the bytes in the file map to human-readable letters (and other characters) in some way a generic text editor knows how to handle.
So-called binary files can only be interpreted back into the content they actually contain (image, sound, movie, whatever) when you know the format that was used to map that content to a stream of zeros and ones. This mapping is called the file format, and it is usually hinted at by part of the file name, the extension. You need a piece of software that knows the mapping and can interpret the row of bits back into the original content.
Mind you, this is usually only a hint. Renaming a JPEG image file to have a .mp3 extension doesn't change it into an audio file; it is still just an image file, containing the image (basically the dimensions in pixels plus the colour values for each pixel) encoded into a stream of zeros and ones in the way described by the JPEG file format specification.
Check out the link: Binary File Format
Images are a sequential flow of coloured dots. This isn't hardware dependent: your hard disk will store whatever the OS hands it, in whatever format. However, the OS and applications follow standard file formats, otherwise a JPG image would not be valid across different platforms.
Similarly, videos are streams of images and audio data multiplexed into a sequential flow.
All data on commercial computer systems are stored in binary format (we'll ignore scientific studies into quantum and optical computing).
At the lowest level all files and processing by a computer are performed in binary. This is because our computing systems are powered by the flow of electrons. They either flow or don't. Electric current is on or off. 1 and 0.
The data stored on a hard disk is there due to pulsing of the hard disk write head coil which magnetises spots of hard disk material. These magnetised spots cause a current pulse in the read coil (in actual fact the read and write coils are the same) as the hard disk head passes over them. Hence the data is read as a stream of current pulses, 1s and 0s.
Processors are built to accept and process a finite number of binary "pulses", or data bits, simultaneously (anything from 4 bits upwards). Hence a modern 64-bit PC can process 64 binary data bits, i.e. 64 1s and 0s, at any one time.
At a higher level, although all files are stored as binary and can be read in binary format, we help the processing of them by telling the software what format to read them in, so that it processes the file data in small chunks, e.g. 8 bits or 1 byte for ASCII text.
The operating system provides a template for any given file. This is set up in an extension relation table: according to the file extension, the operating system expects the data to be in a particular format and links it to code that can interpret it. Hence changing a file name extension confuses this process, as the data won't be interpreted correctly. That's why renaming a file from *.jpg to *.exe won't show the image: the system has been told to expect executable code, which the data within the file clearly isn't.
So back to your original question the image within the jpeg file has been encoded as series of 1s and 0s in a specific order.
I'm not sure how exactly they are arranged, but as an example:
Say a picture was captured and stored as a bitmap at a resolution of 800 x 600 in 24-bit colour. The first pixel is stored as 3 bytes (8 bits each) representing the red, green and blue values. The value of each byte dictates the intensity of that colour: 0-255, with 0 being none at all and 255 the highest. Unsigned 255 in binary is 11111111 (I won't confuse you with 2's complement for signed values). So the full picture requires a file of at least 800 x 600 x 3 = 1,440,000 bytes, or about 1,406 kilobytes (a kilobyte being 1024 bytes).
Binary such as 000010101011010101101010101 is stored on a hard drive by changing the polarity of the metallic grains on the disk in specific regions. Binary numbers are read from right to left (the least significant bit is on the right), the opposite of how most people read text.
If your question is really "how does it look": See Figure 4 on this page; it shows high resolution measurements of a hard drive.
Although googletorp's answer does not look very helpful, it's not totally untrue. To store binary data, the only thing you need is the possibility to have two different states for each storage unit (be it an on/off switch, hole or no hole in a punchcard, or, as in the case of hard drives, the direction of ferromagnetic particles).
The Wikipedia page for the BMP file format contains an example (including all hex values) of a 2x2 pixel bitmap image; it should be very good at explaining the basics of the binary representation of an image.
In general if you're really curious how the binary looks for a file you could always use a Hex Viewer and take a look yourself :) I normally use od on Linux to dump the binary information of a file. I'm sure you can google a good Hex Editor for Windows (or maybe someone can suggest one.)
Headers? Every file created contains header information, which is also stored as binary bits along with the data. The header bits of a file hold information such as the header length, the file type and the length of the data. Each application is designed to read certain file types: if an application tries to open a file whose header describes a format it does not support, it fails to read the file. Thus a text file cannot be opened with a media player, because a media player expects a file whose header matches an audio file format. The same goes for picture files.
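As a small sketch of that idea, many file formats start with a well-known "magic number" in their header - JPEG files begin with the bytes FF D8, PNG files with 89 'P' 'N' 'G'. Checking those first bytes (roughly what the Unix file utility does, and independent of the file name extension) might look like this:

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    unsigned char head[4] = { 0 };
    FILE *f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    if (fread(head, 1, sizeof head, f) < 2) {   /* need at least 2 bytes */
        fclose(f);
        return 1;
    }
    fclose(f);

    if (head[0] == 0xFF && head[1] == 0xD8)
        printf("looks like a JPEG image\n");
    else if (head[0] == 0x89 && head[1] == 'P' && head[2] == 'N' && head[3] == 'G')
        printf("looks like a PNG image\n");
    else
        printf("format not recognised by this toy example\n");
    return 0;
}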