I hope this isn't too opinionated for SO; it may not have a good answer.
In a portion of a library I'm writing, I have a byte array that gets populated with values supplied by the user. These values might be of type Float, Double, Int (of different sizes), etc. with binary representations you might expect from C, say. This is all we can say about the values.
I have an opportunity for an optimization: I can initialize my byte array with the byte MAGIC, and then whenever no byte of the user-supplied value is equal to MAGIC I can take a fast path, otherwise I need to take the slow path.
So my question is: what is a principled way to go about choosing my magic byte, such that it will be reasonably likely not to appear in the (variously-encoded and distributed) data I receive?
Part of my question, I suppose, is whether there's something like a Benford's law that can tell me something about the distribution of bytes in many sorts of data.
Capture real-world data from a diverse set of inputs that would be used by applications of your library.
Write a quick and dirty program to analyze the dataset. It sounds like what you want to know is which bytes are most frequently totally excluded. So the output of the program would say, for each byte value, how many inputs do not contain it.
This is not the same as least frequent byte. In data analysis you need to be careful to mind exactly what you're measuring!
Use the analysis to define your architecture. If there is no byte value that is reliably absent from the inputs, you can abandon the optimization entirely.
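For illustration, here is a minimal Python sketch of such an analysis program (the command-line usage and Python itself are just assumptions; adapt it to wherever your sample data lives):

    import sys
    from pathlib import Path

    # For each byte value 0..255, count how many input files do NOT contain it.
    # Usage: python analyze_bytes.py sample1.bin sample2.bin ...
    excluded_count = [0] * 256

    for name in sys.argv[1:]:
        present = set(Path(name).read_bytes())   # byte values occurring in this input
        for b in range(256):
            if b not in present:
                excluded_count[b] += 1

    # The byte values most often absent from an input are the best candidates
    # for the MAGIC sentinel.
    best = sorted(range(256), key=lambda b: excluded_count[b], reverse=True)[:10]
    for b in best:
        print(f"byte 0x{b:02x} is absent from {excluded_count[b]} of {len(sys.argv) - 1} inputs")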
I was inclined to use byte 255, but I discovered it is also prevalent in MS Word files. So I now use byte 254 as an EOF code to terminate a file.
Concerning modern web applications and browsers.
When sending from server to client (and vice versa) large objects (1-10MB raw text JSON size), does it make sense to shorten property name from:
    people: {
        name: 'Alex',
        age: '999'
    }
for example to:
    p: {
        n: 'Alex',
        a: '999'
    }
if we have huge number of such objects in the data?
Thus we can significantly reduce the raw data size (by up to 2-3 times). But does it make sense if GZip is used?
It makes some sense, depending on your circumstances.
Obviously, if the value is quite large, there's not much point in shortening the key, but if you have very large JSON objects with relatively small values then shorter keys can save both storage on your system and transmission time.
But you do, of course, need to beware of "obfuscating" the JSON unnecessarily, leading to coding errors. In particular, it's probably best to use meaningful keys during development, then shorten them, if deemed necessary, prior to "ship".
In addition, if gzip (or similar) compression is used the shorter keys will make almost no difference in the size of the compressed object.
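If in doubt, it's easy to measure on your own data. A rough Python sketch of the comparison (the sample object and the repetition count are made up for illustration):

    import gzip, json

    # The same payload twice: once with descriptive keys, once with shortened ones.
    long_keys  = [{"name": "Alex", "age": "999"} for _ in range(10000)]
    short_keys = [{"n": "Alex", "a": "999"} for _ in range(10000)]

    for label, payload in [("long keys", long_keys), ("short keys", short_keys)]:
        raw = json.dumps(payload).encode("utf-8")
        print(f"{label}: raw {len(raw)} bytes, gzipped {len(gzip.compress(raw))} bytes")

    # The raw sizes differ substantially, but the gzipped sizes end up close,
    # because gzip's dictionary handles the endlessly repeated key strings well.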
I'll start off with a solid example: I have a function that generates hashes (32-bit integers) and saves them in localStorage. This is to implement a "don't show me again" feature for common notifications: if the hash is in the list, don't show the notification.
After my first attempt at coding this solution, my localStorage entry looked like this:
616845040,796177849,848184043,1133088406,1205053317,1478518197,1525440546,1686606993,1753347541,1908577591,2056496592,-864967541,-1185668678,-835401591,-1017499054,-559563441,-1842092814,-1069291933,-1887162563
19 hashes, 210 bytes of data.
A little later, I revisited the code. Instead of just dumping the integers as decimal strings, I converted them into actual binary data. In other words, each hash is now a string of four characters in length representing the binary value of the integer. My localStorage entry now looks like this:
$ÄNð/tµ¹2BëCGÓ§X eµZì`"dhõÕqÂ7z¥Ðᅩq¤ᄍT!ºᅫ4ÈᅢZ2R¥½Oメ3äòCæcマ/=
19 hashes, 76 bytes of data (there are some non-printable characters in there)
That's a savings of 63.8%.
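For illustration, here is the packing idea sketched in Python rather than JavaScript (the sample values come from the list above; in the browser the same 4-character strings are built from the hash's bytes):

    import struct

    hashes = [616845040, 796177849, -864967541]        # a few of the values above

    # As decimal text: up to 11 characters per hash, plus a comma separator.
    as_text = ",".join(str(h) for h in hashes)

    # Packed as signed 32-bit integers: exactly 4 bytes per hash.
    as_binary = b"".join(struct.pack("<i", h) for h in hashes)

    print(len(as_text), "bytes as text vs", len(as_binary), "bytes packed")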
Now, I am well aware that localStorage provides, by default, 5MB of storage space. I could easily store tens of thousands of hashes with the first method with no issues at all. But I like being efficient. I certainly wouldn't want a 5MB file on my computer when I could have the same data in 1.8MB (same compression ratio as above). That's why I save all my PNGs as indexed-palette when possible.
Is this a good mentality to have? Or am I just being pedantic? I guess this question could be summarised as: Should I compress, or just not care due to having more resources than I'll ever need?
Pedantic is good when it comes to code. Compress when you can, but make sure that when someone reads your code, it is still clear how the hashes are being stored.
What I mean is, don't sacrifice your code readability and maintainability for efficiency.
When designing a file format for recording binary data, what attributes would you think the format should have? So far, I've come up with the following important points:
have some "magic bytes" at the beginning, to be able to recognize the files (in my specific case, this should also help to distinguish the files from "legacy" files)
have a file version number at the beginning, so that the file format can be changed later without breaking compatibility
specify the endianness and size of all data items; or: include some space to describe endianness/size of data (I would tend towards the former)
possibly reserve some space for further per-file attributes that might be necessary in the future?
What else would be useful to make the format more future-proof and minimize headache in the future?
Take a look at the PNG spec. This format has some very good rationale behind it.
Also, decide what's important for your future format: compactness, compatibility, the ability to embed other formats (different compression algorithms) inside it. Another interesting example is Google's protocol buffers, where the size of the transferred data is king.
As for endianness, I'd suggest you pick one option and stick with it, not allowing different byte orders. Otherwise, reading and writing libraries will only get more complex and slower.
I agree that these are good ideas:
Magic numbers at the beginning. Pretty much required in *nix.
File version number for backwards compatibility.
Endianness specification.
But your fourth point ("possibly reserve some space for further per-file attributes that might be necessary in the future?") is overkill, because #2 lets you add fields as long as you change the version number (and as long as you don't need forward compatibility).
Also, the idea of imposing a block-structure on your file, expressed in many other answers, seems less like a universal requirement for binary files than a solution to a problem with certain kinds of payloads.
In addition to 1-3 above, I'd add these:
simple checksum or other way of detecting that the contents are intact. Otherwise you can't trust magic bytes or version numbers. Be careful to specify which bytes are included in the checksum. Typically you would include all bytes in the file that don't already have error detection.
version of your software (including the most granular number you have, e.g. build number) that wrote the file. You're going to get a bug report with an attached file from someone who can't open it, and they will have no clue when the file was written because the error didn't occur then. But the bug is in the version that wrote it, not in the one trying to read it.
Make it clear in the spec that this is a binary format, i.e. all values 0-255 are allowed for all bytes (except the magic numbers).
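As a rough illustration of the magic bytes, version numbers and checksum above, here is a minimal Python sketch (the field layout, the 'MYFT' magic and the use of CRC-32 are assumptions for illustration, not a recommendation of a specific layout):

    import struct, zlib

    MAGIC = b"MYFT\x1a"              # includes a non-text byte so tools don't mistake the file for text
    FORMAT_VERSION = 1
    WRITER_VERSION = (2, 5, 1234)    # e.g. major, minor, build of the writing software

    def write_file(path, payload):
        header = MAGIC + struct.pack("<H3H", FORMAT_VERSION, *WRITER_VERSION)
        body = header + payload
        with open(path, "wb") as f:
            f.write(body)
            f.write(struct.pack("<I", zlib.crc32(body)))   # checksum covers every preceding byte

    def read_file(path):
        with open(path, "rb") as f:
            data = f.read()
        body, stored = data[:-4], struct.unpack("<I", data[-4:])[0]
        if not body.startswith(MAGIC):
            raise ValueError("not one of our files")
        if zlib.crc32(body) != stored:
            raise ValueError("file is corrupt")
        version = struct.unpack_from("<H", body, len(MAGIC))[0]
        return version, body[len(MAGIC) + struct.calcsize("<H3H"):]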
And here are some optional ones:
If you do need forward compatibility, you need some way of expressing which "chunks" are "optional" (like png does), so that a previous version of your software can skip over them gracefully.
If you expect these files to be found "in the wild", you might consider embedding some clue to find the spec. Imagine how helpful it would be to find the string http://www.w3.org/TR/PNG/ in a png file.
It all depends on the purpose of the format, of course.
One flexible approach is to structure the entire file as TLV (Tag-Length-Value) triplets.
For example, make your file a sequence of records, each record beginning with a 4-byte header:
1 byte = record type
3 bytes = record length
followed by record content
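A rough sketch of writing and walking such records in Python (the 1-byte type / 3-byte length split matches the header above; everything else is assumed):

    import struct

    def write_record(f, rec_type, payload):
        # 1 byte of type, 3 bytes of length (big-endian), then the content.
        assert len(payload) < (1 << 24)
        f.write(struct.pack(">I", (rec_type << 24) | len(payload)))
        f.write(payload)

    def read_records(f):
        while True:
            header = f.read(4)
            if len(header) < 4:
                return
            value = struct.unpack(">I", header)[0]
            rec_type, length = value >> 24, value & 0xFFFFFF
            yield rec_type, f.read(length)

    # Unknown record types can simply be skipped, because the length is always
    # in the header -- which is what makes the format extensible.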
Regarding endianness: if you store an endianness indicator in the file, all your applications will have to support all endianness formats. On the other hand, if you specify a particular endianness for your files, only applications on platforms with non-matching endianness will have to do additional work, and that can be decided at compile time (using conditional compilation).
Another point, taken from the .xz file spec (http://tukaani.org/xz/xz-file-format.txt): one of the first few bytes should be a non-character, "to prevent applications from misdetecting the file as a text file". Not sure how many header bytes are usually inspected by editors and other tools, but putting a non-text byte in the first four or eight bytes seems useful.
One of the most important things to know before even starting is how your file will be used.
Will random or sequential access be the norm?
How often will the data be read?
How often will the data be written?
Will you write out the file in one go, or will you be slowly writing it as data comes in?
Will the file need to be portable? Not all formats need to be.
Does it need to be compatible with other versions? Maybe updating the file is sufficient.
Does it need to be easy to read/write?
Size/speed/complexity tradeoff.
Most answers here give good advice on the portability/compatibility front, so I am not going to add more. But consider the following (often overlooked) things.
Some files are often written and rarely read (backups, logs, ...) and you may want to focus on filesize and easy-writing.
Converting endianness is (relatively) slow. If your file will never leave the host, or leaves it rarely enough that conversion is an acceptable fallback, you can get a significant performance boost by writing in the native byte order. Consider writing a number such as 0x1234 as part of the header so that you can detect (and instruct the user to convert) if this is the case; see the sketch below.
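A minimal sketch of that detection in Python (the function names and the choice of 0x1234 as the marker are just for illustration):

    import struct

    BYTE_ORDER_MARKER = 0x1234

    def write_marker(f):
        # Deliberately written in the host's native byte order.
        f.write(struct.pack("=H", BYTE_ORDER_MARKER))

    def check_marker(f):
        raw = f.read(2)
        if len(raw) == 2 and struct.unpack("=H", raw)[0] == BYTE_ORDER_MARKER:
            return "native"       # written with this host's byte order, no conversion needed
        if len(raw) == 2 and struct.unpack("=H", raw[::-1])[0] == BYTE_ORDER_MARKER:
            return "swapped"      # readable, but every multi-byte field needs byte swapping
        raise ValueError("not a valid header")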
Sometimes easy reading is really useful. If you are writing logs or text documents, consider compressing everything in one go rather than per entry, so that you can zcat | strings the file and see what is inside.
There are many things to keep in mind and designing a good format takes a lot of planning and foresight. The little things such as zcating a file and getting useful information or the small performance boost from using native integers can give your product an edge, however you need to be careful that you don't sacrifice something important to get it.
One way to future-proof the file would be to provide for blocks. Straight after your file header data, you can begin the first block. The block could have a byte or word code for the type of block, then a size in bytes. Now you can arbitrarily add new block types, and you can skip to the end of any block.
I would consider defining a substructure that higher levels use to store data, a little like a mini file system inside the file.
For example, even though your file format is going to store application-specific data, I would consider defining records / streams etc. inside the file in such a way that application-agnostic code is able to understand the layout of the file, but not of course understand the opaque payloads.
Let's get a little more concrete. Consider the usual ways of storing data in memory: generally they can be boiled down to contiguous expandable arrays/lists, pointer/reference-based graphs, or binary blobs of data in particular formats.
Thus, it may be fruitful to define the binary file format along similar lines. Use record headers which indicate the length and composition of the following data, whether it's in the form of an array (a list of identically-typed records), references (offsets to other records in the file), or data blobs (e.g. string data in a particular encoding, but not containing any references).
If carefully designed, this can permit the file format to be used not just for persisting data in and out all in one go, but on an incremental, as-needed basis. If the substructure is properly designed, it can be application agnostic yet still permit e.g. a garbage collection application to be written, which understands the blobs, arrays and reference record types, and is able to trace through the file and eliminate unused records (i.e. records that are no longer pointed to).
That's just one idea. Other places to look for ideas are in general file system designs, or relational database physical storage strategies.
Of course, depending on your requirements, this may be overkill. You may simply be after a binary format for persisting in-memory data, in which case an approach to consider is tagged records.
In this approach, every piece of data is prefixed with a tag. The tag indicates the type of the immediately following data, and possibly its length and name. Lists may be suffixed with an "end-list" tag that has no payload. The tag may have an embedded identifier, so tags that aren't understood can be ignored by the serialization mechanism when it's reading things in. It's a bit like XML in this respect, except using binary idioms instead.
Actually, XML is a good place to look for long-term longevity of a file format. Look at its namespacing capabilities. If you construct your reading and writing code carefully, it ought to be possible to write applications that preserve the location and content of tagged (recursively) data they don't understand, possibly because it's been written by a later version of the same application.
Make sure that you reserve a tag code (or better yet reserve a bit in each tag) that specifies a deleted/free block/chunk.
Blocks can then be deleted by simply changing a block's current tag code to the deleted tag code or set the tag's deleted bit.
This way you don't need to completely restructure your file right away when you delete a block.
Reserving a bit in the tag provides the option of later undeleting the block
(if you leave the block's data unchanged).
For security, however, you might want to zero out the deleted block's data; in that case you would use a special deleted/free tag.
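A rough sketch of flipping such a deleted bit in place, in Python, assuming (purely for illustration) 1-byte tags with the high bit reserved and a 1-byte tag + 3-byte length header:

    DELETED_BIT = 0x80    # assumed: the high bit of every 1-byte tag means "deleted"

    def mark_deleted(path, tag_offset, wipe=False, payload_length=0):
        with open(path, "r+b") as f:
            f.seek(tag_offset)
            tag = f.read(1)[0]
            f.seek(tag_offset)
            f.write(bytes([tag | DELETED_BIT]))       # block can later be undeleted
            if wipe:                                  # zero the payload for security
                f.seek(tag_offset + 4)                # skip past the assumed 4-byte header
                f.write(b"\x00" * payload_length)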
I agree with Stepan that you should choose an endianness, but I would also include an endianness indicator in the file.
If you use an endianness indicator, you might consider using one of the Unicode Byte Order Marks, which can also serve as an indicator of the Unicode text encoding used for any text blocks. The BOM is usually the first few bytes of Unicode text files, so if your BOM is the first entry in your file, some utility might identify your file as Unicode text (I don't think this is much of an issue).
I would treat/reserve the BOM as one of your normal tags (using the UTF-16 BOM if using 16-bit tags, or the UTF-32 BOM if using 32-bit tags) with a zero-length block/chunk.
See also http://en.wikipedia.org/wiki/File_format
I agree with atzz's suggestion of using a Tag Length Value system. For future compatibility, you could store a set of "pointers" to TLV entries at the start (or maybe Tag,Pointer and have the pointer point to a Length,Value; or perhaps Tag,Length,Pointer and then have all the data together elsewhere?).
So, my file could look something like:
magic number/file id
version
tag for first data entry
pointer to first data entry --------+
tag for second data entry |
pointer to second data entry |
... |
length of first data entry <--------+
value for first data entry
...
Magic number, version, tags, pointers and lengths would all be a predefined length, for easy decoding. Say, 2 bytes. Or 4, depending on what you need. They don't all need to be the same (e.g., all tags are 1 byte, pointers are 4, etc.).
The tag lets you know what is being stored. The pointer tells you where (either an offset or absolute value, in bytes), the length tells you how large the data is, and the value is length bytes of data of type tag.
If you use a MyFileFormat v1 decoder on a MyFileFormat v2 file, the pointers allow you to skip sections which the v1 decoder doesn't understand. If you simply skip invalid tags, you can probably simply use TLV instead of TPLV.
I would either hand code something like that, or maybe define my format in ASN.1 and generate a codec (I work in telecommunications, so ASN.1/TLV makes sense to me :-D)
If you're dealing with variable-length data, it's much more efficient to use pointers: Have an array of pointers to your data, ideally near the start of the file, rather than storing the data in an array directly.
Indirection is preferable in this instance because it allows random access, which otherwise is only possible if all items are the same size. If the data were stored directly in an array, without specifying the locations of the records, access would take O(n) time in the worst case; for your file-reading code to access a particular element, it would have to know the lengths of all previous elements, and the only way to find those out is to look at each one. If you're reading the entire file at once, you'd be doing this anyway, so it wouldn't be a problem. But if you only want one thing, this isn't the way to go.
Whereas with an array of pointers, it's O(1) time all around: all you need is an index number, and you can retrieve and follow the pointer to get at your data.
When writing a file using this method, you would of course have to build up your table in memory before doing any writing.
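A rough sketch of that layout in Python (the 8-byte offsets, 4-byte length prefixes and little-endian order are all assumptions):

    import struct

    def write_records(path, records):                  # records: a list of bytes objects
        header = struct.pack("<I", len(records))
        # Build the offset table in memory first, as noted above.
        offsets, pos = [], len(header) + 8 * len(records)
        for rec in records:
            offsets.append(pos)
            pos += 4 + len(rec)                         # 4-byte length prefix + data
        with open(path, "wb") as f:
            f.write(header)
            f.write(struct.pack(f"<{len(records)}Q", *offsets))
            for rec in records:
                f.write(struct.pack("<I", len(rec)))
                f.write(rec)

    def read_record(path, index):                       # O(1): seek into the table, then into the data
        with open(path, "rb") as f:
            count = struct.unpack("<I", f.read(4))[0]
            assert 0 <= index < count
            f.seek(4 + 8 * index)
            offset = struct.unpack("<Q", f.read(8))[0]
            f.seek(offset)
            length = struct.unpack("<I", f.read(4))[0]
            return f.read(length)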
I'm planning to write a program in Ruby to analyse some data which has come back from an online questionnaire. There are hundreds of thousands of responses, and each respondent answers about 200 questions. Each question is multiple-choice, so there are a fixed number of possible responses to each.
The intention is to use a piece of demographic data given by each respondent to train a system which can then guess that same piece of demographic data (age, for example) from a respondent who answers the same questionnaire, but doesn't specify the demographic data.
So I plan to use a vector (in the mathematical sense, not in the data structure sense) to represent the answers for a given respondent. This means each vector will be large (over 200 elements), and the total data set will be huge. I plan to store the data in a MySQL database.
So. 2 questions:
How should I store this in the database? One row per response to a single question, or one row per respondent? Or something else?
I'm planning to use something like the k-nearest neighbour algorithm, or a simple machine learning algorithm like a naive bayesian classifier to learn to classify new responses. Should I manipulate the data purely through SQL or should I load it into memory and store it in some kind of vast array?
First thing that comes to mind: storing it in memory can be absolutely reasonable for processing purposes. Let's say you reserve one byte for each answer: with a million responses and 200 questions, you have a 200 MB array. Not small, but definitely not memory-exhausting on a modern desktop, even with a 32-bit OS.
As for the database, I think you should have three tables: one for the respondents with the demographic data, one for the questions, and, since you have an n:m relation between these tables, a third one with the respondent ID, the question ID and the answer code.
If you don't need additional data for the questions (like the question-text or something) you can even optimize away the question table.
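A sketch of that schema, created through SQLite from Python for illustration (table and column names are placeholders; the same layout carries over to MySQL):

    import sqlite3

    conn = sqlite3.connect("survey.db")
    conn.executescript("""
        CREATE TABLE respondent (
            id     INTEGER PRIMARY KEY,
            age    INTEGER,              -- the demographic field(s) you train on
            gender TEXT
        );
        CREATE TABLE question (
            id     INTEGER PRIMARY KEY,
            text   TEXT                  -- optional; drop this table if unused
        );
        CREATE TABLE answer (            -- the n:m relation
            respondent_id INTEGER REFERENCES respondent(id),
            question_id   INTEGER REFERENCES question(id),
            answer_code   INTEGER,       -- which multiple-choice option was picked
            PRIMARY KEY (respondent_id, question_id)
        );
    """)
    conn.commit()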
Use an array of arrays, in memory. I just created a 500000x200 array and it required about 500MB of RAM. Easily manageable on a 2GB machine, and many, many orders of magnitude faster than using SQL.
Personally, I wouldn't bother putting the data in MySQL at all. Just Marshal it in and out, and/or use JSON or CSV.
If you definitely need database storage, and the comments elsewhere about alternatives are worth considering, then I'd advise against storing 200-odd responses in 200-odd rows: you don't seem to have any obvious need for the flexibility that such a design would give and performance across hundreds of thousands of respondents is going to be dire.
Using a RDBMS gives you the ability to store very large amounts of data, access them in a variety of multi-dimensional ways and extend the structure of your data ad hoc over time. But what you gain in flexibility over a flat file (or Marshalled, or other) option you often lose in performance. I have to confess to reaching for third normal form far too early myself. I guess the questions are, how much flexibility in querying do you expect to need, and how much change do you think your data is likely to undergo? If you think you're at the low end of both, consider leaving the SQL on the shelf. If you abstract your data access into a separate layer then changing should be cheap later. Just a thought...
I'd expect you can encode an individual's response in such a way that it can easily be used in code and it's unlikely to take more than 200 characters, less if you use some sort of packing or bit-mapping. I rather like the idea of bit-mapping, come to think of it - it makes simple comparison using something like Hamming distance an absolute breeze.
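A rough sketch of the bit-mapping idea in Python (assuming, purely for illustration, up to eight options per question, one-hot encoded):

    def pack_answers(answers, options_per_question=8):
        # One-hot: one bit per possible option, so one byte per question here.
        bits = 0
        for i, chosen in enumerate(answers):          # chosen is the option index, 0..7
            bits |= 1 << (i * options_per_question + chosen)
        return bits

    def hamming(x, y):
        # Two respondents who differ on k questions are at bit-distance exactly 2*k.
        return bin(x ^ y).count("1")

    alice = pack_answers([0, 3, 2, 1])
    bob   = pack_answers([0, 1, 2, 1])
    print(hamming(alice, bob) // 2, "answers differ")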
I'm not a great database person, so I'll just answer #2:
If you'd really like to save on memory (or foresee a situation where there will be a lot more data), you could take the best of both worlds: use Ruby as essentially a data-mining tool. Have it pull some of the data from the DB, then write the results back to the DB (probably under a different table or database altogether). This has the benefit of only using as much memory as you want it to.
Don't forget that Ruby is a dynamic object language; as such, a simple integer will probably take up more space than a simple int in C. It needs additional space to record whether it has been 'garnished' with any additional information, methods, etc.