Store large chart data points in MySQL

I am creating an application that stores ECG data.
I want to eventually graph this data in react but for now I need help storing it.
The biggest problem is storing the data points that will go along the x and y axes of the graph. The x axis is time and the y axis is the measured value; there are no hard limits, but as it's basically a heart rhythm most points will lie close to 0.
What is the best way to store the x and y data?
An example of the y data : [204.77, 216.86 … 3372.872]
The files that I will be getting this data from can contain millions of data points, depending on the sampling rate and the time the experiment took.
What is the best way to store this type of data in MySQL? I cannot use any other database, as none are installed on the server this will be hosted on.
Thanks

Well, as you said there are millions of points, so
JSON is the best way to store these points.
The space required to store a JSON document is roughly the same as for LONGBLOB or LONGTEXT.
Please have a look at this:
https://dev.mysql.com/doc/refman/8.0/en/json.html
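For illustration, a minimal sketch of a JSON-column table (table and column names are invented here, and mysql-connector-python is just one way to talk to the server):

# Minimal sketch: one JSON column holding the whole list of y values.
# Table/column names are invented; adjust to your schema.
import json
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(user="app", password="...", database="ecg")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS recording (
        id       INT AUTO_INCREMENT PRIMARY KEY,
        y_values JSON NOT NULL
    )
""")
cur.execute("INSERT INTO recording (y_values) VALUES (%s)",
            (json.dumps([204.77, 216.86, 3372.872]),))
conn.commit()

The answer below goes into why a single JSON document per recording runs into size limits once a recording contains millions of points.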

The JSON encoding of your sample data would take 7-8 bytes per reading. Multiply that by the number of readings you will get at a single time. There is a practical limit of 16MB for a string being fed to MySQL. That seems "too big".
A workaround is to break the list into, say, 1K points at a time. Then there would be several rows, each row being manageable. There would be virtually no limit on the number of points you could store.
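A rough sketch of that chunking, assuming a composite key of (recording_id, chunk_no); every name here is invented for illustration:

# Split a full recording into 1K-point JSON chunks, one row per chunk.
import json

CHUNK = 1000

def chunk_rows(recording_id, y_values, chunk=CHUNK):
    # Yield (recording_id, chunk_no, json_chunk) tuples ready for INSERT.
    for chunk_no, start in enumerate(range(0, len(y_values), chunk)):
        yield recording_id, chunk_no, json.dumps(y_values[start:start + chunk])

# Assuming a table like:
#   CREATE TABLE reading_chunk (
#       recording_id INT UNSIGNED NOT NULL,
#       chunk_no     INT UNSIGNED NOT NULL,
#       y_values     JSON NOT NULL,
#       PRIMARY KEY (recording_id, chunk_no)
#   );
# cur.executemany(
#     "INSERT INTO reading_chunk (recording_id, chunk_no, y_values) "
#     "VALUES (%s, %s, %s)",
#     list(chunk_rows(42, samples)))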
FLOAT is 4 bytes, but you would need a separate row for each reading, so figure roughly 25 bytes per row including overhead. Size is not a problem, but two other issues arise: about 7 significant digits is the limit of FLOAT's precision, and fetching a million rows will not be very fast.
DOUBLE is 8 bytes, 16 digits of precision.
DECIMAL(6,2) is 3 bytes and overflows above 9999.99.
Considering that a computer monitor has less than 4 digits of precision (4K pixels < 10^4), I would argue for FLOAT as "good enough".
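For reference, that one-row-per-reading FLOAT layout might look something like this (all names are illustrative; the time axis is left implicit as a sample number):

# One row per reading; time is reconstructed as sample_no / sampling_rate.
CREATE_SAMPLES = """
CREATE TABLE ecg_sample (
    recording_id INT UNSIGNED NOT NULL,
    sample_no    INT UNSIGNED NOT NULL,  -- position within the recording
    y            FLOAT NOT NULL,         -- ~7 significant digits
    PRIMARY KEY (recording_id, sample_no)
) ENGINE=InnoDB
"""
# cur.execute(CREATE_SAMPLES)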
Another option is to take the JSON string, compress it, then store that in a LONGBLOB. The compression will average about 2.5 bytes per reading, and the row for one complete recording will be a few megabytes.
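Here is a rough sketch of that compress-then-store option, assuming zlib; the table layout and names are invented, and your actual compression ratio will depend on the data:

# Compress the JSON-encoded reading list and store it as a single LONGBLOB.
import json
import zlib

def pack_readings(y_values, level=6):
    return zlib.compress(json.dumps(y_values).encode("ascii"), level)

def unpack_readings(blob):
    return json.loads(zlib.decompress(blob).decode("ascii"))

# Stored in e.g.:  recording(id INT PRIMARY KEY, y_blob LONGBLOB NOT NULL)
# cur.execute("INSERT INTO recording (id, y_blob) VALUES (%s, %s)",
#             (42, pack_readings(samples)))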
I have experienced difficulty INSERTing a row bigger than 1MB. Changing a setting (probably max_allowed_packet) let me go to 16MB; I have not tried any row bigger than that. If you run into trouble there, start a new question on just that topic. I will probably come along and explain how to chunk up the data, thereby allowing a "row" spread over multiple database rows that could effectively be even bigger than 4GB, which is the hard limit for JSON, LONGTEXT and LONGBLOB.
You did not mention the X values. Are you assuming they are evenly spaced? If you need to store X,Y pairs, the computations above get a bit messier, but the figures I have given should be enough to start the analysis.

Related

Cracking a binary file format if I have the contents of one of these files

I have about 300 measurements (each stored in a dat file) that I would like to read using MATLAB or Python. The files can be exported to text or csv using a proprietary program, but this has to be done one by one.
The question is: what would be the best approach to crack the format of the binary file using the known content from the exported file?
Not sure if this makes any difference in making the cracking easier, but the files are just two columns of (900k) numbers, and from the dat files' size (1,800,668 bytes) it appears as if each number is stored in 16 bits (a float?), plus some other information (possibly a header).
I tried using HEX-Editor, but wasn't able to pick up any trends from there.
Lastly, I want to make sure to specify that these are measurements I made and the data in them belongs to me. I am not trying to obtain data that I am not supposed to.
Thanks for any help.
EDIT: Reading up a little more, I realized that there may be some kind of compression going on. When you look at the data in StreamWare, it shows 7 decimal places, leading me to believe it is a single-precision value (4 bytes). However, the size of the files suggests that each value only takes 2 bytes.
After thinking about it a little more, I finally figured it out. This is very specific, but just in case another Dantec StreamWare user runs into the same problem, it could save him/her a little time.
First, the data is actually only a single vector. The time column is calculated from the length of the recorded signal and the sampling frequency. That information is probably in the header (but I wasn't able to crack that portion).
To obtain the values in MATLAB, I skipped the header bytes using fseek(fid, 668, 'bof'), then I read the data as uint16 using fread(fid, 900000, 'uint16'). This gives you integers.
To get the float value, all you have to do is divide by 2^16 (it's a 16 bit resolution system) and multiply by ten. I assume the factor of ten depends on the range of your data acquisition system.
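Since the question mentions Python as an alternative to MATLAB, here is a rough equivalent of the snippet above; the little-endian byte order is an assumption, so flip it to '>u2' if the values come out wrong:

# Skip the 668-byte header, read 900,000 uint16 samples, rescale to floats.
import numpy as np

def read_streamware_dat(path, header_bytes=668, n_samples=900_000):
    with open(path, "rb") as f:
        f.seek(header_bytes)                      # skip the (uncracked) header
        raw = np.fromfile(f, dtype="<u2", count=n_samples)
    # 16-bit resolution mapped onto a 10-unit range; as noted above, the
    # factor of ten likely depends on the DAQ range settings.
    return raw.astype(np.float64) / 2**16 * 10.0

# signal = read_streamware_dat("measurement_001.dat")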
I hope this helps.

Is PayPal's "PNREF" always 12 characters?

Does anyone know if PayPal's "PNREF" (returned from zero-dollar authorizations) is always 12 characters?
I ask because I want to optimize my MySQL storage.
And also, I trust SO's answers more than PayPal's :-D
Don't "optimize" your storage. Not only do server-grade terabyte sized drives cost just a few hundred dollars, making the cost of storing a handful of bytes nearly zero, but VARCHAR(255) columns only take up as much space as you have content because they are variable length.
If you ran a million transactions and saved ten bytes on each, you've saved all of ten megabytes of data, or about $0.0001 worth of storage. I'm presuming if you've run a million transactions you can afford the bytes. The PayPal fees will be literally several quadrillion times higher.
In actuality there's zero savings between 12 characters in VARCHAR(12) and VARCHAR(255). Internally these are represented as a single length byte plus N bytes for the content. For regular 7-bit values that means 13 bytes per entry.
The only difference is that you're arbitrarily limiting the former to 12 characters, so you will either get truncation errors when you insert longer values (if strict SQL mode is enabled, as it is by default on newer versions of MySQL), or you'll silently lose data and have no idea until it's probably too late to fix it.
Just use VARCHAR(255) so that your code doesn't explode when PayPal decides today's the day to use 14 characters. These things can change without warning and without any logical reason.
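For what it's worth, the column being argued for is just a plain variable-length string; a sketch (table and column names invented):

# VARCHAR is length-prefixed, so only the characters actually stored take
# space; the 255 is an upper bound, not a reservation.
CREATE_PAYPAL_AUTH = """
CREATE TABLE paypal_auth (
    id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    pnref VARCHAR(255) NOT NULL
) ENGINE=InnoDB
"""
# cur.execute(CREATE_PAYPAL_AUTH)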

Sorting an AS3 ByteArray

I am designing an AIR application that needs to hold thousands of records in memory and sort them efficiently by various keys.
I thought of using a ByteArray, since that would avoid all the overhead of normal AS3 objects, and would allow me to use memory more efficiently.
However, the challenge is how to sort the records inside the ByteArray. I have thought of two possibilities:
1- Implement quick-sort or heap-sort in AS3 and sort the array that way. However, I am not sure this will be performant enough. For example, ByteArray has no methods to copy chunks of memory around; it has to be done byte by byte.
2- Create an AIR Native Extension (ANE) that takes the ByteArray and sorts it, using C. The drawback is that it will be harder to implement for all the platforms it needs to run on.
What would you recommend? Do you have any previous experience doing something similar?
I'd say use Array or Vector objects. You can sort Arrays on whatever key you want via sortOn(), and Vectors via sort(), so you can achieve whatever behavior you need, since the latter accepts a compare function as its parameter. And I believe you won't get anywhere with ByteArrays: when you sort objects you are really just rearranging references to them, while a ByteArray holds the actual data.
You should never design anything that HAS to keep hundreds of thousands of anything in memory at once. Offload what you don't currently need. Do the arithmetic: at a single byte each, 100,000 records is roughly 100 KB, and 100,000 ints is about 400 KB, so every byte you add to a record costs you another 100 KB.
If your records hold two 20-character strings (a first and last name), and a String character costs roughly 8 bytes, that's about 320 bytes per record, or around 32 MB of nothing but first and last names. Even if you managed to pack this down to ByteArray level with superhuman uber bit-shifting, you're still only reducing the data by a factor of 8, so you'd have roughly 4 MB for just first and last names and no other data. You could survive on that, except I suspect your records have more data than two strings. Twenty strings? Now you're eating hundreds of megabytes on a machine that may only have 2 GB of memory. Offload everything but 100 records at a time, and you're down to a few tens of kilobytes for those names. And yes, you can load and offload while sorting.
Chunks of memory don't copy any faster than the bytes that make them up; it's all the same work. The reason Vectors of Objects are fast to reorder is that sorting them only moves references (a single 32- or 64-bit pointer each) instead of copying chunks of memory around.
It's also not clear what you're sorting on. Bytes only hold values up to 255, so clearly you're using more than one byte per record. Do you want to evaluate each set of, say, 2000+ bytes against every other set of 2000+ bytes? Something like: "Ah, the last name is bytes 32-83, so extract those bytes, bit-shift every group of 4 bytes by 0, 8, 16 and 24 bits respectively, add them together, build a String from the resulting integer values, do a comparison, now compare bytes 84-124 against the bytes in the next candidate, now transfer bytes 0-2000 to another location..." And what if these records contain variable-length strings or arrays? Oh lord.
Flash is not the place to write in assembly!
Use objects and test the speed and memory consumption. If either makes you cry, use more conventional methods of reducing load, like temporarily offloading data into text files. The ugliest you should get is avoiding objects by storing each individual property in its own Vector (Vector.<String>, Vector.<int>, etc.) and having the same index refer to one record across the board.

Best practice for storing GPS data of a tracking app in mysql database

I have a data model question for a GPS tracking app. When someone uses our app it will save latitude, longitude, current speed, timestamp and burned_calories every 5 seconds. When a workout is completed, the average speed, total time/distance and burned calories of the workout will be stored in a database. So far so good...
What we want is to also store the data that is saved those every 5 seconds, so we can utilize this later on to plot graphs/charts of a workout for example.
How should we store this amount of data in a database? A single workout can contain 720 rows if someone runs for an hour. Perhaps a serialised/gzcompressed data array in a single row? I'm aware, though, that this is considered bad practice...
Would a relational one-to-many / many-to-many model hold up? I know MySQL can easily handle large amounts of data, but we are talking about 720 rows per workout * twice a week * 7,000 users = over 10 million rows a week.
(Of course we could store data only every 10 seconds to halve the number of rows, or every 20 seconds, etc., but it would still be a large amount of data over time, and the accuracy of the graphs would decrease.)
How would you do this?
Thanks in advance for your input!
Just some ideas:
Quantize your lat/lon data. I believe that for technical reasons, the data most likely will be quantized already, so if you can detect that quantization, you might use it. The idea here is to turn double numbers into reasonable integers. In the worst case, you may quantize to the precision double numbers provide, which means using 64 bit integers, but I very much doubt your data is even close to that resolution. Perhaps a simple grid with about one meter edge length is enough for you?
Compute differences. Most values will be fairly large in absolute terms, but consecutive values will be very close together (unless your members run around half the world…), so the differences will be rather small numbers. Furthermore, as long as people run at a constant speed in a constant direction, you will quite often see exactly the same differences. The coarser your spatial grid in step 1, the more likely you are to get exactly the same differences here.
Compute a Huffman code for these differences. You might try encoding lat and long movement separately, or computing a single code with 2d displacement vectors at its leaves. Try both and compare the results.
Store the result in a BLOB, together with the dictionary to decode your Huffman code, and the initial position so you can return data to absolute coordinates.
The result should be a fairly small set of data for each data set, which you can retrieve and decompress as a whole. Retrieving individual parts from the database is not possible, but it sounds like you wouldn't be needing that.
The benefit of Huffman coding over gzip is that you won't have to artificially introduce an intermediate byte stream. Directly encoding the actual differences you encounter, with their individual properties, should work much better.
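Here is a rough sketch of that pipeline (quantize, delta-encode, Huffman-code, pack into a BLOB). The grid size, the way the code table is stored alongside the blob, and all names are assumptions for illustration, not something prescribed by the answer:

import heapq
import json
from collections import Counter

GRID = 1e-5  # about one metre of latitude; pick whatever resolution your app needs

def quantize(points):
    # Turn (lat, lon) doubles into integer grid cells.
    return [(round(lat / GRID), round(lon / GRID)) for lat, lon in points]

def deltas(cells):
    # Keep the first cell absolute, express the rest as (dlat, dlon) differences.
    first = cells[0]
    diffs = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(cells, cells[1:])]
    return first, diffs

def huffman_code(symbols):
    # Build a prefix code {symbol: bitstring} from symbol frequencies.
    heap = [[weight, i, {sym: ""}]
            for i, (sym, weight) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: one distinct difference
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

def encode(points):
    # points: list of (lat, lon) samples taken every 5 seconds; needs >= 2 points.
    first, diffs = deltas(quantize(points))
    code = huffman_code(diffs)
    bits = "".join(code[d] for d in diffs)
    nbits = len(bits)
    padded = bits + "0" * (-nbits % 8)       # pad to a whole number of bytes
    blob = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))
    # Store blob in a BLOB column; keep the start cell, the real bit length and
    # the code table next to it (e.g. as JSON) so the track can be decoded later.
    meta = json.dumps({"start": first,
                       "nbits": nbits,
                       "code": {f"{dx},{dy}": c for (dx, dy), c in code.items()}})
    return blob, meta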

Are SPATIAL Geometry indices performance dependant on the size and density of geometry shapes?

Spatial Indexes
Given a spatial index, is the index's utility (that is to say, the overall performance of the index) only as good as the geometries it indexes?
For example, if I were to insert a million geometry values into a table so that their points are densely located relative to one another, would the index perform better or worse than it would for identical geometry shapes whose relative locations are significantly more sparse?
Question 1
For example, take these two geometry shapes.
Situation 1
LINESTRING(0 0,1 1,2 2)
LINESTRING(1 1,2 2,3 3)
Geometrically they are identical, but their coordinates are offset by a single unit. Imagine this repeated one million times.
Now take this situation,
Situation 2
LINESTRING(0 0,1 1,2 2)
LINESTRING(1000000 1000000,1000001 1000001,1000002 1000002)
LINESTRING(2000000 2000000,2000001 2000001,2000002 2000002)
LINESTRING(3000000 3000000,3000001 3000001,3000002 3000002)
In the above example:
the lines' dimensions are identical to those in situation 1,
the lines have the same number of points,
the lines are identical in size.
However,
the difference is that the lines are massively further apart.
Why is this important to me?
The reason I ask is that I want to know whether I should remove as much precision from my input geometries as I possibly can, and reduce their density and closeness to one another as much as my application allows, without losing accuracy.
Question 2
This question is similar to the first, but instead of asking about geometries that are spatially close to one another, it asks whether the shapes themselves should be reduced to the smallest possible shape that describes what the application requires.
For example, suppose I were to use a SPATIAL index on a geometry datatype to represent dates.
If I wanted to store a range between two dates, I could use a DATETIME datatype in MySQL. However, what if I wanted to use a geometry type instead, representing the date range by converting each individual date into a unix_timestamp()?
For example:
Date("1st January 2011") to Timestamp = 1293861600
Date("31st January 2011") to Timestamp = 1296453600
Now, I could create a LINESTRING based on these two integers.
LINESTRING(1293861600 0,1296453600 1)
If my application is actually only concerned with days, and the number of seconds isn't important for date ranges at all, should I refactor my geometries so that they are reduced to their smallest possible size while still fulfilling what the application needs?
So instead of 1293861600, I would use 1293861600 / (3600 * 24), which happens to be 14975.25.
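As a small sketch of that day-resolution conversion (the exact timestamps depend on the timezone; the 1293861600 above was presumably computed in a non-UTC zone, which is where the .25 fraction comes from):

# Collapse Unix timestamps to day numbers before building the LINESTRING.
from datetime import datetime, timezone

SECONDS_PER_DAY = 3600 * 24

def to_day_number(ts):
    return ts / SECONDS_PER_DAY

start = int(datetime(2011, 1, 1, tzinfo=timezone.utc).timestamp())   # 1293840000
end   = int(datetime(2011, 1, 31, tzinfo=timezone.utc).timestamp())  # 1296432000
wkt = f"LINESTRING({to_day_number(start)} 0, {to_day_number(end)} 1)"
# -> "LINESTRING(14975.0 0, 15005.0 1)"
# cur.execute("INSERT INTO date_range (rng) VALUES (ST_GeomFromText(%s))", (wkt,))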
Can someone help fill in these gaps?
When inserting a new entry, the engine chooses the MBR which would be minimally extended.
By "minimally extended", the engine can mean either "area extension" or "perimeter extension", the former being default in MySQL.
This means that as long as your nodes have non-zero area, their absolute sizes do not matter: the larger MBRs remain larger and the smaller ones remain smaller, and ultimately the nodes end up grouped into the same MBRs regardless of absolute scale.
These articles may be of interest to you:
Overlapping ranges in MySQL
Join on overlapping date ranges
As for density, the MBRs are recalculated on page splits, and there is a high chance that all points too far away from the main cluster will be moved on the first split into their own MBR. That MBR would be large, but within a few iterations it would become a parent to all of the outlying points.
This will decrease the search time for the outlying points and increase the search time for the clustered points by one page seek.