Optimization of MySQL table size

I have been thinking about a problem.
In C, the compiler pads a struct so that its size is a multiple of its alignment requirement (typically the size of its widest member).
e.g.:
struct text {
    int  index;   /* assume int is 4 bytes */
    char word[8];
};  /* the members occupy only 12 bytes, yet I'm told the compiler may allocate 16 bytes for this struct */
Therefore, I am wondering whether the record size (thanks to Gordon Linoff) in MySQL runs into the same problem?
Moreover, how can I optimize MySQL by controlling the table size?

First, you are referring to a record size and not the table size.
Second, databases do not work the way that procedural languages do. Records are stored on pages, which are filled up until no more fit. Then additional pages are used. Typically, there are many records on a page.
You can get an idea of what a page looks like here. They are complicated but basically hidden from the user.
It sounds like you are attempting "premature optimization". This isn't quite the root of all evil, but it is a major distraction to getting things accomplished. In other words, define the record as you need it defined. Do what you want to do. If you have performance problems, then fix those when they arise.
The size of a record is going to be the least of your problems. Databases perform I/O in units of pages, so the difference between 12 and 16 bytes is meaningless for a single record. You still have to read the entire page (which is much larger).
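If you want a rough sense of how your records are actually being packed, MySQL already tracks per-table statistics you can query; a hedged sketch, where the schema and table names are placeholders:

SELECT table_name,
       table_rows,
       avg_row_length,                      -- approximate bytes per record, overhead included
       data_length / 1024 / 1024 AS data_mb
FROM information_schema.tables
WHERE table_schema = 'your_database'
  AND table_name = 'your_table';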

Related

mysql getting rid of redundant values

I am creating a database to store data from a monitoring system that I have created. The system takes a bunch of data points (~4000) a couple of times every minute and stores them in my database. I need to be able to downsample based on the timestamp. Right now I am planning on using one table with three columns:
results:
1. point_id
2. timestamp
3. value
so the query I'd like to run would be:
SELECT point_id,
       MAX(value) AS value
FROM results
WHERE timestamp BETWEEN date1 AND date2
GROUP BY point_id;
The problem I am running into is that this seems super inefficient with respect to memory. Using this structure, each timestamp would have to be recorded 4000 times, which seems a bit excessive to me. The only solutions I have thought of that reduce the memory footprint of my database require me either to use separate tables (which, to my understanding, is super bad practice) or to store the data in CSV files, which would require me to write my own code to search through the data (which, to my understanding, requires me not to be a bum... and would probably be substantially slower to search). Is there a database structure I could implement that doesn't require me to store so much duplicate data?
A database with your data structure is going to be less efficient than custom code. Guess what: that is not unusual.
First, though, I think you should wait until this is actually a performance problem. A timestamp with no fractional seconds requires 4 bytes (see here). So, a record would have, say 4+4+8=16 bytes (assuming a double floating point representation for value). By removing the timestamp you would get 12 bytes -- savings of 25%. I'm not saying that is unimportant. I am saying that other considerations -- such as getting the code to work -- might be more important.
Based on your data, the difference is between 184 Mbytes/day and 138 Mbytes/day, or 67 Gbytes/year and 50 Gbytes/year. You know, you are going to have to deal with biggish data issues regardless of how you store the timestamp.
Keeping the timestamp in the data will allow you other optimizations, notably the use of partitions to store each day in a separate file. This should be a big benefit for your queries, assuming the where conditions are partition-compatible. (Learn about partitioning here.) You may also need indexes, although partitions should be sufficient for your particular query example.
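For illustration only, a daily RANGE-partitioning sketch along those lines for the results table from the question; the partition names and boundary dates are placeholders and would normally be added on a rolling basis:

CREATE TABLE results (
    point_id    INT UNSIGNED NOT NULL,
    `timestamp` TIMESTAMP NOT NULL,
    value       DOUBLE NOT NULL,
    PRIMARY KEY (point_id, `timestamp`)   -- the partitioning column must appear in every unique key
)
PARTITION BY RANGE (UNIX_TIMESTAMP(`timestamp`)) (
    PARTITION p20240101 VALUES LESS THAN (UNIX_TIMESTAMP('2024-01-02 00:00:00')),
    PARTITION p20240102 VALUES LESS THAN (UNIX_TIMESTAMP('2024-01-03 00:00:00')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);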
The point of SQL is not that it is the most optimal way to solve any given problem. Instead, it offers a reasonable solution to a very wide range of problems, and it offers many different capabilities that would be difficult to implement individually. So, the time to a reasonable solution is much, much less than developing bespoke code.
Using this structure each time stamp would have to be recorded 4000 times, which seems a bit excessive to me.
Not really. Date values are not that big and storing the same value for each row is perfectly reasonable.
...use separate tables (which to my understanding is super bad practice)
Who told you that!!! Normalising data (splitting it into separate, linked data structures) is actually good practice - so long as you don't overdo it - and SQL is designed to perform well with relational tables. It would be perfectly fine to create a "time" table and link to the data in the other table. It would use a little more memory, but that really shouldn't concern you unless you are working in a very limited memory environment.
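A minimal sketch of that "time" table idea, with names invented for illustration; the timestamp is stored once per sampling instant and the results rows reference it by id:

CREATE TABLE sample_times (
    time_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    sampled_at TIMESTAMP NOT NULL
);

CREATE TABLE results (
    point_id INT UNSIGNED NOT NULL,
    time_id  INT UNSIGNED NOT NULL,      -- links to sample_times instead of repeating the timestamp
    value    DOUBLE NOT NULL,
    PRIMARY KEY (point_id, time_id)
);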

The effect of field length on querying time

I have a MySQL database in which I keep information about items, including a description.
The thing is that the description column can hold up to 150 characters, which I think is long, and I wondered whether it slows down querying. Also, I wanted to know whether it is recommended to shorten the size of the int columns: if I have a price which is normally not that big, should I limit the column to SMALLINT/MEDIUMINT?
The columns are something like this:
id name category publisher mail price description
Thanks in advance.
Store your character data as varchar() and not as char() and read up on the MySQL documentation on these data types (here). This only stores the characters actually in the description, plus a few more bytes of overhead.
As for whether or not longer fields imply worse-performing queries: that is a complicated subject. Obviously, at the extreme, having maximum-size records is going to slow things down versus a 10-byte record. The reason has to do with I/O performance. MySQL reads in pages, and a page can contain one or more records. The records on the page are then processed.
The more records that fit on the page, the fewer the I/Os.
But then it gets more complicated, depending on the hardware and the storage engine. Disks, nowadays, do read-aheads as do operating systems. So, the next read of a page (if pages are not fragmented and are adjacent to each other) may be much faster than the read of the initial page. In fact, you might have the next page in memory before processing on the first page has completed. At that point, it doesn't really matter how many records are on each page.
And, 200 bytes for a record is not very big. You should worry first about getting your application working and second about getting it to meet performance goals. Along the way, make reasonable choices, such as using varchar() instead of char() and appropriately sized numerics (you might consider fixed point numeric types rather than float for monetary values).
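To make that concrete, here is one possible layout using the column names from the question; the specific types and lengths are guesses for illustration, not a prescription:

CREATE TABLE items (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    category    VARCHAR(50),
    publisher   VARCHAR(100),
    mail        VARCHAR(254),
    price       DECIMAL(10,2),            -- fixed-point for money rather than FLOAT
    description VARCHAR(150)              -- stores only the characters present, plus 1-2 length bytes
);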
It is only you who considers 150 characters long - the database most likely does not, as databases are designed to handle much more at once. Do not consider sacrificing your data for "performance". If the nature of your application requires you to store up to 150 characters of text at once, don't be afraid to do so, but do look up optimization tips.
Using proper data types, though, can help you save space. For instance, if you have a field which is meant to store values 0 to 20, there's no need for an INT field type. A TINYINT will do.
The documentation lists the data types and provides information on how much space they use and how they're managed.
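For example, a sketch of the 0-to-20 case with made-up names:

CREATE TABLE ratings (
    item_id INT UNSIGNED NOT NULL,
    score   TINYINT UNSIGNED NOT NULL    -- 1 byte, range 0-255; INT would spend 4 bytes per row
);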

MySQL table: reasonable size before issues start occurring?

This is a bit ambiguous and I think it relies more on your own personal experience, so any input is welcome.
I have a database X and within X is table Y. Table Y has become very large (1.1 million rows) and it cannot be refactored any more than it already has been.
So... in your experience, how much further can this table grow before I begin seeing problems (if any) occurring, and what are those problems likely to be?
Why would a mere 1.1 million rows cause problems? Most (if not all) RDBMSes can handle many, many more (like billions), as long as storage suffices, of course, and as long as the filesystem can handle files of considerable size (FAT32, for example, only supports files up to 4GB).
Also, you need to be more specific about what you're referring to when saying "before I begin seeing problems (if any)". What kind of problems? You might already have problems if you're not using the correct indices, for example, which might slow queries down. That might be a problem but can, in some cases, also be fine.
Another issue that might actually be a problem is something like an auto-increment primary key field of type (unsigned) int, which might overflow at around 2.1 billion (signed) or 4.2 billion (unsigned) rows - but since you're at 1.1 million rows currently, that is way outside of what to worry about now. (The exact values are, of course, 2^31-1 and 2^32-1 for signed and unsigned int respectively.) In that case you'll have to think about using types like bigint or others (maybe even (var)char etc.) for your PK.
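If the key ever did approach that ceiling, widening it is a single statement (though on a large table it rebuilds the table and can take a while); the table and column names here are hypothetical:

ALTER TABLE Y
    MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;   -- raises the ceiling to 2^64-1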
The only MySQL-specific thing of interest here could be: are you using InnoDB or MyISAM? I don't know the exact details since I'm not usually working with MySQL, but I seem to remember that MyISAM can cause trouble (probably in old(er) versions like <5.0 or something). Correct me if I'm wrong. Edit: read up here. MyISAM apparently supports a maximum of 2^32 rows, unless compiled with specific options.
It depends on the operating system being used. For older systems the typical issue is the maximum file size: filesystems and OS APIs that used 32-bit file addressing (e.g. FAT32 on older operating systems) could not handle files beyond the 2-4GB range.
See Maximum table size documentation.

To BLOB or not to BLOB

I am in the process of writing a web app backed by a MySQL database, where one of the tables has the potential to get very large (on the order of gigabytes), with a significant proportion of table operations being writes. One of the table columns needs to store a string sequence that can be quite big. In my tests so far it has reached a size of 289 bytes, but to be on the safe side I want to design for a maximum size of 1 KB. Currently I am storing that column as a MySQL MediumBlob field in an InnoDB table.
At the same time I have been googling to establish the relative merits and demerits of BLOBs vs other forms of storage. There is a plethora of information out there, perhaps too much. What I have gathered is that InnoDB stores the first few hundred bytes (767 if memory serves me right) of the BLOB in the table row itself and the rest elsewhere. I have also got the notion that if a row has more than one BLOB column (which my table does not) then the "elsewhere" is a different location for each BLOB. Apart from that, I have got the impression that accessing BLOB data is significantly slower than accessing row data (which sounds reasonable).
My question is just this - in light of my BLOB size and the large potential size of the table, should I bother with a BLOB at all? Also, if I use some form of in-row storage instead, will that not have an adverse effect on the maximum number of rows that the table will be able to accommodate?
MySQL is neat and lets me get away with pretty much everything in my development environment. But... that ain't the real world.
I'm sure you've already looked here but it's easy to overlook some of the details since there is a lot to keep in mind when it comes to InnoDB limitations.
The easy answer to one of your questions (maximum size of a table) is 64TBytes. Using variable size types to move that storage into a separate file would certainly change the upper limit on number of rows but 64TBytes is quite a lot of space so the ratio might be very small.
Having a column with a 1KByte string type that is stored inside the table seems like a viable solution, since it's also very small compared to 64TBytes - especially if you have very strict requirements for query speed.
Also, keep in mind that the InnoDB 64TByte limit might be pushed down by the maximum file size for the OS you're using. You can always link several files together to get more space for your table, but then it starts to get a bit more messy.
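As a sketch of keeping the value in-row instead (table and column names are invented), something capped at roughly 1 KB fits in an ordinary variable-length column:

CREATE TABLE sequences (
    id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    seq_data VARBINARY(1024) NOT NULL    -- small enough to normally stay in the row, unlike a large BLOB
) ENGINE=InnoDB;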
If the BLOB data is more than 250KB it is not worth it. In your case I wouldn't bother with BLOBs. Read this.

How important is it to select the smallest possible data type when designing a database?

How much of a difference does using tinyint or smallint (when applicable) instead of just int make? Or restricting a char field to the minimum characters needed?
Do these choices affect performance or just allocated space?
On an indexed field in a significantly large table, the size of your field can have a large effect on performance. On a non-indexed field it is not nearly as important, but the extra data still has to be written.
That said, the downtime for resizing a column on a large table can be several minutes or even several hours, so don't make columns smaller than you can imagine ever needing.
Yes, it affects performance too.
If the indexes are larger, it takes longer to read them from disk, and less can be cached in memory.
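One rough way to watch this (database and table names are placeholders) is the index footprint MySQL reports, which shrinks when indexed columns use narrower types:

SELECT table_name,
       index_length / 1024 / 1024 AS index_mb   -- total size of the table's indexes
FROM information_schema.tables
WHERE table_schema = 'your_database'
  AND table_name = 'your_table';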
I've frequently seen these three schema design defects causing problems:
A varchar(n) field was created with n only big enough for the sample of data that the designer had pulled in, not the global population: fine in unit tests, silent truncations in the real world.
A varchar(n) used where the data is fixed size. This masks data bugs.
A char(n) used for variable-length data. This can provide performance improvements (by enabling the data to sit in-line in the row on disk), but all the client code (and various stored procs/views etc.) needs to cope with whitespace-padding issues (and often it doesn't). Whitespace padding can be difficult to track down, because spaces don't show up well and various libraries/SQL clients suppress them.
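A tiny demonstration of that padding behaviour, with made-up names:

CREATE TABLE pad_demo (
    fixed    CHAR(10),      -- padded with spaces to 10 chars on storage; trailing spaces stripped on read
    variable VARCHAR(10)    -- stores exactly the characters given, trailing spaces kept
);

INSERT INTO pad_demo VALUES ('abc  ', 'abc  ');

-- LENGTH(fixed) typically comes back as 3 while LENGTH(variable) is 5,
-- so client code comparing or concatenating the two must allow for it.
SELECT LENGTH(fixed), LENGTH(variable) FROM pad_demo;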
I've never seen a well-intentioned (i.e. not just using varchar(255) for all cols) but conservative selection of the wrong data size cause significant performance problems. By significant, I mean a factor of 10. I regularly see algorithmic design flaws (missing indexes, sending too much data over the wire, etc.) causing much bigger performance hits.
Both, in some cases. But IMO it's more a question of design than of performance and storage considerations. The reason you don't make everything varchar(...) is that it doesn't accurately reflect what sort of data should be stored there, and it reduces your data's integrity and type-safety.