I have an application that will require storage of 4-5 trillion records. I'm unfamiliar with MySQL's limitations: is it capable of data volumes this large? Is there going to be an issue with performance?
Would I be better off breaking it into multiple tables?
The limit on your table will be the amount of storage it takes rather than the number of records per se.
On a Win32 system running NTFS, the maximum table size in MySQL is around 2 TB. Assuming you have a rudimentary table with a single field of width 4 bytes, the maximum number of records a table could hold is:
2,000,000,000,000 bytes / 4 bytes per record = 500,000,000,000 = 500 billion records
Since 500 billion is still an order of magnitude short of 4-5 trillion, it would seem that you would not be able to use MySQL for your purposes. You can try looking into a NoSQL solution like Cassandra.
You can also read this SO article for more information.
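If you want a feel for how much storage a table actually consumes as it grows, a query like the following against information_schema reports per-table size. This is a minimal sketch: 'your_db' is a placeholder schema name, and TABLE_ROWS is only an estimate for InnoDB tables.

    -- Approximate on-disk size per table; DATA_LENGTH and INDEX_LENGTH
    -- are standard information_schema.TABLES columns.
    SELECT table_name,
           table_rows,
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.TABLES
    WHERE table_schema = 'your_db'   -- placeholder schema name
    ORDER BY size_gb DESC;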
I am given to understand that video blobs should not be stored in MySQL. But what if the video is very small, with a maximum length of 5 seconds? Is it okay to store small videos (less than 5 seconds) as BLOBs in MySQL?
It's okay to store media data as a BLOB, regardless of length, as long as the content isn't larger than the maximum size for the data type (64KB for BLOB, 16MB for MEDIUMBLOB, 4GB for LONGBLOB).
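As a minimal sketch of what that looks like (table and column names are hypothetical), a clip of a few seconds typically fits within MEDIUMBLOB's 16MB cap:

    -- Short clips as BLOBs; switch content to LONGBLOB if encoded
    -- files can exceed 16 MB.
    CREATE TABLE video_clip (
        id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        mime_type  VARCHAR(100) NOT NULL,   -- e.g. 'video/mp4'
        duration_s DECIMAL(4,1) NOT NULL,   -- clip length in seconds
        content    MEDIUMBLOB NOT NULL,     -- raw video bytes, up to 16 MB
        created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

Note that inserting large values is also bounded by the server's max_allowed_packet setting, so make sure it exceeds your largest clip.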
Many software developers will insist that media belongs in files, not in the database, but there are good reasons to store data in a database too.
This is basically a matter of opinion; the best solution for your project may be different from the best solution for theirs.
I am trying to decide which of two designs will be more efficient for MySQL to handle my data. I can either:
1: Create around 200 tables, each having around 30 rows & 5 columns.
2: Create 1 table, having around 6000 rows & 5 columns.
I am using Laravel for this project and Eloquent will be handling this. Does anybody have any opinions on this matter? I appreciate any/all responses.
Option 2.
At such low row counts, the overhead of 200(!) tables, both in programming effort and in query computation, far outweighs any benefit over the single-table approach. Additionally, assuming you're not storing massive BLOBs, MySQL will happily cache the entire 6,000-row table in RAM.
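As a sketch of what option 2 looks like in practice (all names hypothetical), the ~200 logical sets become values of one indexed discriminator column rather than 200 separate tables:

    CREATE TABLE items (
        id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        set_id SMALLINT UNSIGNED NOT NULL,  -- which of the ~200 logical sets
        col_a  VARCHAR(100),
        col_b  VARCHAR(100),
        col_c  INT,
        INDEX idx_set (set_id)              -- keeps per-set lookups fast
    );

    -- One indexed query replaces choosing among 200 table names:
    SELECT col_a, col_b, col_c FROM items WHERE set_id = 42;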
I have to work with a legacy MySQL database that, for reasons outside of my control, cannot be normalised.
The db consists of one table with 400 columns of various types
The table has 2,000 rows, and grows by 300 per week
Basic calculations like averages and counts will be carried out
The data are graphed across varying time series, and presented on a dashboard (built in Rails)
I can change the type of database (to PostgreSQL or MongoDB), but I can not alter the structure of the table.
New data will be uploaded via CSV file
No data validation is required
There are no joins
I've worked with Rails for some time, but I've never created a model with more than 15 or 20 columns. My concerns are:
Performance implications, if any, of an ActiveRecord model with 400 attributes.
Would using the JSON data type in PostgreSQL, or using MongoDB, be a better fit given the data is not relational?
I'm under the impression that SQL databases excel at calculations, and I'm concerned that using JSON would add performance or complexity overhead when making calculations on the data.
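To make that concern concrete, here is a sketch in PostgreSQL syntax ('readings', 'payload', and 'sensor_1' are hypothetical names) comparing the same average over a jsonb field and over a plain column:

    -- Averaging a value stored inside a jsonb column: every row pays
    -- for key extraction plus a text-to-numeric cast.
    SELECT AVG((payload ->> 'sensor_1')::numeric) AS avg_sensor_1
    FROM readings;

    -- The same average over an ordinary column is simpler, and the
    -- planner can use column statistics and plain indexes directly.
    SELECT AVG(sensor_1) AS avg_sensor_1
    FROM readings_wide;

At a few thousand rows either form will likely be fast; the difference is mostly one of complexity.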
I am working on a project where I need to calculate some average values based on users' interactions on a site.
The number of records whose average needs to be calculated can range from a few to thousands.
My question is: at which threshold would it be wise to store the aggregated data in a separate table, updating that value through a stored procedure every time a new record is generated, instead of just calculating it every time it is needed?
Thanks in advance.
Don't do it until you start having performance problems caused by the time it takes to aggregate your data.
Then do it.
If discovering this bottleneck in production is unacceptable, then run the system in a test environment that accurately matches your production environment and load in test data that accurately matches production data. If you hit a performance bottleneck in that environment that is caused by aggregation time, then do it.
You need to weigh the need for current data against the need for quick data. If you absolutely need current data, then you have to live with longer delays in your queries. If you absolutely need your data as fast as possible, then you will have to deal with older data.
You can time your queries and time the insertion into a separate table and evaluate which seems to best fit your needs.
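A sketch of the two approaches being weighed (table and column names hypothetical):

    -- Computed on demand: always current, but cost grows with row count.
    SELECT AVG(score) FROM interactions WHERE user_id = 123;

    -- Pre-aggregated: keep a running sum and count per user, updated on
    -- each insert (e.g. by a trigger or stored procedure), and derive
    -- the average on read so it never drifts.
    CREATE TABLE user_avg (
        user_id INT UNSIGNED PRIMARY KEY,
        total   BIGINT NOT NULL,
        cnt     INT UNSIGNED NOT NULL
    );

    SELECT total / cnt AS avg_score FROM user_avg WHERE user_id = 123;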
I have 9 tables (total of 43 fields). I have access to many 1GB MySQL databases. Should I split my 9 tables over multiple databases or just pile them all into one database?
The answer depends on how much data is going to be in each table.
A table itself takes up almost no space - it's the rows that make the database size grow.
What you need to do is estimate how large each table is going to get within the foreseeable future - erring on the side of keeping the tables together.
That said, nine tables with 43 fields between them (assuming reasonably sized rows) would need hundreds of thousands of rows each to approach 1 GB. I have a multi-million-row SQLite file that is only 100 MB.
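If you want more than a guess, measure: the average row length recorded in information_schema lets you project how big each table will get. A sketch, where 'mydb' and the 500,000 projected rows are placeholders:

    -- AVG_ROW_LENGTH * projected rows gives a rough future data size
    -- (indexes add more on top).
    SELECT table_name,
           avg_row_length,
           ROUND(avg_row_length * 500000 / 1024 / 1024) AS projected_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'mydb';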
It depends.
How much data are you expecting?
How much more complicated is it if you have to manage multiple databases?
How much slower will it be to query multiple databases and aggregate the results?
How important is performance?
Putting everything in a single database will give you better performance (usually) and is easier to develop. You should do that until your data gets big enough that you outgrow the database.