I was thinking of making a GPS tracking app that stores clients' GPS coordinates every second in a MySQL database. I would have one row per GPS entry (lat, lon, speed, time, elevation, user_id, item_id, etc.) each second for each user.
My calculation is that it could take 3 to 5 MB of MySQL space per day per user to store all their GPS data.
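As a rough sanity check (the row size here is just an assumption): 86,400 rows per user per day at roughly 40-60 bytes per row, including index overhead, works out to about 3.5-5 MB per user per day, so that estimate is in the right ballpark.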
I don't know how MySQL works internally, but I'm assuming it would certainly slow down if I had thousands of users running this poor database into the ground each day, and the size would reach terabytes.
So I thought I would back up the whole database each day, delete it, and start a new one. This would create a bunch of smaller databases, which might be better than having one extremely large database whose size keeps growing forever.
But then I'm wondering whether duplicate IDs across the databases would be a problem: when a new database is created, all the auto-increment IDs would start over, conflicting with the IDs in the previous databases.
Does anybody know how the big boys do this kind of thing? I'm sure they don't have one incredibly large database with everything in it that keeps growing by terabytes each day. Does database access slow down as the size grows larger and larger? I really don't know how it works.
Thanks
Related
I am making a car tracking system and I want to store the data that each car sends every 5 seconds in a MySQL database. Assume I have 1,000 cars transmitting data to my system every 5 seconds, and the data is stored in one table. At some point I would want to query this table to generate reports for a specific vehicle. I am torn between logging all the vehicles' data in one table and creating a table for each vehicle (1,000 tables). Which is more efficient?
OK: 86,400 seconds per day / 5 = 17,280 records per car per day.
With 1,000 cars that results in 17,280,000 records per day. This is not an issue for MySQL in general.
And a well-designed table will be easy to query.
If you go for one table for each car - what happens when there are 2,000 cars in the future?
But the question is also: how long do you want to store the data?
It is easy to calculate when your database will reach 200 GB, 800 GB, 2 TB, ....
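For example (the ~50 bytes per row is just an assumption, including index overhead):

    17,280,000 rows/day × ~50 bytes/row ≈ 0.86 GB/day
    ≈ 200 GB after ~8 months, ~800 GB after ~2.5 years, ~2 TB after ~6.5 years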
One table, not one table per car. A database with 1000 tables will be a dumpster fire when you try to back it up or maintain it.
Keep the rows of that table as short as you possibly can; it will have many records.
Index that table both on timestamp and on (car_id, timestamp). The second index will allow you to report on individual cars efficiently.
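A minimal sketch of what such a table could look like (column names and sizes are assumptions, not a spec; here the (car_id, timestamp) index doubles as the primary key):

    CREATE TABLE car_position (
        car_id    SMALLINT UNSIGNED NOT NULL,   -- fits up to 65,535 cars
        ts        TIMESTAMP         NOT NULL,
        lat       DECIMAL(8,6)      NOT NULL,   -- ~0.1 m resolution
        lon       DECIMAL(9,6)      NOT NULL,
        speed_kmh SMALLINT UNSIGNED NOT NULL,
        PRIMARY KEY (car_id, ts),               -- per-car reports scan one index range
        KEY idx_ts (ts)                         -- time-based reports across all cars
    ) ENGINE=InnoDB;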
Read https://use-the-index-luke.com/
This is the "tip of the iceberg". There are about 5 threads here and on dba.stackexchange relating to tracking cars/trucks. Here are some further tips.
Keep datatypes as small as possible. Your table(s) will become huge, threatening to overflow the disk and slowing down queries, because bulky rows mean that fewer rows can be cached in RAM.
Do you keep the "same" info for a car that is sitting idle overnight? Think of how much disk space this is taking.
If you are using HDD disks, plan on 100 INSERTs/second before you need to do some redesign of the ingestion process. (1,000/sec for SSDs.) There are techniques that can give you 10x, maybe 100x, but you must apply them.
Will you be having several servers collecting the data, then doing simple inserts into the database? My point is that that may be your first bottleneck.
PRIMARY KEY(car_id, ...) so that accessing data for one car is efficient.
Today, you say the data will be kept forever. But have you computed how big your disk will need to be?
One way to shrink the data drastically is to consolidate "old" data into, say, 1-minute intervals after, say, one month. Start thinking about what you want to keep. For example: min/max/avg speed, not just instantaneous speed. Have an extra record when any significant change occurs (engine on; engine off; airbag deployed; etc.).
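A rough sketch of that consolidation, run monthly (the summary table and its column names are made up for illustration):

    SET @cutoff = NOW() - INTERVAL 1 MONTH;

    -- roll raw per-5-second rows older than the cutoff into 1-minute summaries
    INSERT INTO car_position_minute
           (car_id, minute_start, min_speed, max_speed, avg_speed, samples)
    SELECT car_id,
           FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(ts) / 60) * 60) AS minute_start,
           MIN(speed_kmh), MAX(speed_kmh), AVG(speed_kmh),
           COUNT(*)
    FROM   car_position
    WHERE  ts < @cutoff
    GROUP BY car_id, minute_start;

    -- then remove the raw rows that were just summarised
    DELETE FROM car_position WHERE ts < @cutoff;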
(I probably have more tips.)
My application tracks clicks from adverts shown on remote sites, and redirects users to a product sales page.
I'm currently using MySQL to store click information (date, which link was used, IP address, custom data sent from the advertiser, etc.). The table is getting so big that it no longer fits our needs, which are:
High throughput (the app is processing 5-10 million clicks per day and this is projected to grow)
Ability to report on the data by date range (e.g. how many clicks for link 1 over the past month grouped by country)
My initial idea was to move clicks into Redis (we only need to store them for 30 days, at which point they expire if they don't lead to a sale) and then make a new MySQL table to store generated stats by day, where we just update a counter per link when it's clicked.
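A minimal sketch of that per-day counter table and its upsert (table and column names are placeholders; extra dimensions such as country would need more columns):

    CREATE TABLE link_stats_daily (
        link_id    INT UNSIGNED NOT NULL,
        click_date DATE         NOT NULL,
        clicks     INT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (link_id, click_date)
    ) ENGINE=InnoDB;

    -- one statement per click: insert the first click of the day, or bump the counter
    -- (42 stands in for the clicked link's id)
    INSERT INTO link_stats_daily (link_id, click_date, clicks)
    VALUES (42, CURDATE(), 1)
    ON DUPLICATE KEY UPDATE clicks = clicks + 1;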
When we started using the statistics table, the database quickly fell over because of the number of queries hitting that table.
Would it be best to keep the clicks in Redis and have a separate MongoDB (or other NoSQL DB) for the reporting? Or could Mongo be used to store the whole click (just like we've been doing in MySQL), or is the volume too high?
Also, I remember reading that MongoDB is not good at reclaiming space from deleted records. Would this cause us issues, since 90% of the clicks would be deleted after 30 days anyway?
Thanks
MongoDB alone is enough to solve this problem, compared to storing in Redis and then moving to MongoDB. Since the amount of data is very large, you can create indexes on the timestamp or on a field with high cardinality. That will make your queries fast, and MongoDB also provides aggregation, which helps in generating the reports. I don't think there is any issue with deletion.
The title says most of it: I'm wondering whether MySQL is suitable (and if not, what would do better) for storing this data.
It would most likely be 3 to 6 floating-point numbers delivered every quarter second (somewhere between 1 and 3 GB per year).
At the same time, this data needs to be accessible from MySQL in real time (or close to it), so it can't miss a beat while some (potentially large) queries are running on the database.
Can MySQL handle this? I have no concept of how this kind of database scales or what constitutes "a lot" of information at one time for these databases.
If the database were to receive the same information but clumped into one large update every 5-10 minutes, would that change the answer?
It depends on the table and the hardware. A decent MySQL server would have no problem with this, especially if equipped with an SSD.
One large query inserting a large amount of data would be more efficient, as it means a single index update and so on. There will be a sweet spot somewhere between the 0.25-second updates and a 5-minute bulk insert. Testing will show where it is.
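To illustrate the bulk end of that range, a batched multi-row INSERT might look something like this (table name and columns are assumptions; 5 minutes of quarter-second samples is about 1,200 rows per batch):

    CREATE TABLE sensor_sample (
        ts TIMESTAMP(2) NOT NULL PRIMARY KEY,   -- fractional seconds for the 0.25 s cadence
        v1 FLOAT NOT NULL,
        v2 FLOAT NOT NULL,
        v3 FLOAT NOT NULL
    ) ENGINE=InnoDB;

    -- one round trip and one index-update pass instead of ~1,200 single-row inserts
    INSERT INTO sensor_sample (ts, v1, v2, v3) VALUES
        ('2024-01-01 00:00:00.00', 1.23, 4.56, 7.89),
        ('2024-01-01 00:00:00.25', 1.24, 4.57, 7.90),
        ('2024-01-01 00:00:00.50', 1.25, 4.58, 7.91);
        -- ...one row per sample collected since the last flush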
If you are doing replication, keep in mind that the slave is single-threaded, and these updates would ultimately take longer there. You would also want to go with InnoDB as opposed to MyISAM to avoid table-level locks during writes.
I want to build a MySQL database for storing the ranking of a game every hour.
Since this database will become quite large in a short time, I figured it's important to have a proper design. Therefore some advice would be greatly appreciated.
In order to keep it as small as possible, I decided to log only the first 1500 positions of the ranking. Every ranking of a player holds the following values:
ranking position, playername, location, coordinates, alliance, race, level1, level2, points1, points2, points3, points4, points5, points6, date/time
My approach was simply to grab the values for each of the top 1,500 players every hour with a PHP script and insert them into MySQL, one row per player. So every day the table will grow by 36,000 rows. I will have a second script that deletes every row older than 28 days, otherwise the database would get insanely huge. Both scripts will run as cron jobs.
The following queries will be performed on this data:
The most important one is simply the query for a certain name. It should return all stats for the player for every hour as an array.
The second is a query that returns all players who didn't gain points1 during a certain time period counting back from the latest entry. This should return a list of players who didn't gain points (over the last 24 h, for example).
The third is a query that lists all players who lost a certain amount of points2 (or more) in a certain time period counting back from the latest entry.
The queries shouldn't take a lifetime, so I thought I should probably index playernames, points1 and points2.
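For example, the second query could be something along these lines (table and column names are placeholders, and it assumes one snapshot per hour at consistent times):

    -- players whose points1 did not increase over the last 24 h
    SELECT cur.playername
    FROM   ranking AS cur
    JOIN   ranking AS prev
           ON  prev.playername  = cur.playername
           AND prev.recorded_at = cur.recorded_at - INTERVAL 24 HOUR
    WHERE  cur.recorded_at = (SELECT MAX(recorded_at) FROM ranking)
      AND  cur.points1 <= prev.points1;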
Is my approach to this acceptable or will I run into a performance/handling disaster? Is there maybe a better way of doing this?
Here is where you risk a performance problem:
Your indexes will speed up your reads, but they will considerably slow down your writes, especially since your DB will have over 1 million rows in that one table at any given time. Since your writes are happening via cron, you should be okay as long as you insert your 1,500 rows in batches rather than making one round trip to the DB for every row. I'd also look into query compiling so that you save that overhead as well.
Ranhiru Cooray is correct: you should only store data like the player name once in the DB. Create a players table and use its primary key to reference the player in your ranking table. The same goes for location, alliance and race. I'm guessing those are more or less enumerated values that you can store in another table to normalize your design, and they can be returned in your results with the appropriate JOINs. Normalizing your data will reduce the amount of redundant information in your database, which will decrease its size and increase its performance.
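A minimal sketch of that normalization, assuming made-up table and column names (the ranking table would then store player_id instead of the name):

    CREATE TABLE players (
        player_id  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        playername VARCHAR(50) NOT NULL,
        UNIQUE KEY uq_playername (playername)
    ) ENGINE=InnoDB;

    -- query 1: all hourly stats for one player, via a JOIN
    SELECT p.playername, r.points1, r.points2, r.recorded_at
    FROM   ranking AS r
    JOIN   players AS p ON p.player_id = r.player_id
    WHERE  p.playername = 'SomePlayer'
    ORDER  BY r.recorded_at;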
Your design may also be flawed with regard to the ranking position. Can that not be calculated by the DB when you select your rows? If not, can it be done in PHP? It's the same as with invoice tables: you never store the invoice total because it is redundant; the items/pricing/etc. can be used to calculate the order total.
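On MySQL 8.0 or newer, for instance, the position can be derived at query time with a window function (names are placeholders; on older versions the same can be done in PHP after sorting):

    -- rank within the latest snapshot instead of storing the position
    SELECT player_id,
           RANK() OVER (ORDER BY points1 DESC) AS position
    FROM   ranking
    WHERE  recorded_at = (SELECT MAX(recorded_at) FROM ranking);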
With all the adding/deleting, I'd be sure to run OPTIMIZE TABLE frequently and keep good backups. MySQL tables (if using MyISAM) can become corrupted easily in heavy write/delete scenarios. InnoDB tends to fare a little better in those situations.
Those are some things to think about. Hope it helps.
I have a database called RankHistory that is populated daily with each user's username and rank for the day (rank as in 1,2,3,...). I keep logs going back 90 days for every user, but my user base has grown to the point that the MySQL database holding these logs is now in excess of 20 million rows.
This data is recorded solely for generating a graph showing how a user's rank has changed over the past 90 days. Is there a better way of doing this than keeping this massive database that will just keep growing forever?
How great is the need for historic data in this case? My first thought would be to truncate data older than a certain threshold, or move it to an archive table that doesn't require as frequent or fast access as your current data.
You also mention keeping 90 days of data per user, but the data is only used to show a graph of changes to rank over the past 30 days. Is the extra 60 days' data used to look at changes over previous periods? If it isn't strictly necessary to keep that data (or at least not keep it in your primary data store, as per my first suggestion), you'd neatly cut the quantity of your data by two-thirds.
Do we have the full picture, though? If you have a daily record per user, and keep 90 days on hand, you must have on the order of a quarter-million users if you've generated over twenty million records. Is that so?
Update:
Based on the comments below, here are my thoughts: If you have hundreds of thousands of users, and must keep a piece of data for each of them, every day for 90 days, then you will eventually have millions of pieces of data - there's no simple way around that. What you can look into is minimizing that data. If all you need to present is a calculated rank per user per day, and assuming that rank is simply a numeric position for the given user among all users (an integer between 1 and 200,000, for example), storing twenty million such records should not put unreasonable strain on your database resources.
So, what precisely is your concern? Sheer data size (i.e. hard-disk space consumed) should be relatively manageable under the scenario above. You should be able to handle performance via indexes, to a certain point, beyond which the data truncation and partitioning concepts mentioned can come into play (keep blocks of users in different tables or databases, for example, though that's not an ideal design...)
Another possibility is, though the specifics are somewhat beyond my realm of expertise, you seem to have an ideal candidate for an OLAP cube, here: you have a fact (rank) that you want to view in the context of two dimensions (user and date). There are tools out there for managing this sort of scenario efficiently, even on very large datasets.
Could you run an automated task like a cron job that checks the database every day or week and deletes entries that are more than 90 days old?
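A minimal sketch of what that scheduled clean-up could run (the column name is a placeholder; the LIMIT keeps each pass short and can be repeated until no rows are affected):

    DELETE FROM RankHistory
    WHERE  recorded_on < CURDATE() - INTERVAL 90 DAY
    LIMIT  10000;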
Another option: can you create some "roll-up" aggregate per user based on whatever the criteria are (counts, sales, whatever), all stored by user plus date of activity? Then you could have your pre-aggregated roll-ups in a much smaller table for however long a history you need. Triggers or nightly procedures can run a query for the day and append the results to the daily summary. Then your queries and graphs can go against that without running into performance issues. This would also make it easier to move such records to a historical database archive.
-- uh... oops... that's what it sounded like you WERE doing and STILL had 20 million+ records... is that correct? That would mean you're dealing with about 220,000+ users???
20,000,000 records / 90 days = about 222,222 users
EDIT -- from feedback.
Having 222k+ users, I would seriously consider how important the "ranking" really is for someone in 222,222nd place. I would pare the daily ranking down to, say, the top 1,000. Again, I don't know the importance, but if someone doesn't make the top 1,000, does it really matter???