Best way to shard a MySQL database

I have a huge number of users, so I need to shard the database into n shards. To proceed, I have the options below:
Divide my data into n shards based on a userId modulo n operation, i.e. if I have 10 shards, userId 1999 will go to shard 1999 % 10 = 9.
Problem:
The problem with this approach is that if the number of shards increases in the future, the mapping to the previously assigned shards will not be maintained.
I can maintain a table with UserId and ShardId.
Problem:
If my users grow to billions in the future, this mapping table itself will need to be sharded, which doesn't seem like a good solution.
I can maintain a static mapping in code, like userIds 0-10000 in shard 1, and so on.
Problem:
As shards and users increase, the code needs to be changed more and more often.
If any specific user in a shard has a huge amount of data, it becomes difficult to split that shard.
These are the three approaches I could find, but each has problems. What would be an alternative or better approach to sharding the MySQL tables that can cope with a growing number of shards and users in the future?

I prefer a hybrid of 1 and 2 (a lookup sketch in code follows the list):
Hash the UserId into, say, 4096 values.
Look up that number in a 'dictionary' that has shard numbers in it.
If a shard gets too full, migrate all the users with some hash number to another shard.
If you add a shard, migrate a few hash numbers to it - preferably from busy shards.
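To make the hybrid concrete, here is a minimal sketch in Python of the bucket-to-shard lookup. The md5-based hash and the dictionary contents are illustrative assumptions; in practice the dictionary would live in a small central table and be cached by every client.

    import hashlib

    N_BUCKETS = 4096  # fixed forever; only the dictionary below ever changes

    # Hypothetical in-memory copy of the dictionary, e.g. starting with 8 shards.
    # In practice it is loaded from a central table and refreshed when it changes.
    shard_dictionary = {bucket: bucket % 8 for bucket in range(N_BUCKETS)}

    def bucket_for(user_id):
        # Hash the UserId into one of 4096 buckets.
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return int(digest, 16) % N_BUCKETS

    def shard_for(user_id):
        # Bucket number -> shard number via the dictionary.
        return shard_dictionary[bucket_for(user_id)]

    print(shard_for(1999))  # stable until someone repoints that bucket's entry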
This forces you to write a script for moving users, and to make it robust. Once you have that, a lot of other admin tasks become 'simple':
Retire a machine
Upgrade the OS (one by one across shards)
Upgrade whatever software is on the machines
Migrate a hash number that is bulky but not busy to an old, slow shard that has a big disk. Similarly, migrate a hash number that is small but busy to a shard with more cores and faster disks.
Each shard could be an HA cluster (Galera, Group Replication, etc.) of servers for both reliability and read-scaling. (Sharding gives you write-scaling.)
There would need to be a way to distribute the dictionary to all clients "promptly".
All of this works well if you have, say, each hash in 3 different shards for HA. Each of the 3 would be at a different geographic location for robustness. The dictionary would have 4 columns to say where the copies are; the 4th would be used during migrations.
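One possible shape for that dictionary table, purely as a sketch (table and column names are invented): one row per hash bucket, three columns for the shards holding the copies, and a fourth set only while a bucket is being migrated.

    # Hypothetical MySQL DDL for the bucket -> shard dictionary described above.
    SHARD_DICTIONARY_DDL = """
    CREATE TABLE shard_dictionary (
        bucket             SMALLINT UNSIGNED NOT NULL PRIMARY KEY,  -- 0..4095
        shard_copy1        SMALLINT UNSIGNED NOT NULL,  -- first geographic copy
        shard_copy2        SMALLINT UNSIGNED NOT NULL,  -- second geographic copy
        shard_copy3        SMALLINT UNSIGNED NOT NULL,  -- third geographic copy
        shard_migrating_to SMALLINT UNSIGNED NULL       -- set only during a migration
    )
    """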

Scaling Pinterest User tables sharding and ensuring consistency when opening new shards

So this is very much a conceptual question (as much as I'd love to build a billion-user app, I don't think it's going to happen).
I've read Pinterest's article on how they scaled their MySQL fleet a number of times ( https://medium.com/#Pinterest_Engineering/sharding-pinterest-how-we-scaled-our-mysql-fleet-3f341e96ca6f ) and I still don't get how they would "open up new shards" without affecting existing users.
The article states that every table is on every shard, including the User table.
So I'm assuming that when a user registers and they are assigned a random shard, this has to be done via a function that will always return the same result regardless of the number of shards.
e.g. if I sign up with test@example.com, they would potentially use that email to work out the shard id, and this would have to take into consideration the number of currently 'open' shards. My initial assumption was that they would use something like the mod shard they mention later in the article, e.g.
md5($email) % number_of_shards
But as they open up more shards, the result of that function would change.
I then thought perhaps they had a separate DB holding purely user info for authentication purposes, which would also contain a column with the assigned shard_id, but as I say, the article implies that even the user table is on each shard.
Does anyone else have any ideas or insights into how something like this might work?
You are sharding on "user", correct? I see 3 general ways to split up the users.
The modulo approach to sharding has a big problem. When you add a shard, suddenly most users need to move to a different shard.
At the other extreme (from modulo) is the "dictionary" approach. You have some kind of lookup that says which shard each user is on. With millions of users, maintenance of the dictionary becomes a costly headache.
I prefer a hybrid:
Do modulo 4096 (or some suitably large number)
Use a dictionary with 4096 entries. This maps 4096 values into the current number of shards.
You have a package to migrate users from one shard to another. (This is a vital component of the system -- you will use it for upgrades, serious crashes, load balancing, etc.)
Adding a shard involves moving a few of the 4096 to the new shard and changing the dictionary. The users to move would probably come from the 'busiest' shards, thereby relieving the pressure on them.
Yes, item 4 impacts some users, but only a small percentage of them. You can soften the blow by picking 'idle' or 'small' or 'asleep' users to move. This would involve computing some metric for each of the 4096 clumps.
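Here is a rough sketch of item 4 (adding a shard) under that scheme, assuming the same 4096-entry dictionary as above; copy_bucket_rows() is a placeholder standing in for the real migration package, not an actual API.

    def copy_bucket_rows(bucket, src_shard, dst_shard):
        # Placeholder: copy every user whose hash bucket == bucket, then verify.
        raise NotImplementedError

    def add_shard(shard_dictionary, new_shard, buckets_to_move):
        # Repoint a handful of buckets (ideally idle/small ones from the busiest
        # shards) at the new shard, moving the data before flipping each entry.
        for bucket in buckets_to_move:
            src = shard_dictionary[bucket]
            copy_bucket_rows(bucket, src, new_shard)
            shard_dictionary[bucket] = new_shard
            # rows on src can be deleted once traffic has drained from it

    # Example: move two buckets from busy shards to a new shard 9.
    # add_shard(shard_dictionary, new_shard=9, buckets_to_move=[17, 1042])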

Sharding user data across multiple databases on a single database server

I'm a self-taught programmer and I've always followed certain design parameters that were based more on common sense than research when it comes to building systems that scale. However, I just realized one component of my system might not be necessary.
Generally speaking, I break user data into groups and assign it to specific MySQL servers. When a content server behind a load balancer receives a request, I use data from the request (like a userid) to resolve the database where that user's data is stored by querying a central table stored in DynamoDB, which can handle an insane amount of load.
However, I also assign the user data to databases within the server. I'll have 100 databases on each server that all have the same table structure, and I'll assign 250 users to each database.
The logic originally was that a table where each user has 2k entries is going to run way faster with 500k entries than 50 million. However, it occurred to me that breaking up user data this way might not make any sense at all.
Indexes are pretty efficient. I'm sure the database actually has some kind of internal logic that allows it to access data at basically the same speed, right? I've been doing this for ten years, and I just realized this might not be necessary at all. Any thoughts? Can I just make one database with all my tables in it, or should I continue doing things the way I always have, sharding across 100 databases on a server?
This is a little theoretical, so it might be worth understanding the idea of Big-O complexity aka Time Complexity.
A clustered B-Tree index lookup for a single item is O(log(n)), where n is the number of rows in the table. DynamoDB is a hash-based implementation, which puts it much closer to O(1), meaning that its performance does not appreciably change with content size.
Now for the math: log(500k) ≈ 5.7, while log(50 million) ≈ 7.7. Single-row lookups scale REALLY well, as long as you avoid hitting the disk to load the index into memory.
So you are talking about roughly a 25% difference for a single-row lookup. That is significant, but still likely less than the overhead of a round trip to another DB system (like DynamoDB).
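For reference, the arithmetic behind that figure (base-10 logs here; the ratio is the same in any base):

    import math

    small = math.log10(500_000)      # ~5.7
    large = math.log10(50_000_000)   # ~7.7
    print(small, large)
    print(1 - small / large)         # ~0.26, i.e. roughly a 25% difference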
Of course, your mileage may vary, as there are concerns like keeping the index in memory, etc. So it's possible that you would see a difference in a production environment. I highly recommend setting up a test and verifying the performance yourself.

How to increase database performance if there are 100K concurrent users

I am developing a site and I'm concerned about the performance.
In the current system there are transactions like adding 10,000 rows to a single table. It doesn't matter that this takes around 0.6 seconds to insert.
But I am worrying about what happens if there are 100,000 concurrent users and 1000 of the users want to add 10,000 rows to a single table at once.
How could this impact the performance compared to a single user? How can I improve these transactions if there is a large amount of traffic like in this situation?
When write speed is mandatory, the way we tackle it is to get faster drives.
You mentioned transactions; that means you need your data to be durable (the D in ACID). This requirement rules out the MyISAM storage engine or any type of NoSQL, so I'll focus the answer on what goes on in relational databases.
The way it works is this: you get a set number of Input/Output Operations Per Second, or IOPS, per hard drive. Hard drives also have a metric called bandwidth. The metric you are interested in is write speed.
A crude calculation here would be: number of MB per second divided by number of IOPS = how much data you can squeeze into each I/O operation.
For mechanical drives, this magic IOPS number is anywhere between 150 and 300 - quite low. Given their bandwidth of about 100 MB/sec, you get a really small number of writes and little bandwidth per write. This is where Solid State Drives kick in - their IOPS numbers start at about 5,000 (some even go to 80,000), which is awesome for databases.
Connecting these drives in RAID gives you a super quick storage solution. If you are able to squeeze 10,000 inserts into one transaction, the disk will try to push all 10k inserts through a single I/O operation.
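A quick worked version of that crude calculation, using the ballpark figures already quoted (illustrative numbers, not measurements):

    # Ballpark figures quoted above: ~100 MB/s bandwidth, ~200 IOPS (mechanical).
    mb_per_io = 100 / 200
    print(mb_per_io)            # ~0.5 MB you can squeeze into each I/O operation

    # Why one big transaction helps: with autocommit, every insert pays for its
    # own durable commit; batched, 10,000 inserts share a single commit.
    commits_autocommit = 10_000
    commits_one_transaction = 1
    print(commits_autocommit // commits_one_transaction)  # 10,000x fewer commits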
Another strategy is partitioning your table and having multiple drives where MySQL stores the data.
This is as far as you can go with a single MySQL installation. There are strategies for distributing data to multiple MySQL nodes etc. but I assume that's out of scope of your question.
TL;DR: you need quicker disks.
If you are trying to scale to inserting millions of rows per second, you have bigger problems. That could add up to trillions of rows per month. That's hundreds of terabytes before the end of the month. Do you have a big enough disk farm for that? Can you afford enough SSDs for that?
Another thing. With a trillion rows, it is quite challenging to have any indexes other than a simple auto_increment. Without any indexes, how do you plan on accessing the data? A table scan of a trillion rows will take day(s).
Also, you said 100,000 users; are you implying that they are all connected simultaneously? That, too, is a challenge.
What are the users doing to generate 10K rows all at once? What about the network bandwidth?
Etc. Etc.
If you really have a task like this, Sharding is probably the only solution. And that is in addition to SSDs, RAID, IOPs, etc, etc.
A few things you must consider, from both a software and a hardware point of view:
Go for SSD drives to have better I/O.
It's good to have a 10 Gb network if you have that much traffic.
Use MySQL 5.6 or above; they made good performance improvements over previous versions.
Use bulk inserts instead of sequential ones, and it's even better if you can store all the data in a file and use LOAD DATA INFILE. This can be around 20 times faster than regular inserts (sketched below).
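A hedged sketch of the bulk-insert point, assuming the mysql-connector-python driver and a made-up events table; LOAD DATA INFILE is the next step up for very large loads.

    import mysql.connector  # assumes the mysql-connector-python package

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="test")
    cur = conn.cursor()
    rows = [(i, "payload") for i in range(10_000)]

    # Slow: one statement (and, with autocommit, one commit) per row.
    # for r in rows:
    #     cur.execute("INSERT INTO events (id, payload) VALUES (%s, %s)", r)

    # Faster: batch the rows and commit once.
    cur.executemany("INSERT INTO events (id, payload) VALUES (%s, %s)", rows)
    conn.commit()

    # Fastest for very large loads: write the rows to a CSV file and run
    # LOAD DATA [LOCAL] INFILE, as suggested above.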
MySQL provides multiple ways to scale out. Which way you want to go depends on your product requirements.

Can I have several 'similar' database tables to reduce retrieval time

It is best to explain my question in terms of a concrete example.
Consider an order management application that restaurants use to receive orders from their customers. I have a table called orders which stores all of them.
Now every day the tables keep growing in size but the amount of data accessed is constant. Generally the restaurants are only interested in orders received in the last day or so. After 100 days, for example, 'interesting' data is only about 1/100 of the table size; after 1 year it's 1/365 and so on.
Of course, I want to keep all the old orders, but performance for applications that are only interested in current orders keeps degrading. So what is the best way to keep old data from interfering with the data that is 'interesting'?
From my limited database knowledge, one solution that occurred to me was to have two identical tables - order_present and order_past - within the same database. New orders would come into 'order_present', and a cron job would transfer all processed orders older than two days to 'order_past', keeping the size of 'order_present' constant.
Is this considered an acceptable solution to this problem? What other solutions exist?
Database servers are pretty good at handling volume, but performance can be limited by the physical hardware. If it is I/O latency that is bothering you, there are several solutions available. You really need to evaluate what fits your use case best.
For example:
you can Partition the table to distribute it onto multiple physical disks (a partitioning sketch follows this list)
you can do Sharding to put data on to different physical servers
you can evaluate using another Storage Engine that best fits your data and application. MyISAM delivers better read performance compared to InnoDB at the cost of being less ACID-compliant
you can use Read Replicas to delegate all (or most) "select" queries to replicas (slaves) of the main database server (master)
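As a sketch of the partitioning suggestion applied to the orders example (table, column, and partition names are assumptions): range-partitioning by date keeps 'recent' queries on the newest partitions and lets old data be archived or dropped cheaply.

    # Hypothetical DDL; in MySQL the partitioning column must be part of
    # every unique key, hence the composite primary key.
    ORDERS_DDL = """
    CREATE TABLE orders (
        id            BIGINT NOT NULL AUTO_INCREMENT,
        restaurant_id INT NOT NULL,
        created_at    DATE NOT NULL,
        total         DECIMAL(10,2),
        PRIMARY KEY (id, created_at)
    )
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2023    VALUES LESS THAN (TO_DAYS('2024-01-01')),
        PARTITION p2024    VALUES LESS THAN (TO_DAYS('2025-01-01')),
        PARTITION pcurrent VALUES LESS THAN MAXVALUE
    )
    """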
Finally, MySQL Performance Blog is a great resource on this topic.

MyISAM sharding vs. using InnoDB

I have a table with a very high insert rate and update rate, as well as a high read rate. On average there are about 100 rows being inserted and updated per second, and about 1,000 selects per second.
The table has about 100 million tuples. It is a relationship table, so it only has about 5 fields. Three fields contain keys, so they are indexed. All the fields are integers.
I am thinking of sharding the data; however, it adds a lot of complexity, though it does offer speed. The other alternative is to use InnoDB.
The database runs on a RAID 1 of 256 GB SSDs with 32 GB of 1600 MHz RAM and an i7 3770K overclocked to 4 GHz.
The database freezes constantly at peak times, when the load can be as high as 200 rows being inserted or updated and 2,500 selects per second.
Could you guys please point me toward what I should do?
Sharding is usually a good idea for distributing table size. Load problems should generally be addressed with a replicated data environment. In your case your problems are a) a huge table, b) table-level locking, and c) crappy hardware.
InnoDB
If you can use one of the keys on your table as a primary key, InnoDB might be a good way to go, since he'll let you do row-level locking, which may keep your queries from waiting on each other. A good test might be to replicate your table to a test server, try all your queries against him, and see what the performance benefit is. InnoDB has higher resource consumption than MyISAM, so keep that in mind.
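A rough outline of that test, to run against a test server only (rel_table is a stand-in for the real relationship table name):

    # Copy the table, switch the copy to InnoDB, then replay a captured sample
    # of the real SELECT/INSERT/UPDATE workload against it and compare timings.
    TEST_STATEMENTS = [
        "CREATE TABLE rel_table_innodb LIKE rel_table",
        "ALTER TABLE rel_table_innodb ENGINE=InnoDB",
        "INSERT INTO rel_table_innodb SELECT * FROM rel_table",
    ]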
Hardware
I'm sorry bud, but your hardware is crap for the performance you need. Twitter does 34 writes per second at 2.6k QPS. You can't be doing Twitter's volume and think a beefed up gaming desktop is going to cut it. Buy a $15k Dell with some SSD drives and you'll be able to burst 100k QPS. You're in the big times now. It's time to ditch the start-up gear and get yourself a nice server. You do not want to shard. It will be cheaper to upgrade your hardware, and frankly, you need to.
Sharding
Sharding is awesome for splitting up large tables. And that's it.
Let me be clear about the bad. Developing a sharded architecture sucks. You want to do everything possible to not shard. Upgrade hardware, buy multiple servers and set up replication, optimize your code, but for the love of God, do not shard. You are way below the performance line for sharding. When you're pushing a sustained 30k+ QPS, then we can talk sharding. Until that day, NO.
You can buy a medium-range server ($30k Dell PowerEdge) with 5TB of Fusion IO on 16 cores and 256 GB of RAM and he'll take you all the way to 200k QPS.
But if you refuse to listen to me and are going to shard anyway, then here's what you need to do.
Rule 1: Stay on the Same Shard (i.e. Picking a Partition Rule)
Once you shard, you do not want to be accessing data from across multiple shards. You need to pick a partition rule that keeps your query on the same shard as much as possible. Distributing a query (Rule 5) is incredibly painful in distributed data environments.
Rule 2: Build a Shard Map and Replicate it
Your code will need to be able to get to all shards. Create a shard map based on your partition rule that lets your code know where to go to get the data he wants.
Rule 3: Write a Query Wrapper for your Shards
You do not want to manually decide which shard to go to. Write a wrapper that does it for you. You will thank yourself down the road when you're writing code.
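A minimal sketch tying Rules 1-3 together, assuming a Python client, the mysql-connector-python driver, and made-up host names; the modulo partition rule here is only for illustration (a bucket dictionary, as discussed in the answers above, avoids the rehash-on-growth problem).

    import mysql.connector  # assumes the mysql-connector-python package

    # Rule 2: the shard map, replicated to every app server (hard-coded here).
    SHARD_MAP = {
        0: {"host": "db-shard-0", "database": "app"},
        1: {"host": "db-shard-1", "database": "app"},
    }

    # Rule 1: the partition rule -- all rows for one user_a live on one shard.
    def shard_id(user_a):
        return user_a % len(SHARD_MAP)

    # Rule 3: a wrapper so application code never picks a shard by hand.
    def query_for_user(user_a, sql, params=()):
        cfg = SHARD_MAP[shard_id(user_a)]
        conn = mysql.connector.connect(user="app", password="secret", **cfg)
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
        finally:
            conn.close()

    # rows = query_for_user(42, "SELECT * FROM relation WHERE user_a = %s", (42,))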
Rule 4: Auto-balance
You'll eventually need to balance your shards to keep performance optimal. Plan for this beforehand and write your code with the intention that you'll have some cron job which balances your shards for you.
Rule 5: Support Distributed Queries
You inevitably will need to break Rule 1. When that happens, you'll need a query wrapper that can pull data from multiple shards and aggregate (bring) it into one place. The more shards you have, the more likely this will need to be multi-threaded. In my shop, we call this a distributed query (ie. a query which runs on multiple shards).
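A deliberately naive sketch of such a wrapper (reusing the hypothetical SHARD_MAP above): fan the query out to every shard in parallel, merge in the application, then apply sorting and the limit client-side. The hard parts - streaming merges, retries, partial-failure handling - are left out, which is why good distributors are rare.

    from concurrent.futures import ThreadPoolExecutor
    import mysql.connector  # assumes the mysql-connector-python package

    def run_on_shard(cfg, sql, params=()):
        conn = mysql.connector.connect(user="app", password="secret", **cfg)
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
        finally:
            conn.close()

    def distributed_query(shard_map, sql, params=(), order_key=None, limit=None):
        # Fan out to every shard in parallel, then merge in the application.
        with ThreadPoolExecutor(max_workers=len(shard_map)) as pool:
            per_shard = pool.map(lambda cfg: run_on_shard(cfg, sql, params),
                                 shard_map.values())
        merged = [row for rows in per_shard for row in rows]
        if order_key is not None:
            merged.sort(key=order_key)        # ORDER BY applied client-side
        return merged[:limit] if limit is not None else merged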
Bad News: There is no code out there for doing distributed queries and aggregating results. Apache Hadoop tries, but he's terrible. So is HiveDB. A good query distributor is hard to architect, hard to write, hard to optimize. This is a problem billion-dollar-a-year companies deal with. I shit you not, but if you come up with a good wrapper for distributing queries across shards that supports sorting + limit clauses and scales well, you could be a millionaire overnight. Selling it for $300,000? You would have a line outside your door a mile long.
My point here is sharding is hard and it is expensive. It takes a lot of work and you want to do everything humanly possible to not shard. If you must, follow the rules.