Switching from Redis to MySQL. Good idea?

We are building a SaaS backend for restaurants using Rails. We integrate directly with POS systems, so each POS keeps sending customer orders that we store for later processing. We have this POS integration running at about 1,000 locations, which send us about 3 million individual customer orders per month.
For this write-heavy app, we store all orders in Redis, which is working beautifully. We are growing at an incredible pace; we keep adding new restaurants with hundreds of locations that keep sending us crazy amounts of data. Except there is one problem: Redis keeps running out of memory every month, because everything, even data that doesn't have to be in memory, is in memory.
This is why we are contemplating a switch to MySQL, as we really don't need to keep all the data in memory. Here are the numbers for our current Redis database:
used_memory_human:39.83G
dbsize: 34706870
Here is what we store in Redis as a Hash:
id - integer
location_id - integer
stored_at - timestamp
token - string
transaction_no - integer
menu_items - string (comma-separated list of all menu items that the customer ordered, along with their price & qty)
order_amount - decimal
order_subtotal_amount - decimal
order_amount_payable - decimal
order_datetime - timestamp
employee_id - integer
employee_name - string
pos_type - string
post_version - string
restaurant_id - integer
So, we are looking for some advice on:
Is moving from Redis to MySQL a good idea? How will it affect us in the long run, as we will need to keep updating our indexes & partition scheme to cater to huge demand?
What other databases (relational or non-relational) would be better suited for this use case than Redis?
Or are we all wrong, since Redis is made for storing this type of data, and we should just keep using Redis & upgrading our machines every month?

Data on the web is bound to grow. Any long-term project should anticipate this, and have a strategy for scaling.
As your volume of data or volume of traffic increases, you will find that approximately every order of magnitude growth requires changes to your architecture to handle it. Maybe you can be ahead of the curve a bit, but not forever. And you can't predict where your bottlenecks will be very far in advance.
It's common for a small subset of your data to be important for minute-to-minute work of your app, and you can keep this subset in Redis to take advantage of your current code. Then the rest of the data can be available in another data store, perhaps a bit slower to access, but much easier to handle growth.
You could scrap your current code and move everything to MySQL or another datastore, but keep two things in mind:
There is no database that will allow you to neglect having a scaling strategy. You could use MySQL, or PostgreSQL, or MongoDB, or Hadoop, or anything else, and you will still have the problem that your data is growing faster than a single database on a single server can handle.
It's generally not cost-effective to rewrite your app from the ground up for internal reasons of more efficient development or operations (read Things You Should Never Do, Part I by Joel Spolsky).
I'd recommend keeping your Redis app, but try to move historical data to another datastore.
I think MySQL is a fine choice, and I'm sure it would be capable of handling your data. I work with clients regularly who keep terabytes of data in MySQL and handle tens of thousands of transactions per second. But since you haven't given any details about your usage of the data, I can't offer an opinion about whether MySQL is the best choice. It could be that Hadoop would have advantages, for example.
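To make the hot/cold split suggested above concrete, here is a minimal sketch in Python, using an in-memory dict as a stand-in for the Redis hashes and the stdlib sqlite3 module as a stand-in for MySQL. The field names loosely follow the question's schema, and the 30-day cutoff is an arbitrary assumption:

```python
import sqlite3
import time

# Stand-ins for illustration: a dict plays the role of the Redis hashes,
# and sqlite3 plays the role of MySQL. Only a few fields are shown.
redis_orders = {
    "order:1": {"location_id": 7, "order_amount": 12.50,
                "stored_at": time.time() - 90 * 86400},   # 90 days old
    "order:2": {"location_id": 7, "order_amount": 8.25,
                "stored_at": time.time() - 3600},          # 1 hour old
}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, location_id INTEGER, "
           "order_amount REAL, stored_at REAL)")

def archive_cold_orders(cutoff_seconds=30 * 86400):
    """Move orders older than the cutoff out of the hot store into SQL."""
    now = time.time()
    cold = [k for k, v in redis_orders.items()
            if now - v["stored_at"] > cutoff_seconds]
    for key in cold:
        o = redis_orders.pop(key)   # against real Redis: HGETALL, then DEL
        db.execute("INSERT INTO orders VALUES (?, ?, ?, ?)",
                   (key, o["location_id"], o["order_amount"], o["stored_at"]))
    db.commit()
    return len(cold)

archive_cold_orders()
```

A job like this run nightly keeps the Redis working set bounded while the full history remains queryable in the relational store.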

Is moving from Redis to MySQL a good idea? How will it affect us in the long run, as we will need to keep updating our indexes & partition scheme to cater to huge demand?
My vote is that moving off of Redis is probably a good idea if you're concerned about hosting costs due to the necessity of keeping all data in memory. This doesn't have to involve moving all the data off of Redis; perhaps just the historical "colder" data where you care less about latency. The other advantage of moving the cold data off Redis is that any bugs found during the migration are likely to have a less significant impact.
What other databases (relational or non-relational) would be better suited for this use case than Redis?
This is a tough question to answer without a better understanding of your use case. That said, I think any number of scalable relational DBs are probably good enough for your workload. A key requirement in my mind would be the ability to easily add/remove machines to scale as needed. A personal favorite is CitusDB, but there are various options.
One trade-off to be aware of when moving to a relational database is that you'll potentially have more work to do when managing structured data than you would with Redis's key/value store. For example, adding new fields could involve schema changes. PostgreSQL (and CitusDB) has support for some semi-structured data types which makes this easier, and I'm sure there are other relational databases with similar features.

If MySQL (or any other traditional database) would suffice, why did you go for Redis in the first place?
"We store for later processing" is vague. Can you please elaborate on this? I assume this later processing is an analysis kind of activity for which latency doesn't really matter and only throughput matters, right? If that's the case, wasn't Redis overkill?
Have you considered compressing the data before dumping it to Redis?
From what I understood from your question: your data is always structured, your reads are non-real-time, and durability matters to you more than latency. If all of these assumptions are correct, MySQL is a safe choice. If you ever hit a write bottleneck, you can think about sharding.
This thread will give you a fair idea.
Can redis fully replace mysql?
Always keep in mind that most NoSQL solutions (including Redis) are fast because they trade ACID properties for speed. But here, in your case, from what I understood, ACID properties matter more.

With the upcoming 3.0 release of Redis, the cluster functionality will be ready for production. Have a look at http://redis.io/topics/cluster-tutorial to get an overview. This will not directly help with the growing data volumes, but I assume it could make scaling/sharding easier for your setup.
What you could also consider is moving "old" data from Redis to another system, for example Elasticsearch with the help of a Redis River:
https://github.com/leeadkins/elasticsearch-redis-river
Compression using MessagePack could also be an option:
http://msgpack.org/
http://ruby.msgpack.org/
Storing a MessagePacked hash in Redis
http://redis.io/commands/EVAL
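The compression suggestion above can be sketched with the standard library alone; here json + zlib stand in for MessagePack (which is a third-party library), and the order fields are invented to resemble the question's schema. The point is only the size reduction and the lossless round trip:

```python
import json
import zlib

# 100 hypothetical order hashes with the kind of redundancy
# (repeated field names, similar values) that compresses well.
orders = [
    {"id": i, "location_id": i % 10, "token": "tok%06d" % i,
     "menu_items": "Burger:9.99:2,Fries:2.99:1,Cola:1.50:3",
     "order_amount": 27.46}
    for i in range(100)
]

raw = json.dumps(orders).encode("utf-8")
packed = zlib.compress(raw)        # with msgpack: msgpack.packb(orders)

# Round-trip to verify nothing is lost.
restored = json.loads(zlib.decompress(packed))
```

Whether the CPU cost of packing/unpacking on every read and write is worth the memory savings depends on your access pattern; for write-mostly archival data it usually is.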

Related

Backend technology for high volume data for web application

I am developing an application to provide daily dynamic information like prices, availability, etc. for around 50,000 objects, and I need to store data for about the next 200 days. That would mean a total of 10 million rows. The prices will be batch-updated and new data will be added once daily; say about 10,000 existing rows get updated and 50,000 rows are inserted daily. What is the best backend framework that I can use?
Can MySQL be scalable with limited hardware capability, or is a NoSQL database the way to go? If so, which NoSQL database would be best suited for fast fetching and updating of the data?
I would recommend Cassandra, as you need to write more than read, and Cassandra is optimized for high write throughput.
It provides scalability, no single point of failure, and high throughput. And you can update records as well.
Cassandra also supports batch operations for DML (data manipulation language), i.e. write, update and delete, and Cassandra's batch operations provide atomicity as well.
This type of volume is well within the capabilities/capacities of a traditional RDBMS. I would say that if you are familiar with MySQL, you will be safe to stick with it. A lot also depends on what kind of queries you want to run. With a properly structured, denormalized setup, you can run ad hoc queries in an RDBMS, whereas with document stores you need to think quite carefully about structure up front -- embedding versus referencing, see: MongoDB relationships: embed or reference?. MongoDB has added a very nice aggregation framework, which goes a long way towards being able to query data as you would in an RDBMS, but in many other NoSQL systems queries are essentially map-reduce jobs, and joins are either painful or impossible.
It sounds like your data is structured around dates/days. One thing you can do that will yield dramatic speed improvements on queries is partitioning by date ranges. I have worked on DBs of over 100M rows in MySQL where historical data had to be kept for auditing purposes but most of the read/write activity was on current data, and partitioning led to truly dramatic read query improvements.
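As a hedged sketch of what date-range partitioning looks like in MySQL DDL: the table, columns, and partition boundaries below are invented for illustration (and note that MySQL requires the partition column to be part of every unique key, hence the composite primary key):

```sql
CREATE TABLE daily_price (
  id BIGINT NOT NULL AUTO_INCREMENT,
  object_id INT NOT NULL,
  price DECIMAL(10,2),
  price_date DATE NOT NULL,
  PRIMARY KEY (id, price_date)   -- partition column must be in every unique key
)
PARTITION BY RANGE (TO_DAYS(price_date)) (
  PARTITION p2013q1 VALUES LESS THAN (TO_DAYS('2013-04-01')),
  PARTITION p2013q2 VALUES LESS THAN (TO_DAYS('2013-07-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
-- Queries filtering on price_date touch only the matching partitions
-- (partition pruning), which is where the read speedup comes from.
```

Old partitions can also be dropped in O(1) with ALTER TABLE ... DROP PARTITION, which is far cheaper than a bulk DELETE.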
You might be interested in this link, which shows what some very high volume sites are using: What databases do the World Wide Web's biggest sites run on? Anecdotally, I know that Facebook had trillions of rows in MySQL across various clusters before they started hitting real bottlenecks, but it is no surprise that Cassandra ultimately came out of Facebook engineering, given the truly colossal data volumes they now handle.
Cassandra, Riak, CouchDB, MongoDB, etc all arose to solve very real problems, but these come with tradeoffs, both in terms of the CAP theorem, and in terms of ad hoc queries being more difficult than in RDBMS. Having said that, MongoDB and Cassandra (which I have most experience with) are easy to set up and fun to work with, so if you want to give them a go, I'm sure you will have no problems, but I would say your usage requirements are well within the capabilities of MySQL. Just my 2c.

Limits to move from SQL to a NoSQL database

We are facing performance-related issues in our current MySQL DB. Our application is pretty heavy on a few tables, ~20 of them. We run a lot of aggregation queries on these tables, as well as writes. Most of our team are developers, and we don't have access to a DBA who might help retune our current DB and make things work faster.
Moving to NoSQL is an option, but we are seriously wondering about the upper limits in terms of:
Volume (current volume per day: ~50GB)
Structured or raw data? (structured data)
I/O stats on the DB (current rate: 60 KB/sec)
Record writes (now 3,000 rows/sec)
The questions that arise:
Is 50GB high enough to consider NoSQL? Some documentation recommends more than a TB.
The data would be stored raw and further processed into structured form for use in the application.
MySQL tops out at 3,000 rows/sec for us; we're not sure MySQL can be tuned further.
HBase seems promising for analytics applications.
We would like some guidelines on the limits of an RDBMS past which one should think of moving to NoSQL.
This is such a broad topic that I don't believe there are any "right" answers, but maybe a few general recommendations will help:
I think you should think of this challenge in terms of picking the right tool for the problem. All databases have their pros and cons and in some challenges the best approach is to use an entire toolbox to get the job done.
Note that moving your data, or even just parts of it, to a different datastore is rarely a trivial effort. Use this chance to rethink your data model before implementing it.
Getting this job done should also take further requirements into account, such as your growth plans. It looks like you're at this crossroads because your original assumptions, and the choices based on them, are no longer on par with reality. If you want to delay the next time you find yourself in the same place, use this opportunity to do so.
Lastly, keep in mind that the job is really done only after you do something with all that captured data; otherwise I'd recommend the infinitely-scalable write-to-/dev/null design pattern ;) Put differently, unless your data is write-only, you'll want to make sure that whatever SQL/NoSQL/NewSQL/other datastore you choose can also get the data/information/knowledge back out within your use case's acceptable time frames.
It will probably be worth it given your current infrastructure, but keep in mind that it's going to be a huge task, since you're going to need to redesign the whole process. HBase can help you, as it has some neat features, like realtime counters (which in some cases eliminate the need for periodic rollups) and per-client write buffering (which can allow you to scale to >100k writes per second), but be warned: it cannot be queried the same way you query a relational database, so you're going to need to plan carefully to make it work for you.
It seems that your main issue is with the raw data writes. Sure, you can definitely rely on HBase for that, and then do the rollups every X minutes to store the data in your RDBMS so it can be queried as usual. But given that you're doing them every minute, which is a very short gap, why don't you keep the data in memory and flush it to the rolled-up tables every minute? Sure, you could lose data, but I don't know how critical losing one minute of data is for you, and that alone could help you a lot.
Anyway, the best advice I can think of: read a book, understand how HBase works first, dig into the pros & cons, and think about how it can suit your specific needs. This is crucial, because a good implementation is what will determine whether it's a success or a total failure.
Some resources:
HBase: The Definitive Guide
HBase Administration Cookbook
HBase Reference guide (free)

Is MongoDB or Cassandra better than MySQL for large datasets?

In our (currently MySQL) database there are over 120 million records, and we make frequent use of complex JOIN queries and application-level logic in PHP that touch the database. We're a marketing company that does data mining as our primary focus, so we have many large reports that need to be run on a daily, weekly, or monthly basis.
Concurrently, customer service operates on a replicated slave of the same database.
We would love to be able to make these reports happen in real time on the web instead of having to manually generate spreadsheets for them. However, many of our reports take a significant amount of time to pull data for (in some cases, over an hour).
We do not operate in the cloud, choosing instead to operate using two physical servers in our server room.
Given all this, what is our best option for a database?
I think you're going the wrong way about the problem.
Thinking that you'll get better performance just by dropping in NoSQL is not really true. At the lowest level, you're writing and retrieving a fair chunk of data, which implies your bottleneck is (most likely) HDD I/O, the most common bottleneck.
Sticking with the hardware you currently have and using a monolithic data store isn't scalable and, as you noticed, has implications when you want to do something in real time.
What are your options? You need to scale your server and software setup (which is what you'd have to do with any NoSQL solution anyway; stick in faster hard drives at some point).
You also might want to look into alternative storage engines (other than MyISAM and InnoDB); for example, one of the better engines, which seemingly turns random I/O into sequential I/O, is TokuDB.
Implementing a faster HDD subsystem would also help (FusionIO, if you have the resources to get it).
Without more information on your end (what the server setup is, what MySQL version you're using and what storage engines + data sizes you're operating with), it's all speculation.
Cassandra still needs Hadoop for MapReduce, and MongoDB has limited concurrency with regard to MapReduce...
... so ...
... 120 million records is not that much, and MySQL should easily be able to handle that. My guess is an I/O bottleneck, or lots of random reads instead of sequential reads. I'd rather hire a MySQL techie for a month or so to tune your schema and queries instead of investing in a new solution.
If you provide more information about your cluster, we might be able to help you better. "NoSQL" by itself is not the solution to your problem.
As much as I'm not a fan of MySQL once your data gets large, I have to say that you're nowhere near needing to move to a NoSQL solution. 120M rows is not a big deal: the database I'm currently working with has ~600M rows in one table alone, and we query it efficiently. Managing that much data from an ops perspective is the problem; querying it isn't.
It's all about proper indexes and the correct use of them when joining, and secondarily about memory settings. Find your slow queries (the MySQL slow query log FTW!), and learn to use the EXPLAIN keyword to understand why they are slow. Then tweak your indexes so your queries are efficient. Further, make sure you understand MySQL's memory settings. There are great pages in the docs explaining how they work, and they aren't that hard to understand.
If you've done both of those things and you're still having problems, make sure disk I/O isn't an issue. If it is, then you should look into another solution for querying your data.
NoSQL solutions like Cassandra have a lot of benefits. Cassandra is fantastic at writing data, and scaling your writes is very easy: just add more nodes! But the trade-off is that it's harder to get the data back out. From a cost perspective, if you have expertise in MySQL, it's probably better to leverage that and scale your current solution until it hits a limit before completely switching your underlying architecture.

How to do fast data reads and writes in MySQL?

Hi friends,
I am using a MySQL DB for one of my products; about 250 schools are signed up for it now. That's about 1,500,000 insertions per hour and about 12,000,000 insertions per day, and reads are about as frequent as writes. I think my current setup, just a single server, may crash within hours. How can I make the DB server crash-free? The main problem I am facing now is that both reading and writing data are slow. How can I overcome that? It is very difficult for me to find a solution. Guys, please help me: which is a good model for the solution?
It is difficult to get both fast reads and writes simultaneously. To get fast reads you need to add indexes. To get fast writes you need to have few indexes. And to get both to be fast they must not lock each other.
Depending on your needs, one solution is to have two databases. Write new data to your live database and every so often when it is quiet you can synchronize the data to another database where you can perform queries. The disadvantage of this approach is that data you read will be a little old. This may or may not be a problem depending on what it is you need to do.
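The two-database idea above can be sketched in a few lines. This is a toy simulation with in-memory lists standing in for the two MySQL databases; a real setup would use MySQL replication or a batch ETL job for the synchronization step:

```python
import copy

live_db = []        # takes all new writes; minimal indexing, so inserts stay fast
reporting_db = []   # heavily indexed copy used for queries; data may lag

def record_order(order):
    """The hot path: writes only touch the live store."""
    live_db.append(order)

def sync():
    """Run during a quiet period: bring the reporting copy up to date."""
    missing = live_db[len(reporting_db):]
    reporting_db.extend(copy.deepcopy(missing))
    return len(missing)

record_order({"school": 1, "score": 90})
record_order({"school": 2, "score": 75})
synced = sync()
```

The trade-off is visible in the code: anything written after the last sync() is invisible to readers of reporting_db until the next sync runs.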
~500 inserts per second is nothing to sneeze at indeed.
For a flexible solution, you may want to implement some sort of sharding. Probably the easiest solution is to separate schools into groups upfront and store data for different groups of schools on different servers. E.g., data for schools 1-10 is stored on server A, schools 11-20 on server B, etc. This is almost infinitely scalable, assuming that there are few relationships between data from different schools.
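The upfront school-group sharding described above amounts to a routing function. A minimal sketch, with invented server names (a real deployment would keep this map in configuration):

```python
# Map contiguous ranges of school ids to database servers.
SHARD_MAP = [
    (range(1, 11),  "server-a"),   # schools 1-10
    (range(11, 21), "server-b"),   # schools 11-20
    (range(21, 31), "server-c"),   # schools 21-30
]

def server_for_school(school_id):
    """Route a school's reads and writes to its assigned server."""
    for ids, server in SHARD_MAP:
        if school_id in ids:
            return server
    raise KeyError("no shard assigned for school %d" % school_id)
```

Because a school's data lives entirely on one server, queries within a school never cross shards; only cross-school reports need to touch several servers.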
Also you could just try throwing more horsepower at the problem and invest into a RAID of SSD drives and, assuming that you have enough processing power, you should be OK. Of course, if it's a huge database, the capacity of SSD drives may not be enough.
Finally, see if you can cut down on the number of insertions, for example by denormalizing the database. Say, instead of storing attendance for each student in a separate row, put the attendance of the entire class as a vector in a single row. Of course, such changes will heavily limit your querying capabilities.
My laid back advice is:
Build your application lightweight. Don't use a high-level database abstraction layer like Active Record; they suck at scaling.
Learn a lot about MySQL performance.
Learn about MySQL replication.
Learn about load balancing.
Learn about in-memory caches (memcached).
Hire an administrator (with decent MySQL knowledge) or a web app performance guru/consultant.
The concrete strategy depends on your application and how it is used. MySQL replication may or may not be appropriate (the same applies to the mentioned sharding strategy), but it's a rather simple way to achieve some scaling, because it doesn't impact your application design too much. In-memory caches can keep some load away from your databases, but they need some work to apply and come with some trade-offs. In the end you need a good overall understanding of how to handle a database-driven application under heavy load. If you have a tight deadline, add external manpower, because you won't do this right within 6 weeks without experience.

Alternatives to traditional relational databases for activity streams

I'm wondering if some other non-relational database would be a good fit for activity streams - sort of like what you see on Facebook, Flickr (http://www.flickr.com/activity), etc. Right now, I'm using MySQL but it's pretty taxing (I have tens of millions of activity records) and since they are basically read-only once written and always viewed chronologically, I was thinking that an alternative DB might work well.
The activities are things like:
6 PM: John favorited Bacon
5:30 PM: Jane commented on Snow Crash
5:15 PM: Jane added a photo of Bacon to her album
The catch is that unlike Twitter and some other systems, I can't just simply append activities to lists for each user who is interested in the activity - if I could it looks like Redis would be a good fit (with its list operations).
I need to be able to do the following:
Pull activities for a set or subset of people who you are following ("John" and "Jane"), in reverse date order
Pull activities for a thing (like "Bacon") in reverse date order
Filter by activity type ("favorite", "comment")
Store at least 30 million activities
Ideally, if you added or removed a person who you are following, your activity stream would reflect the change.
I have been doing this with MySQL. My "activities" table is as compact as I could make it, the keys are as small as possible, and it is indexed appropriately. It works, but it just feels like the wrong tool for this job.
Is anybody doing anything like this outside of a traditional RDBMS?
Update November 2009: It's too early to answer my own question, but my current solution is to stick with MySQL, augmented with Redis for fast access to the fresh activity stream data. More information in my answer here: How to implement the activity stream in a social network...
Update August 2014: Years later, I'm still using MySQL as the system of record and using Redis for very fast access to the most recent activities for each user. Dealing with schema changes on a massive MySQL table has become a non-issue thanks to pt-online-schema-change
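A common shape for the MySQL-plus-Redis split described in these updates is to keep only a short, capped list of recent activity IDs per user in Redis (an LPUSH followed by an LTRIM against real Redis), with MySQL as the system of record. A stand-in sketch using a plain dict:

```python
from collections import defaultdict

RECENT_LIMIT = 3   # real feeds keep a few hundred; kept tiny for illustration

# user -> newest-first list of activity ids (the Redis list stand-in).
recent = defaultdict(list)

def push_activity(user, activity_id):
    """Equivalent of: LPUSH feed:user id ; LTRIM feed:user 0 RECENT_LIMIT-1."""
    feed = recent[user]
    feed.insert(0, activity_id)   # newest first
    del feed[RECENT_LIMIT:]       # cap the list so memory stays bounded

for i in range(1, 6):
    push_activity("jane", i)
```

The trim on every push is what keeps Redis memory proportional to the number of users rather than to total history, which is exactly the property the original question was missing.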
I'd really, really suggest staying with MySQL (or an RDBMS) until you fully understand the situation.
I have no idea how much performance you need or how much data you plan on using, but 30M rows is not very many.
If you need to optimise certain range scans, you can do this with (for example) InnoDB by choosing a (implicitly clustered) primary key judiciously, and/or denormalising where necessary.
But like most things, make it work first, then fix performance problems you detect in your performance test lab on production-grade hardware.
EDIT: Some other points:
Key/value databases such as Cassandra, Voldemort, etc. do not generally support secondary indexes.
Therefore, you cannot do a CREATE INDEX.
Most of them also don't do range scans (even on the main index), because they use hashing to implement partitioning (which they mostly do).
Therefore they also don't do range expiry (DELETE FROM tbl WHERE ts < NOW() - INTERVAL 30 DAY).
Your application must do ALL of this itself or manage without it; secondary indexes are really the killer.
ALTER TABLE ... ADD INDEX takes quite a long time in, e.g., MySQL with a large table, but at least you don't have to write much code to do it. In a "NoSQL" database, it will also take a long time, BUT you also have to write heaps and heaps of code to maintain the new secondary index, expire it correctly, AND modify your queries to use it.
In short... you can't use a key/value database as a shortcut to avoid ALTER TABLE.
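To illustrate what "your application must do ALL of this itself" means in practice, here is a hedged sketch of hand-maintaining a secondary index and range expiry over a plain key/value store, with dicts standing in for the store:

```python
store = {}    # primary: activity_id -> record
by_ts = {}    # hand-rolled "secondary index": timestamp -> set of ids

def put(activity_id, record):
    """Every write must update the index too, or reads by time break."""
    store[activity_id] = record
    by_ts.setdefault(record["ts"], set()).add(activity_id)

def expire_older_than(cutoff_ts):
    """App-side stand-in for: DELETE FROM tbl WHERE ts < cutoff."""
    removed = 0
    for ts in [t for t in by_ts if t < cutoff_ts]:
        for activity_id in by_ts.pop(ts):
            del store[activity_id]
            removed += 1
    return removed

put(1, {"ts": 100, "what": "favorited Bacon"})
put(2, {"ts": 200, "what": "commented on Snow Crash"})
removed = expire_older_than(150)
```

Every new access pattern means another index like by_ts, plus code to keep it consistent on every write and delete, which is the maintenance burden the answer is warning about.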
I am also planning on moving away from SQL. I have been looking at CouchDB, which looks promising. Looking at your requirements, I think it can all be done with CouchDB views and the list API.
It seems to me that what you want to do (query a large set of data in several different ways and order the results) is exactly and precisely what RDBMSes were designed for.
I doubt you would find any other datastore that would do this as well as a modern commercial DBMS (Oracle, SQL Server, DB2, etc.), or any open source tool that would accomplish it any better than MySQL.
You could have a look at Google's BigTable, which is really a relational database but can present an 'object-y' personality to your program. It's exceptionally good for free-format text searches and complex predicates. As the whole thing (at least the version you can download) is implemented in Python, I doubt it would beat MySQL in a query marathon.
For a project I once needed a simple database that was fast at doing lookups and which would do lots of lookups and just an occasional write. I just ended up writing my own file format.
While you could do this too, it is pretty complex, especially if you need to support it from a web server. With a web server, you would at least need to protect every write to the file and make sure it can be read from multiple threads. You should work out the design of this file format as well as possible, with plenty of testing and experiments. One minor bug could prove fatal for a web project in this style, but if you get it working, it can work really well and be extremely fast.
But for 99.999% of all situations, you don't want such a custom solution. It's easier to just upgrade the hardware, move to Oracle, SQL Server or InterBase, use a dedicated database server, use faster hard disks, install more memory, or upgrade to a 64-bit system. Those are the more generic tricks to improve performance with the least effort.
I'd recommend learning about message queue technology. There are several open-source options available, and also robust commercial products that would serve up the volume you describe as a tiny snack.
CouchDB is schema-free, and it's fairly simple to retrieve a huge amount of data quickly, because you are working only with indexes. You are not "querying" the database each time; you are retrieving only matching keys (which are pre-sorted, making it even faster).
"Views" are re-indexed every time new data is entered into the database, but this takes place transparently to the user, so while there might be a potential delay in generating an updated view, there will virtually never be any delay in retrieving results.
I've just started to explore building an "activity stream" solution using CouchDB, and because the paradigm is different, my thinking about the process had to change from the SQL thinking.
Rather than figuring out how to query the data I want and then processing it on the page, I instead generate a view that keys all documents by date, so I can easily create multiple groups of data just by using the appropriate date key, essentially running several queries simultaneously but with no degradation in performance.
This is ideal for activity streams: I can isolate everything by date, or along with date isolation further filter results of a particular subtype, etc., by creating a view as needed. And because the view itself is just JavaScript and all data in CouchDB is JSON, virtually everything can be done client-side to render your page.
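The date-keyed view idea can be sketched outside CouchDB too. Here is a rough Python stand-in (CouchDB's actual map functions are JavaScript and its views update incrementally; this just shows the shape of the precomputed index and the cheap reads it enables, with invented sample documents):

```python
# Sample activity documents, shaped like the earlier examples in the thread.
docs = [
    {"date": "2009-11-01", "type": "favorite", "who": "John", "what": "Bacon"},
    {"date": "2009-11-01", "type": "comment",  "who": "Jane", "what": "Snow Crash"},
    {"date": "2009-11-02", "type": "photo",    "who": "Jane", "what": "Bacon"},
]

# Build the "view" once: documents grouped under a (date, type) key.
view = {}
for d in docs:
    view.setdefault((d["date"], d["type"]), []).append(d)

def activities_on(date, type_=None):
    """Read from the precomputed view: by date, optionally filtered by type."""
    return [d
            for (dt, ty), rows in sorted(view.items())
            if dt == date and (type_ is None or ty == type_)
            for d in rows]
```

Reads never scan the raw documents; they only walk sorted keys, which is the property that makes CouchDB views fast for this workload.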