I think they are both great, and I want to know: is it possible to use both caching systems together at the same time?
Does anyone have experience with this?
MySQL's working set will always be cached.
This is called double caching: the same user data ends up existing twice in RAM, once in MySQL's cache and once in memcached. In larger systems (Facebook, for example) this is a major waste of memory.
We have an enterprise application that uses an SQL database. The database access characteristics are about 90% reads. The data that does get updated or created needs to be up-to-date immediately. The cache needs to be correctly invalidated with high certainty. The entities are referred to by their primary key for 98% of the cases.
The application is based on Node.js and is AWS-native. Since the application is AWS-native, I'd like to rely on managed services from AWS rather than hosting my own. One option is to implement our own read-through Redis-based cache: upon retrieving an entity, we'd check the cache, and if the data is not cached we'd put it into the cache before returning it to the user. The parts of the code that update those entities would invalidate the cache by primary key.
Generally speaking, in computer science cache coherency is one of the most challenging problems to get right. I am of the opinion that rather than implementing a Redis cache and thinking through all of the possible scenarios for correctly invalidating it, it is wiser to instead configure an Aurora read-replica specifically for reading frequently accessed entities. The RDBMS will do a much better job at caching than anything we can build ourselves.
So, I am facing two options -- go through the effort of implementing my own caching, or use read replicas. My personal opinion is to use a read replica.
Any advice is greatly appreciated, as always.
Yes, you're right, cache invalidation is a tough problem. The simplest solution is to add code to your data writes that replaces the cached values, so they're always current. But this is easy only if the cached values have a pretty much 1-to-1 correlation with rows in your database.
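A minimal sketch of that write-path replacement, assuming Node.js with the node-redis client; `loadUserFromDb` and `saveUserToDb` are hypothetical stand-ins for your real data layer:

```typescript
import { createClient } from "redis";

interface User { id: number; name: string; }

// Hypothetical data layer; replace with your real queries.
declare function loadUserFromDb(id: number): Promise<User>;
declare function saveUserToDb(user: User): Promise<void>;

const redis = createClient();
await redis.connect();

// Read-through: serve from cache, fall back to the database on a miss.
async function getUser(id: number): Promise<User> {
  const cached = await redis.get(`user:${id}`);
  if (cached !== null) return JSON.parse(cached);
  const user = await loadUserFromDb(id);
  await redis.set(`user:${id}`, JSON.stringify(user), { EX: 300 }); // TTL as a safety net
  return user;
}

// The write path replaces the cached value, so reads stay current.
async function updateUser(user: User): Promise<void> {
  await saveUserToDb(user);
  await redis.set(`user:${user.id}`, JSON.stringify(user), { EX: 300 });
}
```

Deleting the key with `redis.del(...)` instead of rewriting it also works, at the cost of one extra database read on the next access.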
An advantage of your own cache is that you can cache data that is not 1-to-1 with rows in the database. You might cache an entire HTML fragment for a drop-down menu, for example, which could be the result of several SQL queries. It can be quite an advantage to cache data that is higher up the "food chain", so to speak, but cache invalidation becomes less straightforward; this works best for results of queries that don't change often.
Using a read-replica is not a substitute for using a cache. Querying a read-replica still carries the overhead of making a database connection, authentication, SQL parsing and optimization, locking, and everything else that goes into the workings of an RDBMS.
Querying data from a cache can be orders of magnitude faster.
Both have their place. It's best to use both a cache and a read-replica for different tasks. I would also add message queues as an important technology. I believe database, cache, and queue form a three-legged stool.
But you must have experience and judgment to know when each is the best tool for a given case.
I'm not sure if caching would be the correct term for this but my objective is to build a website that will be displaying data from my database.
My problem: There is a high probability of a lot of traffic and all data is contained in the database.
My hypothesized solution: would it be faster if I created a separate program (in Java, for example) that connects to the database every couple of seconds and updates the HTML files (where the data is displayed) with the new data? (This would also improve security, as users would never connect to the database directly.) Or should I just have each user open a connection to MySQL (using PHP) and fetch the data?
If you've had any experience with a similar situation, please share. I'm sorry if I didn't word the title correctly; this is a pretty specific question and I'm not even sure I explained myself clearly.
Here are some thoughts for you to think about.
First, I do not recommend generating files; trust MySQL. However, work on configuring your environment to support your traffic and application.
You should understand your data a little more: how often does the data in your tables change? What kinds of queries are you running against it? Are those queries optimized?
Make sure your tables are optimized and indexed correctly, and that all your queries run fast (nothing causing long row locks).
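For instance, checking a hot query's plan will show whether it can use an index or has to scan the whole table. A hedged sketch (using the Node.js mysql2 driver purely for illustration; the `orders` table and column names are made up, and the same SQL works from PHP):

```typescript
import mysql from "mysql2/promise";

const conn = await mysql.createConnection({
  host: "localhost", user: "app", database: "appdb",
});

// EXPLAIN shows whether MySQL can use an index or must scan the table.
const [plan] = await conn.query(
  "EXPLAIN SELECT * FROM orders WHERE customer_id = ?", [42]
);
console.log(plan); // type "ALL" in the plan means a full table scan

// If it scans, an index on the filter column usually fixes it.
await conn.query("CREATE INDEX idx_orders_customer ON orders (customer_id)");
```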
If your tables are not updated very often, you should consider using the MySQL query cache, as this will reduce your I/O and increase query speed. (But wait! If your tables are updated all the time, this will kill your server's performance.)
Watch your query cache setting. Setting it to ON is, in my experience, almost always a bad idea unless the data in your tables never changes: MySQL will cache every query, and as soon as the data in a table changes it has to clear all the cached queries for that table, working harder on cache maintenance and giving you worse performance. I like to keep it set to DEMAND; from there you can control which queries should be cached and which should not, using SQL_CACHE and SQL_NO_CACHE.
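A hedged illustration of DEMAND mode (again with the mysql2 Node.js driver and made-up table names; the same statements work from any client):

```typescript
import mysql from "mysql2/promise";

const conn = await mysql.createConnection({
  host: "localhost", user: "app", database: "appdb",
});

// With query_cache_type = DEMAND (set in my.cnf or with
// SET GLOBAL query_cache_type = DEMAND), MySQL caches only
// the queries that explicitly ask for it.

// Rarely-changing lookup data: worth caching.
const [countries] = await conn.query(
  "SELECT SQL_CACHE id, name FROM countries"
);

// Hot, frequently-updated table: caching it would just churn the cache.
const [orders] = await conn.query(
  "SELECT SQL_NO_CACHE id, total FROM orders WHERE user_id = ?", [42]
);
```

(Worth knowing: the query cache was removed entirely in MySQL 8.0, so this advice applies only to older versions.)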
Another thing you want to review is your server configuration and specs.
How much physical RAM does your server have?
What types of hard drives are you using? If they are not SSDs, at what speed do they spin? 15k RPM, perhaps?
What OS are you running MySQL on?
How is the RAID setup on your hard drives? RAID 10 or RAID 50 will help you out a lot here.
Your processor speed will also make a big difference.
If you are not using MySQL 5.6.20+, you should consider upgrading, as MySQL has been improved to help you even more.
Are you using InnoDB tables? If so, is your innodb_buffer_pool_size set to roughly 75% of your total physical RAM?
You can also use MySQL replication to increase the read capacity for your data. With multiple servers holding the same data, you can point half of your traffic at server A and the other half at server B, so the same workload is shared across multiple servers.
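A rough sketch of that read-splitting idea, assuming two replicas and the mysql2 driver (hostnames are placeholders):

```typescript
import mysql from "mysql2/promise";

// One pool for the primary (writes) and one per read replica.
const primary = mysql.createPool({ host: "db-primary", user: "app", database: "appdb" });
const replicas = [
  mysql.createPool({ host: "db-replica-a", user: "app", database: "appdb" }),
  mysql.createPool({ host: "db-replica-b", user: "app", database: "appdb" }),
];

// Round-robin reads across the replicas; all writes go to the primary.
let next = 0;
function readPool() {
  next = (next + 1) % replicas.length;
  return replicas[next];
}

const [rows] = await readPool().query("SELECT * FROM articles WHERE id = ?", [1]);
await primary.query("UPDATE articles SET views = views + 1 WHERE id = ?", [1]);
```

Keep in mind that replicas lag slightly behind the primary, so reads that must see a just-committed write should go to the primary.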
Here is one argument for you to think about: Facebook uses MySQL and handles millions of hits per second, yet they are up essentially all of the time. True, they have an enormous budget and their network is huge, but the idea here is that you can trust MySQL to get the job done.
I would like to convert my stats-tracking system so that it does not write to the database directly, as we're hitting bottlenecks.
We're currently using memcached for certain aspects of the site, and I wanted to use it for storing stats and committing them to the MySQL DB periodically.
The issue, however, lies in the number of items (in the millions) for which stats could potentially be collected between the cron-job runs that commit them to the database. Other than running a SELECT * FROM data, checking for the existence of every single memcache key, and then updating the table... is there any other way to do this?
(I'm not saying below is gospel, this is just my gut feeling. As said later on, I don't have the specifics of your system :) And obviously no offence meant etc :) )
I would advise against using memcached for this. Memcached is built to quickly retrieve values that you've fetched before, not to be the primary store for values. The big difference is that if your cache gets full, you'll lose data.
Normally that's fine: you simply have no data in your cache and re-collect it from the source. But in this case the cache is the source, so losing it is unrecoverable. That alone would be reason enough for me to try and dissuade you from this.
Now, you say the major problem is the MySQL connection limit you are hitting. If you do simple stuff (like what we talked about in the comments: INSERT DELAYED), it's just a case of increasing the limit. You should have enough capacity for your scripts/users to go to the database once, say "this should eventually be added", and then go away. If your users can't even open one connection for that, there's a serious resource problem you probably won't fix by adding extra layers of cache.
Obviously it's hard to say without any specs of the system, software, or hardware, but my suggestion would be to see if you can just let them open their connections by increasing the limit and fiddling with the server variables a bit, instead of monkey-patching your system with memcached as an in-between layer.
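If the write volume itself is the bottleneck, one middle ground that avoids memcached entirely is to buffer counters in application memory and flush them periodically with a single multi-row upsert. A sketch, assuming the mysql2 driver and a made-up stats(item_id, hits) table:

```typescript
import mysql from "mysql2/promise";

const pool = mysql.createPool({ host: "localhost", user: "app", database: "statsdb" });

// Accumulate counts in memory instead of writing one row per hit.
const pending = new Map<number, number>();

export function trackHit(itemId: number): void {
  pending.set(itemId, (pending.get(itemId) ?? 0) + 1);
}

// Flush every few seconds with one multi-row INSERT ... ON DUPLICATE KEY UPDATE.
// Caveat: counts still sitting in `pending` are lost if the process dies.
setInterval(async () => {
  if (pending.size === 0) return;
  const batch = [...pending.entries()]; // [[itemId, hits], ...]
  pending.clear();
  await pool.query(
    "INSERT INTO stats (item_id, hits) VALUES ? ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)",
    [batch]
  );
}, 5000);
```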
I had a similar issue with statistics data, but please don't use memcached for it. You can't be sure that ALL your items will be moved to the DB; you can lose data and/or double-process data.
You should analyse your bottleneck against how much data you are writing and reading and how many connections you need, and then consider switching to something scalable like Hadoop, Cassandra, or Scribe.
You need to provide additional information on the platform you are running: OS, database (and version), storage engine, RAM, and CPU, if possible.
Are you inserting into a single table or more than one table?
Can you disable the indexes on the tables you are inserting into? Maintaining indexes slows down inserts.
Are you running any triggers or stored procedures to compute values as you insert the raw data?
Is it a good idea to use Memcached for session storage with PHP? We will have a lot of servers and must be able to access the session data from everywhere, so we are forced to use either a database (in our case, MySQL) or Memcached as session storage. What do you think?
I know people who've used Memcached for this -- it's very fast, certainly a lot faster than a database, and is built to handle a lot more concurrency.
The primary disadvantage of purely in-memory storage is that all your session data will be wiped if/when you restart the daemon. In my experience, memcached is rock-solid and I've never had to restart it because of a failure, but it is a consideration if your sysadmins aren't used to working that way, or if your systems are updated frequently. It also depends on whether losing all your user sessions once a month or year is acceptable (e.g. in ecommerce, management probably won't like it).
The obvious solution, if that's the case, is to go to one of the many disk-backed NoSQL/hash-table databases, such as MemcacheDB, which is based on Memcached. Or see CouchDB, MongoDB, etc. Each of these daemons (including Memcached) is also a lot less complex to tune for performance than MySQL (where all sorts of things like key and sort buffers, the query cache, etc. have to be tuned per install/use case); with Memcached, there's not much to do other than allocate memory and start it up.
Personally, I am a fan of using faster, more appropriate (non-SQL) storage for temporary things like session keys, but if your database is not under load and you don't anticipate it to be, the only thing you lose by storing sessions in the database is that it's a little slower, so users see a little more latency.
Whichever way you go, I suggest that you write your session-management code in such a way that the storage engine is just a layer, and you can swap in a different storage engine relatively painlessly. You don't want to be recoding your application if you find memcached or whatever you choose isn't working well, and you want to try something else. For instance, I once wrote a caching system for a clustered CMS application that used memcached to cache various pages and objects, but when the daemon wasn't reachable, it would fail over to alternate backends that would cache to shared memory or disk on the individual webservers. (In your case, you don't necessarily need the auto-failover, just the ability to change your mind about the backend.)
I mentioned MemcacheDB because it uses the Memcache protocol, so it's extremely easy to swap in Memcached for MemcacheDB or vice versa.
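To make that layering concrete, here is a minimal sketch (the interface and names are mine, not from any particular library); a MemcachedStore or MySqlStore would implement the same three methods:

```typescript
// Minimal storage-engine abstraction so the backend can be swapped later.
interface SessionStore {
  get(id: string): Promise<string | null>;
  set(id: string, data: string, ttlSeconds: number): Promise<void>;
  destroy(id: string): Promise<void>;
}

// Trivial in-memory implementation, just to show the shape.
class InMemoryStore implements SessionStore {
  private items = new Map<string, { data: string; expires: number }>();
  async get(id: string) {
    const item = this.items.get(id);
    if (!item || item.expires <= Date.now()) return null;
    return item.data;
  }
  async set(id: string, data: string, ttlSeconds: number) {
    this.items.set(id, { data, expires: Date.now() + ttlSeconds * 1000 });
  }
  async destroy(id: string) {
    this.items.delete(id);
  }
}

// Application code depends only on the interface, so swapping the
// backend is a one-line change.
const sessions: SessionStore = new InMemoryStore();
```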
I have a web server with a lot of websites performing many database operations, and I am trying SQL caching as a way to improve the performance of my server.
In general, is there any disadvantage to SQL caching in a common environment?
Thanks
Well, caching consumes RAM, so you'll need plenty of that.
I'm not sure what caching mechanism SQL Server employs, but it is possible that your queries will return stale data for some time.
Often the best performance improvement is simply to give the database enough RAM to hold as much data as possible in memory, rather than adding a caching layer.
The main problem with caching in a normal environment is cache expiration and stale data.
If you invalidate your cache every time data changes, you could end up rarely or never hitting the cache.
If you try to invalidate just the part of the cache that is changed, you have extra processing time to determine what to invalidate.
If you do not invalidate the cache or have cache timers, you may end up with stale data.
Depending on your environment and your requirements, you need to pick which solution best meets your needs. Sometimes it is ok to have some stale data, and in other applications it is not.
All the above points are valid. Invalidation of stale cache entries is a key concern, as is keeping local caches in sync across multiple servers. You may want to look into a grid cache (e.g. Hazelcast or memcached) and Heimdall Data, which acts as a transparent cache with invalidation logic built in.
In summary, SQL caching itself is a good thing to do: it increases performance and can buffer SQL traffic away from the database, which helps with scaling.