I'm new to Couchbase and was wondering: if a single document is updated very frequently (possibly every second), will every update pass through the disk write queue, or only the last update made to the document?
In other words, does Couchbase optimize disk writes by writing the document to disk only once, even if it was updated multiple times between writes?
Based on the docs, http://docs.couchbase.com/admin/admin/Monitoring/monitor-diskqueue.html, it sounds like all updates are processed. If anyone can confirm this, I'd be grateful.
thanks
Updates are held in a disk queue before being written to disk. If a write to a document occurs and a previous write is still in the disk queue, then the two writes will be coalesced, and only the more recent version will actually be written to disk.
Exactly how fast the disk queue drains will depend on the storage subsystem, so whether writes to the same key get coalesced depends on how quickly the writes come in relative to the storage subsystem's speed and the node's load.
Jako, you should worry more about updates happening in the millisecond time frame, or more than one update happening within one millisecond. The disk write isn't the problem; Couchbase handles that intelligently itself. The problem is that you will run into concurrency issues when you operate in the millisecond time frame.
I ran into them fairly easily when I tested my application and at first couldn't understand why Node.js (in my case) would sometimes write data to Couchbase and sometimes not, usually failing on the first record.
More problems arose when I first checked whether a document with a specific key existed and, on finding it didn't, tried to write it to Couchbase, only to find out that in the meantime an earlier callback had finished and there was now indeed a document under that key.
In those cases you have to work with the CAS value and program the update iteratively, so that your app keeps re-fetching the current document for that key before applying its change (see the sketch below). Keep this in mind especially when running tests that update the same document!
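Here is a minimal sketch of that iterative CAS loop, assuming the Couchbase Java SDK 2.x; the bucket name, document key, and counter field are made up for illustration:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.error.CASMismatchException;
import com.couchbase.client.java.error.DocumentAlreadyExistsException;

public class CasRetryExample {
    // Retry an update until the CAS check succeeds, i.e. until no other
    // writer has modified the document between our read and our write.
    static void incrementCounter(Bucket bucket, String id) {
        while (true) {
            JsonDocument current = bucket.get(id);
            if (current == null) {
                try {
                    // Document does not exist yet: try to create it. insert()
                    // fails instead of overwriting if another writer won the race.
                    bucket.insert(JsonDocument.create(id, JsonObject.create().put("count", 1)));
                    return;
                } catch (DocumentAlreadyExistsException race) {
                    continue; // someone created it first; re-read and update
                }
            }
            current.content().put("count", current.content().getInt("count") + 1);
            try {
                // replace() carries the CAS value from the get(); it throws
                // if the document changed on the server in the meantime.
                bucket.replace(current);
                return;
            } catch (CASMismatchException race) {
                // concurrent update detected; loop and try again
            }
        }
    }

    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("localhost").openBucket("default");
        incrementCounter(bucket, "my-counter"); // "my-counter" is a made-up key
        bucket.close();
    }
}
```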
I am hitting several race conditions in my app where I was able to "SELECT" a document that had been deleted by another thread 1-2 seconds earlier. I added ScanConsistency.REQUEST_PLUS to my "SELECTs", but it takes too long...
I am planning to add the PersistTo.ONE param to the "DELETEs"; however, I am not sure whether the subsequent "SELECT" will still see the deleted document, because I think it might run the "SELECT" against one of the non-master nodes, which still has the deleted document in memory.
Will it be possible to "SELECT" only on the master node?
I could use PersistTo.FOUR but I think that would also affect performance greatly.
From what I can tell from reading the documentation on this feature (it is a fairly recent addition to Couchbase), the fact that the edit takes place on another thread is significant. Each thread has to have its own session with the database; the consistency level could in theory be pulled from that thread, but that would require the other thread to communicate with yours (probably a non-starter).
Therefore, going back to basics, it's important to realize that the database itself is an eventually consistent data store. This means that, per the CAP theorem, the data store sacrifices consistency for availability and partition tolerance. This is true in all cases, but the N1QL mechanism attempts to compensate a little for "your own writes." Being a software architect myself, I would not depend on this except as a temporary workaround, but rather keep the prevailing design principles in mind when designing the application data store.
Bottom line is that I believe this behavior is expected, and your design should be tolerant of it. If your application requires immediate consistency across sessions, then you should use a different data store.
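For reference, the durability and consistency knobs being discussed look roughly like this in the Couchbase Java SDK 2.x (the bucket name, document key, and N1QL statement are made up):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.consistency.ScanConsistency;

public class ConsistencyKnobs {
    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("localhost").openBucket("default");

        // Block until the removal has been persisted to disk on the active node.
        // Note: this makes the KV delete durable, but does not by itself make
        // the N1QL index reflect it.
        bucket.remove("user::42", PersistTo.ONE);

        // REQUEST_PLUS asks the query service to wait until the index has
        // caught up with all mutations made before the query was issued,
        // trading latency for read-your-own-writes consistency.
        N1qlParams params = N1qlParams.build().consistency(ScanConsistency.REQUEST_PLUS);
        System.out.println(bucket.query(
                N1qlQuery.simple("SELECT * FROM `default` WHERE type = 'event'", params)));

        bucket.close();
    }
}
```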
Our mobile app tracks user events (events can be of many types).
Each mobile client reports user events and can later retrieve them.
I thought of writing to Redis and MySQL.
When a user requests a value (see the sketch after this list):
1. Look it up in Redis.
2. If it's not in Redis, look it up in MySQL.
3. Return the value.
4. Write it back to Redis if it wasn't there.
5. Set an expiry policy on each Redis key to avoid running out of memory.
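A minimal sketch of that read path, assuming Jedis and JDBC; the table name, key schema, and TTL are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import redis.clients.jedis.Jedis;

public class EventCache {
    private static final int TTL_SECONDS = 3600; // expiry policy, made-up value

    // Cache-aside read: try Redis first, fall back to MySQL, then
    // repopulate Redis with a TTL so memory stays bounded.
    static String getEvent(Jedis redis, Connection mysql, String eventId) throws Exception {
        String cached = redis.get("event:" + eventId);
        if (cached != null) {
            return cached; // step 1: found in Redis
        }
        // step 2: not in Redis, look it up in MySQL
        try (PreparedStatement ps =
                 mysql.prepareStatement("SELECT payload FROM events WHERE id = ?")) {
            ps.setString(1, eventId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                String payload = rs.getString(1);
                // steps 4 + 5: write back to Redis with an expiry
                redis.setex("event:" + eventId, TTL_SECONDS, payload);
                return payload; // step 3
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (Jedis redis = new Jedis("localhost");
             Connection mysql = DriverManager.getConnection(
                     "jdbc:mysql://localhost/app", "user", "password")) {
            System.out.println(getEvent(redis, mysql, "42"));
        }
    }
}
```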
Problems:
1. Reads: if many users at once request information that isn't in Redis, MySQL is going to be overloaded with reads (latency).
2. Writes: I am going to have lots of writes into MySQL, since every event is written to both data stores.
Facts:
1. Expecting 10M concurrent users who read and write.
2. Each request must be served with a max latency of one second.
3. Expecting a couple of thousand requests per second.
Any solutions for this kind of mechanism that give good QoS?
Is this in any way a case for a Lambda architecture?
Thank you.
Sorry, but complex issues like this rarely have a ready answer here. There are too many unknowns: what is your budget, and how much hardware do you have? With 10 million concurrent clients using your service, your question is about hardware as much as software.
Your question also says nothing about several important requirements:
What is more important - consistency vs availability?
What is the read/write ratio?
Read/write ratio requirement
If you have 10,000,000 concurrent users, that is a problem in itself. But if most of your traffic is reads, it's not as terrible as it may seem. In that case you should take care to have the right indexes in MySQL, and buy servers with plenty of RAM so that at least the index data stays in RAM. A single server can then handle 3,000-5,000 concurrent SELECT queries without trouble meeting the one-second latency requirement (one of our statistics projects sustained up to 7,000 SELECTs per second per server on four-year-old ordinary hardware).
If you have a lot of writes, everything becomes more complicated, and consistency becomes the main question.
Consistency vs availability
If consistency is important, go to the store for new servers with SSD drives and modern CPUs, and don't forget to buy as much RAM as possible. Why? If you have many write requests, your SQL server has to rebuild indexes on every write, and you can't simply drop the indexes, because your read requests would then fail the latency requirement. By consistency I mean: if you write something, the write completes within one second, and if you read that data right after the write, you get the actual written information within one second.
Your problem 1:
"Reads: if many users at once request information that isn't in Redis, MySQL is going to be overloaded with reads (latency)."
This is the well-known "cache miss" problem, and it has only a few solutions: horizontal scaling (buy more hardware) or precaching. Precaching in this case can be done in at least 3 scenarios:
1. Use a non-blocking read and wait up to one second for the data to come back from the SQL server; if it doesn't, return the data from Redis. Update Redis immediately or via a queue, as you prefer.
2. Use a blocking/non-blocking read and return the data from Redis as fast as possible, but on every read push a job onto a queue to refresh the cached data in Redis (you can also tell the app to re-query the data after some time).
3. Always read from and write to Redis, but on every write request register a job in a queue to update the data in SQL.
Each of them is a compromise:
1. High availability, but consistency suffers; Redis is an LRU cache.
2. High availability, but consistency suffers; Redis is an LRU cache.
3. High availability and consistency, but it requires a lot of RAM for Redis.
Your problem 2:
"Writes: I am going to have lots of writes into MySQL, since every event is written to both data stores."
The field of compromise again. Lots of writes come down to hardware, so buy more of it, or use queues for pending writes. So, availability vs. consistency again.
Event tracking usually means you can return data close to real time, but not in real time. For example, allow 1-10 seconds of latency for updating the data on disk (MySQL) while keeping the one-second latency for serving read/write requests.
So, it's a combination of techniques 1/2/3 (or others) for data processing; a sketch of the write-behind part follows this list:
1. Use LRU eviction in Redis and do not use expires. Lots of expiring keys are a problem in themselves, so we can't rely on them to save RAM.
2. Use a queue to warm up missing keys in Redis.
3. Use a queue to write data from the Redis server into the MySQL server.
4. Use additional requests from the client side to update data when a cache miss occurs.
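A minimal sketch of the write-behind part (scenario 3), assuming Jedis and JDBC, with a Redis list standing in for the queue; the table, key names, and payload format are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import redis.clients.jedis.Jedis;

public class WriteBehind {
    // Write path: always write to Redis first, then enqueue a job
    // for a worker to persist the event into MySQL asynchronously.
    static void recordEvent(Jedis redis, String eventId, String payload) {
        redis.set("event:" + eventId, payload);
        redis.rpush("queue:mysql-writes", eventId + "\t" + payload);
    }

    // Worker: drain the queue and persist rows to MySQL in one batch,
    // decoupling user-facing latency from disk latency.
    static void drainQueue(Jedis redis, Connection mysql) throws Exception {
        String job;
        try (PreparedStatement ps = mysql.prepareStatement(
                "INSERT INTO events (id, payload) VALUES (?, ?)")) {
            while ((job = redis.lpop("queue:mysql-writes")) != null) {
                String[] parts = job.split("\t", 2);
                ps.setString(1, parts[0]);
                ps.setString(2, parts[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }

    public static void main(String[] args) throws Exception {
        try (Jedis redis = new Jedis("localhost");
             Connection mysql = DriverManager.getConnection(
                     "jdbc:mysql://localhost/app", "user", "password")) {
            recordEvent(redis, "42", "{\"type\":\"click\"}");
            drainQueue(redis, mysql);
        }
    }
}
```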
I have a Couchbase server storing a huge amount of data.
The data grows daily, but I also delete it daily after processing it.
Currently the bucket holds about 1,320,168 items, with 2.97 GB of data usage.
But why is the disk usage so large, at 135 GB?
My disk is running low on space to store more data.
Could I delete the data log files to reduce disk usage?
Couchbase uses an append-only format for storage. This means that every update or delete operation is actually stored as a new entry in the storage file and consumes more disk space.
Then a process called compaction occurs, which reclaims the unnecessarily used space. Compaction can either be configured to run automatically, when a certain fragmentation % is reached in your cluster, or be run manually on each node.
IIRC auto-compaction is not on by default.
So what you probably want to do is run compaction on your cluster. Note that it may require quite a large amount of disk space, as noted here...
See the doc on how to perform compaction (in your case at the end of business day I guess you have an "off-peak" window where you currently delete and could perform compaction).
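For example, manual compaction can be triggered per bucket from the command line; this is a hedged sketch, assuming the couchbase-cli tool that ships with the server (host, credentials, and bucket name are placeholders):

```
couchbase-cli bucket-compact -c localhost:8091 \
  -u Administrator -p password --bucket default
```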
PS: Maybe the guys in the official forums have more insight and recommendations to offer.
I want to use pt-online-schema-change to change the schema of a big table (~100M records). Does this tool affect the performance of MySQL while it's running?
By design, the tool will have no significant effect on performance. First, let's review what the tool does:
attach triggers to the current table, to copy all updates, deletes and inserts to the new table.
copy existing data over in chunks, partitioned by the key
The first part is going to double all writes and there's no way around this. The second part is a batch operation that is going to potentially lock the current table and use up a lot of IO.
Fortunately, the second part is split into chunks and pt-online-schema-change is quite clever about how big the chunks are and how long it waits between chunks:
it checks slave replication between chunks, and pauses if the lag is too great. it is able to recursively check for slaves.
it checks load (typically measured by number of running threads) and pauses if there are too many queries running (which implies lock contention or high CPU/IO usage). it can even abort if the load is extremely high.
it configures InnoDB lock settings such that it is most likely to be the victim of any lock contention, so production queries will run smoothly.
by default, the chunk size is dynamically changed to keep its runtime consistent, using a weighted average of previous chunk runtimes.
chunks that are too big (e.g. due to a huge number of rows with the same key) will be skipped over.
Due to this, it is likely that your server will only be slightly affected by the copy. But of course, there is no guarantee and if possible, you should run the tool on a staging version of the database. In the event of issues, you can safely abort the tool with no loss of data.
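For illustration, here is a hedged example invocation that spells out some of those throttling knobs; the flag names come from the Percona Toolkit documentation, the values shown are the documented defaults, and the database, table, and ALTER clause are made up:

```
pt-online-schema-change \
  --alter "ADD COLUMN processed_at DATETIME" \
  --chunk-time 0.5 \
  --max-lag 1 \
  --max-load Threads_running=25 \
  --critical-load Threads_running=50 \
  --execute \
  D=mydb,t=big_table
```

Replacing --execute with --dry-run lets you rehearse the change without copying any data.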
The online schema change is pretty intense: while it is running, all inserts/updates/deletes to the original table are doubled (the tool adds triggers so these actions are duplicated on the new copy), and piece by piece every row of the original table is copied over.
If you're concerned about the effects this may have on your database, check out the doc: http://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html
There are several ways you can have the tool throttle itself, including checking thread count, replication lag, and modifying how large each chunk is/how long each chunk should take.
It will read and write stuff to disk, consume memory and use CPU, so yes, it will affect performance while running. How could it be otherwise?
We have a service which sees several hundred simultaneous connections throughout the day, peaking at about 2,000, for about 3 million hits a day, and growing. With each request I need to log 4 or 5 pieces of data to MySQL. We originally used the logging that came with the app we're using; however, it was terribly inefficient, ran our DB server at more than 3x the average CPU load, and would eventually bring the server to its knees.
At this point we are going to add our own logging to the application (PHP); the only option I have for logging the data is the MySQL DB, as this is the only common resource available to all of the HTTP servers. This data will be mostly writes; however, every day we generate reports based on the data, then crunch and archive the old data.
What recommendations can be made to ensure that I don't take down our services with logging data?
The solution we took with this problem was to create an archive table, then regularly (every 15 minutes, on an app server) crunch the data and move it into the tables used to generate reports. The archive table, of course, did not have any indexes; the tables the reports are generated from have several.
Some stats on this approach:
Short Version: >360 times faster
Long Version:
The original code/model did direct inserts into the indexed table, and the average insert took 0.036 seconds; using the new code/model, inserts took less than 0.0001 seconds (I could not get an accurate fix on the insert time, so I had to measure 100,000 inserts and average). The post-processing (crunch) took 12 seconds on average for several tens of thousands of records. Overall we were greatly pleased with this approach, and so far it has worked incredibly well for us.
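A minimal sketch of that crunch step, assuming JDBC and an auto-increment id on the archive table; the table and column names are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ArchiveCrunch {
    // Runs every 15 minutes (e.g., from a cron job on an app server):
    // move raw rows from the unindexed archive table into the indexed
    // report table, paying the index-maintenance cost once, in bulk.
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/logs", "user", "password");
             Statement st = conn.createStatement()) {
            conn.setAutoCommit(false);

            // Bound the batch by the current max id, so rows that arrive
            // while we work are left for the next run.
            long maxId;
            try (ResultSet rs = st.executeQuery(
                    "SELECT COALESCE(MAX(id), 0) FROM archive_log")) {
                rs.next();
                maxId = rs.getLong(1);
            }

            st.executeUpdate("INSERT INTO report_log (user_id, event, created_at) "
                           + "SELECT user_id, event, created_at FROM archive_log "
                           + "WHERE id <= " + maxId);
            st.executeUpdate("DELETE FROM archive_log WHERE id <= " + maxId);
            conn.commit();
        }
    }
}
```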
Based on what you describe, I recommend you leverage the fact that you don't need to read this data immediately and pursue a "periodic bulk commit" route. That is, buffer the logging data in RAM on the app servers and do periodic bulk commits. If you have multiple application nodes, some sort of randomized approach would help even more (e.g., commit updated info every 5 +/- 2 minutes).
The main drawback with this approach is that if an app server fails, you lose the buffered data. However, that's only bad if (a) you absolutely need all of the data and (b) your app servers crash regularly. Small chance that both are true, but in the event they are, you can simply persist your buffer to local disk (temporarily) on an app server if that's really a concern.
The main idea (see the sketch below) is:
buffering the data
periodic bulk commits (leveraging some sort of randomization in a distributed system would help)
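The asker is on PHP, but here is a minimal sketch of the same idea in Java; the table, connection string, and flush period are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BufferedLogger {
    private final List<String[]> buffer = new ArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    BufferedLogger() {
        // Randomize the flush period (5 +/- 2 minutes) so multiple app
        // servers don't all hit the database at the same moment.
        long delaySeconds = 300 + new Random().nextInt(241) - 120;
        scheduler.scheduleWithFixedDelay(this::flush, delaySeconds, delaySeconds,
                TimeUnit.SECONDS);
    }

    // Called on every request: just append to the in-memory buffer.
    synchronized void log(String userId, String event) {
        buffer.add(new String[] {userId, event});
    }

    // Periodic bulk commit: one batched INSERT instead of thousands of
    // single-row round trips.
    synchronized void flush() {
        if (buffer.isEmpty()) return;
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/logs", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO request_log (user_id, event) VALUES (?, ?)")) {
            for (String[] row : buffer) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();
            buffer.clear();
        } catch (Exception e) {
            e.printStackTrace(); // keep the buffer and retry on the next cycle
        }
    }
}
```

Note the trade-off described above: anything still in the buffer when an app server dies is lost, which is acceptable for best-effort request logging but not for data you absolutely need.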
Another approach is to stop opening and closing connections, if possible (e.g., keep longer-lived connections open). While that's likely a good first step, it may require a fair amount of work on a part of the system that you may not have control over. But if you do, it's worth exploring.