I'm currently trying to figure out how to schedule a task in the future efficiently, but only when certain conditions are met. I'm planning a new project so specific technologies haven't been locked in but I'll most likely be using a queue like RabbitMQ.
The system I'll be building will have a bunch of data in separate accounts, and each account can set up different rules and actions to occur when certain conditions are met.
E.g. Do X if field1 = 'abc' and field2 > 200
The immediate rules are of course pretty straightforward; however, if the rule is changed to 'Do X in 3 days if field1 = 'abc' and field2 > 200', it becomes harder, and an ideal solution isn't springing to mind.
When the data is changed and the rule matches, I could add a delayed message to the queue so the message is released after 3 days. This would be extremely efficient, no polling, etc... The problem is that if the data changes during that time and the rule no longer matches, I don't want the action to occur.
Additionally, it's possible for the rule to match, then for the data to change so that the rule no longer matches, then for the data to change again so that it matches again. This should reset the 3 days, which rules out just double-checking the rule when the delayed message gets released.
I could store a history of all changes to the data (which I might be doing anyway), but when the delayed message is released it could be expensive to trawl through all the changes to verify the rule has matched the whole time.
It's looking like I might need to use a database-driven queue, and when data changes, re-apply the rules to every delayed message and delete those that stop matching.
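Something like this is what I have in mind for the database-driven route - a rough sketch only, all table and column names hypothetical, assuming MySQL:

-- A row is inserted when a rule starts matching, deleted when the data
-- stops matching, and re-inserted (with a fresh run_at) when it matches
-- again, which naturally resets the 3 days.
CREATE TABLE delayed_actions (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    account_id BIGINT NOT NULL,
    rule_id    BIGINT NOT NULL,
    record_id  BIGINT NOT NULL,
    run_at     DATETIME NOT NULL,
    UNIQUE KEY uniq_rule_record (rule_id, record_id)
);

-- When the data changes and the rule no longer matches (example ids):
DELETE FROM delayed_actions WHERE rule_id = 42 AND record_id = 1001;

-- A worker (or the handler for a 3-day delayed queue message) picks up due rows:
SELECT * FROM delayed_actions WHERE run_at <= NOW() LIMIT 100;

Whether the due rows are found by polling or by double-checking when the delayed queue message fires, the action would only run if a row still exists at that point.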
This kind of thing must come up in a wide range of applications however and I'm wondering if there is a better way of doing delayed messages while the data continues to match rules.
Related
A database table is used to store editing changes to a text document.
The database table has four columns: {id, timestamp, user_id, text}
A new row is added to the table each time a user edits the document. The new row has an auto-incremented id, and a timestamp matching the time the data was saved.
To determine what editing changes a user made during a particular edit, the text from the row inserted in response to his or her edit is compared to the text in the previously inserted row.
To determine which row is the previously inserted row, either the id column or the timestamp column could be used. As far as I can see, each method has advantages and disadvantages.
Determining the creation order using id
Advantage: Immune to problems resulting from incorrectly set system clock.
Disadvantage: Seems to be an abuse of the id column since it prescribes meaning other than identity to the id column. An administrator might change the values of a set of ids for whatever reason (eg. during a data migration), since it ought not matter what the values are so long as they are unique. Then the creation order of rows could no longer be determined.
Determining the creation order using timestamp
Advantage: The id column is used for identity only, and the timestamp is used for time, as it ought to be.
Disadvantage: This method is only reliable if the system clock is known to have been correctly set each time a row was inserted into the table. How could one be convinced that the system clock was correctly set for each insert? And how could the state of the table be fixed if ever it was discovered that the system clock was incorrectly set for a not precisely known period in the past?
I seek a strong argument for choosing one method over the other, or a description of another method that is better than the two I am considering.
Using the sequential id would be simpler as it's probably(?) a primary key and thus indexed and quicker to access. Given that you have user_id, you can quickly ascertain the last and prior edits.
Using the timestamp is also applicable, but it's likely to be a longer entry, we don't know if it's indexed at all, and there is the potential for collisions. You rightly point out that system clocks can change... whereas sequential ids cannot.
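For example, fetching the row inserted immediately before a given revision by id is a cheap, indexed lookup (the table name revisions and the id value 1234 are just illustrative):

-- The 'previous' revision is simply the highest id below the current one.
SELECT prev.*
FROM revisions AS prev
WHERE prev.id < 1234
ORDER BY prev.id DESC
LIMIT 1;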
Given your update:
As it's difficult to see what your exact requirements are, I've included this as evidence of what one particular project required for 200K+ complex documents and millions of revisions.
This comes from my own experience building a fully auditable doc/profiling system for an internal team of more than 60 full-time researchers. We ended up using both an id and a number of other fields (including a timestamp) to provide audit trailing and full versioning.
The system we built has more than 200 fields for each profile, so versioning a document was far more complex than just storing a block of changed text/content for each edit; yet each profile could be edited, approved, rejected, rolled back, published and even exported as a PDF or another format, as ONE document.
What we ended up doing (after a lot of strategy/planning) was to store sequential versions of the profile, but they were keyed primarily on an id field.
Timestamps
Timestamps were also captured as a secondary check, and we kept the system clocks accurate (across a cluster of servers) using cron scripts that checked the time alignment regularly and corrected it where necessary. We also ran ntpd to prevent clock drift.
Other captured data
Other data captured for each edit included (but was not limited to):
User_id
User_group
Action
Approval_id
There were also other tables that fulfilled internal requirements (including automatically generated annotations for the documents), as some of the profile editing was done using data from bots (built using NER/machine learning/AI), with approval required from a member of the team before edits/updates could be published.
An action log was also kept of all user actions, so that in the event of an audit one could look at the actions of an individual user; even when a user didn't have the permissions to perform an action, the attempt was still logged.
With regard to migration, I don't see it as a big problem, as you can easily preserve the id sequences when moving/dumping/transferring data. Perhaps the only issue would be if you needed to merge datasets. You could always write a migration script in that event, so from a personal perspective I consider that disadvantage somewhat diminished.
It might be worth looking at the Stack Overflow table structures for their data explorer (which is reasonably sophisticated). You can see the table structure here: https://data.stackexchange.com/stackoverflow/query/new, which comes from a question on meta: How does SO store revisions?
As a revision system, SO works well and the markdown/revision functionality is probably a good example to pick over.
Use Id. It's simple and works.
The only caveat is if you routinely add rows from a store-and-forward server, so rows may be added later but should be treated as having been added earlier.
Or add another column whose sole purpose is to record the editing order. I suggest you do not use datetime for this.
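A minimal sketch of that, assuming a table named revisions and an illustrative edit_order column maintained by the application (e.g. max(edit_order) + 1 on each insert), so the creation order survives any later renumbering of id:

ALTER TABLE revisions ADD COLUMN edit_order INT NOT NULL;

-- The previous edit is then the next-lower edit_order (57 is an example value):
SELECT * FROM revisions WHERE edit_order < 57 ORDER BY edit_order DESC LIMIT 1;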
I have noticed that using something like delayed_job without a UNIQUE constraint on a table column can still create duplicate entries in the DB. I had assumed delayed_job would run jobs one after another. The Rails app runs on Apache with Phusion Passenger. I am not sure if that is the reason this happens, but I would like to make sure that every item in the queue is persisted to AR/DB one after another, in sequence, and that there is never more than one write to this DB table at the same time. Is this possible? What are some of the issues I would have to deal with?
update
The race conditions arise because an AJAX API is used to send data to the application. The application receives batches of data, and each batch is identified as belonging together by a Session ID (SID). In the end, the final state of the database has to reflect the latest, most up-to-date AJAX PUT request to the API. Sometimes requests arrive at exactly the same time for the same SID, so I need a way to make sure they aren't all persisted at the same time but one after the other, or simply that the last one sent by AJAX request to the API wins.
I hope that makes my particular use-case easier to understand...
You can lock a specific table (or tables) with the LOCK TABLES statement.
In general I would say that relying on this is poor design and will likely lead to scalability problems down the road, since you're creating a bottleneck in your application flow.
With your further explanations, I'd be tempted to add some extra columns to the table used by delayed_job, with a unique index on them. If (for example) you only ever wanted 1 job per user you'd add a user_id column and then do
something.delay(:user_id => user_id).some_method
You might need more attributes if the pattern is more sophisticated, e.g. there are lots of different types of jobs and you only wanted one per person, per type, but the principle is the same. You'd also want to be sure to rescue ActiveRecord::RecordNotUnique and deal with it gracefully.
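At the database level that might look something like the following (a sketch only; the user_id column follows the one-job-per-user example above):

-- Extra column on delayed_job's table plus a unique index; a second attempt
-- to enqueue a job for the same user then fails with a duplicate-key error
-- instead of silently creating a double entry.
ALTER TABLE delayed_jobs ADD COLUMN user_id INT;
CREATE UNIQUE INDEX index_delayed_jobs_on_user_id ON delayed_jobs (user_id);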
For non delayed_job stuff, optimistic locking is often a good compromise between handling the concurrent cases well without slowing down the non concurrent cases.
If you are worried about multiple processes writing to the 'same' rows - as in several users updating the same order_header row - I'd suggest you set a marker bound to current_user.id on the row once /order_headers/:id/edit is called, and remove it again once the current_user releases the row, either by updating or cancelling the edit.
Your use case (from your description) seems a bit different to me, so I'd suggest you leave it to the DB: with a fairly recent (as in post-5.1) MySQL you'd add a trigger/function which does the actual update, and there you could implement logic similar to the above - some marker bound to the sequenced job id of sorts.
I have 5+ simultaneous processes selecting rows from the same MySQL table. Each process SELECTs 100 rows, PROCESSes them and DELETEs the selected rows.
But I'm getting the same row selected and processed 2 times or more.
How can I avoid this happening, on the MySQL side or the Ruby on Rails side?
The app is built on Ruby on Rails...
Your table appears to be a workflow, which means you should have a field indicating the state of the row ("claimed", in your case). The other processes should be selecting for unclaimed rows, which will prevent the processes from stepping on each others' rows.
If you want to take it a step further, you can use process identifiers so that you know what is working on what, and maybe how long is too long to be working, and whether it's finished, etc.
And yeah, go back to your old questions and approve some answers. I saw at least one that you definitely missed.
Eric's answer is good, but I think I should elaborate a little...
You add some additional columns to your table, say:
lockhost VARCHAR(60),
lockpid INT,
locktime INT, -- Or your favourite timestamp.
Default them all to NULL.
Then you have the worker processes "claim" the rows by doing:
UPDATE tbl SET lockhost='myhostname', lockpid=12345,
locktime=UNIX_TIMESTAMP() WHERE lockhost IS NULL ORDER BY id
LIMIT 100
Then you process the claimed rows with SELECT ... WHERE lockhost='myhostname' and lockpid=12345
After you finish processing a row, you make whatever updates are necessary, and set lockhost, lockpid and locktime back to NULL (or delete it).
This stops the same row being processed by more than one process at once. You need the hostname, because you might have several hosts doing processing.
If a process crashes while it is processing a batch, you can check if the "locktime" column is very old (much older than processing can possibly take, say several hours). Then you can just reclaim some rows which have an old "locktime" even though their lockhost is not null.
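For example, a reclaim pass might look like this (the four-hour cutoff is just an illustration):

-- Re-claim rows whose lock is older than four hours, assuming no batch
-- can legitimately take that long.
UPDATE tbl SET lockhost='myhostname', lockpid=12345, locktime=UNIX_TIMESTAMP()
WHERE lockhost IS NOT NULL AND locktime < UNIX_TIMESTAMP() - 4*3600
ORDER BY id LIMIT 100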
This is a pretty common "queue pattern" in databases; it is not extremely efficient. If you have a very high rate of items entering / leaving the queue, consider using a proper queue server instead.
http://api.rubyonrails.org/classes/ActiveRecord/Transactions/ClassMethods.html
should do it for you
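A sketch of what that could look like at the SQL level, combined with a locking read (this assumes the table is InnoDB; the table and column names are made up):

-- Each worker claims its batch inside a transaction; FOR UPDATE locks the
-- selected rows, so a second worker blocks until the first commits.
START TRANSACTION;
SELECT id, payload FROM jobs ORDER BY id LIMIT 100 FOR UPDATE;
-- process the rows in the application, then remove them before committing:
DELETE FROM jobs WHERE id IN (1, 2, 3);   -- example ids from the SELECT above
COMMIT;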
I'm developing a relatively simple, custom web app with a MySQL MyISAM database on the back end. Somehow, I want to avoid the classic concurrency overwrite problem, e.g. that user A overwrites user B's edits because B loads and submits some edit form before A is finished.
That's why I would like to somehow lock a row on displaying the edit form. However...
As I said, I'm using MyISAM, which, as far as I can tell, doesn't support row-level locks. Also, I'm not sure if holding 'real' MySQL locks for a couple of minutes is recommended practice.
I don't really know much about transactions, but from what I've seen, it looks like they're meant to be used inside one connection.
Using some kind of conflict merge system like Git has is not an option really.
Rows would stay locked for a few minutes. Concurrency is very low: there's half a dozen users using the app at any time.
I'm now planning on using a table with details on which user is doing what, and since when. The app can then decide to not show the edit form when some other user recently opened it (e.g. is working on it). This fake lock would be deleted on saving the form.
Would this work? What should I do to avoid deadlocks, livelocks and all that stuff?
You could implement a lock; the easiest would probably be adding two fields to the data you want locked (lock_created DATETIME, locked_by INT). Then on the edit page (and probably also on the edit button) you check whether (lock_created + lock_interval) < now(); if it isn't, the lock is still live, the data is locked for editing, and the user should be informed. (Note you always need the check on the edit page, not just on the edit button.)
Also on the submission page, you need to check the user still has the lock to submit. (See below.)
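A sketch of the lock check as a single statement (the field names follow the suggestion above; the table name, the user id 42, the row id 7 and the two-minute interval are all just examples):

-- Try to take (or refresh) the edit lock; zero affected rows means someone
-- else still holds a live lock and the edit form shouldn't be shown.
UPDATE articles
SET lock_created = NOW(), locked_by = 42
WHERE id = 7
  AND (locked_by IS NULL
       OR locked_by = 42
       OR lock_created + INTERVAL 2 MINUTE < NOW());

Because it is a single UPDATE it is atomic even on MyISAM, and the same condition (locked_by is the current user and the lock is still live) can be re-checked on the submission page.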
The one difficult part of this is what to do when someone edits but fails to submit within the lock interval.
So:
The lock_interval is 2 minutes.
At time 0:00 Alice locks the page, edits something, but gets a phone call and doesn't submit her changes
At time 2:30 Bob checks the page, gets the edit lock because Alice's lock has expired, and edits
At time 3:00 Alice gets back to her comp, presses submit -> conflict.
Someone doesn't get their data submitted. There is no way around that if you set locks to expire. (And if you don't, locks can be left forever.)
You can only decide which one to give priority to (going with the new lock created by Bob is probably easiest), inform the other that the page has expired and the data won't be submitted, and hand them back their edits to redo.
A note on the table structure: you could create a table 'locks' with fields 'table_name, row_id, lock_created, locked_by', but it probably won't be the easiest way, since joining on variable table names is complex and confusing. Also, there is probably no benefit in having a single place where all locks are stored. For a simple mechanism, I think adding uniform fields to every table you want to implement the locking mechanism on is easier all around.
You should absolutely not use row-level locks for this scenario.
You can use optimistic locking, which basically means that you have a version field on each row which is incremented when the row is saved. Before saving, you make sure that the version field is the same as it was when the row was loaded, which means that no one else has saved anything since you read the row.
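In plain SQL the save then looks something like this (the table name, column names and values are illustrative):

-- version 3 was read when the edit form was loaded; zero affected rows
-- means someone else saved in the meantime and the user must be told.
UPDATE articles
SET body = 'new text', version = version + 1
WHERE id = 7 AND version = 3;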
Here's our mission:
Receive files from clients. Each file contains anywhere from 1 to 1,000,000 records.
Records are loaded to a staging area and business-rule validation is applied.
Valid records are then pumped into an OLTP database in a batch fashion, with the following rules:
If record does not exist (we have a key, so this isn't an issue), create it.
If record exists, optionally update each database field. The decision is made based on one of 3 factors...I don't believe it's important what those factors are.
Our main problem is finding an efficient method of optionally updating the data at a field level. This is applicable across ~12 different database tables, with anywhere from 10 to 150 fields in each table (original DB design leaves much to be desired, but it is what it is).
Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors.
We've then put an UPDATE similar to...
UPDATE OLTPTable1 SET Field1 = CASE
WHEN Mask.Field1 = 0 THEN Staging.Field1
WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 )
WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 )
...
As you can imagine, the performance is rather horrendous.
Has anyone tackled a similar requirement?
We're a MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
If you are using SQL Server 2008, look into the MERGE statement; it may be suitable for your upsert needs here.
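A rough shape of such a MERGE, with the mask-driven CASE folded into the update (all table and column names are placeholders, and the mask flags are shown as columns on the staging source purely for brevity):

MERGE OLTPTable1 AS target
USING Staging AS source
    ON target.KeyField = source.KeyField
WHEN MATCHED THEN
    UPDATE SET target.Field1 = CASE
        WHEN source.MaskField1 = 0 THEN source.Field1
        WHEN source.MaskField1 = 1 THEN COALESCE(source.Field1, target.Field1)
        WHEN source.MaskField1 = 2 THEN COALESCE(target.Field1, source.Field1)
    END
WHEN NOT MATCHED THEN
    INSERT (KeyField, Field1) VALUES (source.KeyField, source.Field1);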
Can you use a Conditional Split for the input to send the rows to a different processing stage dependent upon the factor that is matched? Sounds like you may need to do this for each of the 12 tables but potentially you could do some of these in parallel.
I took a look at the merge tool, but I'm not sure it would allow the flexibility to indicate which data source takes precedence based on a predefined set of rules.
This function is critical to allow for a system that lets multiple members utilize the process that can have very different needs.
From what I have read the Merge function is more of a sorted union.
We use an approach similar to what you describe in our product for external system inputs (we handle a couple of hundred target tables with up to 240 columns). Like you describe, there's anywhere from 1 to a million or more rows.
Generally, we don't try to set up a single mass update; we handle one column's values at a time. Given that they're all a single type representing the same data element, the staging UPDATE statements are simple. We generally create scratch tables for mapping values, and it's a simple
UPDATE target SET target.column = mapping.resultcolumn WHERE target.sourcecolumn = mapping.sourcecolumn.
Setting up the mappings is a little involved, but we again deal with one column at a time while doing that.
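In the asker's SQL Server environment, one column's pass might look roughly like this (all names are made up):

-- Scratch table mapping each key to the value Field1 should end up with,
-- built once per column according to the precedence rules.
CREATE TABLE #field1_map (KeyField INT PRIMARY KEY, ResultValue VARCHAR(100));
-- populate #field1_map from the staging data, then:

UPDATE t
SET t.Field1 = m.ResultValue
FROM OLTPTable1 AS t
JOIN #field1_map AS m ON t.KeyField = m.KeyField;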
I don't know how you define 'horrendous'. For us, this process is done in batch mode, generally overnight, so absolute performance is almost never an issue.
EDIT:
We also do these in configurable-size batches, so the working sets & COMMITs are never huge. Our default is 1,000 rows per batch, but some specific situations have benefited from batches of up to 40,000 rows. We also add indexes to the working data for specific tables.