I've created a nightly sync between two database applications for a small construction company and set up simple notifications using Database Mail to let a few people know whether the load was successful. Now that they see these notifications working, I've been asked to provide status updates to their clients as employees make changes to work orders throughout the day.
I've done some research and understand DB Mail is not designed for this type of feature but I'm thinking the frequency will be small enough to not be a problem. I'm estimating 50-200 emails per day.
I couldn't find anything on the actual limitations of DB Mail, and I'm wondering if anyone has tried something similar in the past, or if someone could push me in the right direction for sending these emails using best practices.
If we're talking hundreds per day, you can definitely go ahead. Take a peek at the Database Mail MSDN page. The current design (i.e. anything post-SQL 2000) was built specifically for large, high-performance enterprise implementations. Built on top of Service Broker (SQL Server's message-queuing bus), it offers asynchronous processing and scalability, with process isolation, clustering, and failover. One caveat is increased transaction log pressure: unlike in some other implementations, messages are ACID-protected by SQL Server, which in turn gives you full recoverability of the queues in case of failure.
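For reference, queuing a message through Database Mail is a single call to sp_send_dbmail; a minimal sketch (the profile name, recipient, and message text are placeholders for whatever you configure):

    -- Queue a status e-mail; Database Mail delivers it asynchronously
    -- via Service Broker, so the call returns immediately.
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'WorkOrderProfile',   -- hypothetical mail profile
        @recipients   = N'client@example.com',
        @subject      = N'Work order 12345 updated',
        @body         = N'Status changed to: In Progress';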
If you're wondering what Service Broker can handle before migrating to a dedicated solution, there's a great MySpace case study. The most interesting fragment:
“We didn’t want to start down the road of using Service Broker unless we could demonstrate that it could handle the levels of messages that we needed to support our millions of users across 440 database servers,” says Stelzmuller. “When we went to the lab we brought our own workloads to ensure the quality of the testing. We needed to see if Service Broker could handle loads of 4,000 messages per second. Our testing found it could handle more than 18,000 messages a second. We were delighted that we could build our solution using Service Broker, rather than creating a custom solution on our own.”
Something I have searched for but cannot find a straight answer to is this:
For a given service, if there are two instances of that service deployed to two machines, do they share the same persistent store or do they have separate stores with some syncing mechanism (master/slave, clustering)?
E.g. I have an OrderService backed by MySQL. We're getting many orders in, so I need to scale this service up, so we deploy a second OrderService. Where does its data come from?
It may sound silly but, to me, every discussion makes it seem like the service and database are a packaged unit that is deployed together. But few discussions mention what happens when you deploy a second service.
Posting this as an answer because it's too long for a comment.
Microservices are self-contained components and as such are responsible for their own data. If you want to get to the data, you have to talk to the service API. This applies mainly to different kinds of services (i.e. you don't share a database among services that offer different kinds of business functionality). That's bad practice because it couples services at the hip through the database, and it then becomes easy to couple more things through the database that would normally be done at the API level, just because it's more convenient; you risk losing componentization.
But if you have the same kind of service then there are, as you mentioned, two obvious choices: share a database, or have each service instance contain its own database.
Now you have to ask yourself which solution to choose:
Are these OrderServices of yours truly capable of working on their own, or do you need to have all the orders in the same database for reporting or access by other applications?
Determine your actual bottleneck. Is it the database? If not, then share the database. Is it the services? If not, then distribute your data.
If you need to distribute the data, what are your choices and what are your needs? Do you need to be consistent all the time, or is eventual consistency good enough? Do you need to have separate databases and synchronize them manually, or does your database installation handle replication and partitioning out of the box?
Etc.
What I'm trying to say is that in this kind of situation the answer is: it depends. And something that we tech geeks often forget to do before embarking on such distributed/scalability/architecture journeys is to talk to the business. Often the business can tolerate a certain degree of inconsistency, suboptimal process, or looking up data in more places instead of one (i.e. what you think is important might not necessarily be important to the business). So talk to them and see what they can tolerate. It might be cheaper to resolve something in an operational way than to invest a lot in trying to build a highly distributable system.
All software is Windows-based, coded in Delphi.
Some guys submit some data, which I send by TCP to a database server running MySQL.
Some other guys add a pass/fail to their data and update the database.
And a third group are just looking at reports.
Now, the first group can see a history of what they submitted. When the second group adds pass/fail, I would like to update their history. My options seem to be
blindly refresh the history regularly (in Delphi, I display on a DB grid so I would close then open the query), but this seems inefficient.
ask the database server regularly if anything changed in the last X minutes.
never poll the database server, instead letting it inform the user's app when something changes.
Option 1 seems inefficient. Option 2 seems better. Option 3 reduces TCP traffic, but not by much; each poll in option 2 is only a few bytes anyway. However, it has the disadvantage that both sides are now both TCP client and server.
Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?
I guess there are two things to consider: most importantly, reducing network traffic; less importantly, keeping my code simple.
I am sure this is a very common pattern, but I am new to this kind of thing, so would welcome advice. Thanks in advance.
[Update] Close voters, I have googled and can't find an answer. I am hoping for the benefit of your experience. Can you help me reword this to be acceptable? Or maybe give a URL which will help me? Thanks
Short answer: use notifications (option 3).
Long answer: this is a use case for a middle layer which propagates changes using message-oriented middleware. This decouples the messaging logic from database metadata (triggers / stored procedures), can use both peer-to-peer and publish/subscribe communication patterns, and more.
I have blogged a two-part article about this:
Firebird Database Events and Message-oriented Middleware (part 1)
Firebird Database Events and Message-oriented Middleware (part 2)
The article is about Firebird but the suggested solutions can be applied to any application / database.
In your scenarios, clients can also use the middleware message broker to send messages to the system even if the database or the Delphi part is down. The messages will be queued in the broker until the other parts of the system are back online. This is an advantage if there are many clients and software updates or maintenance windows are required.
Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?
Reports are usually an immutable 'snapshot' of data, but maybe you mean a view which needs to be updated while being watched, similar to a stock ticker. If this is a real requirement, it is easy to implement: a client just needs to 'subscribe' to an information channel which announces relevant data changes. This can be solved very flexibly and resource-efficiently with existing message broker features like message selectors and destination wildcards. (Note that I am the author of some Delphi and Free Pascal client libraries for open source message brokers.)
Related questions:
Client-Server database application: how to notify clients that data was changed?
How to communicate within this system?
Each of your proposed solutions is viable in certain situations.
I've been writing software for a long time and comments below relate to personal experience which dates way back to 1981. I have no doubt others will have alternative opinions which will also answer your questions.
Please allow me to justify the positives and negatives of each approach, and the parameters around each comment.
"blindly refresh the history regularly (in Delphi, I display on a DB grid so I would close then open the query), but this seems inefficient."
Yes, this is inefficient.
But it is often the quickest and simplest thing to do.
Seems like the best short-term temporary solution which gives maximum value for minimal effort.
Good for "exploratory coding" helping derive a better software design.
Should be a good basis to refine / explore alternatives.
It's very important for programmers to strive to document, and/or share with team members who could be affected by the change, whenever a tech-debt-inducing fix has been checked in.
If not intended as production quality code, this is acceptable.
If usability is poor, then consider more efficient solutions, like what you've described below.
"ask the database server regularly if anything changed in the last X minutes."
You are talking about a "pull" or "polling" model. Consider the following API options for this model:
What's changed since the last time I called you? (The client provides the time, to avoid the service having to store and retrieve session state.)
If nothing has changed, the server can provide a time when the client should poll again. A system under excessive load is then able to back off clients: a server application that is aware of such conditions can control the polling rate of compliant clients by instructing them to wait for a longer period before retrying.
After considering that, ask "Is the API as simple as it can possibly be?"
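To make that concrete, a minimal sketch of the pull-model query (table and column names are illustrative, and it assumes you maintain a last_modified timestamp on every row):

    -- The client remembers the timestamp of its last successful poll
    -- and asks only for rows that changed since then. The :last_poll
    -- parameter is bound by the Delphi query component.
    SELECT submission_id, pass_fail, last_modified
    FROM submissions
    WHERE last_modified > :last_poll
    ORDER BY last_modified;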
"never poll the database server, instead letting it inform the user's app when something changes."
This is the "push" model you're talking about- publishing changes, ready for subscribers to act upon.
Consider what impact this has on clients waiting for a push - timeout scenarios, number of clients, etc, System resource consumption, etc.
Consider that the "pusher" has to become aware of all consuming applications. If using industry standard messaging queueing systems (RabbitMQ, MS MQ, MQ Series, etc, all naturally supporting Publish/Subscribe JMS topics or equivalent then this problem is abstracted away, but also added some complexity to your application)
consider the scenarios where clients suddenly become unavailable, hypothesize failure modes and test the robustness of you system so you have confidence that it is able to recover properly from failure and consistently remain stable.
So, what do you think the right approach is now?
I have looked everywhere...
What's the difference between a hosted database and a cloud database? They seem like the same thing.
Thanks
Both "hosted database" and "cloud database" mean that the database lives on the servers of some external provider/hoster.
The hoster might even be the same in both cases.
The main difference is that the "cloud" plans are usually meant to scale more (at a higher monthly fee), so you'd use them when you expect your site to get huge soon and need to quickly adjust server capacity when needed.
On the other hand, "hosted" plans are not that expensive, but have more limitations (server speed, database size...) and are more suited for "small" websites.
Of course this isn't by any means an "official" description of the two terms, but that's the impression that I get every time I see "cloud" or "hosted" webspaces/databases/services/whatever.
It depends on the context in which they're being used, but, yes, they usually mean the same thing. When I see the term cloud database being used they are usually referencing some cloud platform like Amazon EC2 or Microsoft Azure instead of GoDaddy or HostGator or something. Plus, cloud is the new buzz word - I'm sure it sells better. Lol.
As Christian Specht said, cloud servers really do scale more. So why do you need more scaling? And why are there so many options when selecting a cloud database service?
Things are not like before. Before smartphones and earlier PC operating systems, users got information from the server only when they logged on to a specific web page using their credentials. But now apps like Facebook show notifications, serve ads, and collect/push other data in parallel while we are looking at something else entirely.
A hosted database is perfectly adequate when users only access it after logging on to the web page. But the latest smartphone applications need to access the database continuously, starting from installation on the device. So with each installation, the minimum workload on the server rises.
So more scalability is required here: more simultaneous connections and more I/O requests are expected daily. With dedicated servers for this core purpose, and configurable packages based on your expected user count and bandwidth usage, a cloud service is not just another marketing term; it is a genuinely useful service.
I've just started working on a project that will involve multiple people entering data from multiple geographic locations. I've been asked to prepare forms in Access 2003 to facilitate this data entry. Right now, copies of the DB (with my tables and forms) will be distributed to each of the sites, returned to me, and then I get to hammer them all together. I can do that, but I'm hoping that there is a better way - if not for this project, then for future projects.
We don't have any funding for real programming support, so it's up to me. I am comfortable with HTML, CSS, and SQL, have played around with Django a fair bit, and am a decently fast learner. I don't have much time to design forms, but they don't have to actually function for a few months.
I think there are some substantial benefits to web-based forms (primary keys are set centrally, I can monitor data entry, form changes are immediately and universally deployed, I don't have to do tech support for different versions of Access). But I'd love to hear from voices of experience about the actual benefits and hazards of this stuff.
This is very lightweight data entry - three forms attached to three tables, linked by person ID, certainly under 5000 total records. While this is hardly bank account-type information, I do take the security of these data seriously, so that's an additional consideration. Any specific technology recommendations?
Options that involve Access:
use Jet replication. If the machines where the data editing is being done can be connected via wired LAN to the central network, synchronization would be very easy to implement (via simple Direct Synchronization, only a couple of lines of code). If not (as seems to be the case), it's an order of magnitude more complex and requires significant setup of the remote systems. For an ongoing project, it can be a very good solution. For a one-off, not so much. See the Jet Replication Wiki for lots of information on Jet replication. One advantage of this solution is that it works completely offline (i.e., no Internet connection needed).
use Access for the front end and SQL Server (or some other server database) for the back end. Provide a mechanism for remote users to connect to the centrally-hosted database server, either over VPN (preferred) or by exposing a non-standard port to the open Internet (not recommended). For lightweight editing, this shouldn't require overmuch optimization of the Access app to get a usable application, but it isn't going to be as fast as a local connection, and how slow will depend on the users' Internet connections. This solution does require an Internet connection to be used.
host the Access app on a Windows Terminal Server. If the infrastructure is available and there's a budget for CALs (or if the CALs are already in place), this is a very, very easy way to share an Access app. Like #2, this requires an Internet connection, but it puts all the administration in one central location and requires no development beyond what's already been done to create the existing Access app.
For non-Access solutions, it's a matter of building a web front end. For the size app you've outlined, that sounds pretty simple for the person who already knows how to do that, not so much for the person who doesn't!
Even though I'm an Access developer, based on what you've outlined, I'd probably recommend a light-weight web-based front end, as simple as possible with no bells and whistles. I use PHP, but obviously any web scripting environment would be appropriate.
I agree with David: a web-based solution sounds the most suitable.
I use CodeCharge Studio for that: it has a very Access-like interface, lots of wizards to create online forms etc. CCS offers a number of different programming languages; I use PHP, as part of a LAMP stack.
We have an ASP.NET web application hosted by a web farm of many instances using SQL Server 2008 in which we do aggregation and pre-processing of data from multiple sources into a format optimised for fast end user query performance (producing 5-10 million rows in some tables). The aggregation and optimisation is done by a service on a back end server which we then want to distribute to multiple read only front end copies used by the web application instances to facilitate maximum scalability.
My question is about the best way to get this data from a back end database out to the read only front end copies in such a way that does not kill their performance during the process. The front end web application instances will be under constant high load and need to have good responsiveness at all times.
The backend database is constantly being updated so I suspect that transactional replication will not be the best approach, as the constant stream of updates to the copies will hurt their performance.
Staleness of data is not a huge issue so snapshot replication might be the way to go, but this will result in poor performance during the periods of replication.
Doing a drop and bulk insert will result in periods with no data for user queries.
I don't really want to get into writing a complex cluster approach where we drop copies out of the cluster during updating - is there something along these lines that we can do without too much effort, or is there a better alternative?
There is actually a technology built into SQL Server 2005 (and 2008) that is designed to address exactly this kind of problem: Service Broker (I'll refer to it from here on as SSB). The catch is that it has a very steep learning curve.
MySpace went public about how it uses SSB to manage its park of SQL Servers: MySpace Uses SQL Server Service Broker to Protect Integrity of 1 Petabyte of Data. I know of several more (major) sites that use similar patterns, but unfortunately they have not gone public, so I cannot name names. I was personally involved with some projects around this technology (I am a former member of the SQL Server team).
Now bear in mind that SSB is not a dedicated data transfer technology like Replication. As such, you will not find anything similar to the publishing wizards and simple deployment options of Replication (check a table and it gets transferred). SSB is a reliable messaging technology, and its primitives stop at the level of message exchange: you would have to write the code that captures the data changes and packs them into messages, and the code that unpacks the messages into relational tables at the destination.
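To give a flavour of those primitives, here is a minimal one-way conversation sketch (all object names are made up for illustration; in practice the setup is split across the two databases):

    -- One-time message plumbing.
    CREATE MESSAGE TYPE [//Demo/DataChange] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Demo/DataChangeContract]
        ([//Demo/DataChange] SENT BY INITIATOR);
    CREATE QUEUE DataChangeQueue;
    CREATE SERVICE [//Demo/DataChangeService]
        ON QUEUE DataChangeQueue ([//Demo/DataChangeContract]);

    -- Send one change notification on a new conversation.
    DECLARE @dialog UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//Demo/DataChangeService]
        TO SERVICE '//Demo/DataChangeService'
        ON CONTRACT [//Demo/DataChangeContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//Demo/DataChange]
        (N'<change table="Orders" id="42" op="U"/>');

Everything above the SEND is one-time setup; the capture and unpack code you write yourself is what turns these messages into rows at the destination.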
The reason some companies still prefer SSB over Replication for a task like you describe is that SSB has a far better story when it comes to reliability and scalability. I know of projects that exchange data between 1500+ sites, far beyond the capabilities of Replication. SSB is also abstracted from the physical topology: you can move databases, rename machines, and rebuild servers, all without changing the application. Because data flow occurs over logical routes, the application can adapt on the fly to new topologies. SSB is also resilient to long periods of disconnection and downtime, being capable of resuming the data flow after hours, days, and even months of disconnect.
High throughput achieved through engine integration (SSB is part of the SQL engine itself, not a collection of satellite applications and processes like Replication) means that the backlog of changes can be processed in reasonable time (I know of sites that go through half a million transactions per minute). SSB applications typically rely on internal Activation to process the incoming data. SSB also has some unique features, like built-in load balancing (via routes) with sticky-session semantics, support for deadlock-free application-specific correlated processing, priority data delivery, specific support for database mirroring, certificate-based authentication for cross-domain operations, built-in persisted timers, and more.
This is not a specific answer to 'how do I move data from table T on server A to server B'. It is more a generic technology for exchanging data between server A and server B.
I've never had to deal with this scenario before but did come up with a possible solution for this. Basically, it would require a change in your main database structure. Instead of storing the data, you would keep records of modifications of this data. Thus, if a record is added, you store "Table X, inserted new record with these values: ..." With modifications, just store the table, field and changed value. With deletions, just store which record is deleted. Every modification will be stored with a timestamp.
Your client systems would keep their local copies of the database and will regularly ask for all database modifications after a certain date/time. You then execute those modifications on the local database and it will be up-to-date again.
And the back-end? Well, it would just keep a list of modifications and perhaps a table with the base data. Keeping just the modifications also means you're keeping track of history, allowing you to ask the system what it looked like a year ago.
How well this would perform depends on the number of modifications on the back-end database. But if you request the changes every 15 minutes, it shouldn't be that much data every time.
But again, I never had the chance to work this out in a real application, so it's still a theoretical principle for me. It seems fast, but a lot of work will be required.
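For concreteness, a minimal sketch of that modification-log idea (all table and column names are hypothetical):

    -- A single append-only log, written on every insert/update/delete
    -- (by triggers or by the application itself).
    CREATE TABLE dbo.ChangeLog (
        ChangeId   BIGINT IDENTITY PRIMARY KEY,
        TableName  SYSNAME       NOT NULL,
        KeyValue   INT           NOT NULL,
        Operation  CHAR(1)       NOT NULL,   -- 'I', 'U' or 'D'
        Payload    NVARCHAR(MAX) NULL,       -- changed field/value pairs
        ChangedAt  DATETIME      NOT NULL DEFAULT GETUTCDATE()
    );

    -- A client that last synchronized at @LastSync catches up with:
    SELECT ChangeId, TableName, KeyValue, Operation, Payload
    FROM dbo.ChangeLog
    WHERE ChangedAt > @LastSync
    ORDER BY ChangeId;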
Option 1: Write an app to transfer the data using row level transactions. It might take longer but would result in no interruption of the site using the data because the rows are there before and after the read occurs, just with new data. This processing would happen on a separate server to minimize load.
In SQL Server 2008 you can set READ_COMMITTED_SNAPSHOT to ON to ensure that the rows being updated do not cause blocking.
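For reference, enabling it is a single statement, run while no other connections are using the database (the database name is a placeholder):

    -- Readers now see the last committed version of each row
    -- instead of blocking behind in-flight writers.
    ALTER DATABASE FrontEndDb SET READ_COMMITTED_SNAPSHOT ON;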
But basically all this app does is read the new data as it is available out from one database and into the other.
Option 2: Move the data (tables or entire database) from the aggregation server to the front-end server. Automate this if possible. Then switch your web application to point to the new database or tables for future requests. This works but requires control over the web app, which you may not have.
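One way to automate the whole-database variant of option 2 is plain backup and restore; a sketch (the paths and logical file names are assumptions about your setup):

    -- On the aggregation server:
    BACKUP DATABASE AggDb TO DISK = N'\\share\AggDb.bak' WITH INIT;

    -- On each front-end server, restore under a staging name:
    RESTORE DATABASE AggDb_Staging FROM DISK = N'\\share\AggDb.bak'
        WITH MOVE N'AggDb'     TO N'D:\Data\AggDb_Staging.mdf',
             MOVE N'AggDb_log' TO N'D:\Data\AggDb_Staging.ldf',
             REPLACE;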
Option 3: If you were talking about a single table (though this could work with many), you can do a view swap. You write your code against a SQL view which points to table A. You do your work on table B and, when it's ready, you update the view to point to table B. You can even write a function that determines the active table and automates the whole swap.
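A minimal sketch of the swap (object names are illustrative):

    -- The application always queries dbo.Orders, which is a view.
    CREATE VIEW dbo.Orders AS SELECT * FROM dbo.Orders_A;
    GO
    -- Refresh the inactive table (dbo.Orders_B), then repoint the
    -- view in one quick metadata-only operation:
    ALTER VIEW dbo.Orders AS SELECT * FROM dbo.Orders_B;
    GO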
Option 4: You might be able to use something like byte-level replication of the server, which is basically copying the server from point A to point B exactly, down to the very bytes. That sounds scary, though. It's mostly used in DR situations, and this sounds like it could be a kinda/sorta DR situation, but not really.
Option 5: Give up and learn how to sell insurance. :)