Can MS Enterprise Library Logging be used for multiple applications?

I'm wondering whether it's (a) possible and (b) good practice to log multiple applications to a single log instance.
I have several ASP.NET apps and I would like to aggregate all exceptions to a centralized location that can be queried as part of an Enterprise Dashboard app. I'm using both the EL logging block and the EL exception handling block, along with the Database Trace Listener. I would like to see exceptions across all apps logged to a single database.
Any comments, best practice guidelines or answers would be extremely welcome.

Yes, it is definitely possible to store multiple application logs in a central location using EL.
An Enterprise Dashboard application that lets you view exceptions across applications and tiers, and provides reporting is a great reason to centralize your logging. So I'll say yes to question b as well.
Possible Issues/Negatives
I'm assuming that you are using the Database Trace Listener, since you mention it in your question. If a large number of applications are logging a large number of entries while users are also querying the (potentially large) log database, performance can degrade - and because the logging is done synchronously, that could slow down the applications themselves.
Another Approach
To mitigate this risk, I would investigate using the Distributor Service to log asynchronously. In that model, all of the applications log to a message queue (using the MSMQ Trace Listener). A separate service then polls the queue and forwards the log entries to a trace listener (in your case a Database Trace Listener), which persists the messages in your dashboard database. This setup is more complicated, but it seems to align with what you are trying to achieve and has other benefits, such as asynchronous processing and the ability to keep logging even if the dashboard database is down (e.g. for maintenance).
Other Considerations
You may also want to think about standardizing some LogEntry properties across applications. For example, LogEntry doesn't really have an "application" property so you could add an ExtendedProperty to represent the application name. Or you may standardize on a specific format for the Message property so that various information can be pulled out of the message and stored in separate database columns for easier searching and categorization.
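For example, here is a minimal sketch of what that standardization could look like in each application, assuming the standard Logging Application Block API (Logger, LogEntry). The "ApplicationName" key, the category name, and the values shown are only illustrative choices, not anything Enterprise Library mandates:

    // Minimal sketch: tag every log entry with the originating application so
    // the central database can be filtered per app. Assumes the standard
    // Enterprise Library Logging Application Block API (Logger, LogEntry);
    // the "ApplicationName" key and the category name are example choices.
    using System;
    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public static class DashboardLogger
    {
        public static void LogException(Exception ex)
        {
            var entry = new LogEntry
            {
                Message = ex.Message,
                Severity = System.Diagnostics.TraceEventType.Error
            };
            entry.Categories.Add("DashboardExceptions");                   // category routed to the Database Trace Listener
            entry.ExtendedProperties.Add("ApplicationName", "OrdersWeb");  // hypothetical application name
            entry.ExtendedProperties.Add("MachineName", Environment.MachineName);

            Logger.Write(entry);
        }
    }

Depending on how your text formatter template and the logging stored procedures are configured, you may need to tweak them so the extended property ends up in its own column rather than only inside the formatted message.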

Related

Volume or frequency limitations of SQL Server Database Mail

I've created a nightly sync between two database applications for a small construction company and set up simple notifications using Database Mail to let a few people know whether the load was successful. Now that they see this notification working, I've been asked to provide status updates to their clients as employees make changes to the work order throughout the day.
I've done some research and understand DB Mail is not designed for this type of feature but I'm thinking the frequency will be small enough to not be a problem. I'm estimating 50-200 emails per day.
I couldn't find anything on the actual limitations of DB Mail, and I'm wondering if anyone has tried something similar in the past, or whether someone could point me in the right direction to send these emails using best practice.
If we're talking hundreds here you can definitely go ahead. Take a peek at the Database Mail MSDN page. The current design (i.e. anything post-SQL 2000) was built specifically for large, high-performance enterprise implementations. Built on top of Service Broker (SQL Server's message queuing bus), it offers asynchronous processing and scalability, with process isolation, clustering, and failover. One caveat is increased transaction log pressure: unlike in some other implementations, messages are ACID-protected by SQL Server, which in turn gives you full recoverability of the queues in case of failure.
If you're wondering what Service Broker can handle before migrating to a dedicated solution, there's a great MySpace case study. The most interesting fragment:
"We didn't want to start down the road of using Service Broker unless we could demonstrate that it could handle the levels of messages that we needed to support our millions of users across 440 database servers," says Stelzmuller. "When we went to the lab we brought our own workloads to ensure the quality of the testing. We needed to see if Service Broker could handle loads of 4,000 messages per second. Our testing found it could handle more than 18,000 messages a second. We were delighted that we could build our solution using Service Broker, rather than creating a custom solution on our own."
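For completeness, the application side of sending one of those status mails stays small. A rough sketch, assuming the calling code is .NET and that a Database Mail profile (here called "SyncNotifications") already exists; the connection string and addresses are placeholders:

    // Sketch: queue a status email through Database Mail from .NET by calling
    // msdb.dbo.sp_send_dbmail. Assumes an existing Database Mail profile named
    // "SyncNotifications"; connection string and recipient are placeholders.
    using System.Data;
    using System.Data.SqlClient;

    public static class StatusMailer
    {
        public static void SendStatus(string subject, string body)
        {
            using (var conn = new SqlConnection("Server=.;Database=msdb;Integrated Security=true"))
            using (var cmd = new SqlCommand("msdb.dbo.sp_send_dbmail", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@profile_name", "SyncNotifications");
                cmd.Parameters.AddWithValue("@recipients", "client@example.com");
                cmd.Parameters.AddWithValue("@subject", subject);
                cmd.Parameters.AddWithValue("@body", body);

                conn.Open();
                cmd.ExecuteNonQuery(); // the message is queued via Service Broker; delivery happens asynchronously
            }
        }
    }

Since sp_send_dbmail only queues the message via Service Broker, the call returns almost immediately rather than waiting for SMTP delivery.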

Concept clearance regarding Mobile Apps

I am new to mobile programming although I have some experience of working on web products.
I have a few concepts which I need cleared...
What is the difference between an MBaaS (like Kii or Parse) and a data store (like MongoDB)?
How would I tie an MBaaS and MongoDB together? Also, if I need to connect an MBaaS to an RDBMS, how do I go about it?
On some MBaaS websites I read about objects in a cache getting synchronised with objects on the server. In what shape are these cached objects? Are they JSON bodies?
Can a session be shared between an application and a browser session on the same mobile device?
Can multiple applications access the same MBaaS space? What happens if multiple applications need to access the same database? Is it possible?
I have an application; can it use the same cache area for storing the IDs/passwords of two different users?
Please help me, as I am not finding much documentation on the internet.
Thanks in advance,
Dee.
Those are all pretty good questions when you're starting to take a look at MBaaS. I'll try to answer according to my experience:
1) An MBaaS provides a higher level of abstraction than a database. It delivers higher-level services instead of just persistence - think of services like user management, analytics, push, etc. on top of plain data management. An MBaaS almost always provides a data management service, but it's higher level, since it runs on top of databases like MongoDB (because MBaaS services require scalability, they often rely on NoSQL databases, but they do not expose the db API to you directly). Pros: you get to deal with a simpler, more straightforward data management API. Cons: you don't get the granular control over data operations that you would have with a db.
2) To tie an MBaaS to another database you need to rely on the MBaaS import/export services (I suppose this makes sense after reading the answer to the first question). Pros: you don't worry about how the data is stored in the MBaaS (it will scale, have integrity, etc.). Cons: you don't have low-level access to the data (you go through the MBaaS API). But I must say MBaaS offerings are improving a lot in what they allow you to do with the data (this is getting better).
3) Maybe you read something about the offline capabilities of some MBaaS. Some of them keep operations and/or changed objects in a cache and then synchronize with the backend when the device is online. The shape of these objects can vary with each MBaaS, but JSON is often used when it's time to communicate with the backend (JSON is convenient for transferring data and operations, but it's not necessarily the internal representation of the MBaaS client cache).
4) Not in the traditional web sense of having a session with a cookie. MBaaS usually work at the level of a user session, relying on authentication services (they are particularly strong in this area). Some MBaaS provide anonymous-user functionality, where users of your app can have a session without explicit authentication (but this can't be correlated with the same user having an anonymous session on the web). In general you'll have to use user authentication to tie web activity to MBaaS activity.
5) On the first generation of MBaaS this wasn't possible - everything was designed around independent apps. But problems started to arise, like "what if I want to share users among different apps?", so MBaaS providers are adding more services that address these sorts of issues (such as single sign-on across multiple apps).
6) I'm not sure I follow, but if you have an application that uses an MBaaS, you're probably going to use the MBaaS authentication services to log a user in, so the fact that you're using one device and one cache is not an issue for allowing your app to authenticate multiple users. Let me know if this is not exactly what you asked (I can edit the question).
Hope this helps you get a better picture.
Best!

What's the most efficient architecture for this system? (push or pull)

All s/w is Windows based, coded in Delphi.
Some guys submit some data, which I send by TCP to a database server running MySQL.
Some other guys add a pass/fail to their data and update the database.
And a third group are just looking at reports.
Now, the first group can see a history of what they submitted. When the second group adds pass/fail, I would like to update their history. My options seem to be
blindly refresh the history regularly (in Delphi, I display it on a DB grid, so I would close and then reopen the query), but this seems inefficient.
ask the database server regularly if anything changed in the last X minutes.
never poll the database server, instead letting it inform the user's app when something changes.
Option 1 seems inefficient. Option 2 seems better. Option 3 reduces TCP traffic compared to option 2, but not by much - each poll in option 2 is only a few bytes anyway. However, option 3 has the disadvantage that both sides now have to act as both TCP client and server.
Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?
I guess there are two things to consider. Most importantly, reduce network traffic and, less important, make my code simpler.
I am sure this is a very common pattern, but I am new to this kind of thing, so would welcome advice. Thanks in advance.
[Update] Close voters, I have googled and can't find an answer. I am hoping for the benefit of your experience. Can you help me reword this to be acceptable? Or maybe give a URL which will help me? Thanks
Short answer: use notifications (option 3).
Long answer: this is a use case for some middle layer which propagates changes using a message-oriented middleware. This decouples the messaging logic from database metadata (triggers / stored procedures), can use peer-to-peer and publish/subscribe communication patterns, and more.
I have blogged a two-part article about this at
Firebird Database Events and Message-oriented Middleware (part 1)
Firebird Database Events and Message-oriented Middleware (part 2)
The article is about Firebird but the suggested solutions can be applied to any application / database.
In your scenario, clients can also use the middleware message broker to send messages to the system even if the database or the Delphi part is down. The messages will be queued in the broker until the other parts of the system are back online. This is an advantage if there are many clients and software updates or maintenance windows are required.
Similarly, if a member of the third group is viewing a report and a member of either of the first two groups updates data, I wish to reflect this in the report. What is the best way to do this?
If this is a real requirement (reports are usually an immutable 'snapshot' of data, but maybe you mean a view which needs to be updated while being watched, similar to a stock ticker), it is easy to implement: a client just needs to 'subscribe' to an information channel which announces relevant data changes. This can be solved very flexibly, and with little resource usage, using existing message broker features like message selectors and destination wildcards. (Note that I am the author of some Delphi and Free Pascal client libraries for open source message brokers.)
Related questions:
Client-Server database application: how to notify clients that data was changed?
How to communicate within this system?
Each of your proposed solutions is viable in certain situations.
I've been writing software for a long time, and the comments below come from personal experience dating back to 1981. I have no doubt others will have alternative opinions which also answer your questions.
Please allow me to walk through the positives and negatives of each approach, and the assumptions behind each comment.
"blindly refresh the history regularly (in Delphi, I display on a DB grid so I would close then open the query), but this seems inefficient."
Yes, this is inefficient.
It is often the quickest and simplest thing to do.
It seems like the best short-term, temporary solution, giving maximum value for minimal effort.
It is good for "exploratory coding", helping you arrive at a better software design.
It should be a good basis from which to refine or explore alternatives.
It's very important for programmers to document, and/or share with the team members who could be affected by the change, whenever a tech-debt-inducing fix has been checked in.
If it is not intended as production-quality code, this is acceptable.
If usability is poor, then consider more efficient solutions, like the ones you've described below.
"ask the database server regularly if anything changed in the last X minutes."
You are talking about a "pull" or "polling" model. Consider the following API options for this model (a rough sketch follows after the list):
What has changed since the last time I called you? (the client supplies the timestamp, so the service doesn't have to store and retrieve session state)
If nothing has changed, the server can tell the client when to poll again. A system under excessive load can then back clients off: a server application that is aware of such conditions can control the polling rate of compliant clients by instructing them to wait longer before retrying.
After considering that, ask: is the API as simple as it can possibly be?
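Here is the rough polling sketch mentioned above. The IHistoryService interface, the ChangesResponse type, and all member names are hypothetical - they only illustrate "send the last-seen server timestamp, honor the server's next-poll hint". Your system is Delphi; this is written in C# purely for illustration, and the shape translates directly.

    // Hypothetical polling client: asks "what changed since X?", uses the
    // server's timestamp for the next call, and obeys the server's back-off hint.
    using System;
    using System.Threading;

    public class ChangesResponse
    {
        public string[] ChangedRecords { get; set; }        // whatever the grid needs to refresh
        public DateTime ServerTime { get; set; }            // use server time to avoid clock skew
        public TimeSpan SuggestedPollInterval { get; set; } // server can back clients off under load
    }

    public interface IHistoryService
    {
        ChangesResponse GetChangesSince(DateTime lastSeen);
    }

    public class HistoryPoller
    {
        public void Run(IHistoryService service)
        {
            var lastSeen = DateTime.MinValue;
            while (true)
            {
                var response = service.GetChangesSince(lastSeen);
                if (response.ChangedRecords.Length > 0)
                {
                    // refresh the grid / report here
                }
                lastSeen = response.ServerTime;
                Thread.Sleep(response.SuggestedPollInterval); // honor the server's back-off hint
            }
        }
    }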
"never poll the database server, instead letting it inform the user's app when something changes."
This is the "push" model you're talking about- publishing changes, ready for subscribers to act upon.
Consider what impact this has on clients waiting for a push - timeout scenarios, number of clients, etc, System resource consumption, etc.
Consider that the "pusher" has to become aware of all consuming applications. If using industry standard messaging queueing systems (RabbitMQ, MS MQ, MQ Series, etc, all naturally supporting Publish/Subscribe JMS topics or equivalent then this problem is abstracted away, but also added some complexity to your application)
consider the scenarios where clients suddenly become unavailable, hypothesize failure modes and test the robustness of you system so you have confidence that it is able to recover properly from failure and consistently remain stable.
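If a broker does the heavy lifting, the subscriber side of the push model can stay small. Here is a sketch using the RabbitMQ .NET client (assuming the RabbitMQ.Client 6.x API) with a fanout exchange; the exchange name and host are placeholders, and the publisher would simply publish a short "data changed" message to the same exchange after each update. Again, this is C# for illustration; Delphi/Free Pascal clients for such brokers follow the same shape.

    // Sketch of a push-model subscriber using the RabbitMQ .NET client
    // (RabbitMQ.Client 6.x API assumed). Exchange name and host are placeholders.
    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public static class ChangeSubscriber
    {
        public static void Listen()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                channel.ExchangeDeclare("workorder.changes", ExchangeType.Fanout);
                var queueName = channel.QueueDeclare().QueueName; // private, auto-delete queue per client
                channel.QueueBind(queueName, "workorder.changes", routingKey: "");

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (sender, ea) =>
                {
                    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
                    Console.WriteLine("Change notification: " + message); // refresh history/report here
                };
                channel.BasicConsume(queueName, autoAck: true, consumer: consumer);

                Console.ReadLine(); // keep the subscriber alive
            }
        }
    }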
So, what do you think the right approach is now?

Database and tools for synchronize data between embedded computers in "real time"

I have three Beagleboards that need to share data with each other as fast as possible. They are running Debian (with a real-time kernel) and are connected to each other via WLAN.
All Beagleboards have different sensors attached, and each board needs the sensor data of the others in real time (or at least as fast as possible - the data are used in control algorithms for actuators).
The system is intended to demonstrate a concept and does not need to be 100% fault-proof, but it should be as close as possible.
What is the best way to design such a system?
Ideas:
Design the program around UDP broadcast, with some SQL server or just an object/class on the receiving end.
Embedded MySQL/High Performance MySQL with replication or a cluster.
SQLite - would it need some add-ons?
Any other solution might be better - I have never designed such a system before. Any help is much appreciated.
If "as fast as possible" is your requirement you need to do the sharing of data yourself and use the database just to store shared data.
You can implement a publisher / subscriber mechanism. One of your nodes becomes master and each of other nodes subbscribes to this node at startup. Master node multiplies and routes messages from subscribers.
Another (faster) option is implementing the publisher / subscriber mechanism without a master node. Each node registers itself to other nodes, it is similar to broadcasting you mentioned.
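As a starting point for the masterless/broadcast variant, here is a minimal sketch. It is written in C# purely for illustration (the same pattern maps directly onto C, C++ or Python sockets on the boards); the port number and the plain-text payload are arbitrary choices.

    // Minimal sketch of masterless sharing over UDP broadcast: every node
    // broadcasts its own sensor readings and listens for everyone else's.
    // Port number and payload format are arbitrary choices for illustration.
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    public static class SensorBroadcast
    {
        const int Port = 9750;

        public static void Publish(string sensorReading)
        {
            using (var udp = new UdpClient())
            {
                udp.EnableBroadcast = true;
                var payload = Encoding.UTF8.GetBytes(sensorReading);
                udp.Send(payload, payload.Length, new IPEndPoint(IPAddress.Broadcast, Port));
            }
        }

        public static void Listen()
        {
            using (var udp = new UdpClient(Port))
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    var payload = udp.Receive(ref remote);             // blocks until a datagram arrives
                    var reading = Encoding.UTF8.GetString(payload);
                    Console.WriteLine($"{remote.Address}: {reading}"); // feed into the control algorithm here
                }
            }
        }
    }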

notifying applications on db INSERT

Consider an application with two components, possibly running on separate machines:
Producer - inserts records into the database, but does little to no reading from it. Multiple instances may be running concurrently.
Consumer - must be notified when a record is inserted into the database by a producer instance. May also have multiple instances.
What is the best way to perform the notifications, assuming that producers will be inserting 10-100 records into the database per second at peak times? The database technology is currently MySQL, but this is not necessarily set in stone. I can see a few different ways:
Use something like MySQL message queue to "push" INSERT notifications to subscribers (consumers). Producers would have no knowledge that this was occurring.
Have producers interact with an intermediate layer that performs the INSERT, and pushes notifications to a message queue that consumers are subscribed to.
Have consumers poll the database frequently to check for new additions (seems like a bad idea)
etc.
As far as coupling is concerned: is it a good idea to have two relatively separate application components perform direct queries on a shared database, or should one component "own" the database while the other interacts with it indirectly via calls to the owning component?
I like the second proposed solution (the intermediate layer), as it separates the notification from the database work, and could possibly be part of a two-phase commit XA transaction. If the consumers need the database content in addition to the notification, that can be accomplished via MySQL replication. This could also address the coupling question, as the consumer components could have read-only access to their replicated instances.
Using a messaging solution would also address any potential bottlenecks in the database-only solution, as it would separate the notification and storage into separate processes.
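A rough sketch of that intermediate layer, just to make the shape concrete: the IMessagePublisher interface, the records table, and the connection string are hypothetical stand-ins for whatever broker client and schema you end up with.

    // Sketch of the intermediate-layer approach: one component owns the INSERT
    // and, once the row is committed, publishes a small notification for
    // consumers. IMessagePublisher, the table and the connection string are
    // hypothetical stand-ins.
    using MySql.Data.MySqlClient;

    public interface IMessagePublisher
    {
        void Publish(string topic, string message);
    }

    public class RecordIngestService
    {
        private readonly string _connectionString;
        private readonly IMessagePublisher _publisher;

        public RecordIngestService(string connectionString, IMessagePublisher publisher)
        {
            _connectionString = connectionString;
            _publisher = publisher;
        }

        public void Insert(string payload)
        {
            long newId;
            using (var conn = new MySqlConnection(_connectionString))
            using (var cmd = new MySqlCommand("INSERT INTO records (payload) VALUES (@payload)", conn))
            {
                cmd.Parameters.AddWithValue("@payload", payload);
                conn.Open();
                cmd.ExecuteNonQuery();
                newId = cmd.LastInsertedId;
            }

            // Publish only the id; consumers that need the full row can read it
            // from their (replicated, read-only) view of the database.
            _publisher.Publish("records.inserted", newId.ToString());
        }
    }

Publishing only the new row's id keeps the notification small and leaves the heavier reads to the consumers' replicated instances, which also answers the coupling question: only the ingest component writes to the master database.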
Depending on the language, you have a number of choices for the message distribution. If you're using Java, I'd actually recommend JGroups rather than JMS, as it's somewhat easier to configure.
If Java isn't your language of choice, Apache's ActiveMQ supports a number of languages for interfacing. Apache's Qpid is an AMQP implementation that also supports a number of languages (Java, C++, Python, Ruby, etc.).
Other messaging options could include XMPP, STOMP, or RestMS implementations.