I have been using AOP for "classic" things like logging and security for a while and am starting to take it further.
One problem I come across frequently with desktop applications is the need to store user-specific data locally. To that end, I have built a component that works well for me: it stores data as XML in an application-specific subfolder of the LocalApplicationData folder (on Windows, though the concept applies to any OS).
Each application needs to store its own data, but I have also built a code library where several components also need to store data.
One approach I could take is to tightly couple each of my components that need the local storage service to my implementation of local storage. However, a change to the interface of that local storage engine would be expensive.
Is this problem domain well-suited to AOP? Are there better approaches? Are there pitfalls that I'm not seeing?
I really don't see a cross-cutting concern here that would make this a problem for AOP to solve.
Simply define a little API for storing the information. Make your library implement that interface. Put everything together with Spring, any other DI tool, or manual glue code, and you are done.
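For illustration, a minimal sketch of such an API in C# (all names here are hypothetical, not taken from the original component):

```csharp
using System;
using System.IO;

// A small storage abstraction: components depend only on this interface.
public interface ILocalStore
{
    void Save(string key, string xml);
    string Load(string key);
}

// One possible implementation backed by per-user XML files.
public class XmlFileStore : ILocalStore
{
    private readonly string _folder;

    public XmlFileStore(string appName)
    {
        // Resolve the per-user data folder in a cross-platform way.
        _folder = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            appName);
        Directory.CreateDirectory(_folder);
    }

    public void Save(string key, string xml) =>
        File.WriteAllText(Path.Combine(_folder, key + ".xml"), xml);

    public string Load(string key) =>
        File.ReadAllText(Path.Combine(_folder, key + ".xml"));
}
```

Components take an ILocalStore in their constructor, and the container (or manual glue code) wires in XmlFileStore; swapping the storage engine later touches only the registration, not the components.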
Hi all.
I'm using dbExpress with C++Builder (Delphi) 2007 and MySQL, Firebird, and others.
I'd like to build a Win32 application that uses a database located on my web server.
I tried using dbExpress (TSQLConnection for MySQL), but it's very slow.
I also tried a local database with upload/download via Indy, but that was not good and a little complicated.
So what is the best way to use a web-based database from a Win32 application?
Do you have any experience with this? Any document or comment would be gratefully received.
Thanks a lot.
Database connections over an Internet link (using a VPN or not) are slow - you are perfectly right. The main reason, IMHO, is the "ping" delay of every request, which is very low on a local network and much higher over the Internet. So a direct connection is not a good idea.
In the latest versions of Delphi, you have the DataSnap components, which are the new "standard" (or Embarcadero-recommended) way of doing remote access (including web access). Even if it was found to be a bit limited at first, the latest versions are perfectly usable and are becoming a key product for cross-platform application building with Delphi. But it is not available for Delphi 2007.
One more mature product (and available for Delphi 2007) is Data Abstract:
Data Abstract is a framework for building database-driven applications
using the multi-tier data access model, for a variety of platforms.
Of course, this is not free, but this is a proven and efficient solution.
You may also take a look at our Client-Server ORM, which can connect to any DB and can implement a RESTful SOA architecture with Delphi 2007, even without using the ORM part - that is, you can keep your existing DBExpress-based source code and easily expose some web interfaces to the data. It is Open Source and uses JSON as the communication format over a secured authentication mechanism. There is a lot of documentation included (more than 700 pages of PDF), which also serves as an introduction to the SOA world.
Take a look at DataSnap.
You need a data access library that offers the following features (a minimal sketch of item 2 follows the list):
1. Thread safety. In general, you will need to use a dedicated connection for each thread.
2. Connection pooling. To make connection creation (which item 1 makes necessary) fast, there must be a connection pool.
3. Fast SQL command execution, result set opening, and fetching.
4. Tracing. With any library you may run into performance issues, and you need a tool to see what is going wrong. For that you will need to see and analyze the client/server communication.
5. Result set caching, with the ability to read a cached set simultaneously from different threads. You may have a few read-only tables that you fetch once and cache in your application, but you will need a mechanism to read this data from multiple threads. Kind of InMemTable cloning.
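To make item 2 concrete, here is a minimal sketch of a thread-safe connection pool in C# (generic over any ADO.NET IDbConnection; a production pool would also need connection validation, timeouts, and a maximum size):

```csharp
using System;
using System.Collections.Concurrent;
using System.Data;

// Minimal thread-safe pool: each thread rents a connection, uses it,
// and returns it, so connections are reused instead of created per request.
public class ConnectionPool : IDisposable
{
    private readonly ConcurrentBag<IDbConnection> _idle =
        new ConcurrentBag<IDbConnection>();
    private readonly Func<IDbConnection> _factory;

    public ConnectionPool(Func<IDbConnection> factory) => _factory = factory;

    public IDbConnection Rent()
    {
        // Reuse an idle connection when possible; opening a new one is
        // the expensive part this pool exists to avoid.
        if (_idle.TryTake(out var conn)) return conn;
        var fresh = _factory();
        fresh.Open();
        return fresh;
    }

    public void Return(IDbConnection conn) => _idle.Add(conn);

    public void Dispose()
    {
        while (_idle.TryTake(out var conn)) conn.Dispose();
    }
}
```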
My answer is biased, but you may consider AnyDAC. It has all these and many other features.
PS: dbExpress should work too. Try first to find the reason for your performance issue rather than a different library, because the same may happen with another library.
DB applications over a slow link need a different approach than those using a fast link. You have to be careful about how much data you move around and about how many round trips your application performs.
Usually an approach where the needed subset is cached on the client, modified, and then applied to the database is preferable (provided, of course, that changes do not need to be seen immediately and the chances of conflicts are low).
No middleware will help you much if the application is not designed with handling a slow link in mind.
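As one concrete shape for the cached-subset approach described above, here is a rough C# sketch using ADO.NET's disconnected model (table, column, and connection details are invented for illustration; Delphi's TClientDataSet supports the same briefcase pattern natively):

```csharp
using System.Data;
using System.Data.SqlClient; // any ADO.NET provider follows the same pattern

string connectionString = "Server=myserver;Database=mydb;Integrated Security=true";

// Fetch only the needed subset in one round trip. "Id" is assumed to be
// the primary key, which lets SqlCommandBuilder derive the write commands.
var adapter = new SqlDataAdapter(
    "SELECT Id, Name, Quantity FROM Orders WHERE CustomerId = @cust",
    connectionString);
adapter.SelectCommand.Parameters.AddWithValue("@cust", 42);
var builder = new SqlCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE

var local = new DataTable();
adapter.Fill(local);                // one round trip to pull the subset

// Edit against the local cache - no link traffic while the user works.
foreach (DataRow row in local.Rows)
    row["Quantity"] = (int)row["Quantity"] + 1;

adapter.Update(local);              // push the accumulated changes back
```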
I have found information about this already, but of a more general kind, focused on "if the data should change a lot...". I will try to be one step more specific here.
I am developing a web application. It should be possible to configure what is presented and what is not. E.g., a form can contain a number of different drop-down lists, but which drop-down lists are presented should be configurable.
Hence, there is going to be a lot of reading of the config info, while updating the configuration will be done very seldom. Also, the configuration itself should be performed using a web application as well.
What's the best strategy, using files or database for the config data?
I guess this depends on whether you are already using a database for the rest of the web application. If you are, then it makes sense to just add another table. Otherwise, the overhead of setting up a database server and managing connections just for configuration is too much, in which case a flat file using structured text is probably your best bet.
If you are already using a database, you could cache the results so that the overhead of looking up the results is lower, then clear the cache when the config is updated.
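A minimal sketch of that cache-and-invalidate idea in C# (the names are illustrative; the loader delegate would run your actual config query):

```csharp
using System;
using System.Collections.Generic;

// Serves config lookups from memory; the database is hit only when the
// cache is cold, and the admin screen clears the cache after saving.
public class ConfigCache
{
    private readonly object _lock = new object();
    private readonly Func<IDictionary<string, string>> _loadFromDb;
    private IDictionary<string, string> _cached;

    public ConfigCache(Func<IDictionary<string, string>> loadFromDb) =>
        _loadFromDb = loadFromDb;

    public string Get(string key)
    {
        lock (_lock)
        {
            if (_cached == null) _cached = _loadFromDb();
            return _cached[key];
        }
    }

    // Call this after writing new config rows.
    public void Invalidate()
    {
        lock (_lock) _cached = null;
    }
}
```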
The best strategy is encapsulation.
If you encapsulate access to your configuration data properly, you'll be able to start off with whichever implementation meets your short term requirements, safe in the knowledge that you can change it later.
Up until I read the requirement of
the configuration itself should be performed using a web application,
I'd have said a flat file or PHP include would have sufficed, but given that requirement (and the availability of MySQL), I'd say use a database.
Plus, you never know when the config's update frequency will increase.
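To make the encapsulation point concrete, here is a minimal C# sketch (the file format and names are invented): callers depend only on the abstraction, so the flat-file implementation can later be swapped for a database-backed one without touching them.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Callers depend on this abstraction, never on where the config lives.
public interface IConfigSource
{
    string Get(string key);
}

// Short-term implementation: "key=value" lines in a flat file.
public class FileConfigSource : IConfigSource
{
    private readonly Dictionary<string, string> _values;

    public FileConfigSource(string path) =>
        _values = File.ReadAllLines(path)
                      .Where(line => line.Contains("="))
                      .Select(line => line.Split(new[] { '=' }, 2))
                      .ToDictionary(p => p[0].Trim(), p => p[1].Trim());

    public string Get(string key) => _values[key];
}

// Later: a DbConfigSource implementing the same interface can replace
// this, and no caller changes.
```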
As I create more and more fields and content types, I see that Drupal creates a huge number of tables (>1k) in MySQL, and after a point my system becomes very slow.
I have tried several MySQL performance tuning tips, but nothing has improved the performance significantly. Enabling caching makes for good speed on the front end, but if I try to edit a content type from the admin back end, it takes forever!
How do you cope with that? How do you scale Drupal?
If the sheer number of tables has become the database performance bottleneck, I'd have to agree with Rimian: you can define your own content types programmatically and then develop your own content type model by leveraging the Node API.
API documentation and an example of doing just that are here: http://api.drupal.org/api/drupal/developer--examples--node_example--node_example.module/6
The code flow is basically:
Make Drupal recognize your content type
Define the fields it needs to take using the Forms API
Define how each of the Node API's functions should behave (view, load, save, etc.).
This gives you control over how things are stored, yet still gives you (and all contributed modules) the ability to leverage the hook system for Node API calls.
The obvious drawback is missing out on all of the features/modules that directly depend on CCK for their functionality. But at >1k tables (which suggests a gargantuan number of content types and fields), it sounds like you're at that level of custom work already.
I worked on a Drupal 5 site with more than a million nodes and this was a serious issue.
If you're scaling Drupal up to enterprise level, consider not using CCK for your fields and developing your own content model with the node API. It's actually quite easy.
The devel module offers a performance monitoring tool that will show you all queries performed, organized by time, along with which hooks and modules called them.
Just don't run it in production.
I have a client software program used to launch alarms through a central server. At first it stored configuration data in registry entries; now it uses a configuration XML file. This configuration information consists of alarm numbers, alarm groups, hotkey combinations, and such.
This client connects to a server using a TCP socket, which it uses to communicate this configuration to the server. In the next generation of this program, I'm considering moving all configuration information to the server, which stores all of its information in a SQL database.
I envision using some form of web interface to communicate with the server and set up the clients, rather than the current method, which is to either configure the client software on the machine through a control panel, or on install to either push out an XML file or pass command-line parameters to the MSI. I'm thinking now the only information I would want to specify on install would be the path to the server. Each workstation would be identified by computer name and configured through the server.
Are there any problems or potential drawbacks of this approach? The main goal is to centralize configuration and make it easier to make changes later, because our software is usually managed by one or two people at most.
Other than allowing the client to function offline (if such a possibility makes sense for your application), there doesn't appear to be any drawback to moving the configuration to a centralized location. Indeed, even with a centralized location, a feature can be added to the client to cache the last known configuration for use when the client is offline.
If you implement a [centralized] database design, I suggest considering storing configuration parameters in an Entity-Attribute-Value (EAV) structure, as this schema is particularly well suited to parameters. In particular, it allows easy addition and removal of individual parameters, and also allows handling parameters as a list (paving the way for a list-oriented display in the UI as well, so no UI changes are needed when new types of parameters are introduced).
Another reason why configuration parameter collections and EAV schemas work well together is that even with very many users and configuration points, the configuration data remains small enough that it doesn't suffer from some of the limitations EAV has with "big" tables.
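For illustration, a minimal EAV-style parameter table and lookup in C#, shown here with SQLite via Microsoft.Data.Sqlite (the schema and names are invented; the structure is what matters):

```csharp
using System;
using Microsoft.Data.Sqlite;

using var conn = new SqliteConnection("Data Source=config.db");
conn.Open();

// One row per (workstation, parameter) pair: introducing a new kind
// of parameter is an INSERT, not a schema change.
var create = conn.CreateCommand();
create.CommandText = @"
    CREATE TABLE IF NOT EXISTS ClientConfig (
        Entity    TEXT NOT NULL,  -- e.g. the workstation's computer name
        Attribute TEXT NOT NULL,  -- e.g. 'AlarmGroup', 'Hotkey'
        Value     TEXT NOT NULL,
        PRIMARY KEY (Entity, Attribute)
    )";
create.ExecuteNonQuery();

// Fetch every parameter for one workstation as a flat list - which
// also maps directly to a list-oriented display in the UI.
var query = conn.CreateCommand();
query.CommandText =
    "SELECT Attribute, Value FROM ClientConfig WHERE Entity = $name";
query.Parameters.AddWithValue("$name", "WORKSTATION-01");
using var reader = query.ExecuteReader();
while (reader.Read())
    Console.WriteLine($"{reader.GetString(0)} = {reader.GetString(1)}");
```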
The only thing that comes to mind is the security of the information, though you probably have that issue in either case. A database would probably be easier to interface with, as everything would be in one spot.
I'm not really asking whether I should use either a RDBMS or config files for 100% of my application configuration, but rather what kind of configuration is best addressed by each method.
For example, I've heard that "any kind of configuration that is not changeable by the end-user" should be in config files rather than the database. Is this accurate? How do you address configuration?
(I'm primarily concerned with many-user web applications here, but no particular platform.)
I find that during development it is of great benefit to have configuration stored in a file.
It is far easier to check out a file (web.config, app.config, or some custom file) and make changes that are instantly picked up when the code is run. There is a little more friction involved in working with configuration stored in a database. If your team uses a single development database you could easily impact other team members with your change, and if you have individual databases it takes more than a "get latest" to be up and running with the latest configuration. Also, the flexibility of XML makes it more natural to store configuration that is more than just "name-value" pairs in a file than in a relational DB.
The drawback is where you want to reuse the configuration across multiple apps or web site instances. In my own case, we have a single config file in a well-known location that can be referenced by any application.
At least, this is how we store "static" configuration that does not have to be updated by the system at runtime. User settings are probably more suited to storage in the DB.
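As a small illustration of the "more than name-value pairs" point, nested configuration reads naturally from XML; a C# sketch (the shared path and element names are invented):

```csharp
using System;
using System.Xml.Linq;

// One config file in a well-known shared location, referenced by any app.
var doc = XDocument.Load(@"\\server\share\common.config.xml");

// Nested structure like this is awkward as key/value rows but natural as XML:
// <connections>
//   <connection name="reports" timeoutSeconds="30" />
// </connections>
foreach (var conn in doc.Descendants("connection"))
    Console.WriteLine(
        $"{(string)conn.Attribute("name")}: " +
        $"timeout {(string)conn.Attribute("timeoutSeconds")}s");
```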
The one-liner: as a general principle, the more likely the config data is to change, the better it is to put it in the db.
The legal disclaimer:
You will almost always need some kind of "bootstrapping" configuration, which must be saved in a file. So if you are using a db to store your configuration, the size of that bootstrapping config depends on the other great principle:
"Work smarter, not harder!"
One thing to consider is how much config data there is, and perhaps how often it is likely to change. If the amount of data is small, then saving it in a database (if you're not already using a db for anything else) would be overkill; equally, maintaining a db for something that gets changed once every 6 months would probably be a waste of resources.
That said, if you're already using a database for other parts of your site, then adding a table or two for configuration data is probably not a big issue and may fit in well with the way you are storing the rest of your data. If you already have a class for saving your data to a db, why write a new one to save to a config file?