Does anybody still use Client Server Architecture [closed] - language-agnostic

I have been writing software for several decades now, and these days everything is web.
Before the web we had client/server apps: thick clients that spoke directly to the database. They had some disadvantages: deployment was cumbersome, and they did not scale because the database handled all the traffic. Of course, back then distribution of apps was limited to desktops on a corporate network. The benefit was that these apps had fewer layers and were quick to develop.
There are times when the requirements call for an app behind a firewall, with a dedicated database and a relatively small number of clients. When I suggest (sometimes on Stack Overflow) the old client/server architecture, everybody looks at me like I have 3 legs and 6 arms.
With modern technologies that allow automatic deployment of apps, and the tools we have today, is there a reason this architecture is not viable? Is it that the new generation of developers only knows web stuff?

I can think of at least two large-ish markets where client-server is still big:
Online games and virtual worlds, such as Battlefield or Second Life. Usually you need a thick client plus a connection to a shared server.
Custom-made scientific software. Complex technical or scientific software, especially if it needs an interactive graphical UI that does direct manipulation, is sometimes written in this fashion too.

I'm sure thick clients are still being developed, even today.
Having said that, choosing a web-based architecture is not about the "new generation of developers" only knowing web stuff; you do get a lot of advantages if you can make your application web-based:
Deployment is dead simple. Even with things like ClickOnce, automatic updates, etc, nothing beats simply refreshing the page to get the latest version
You can use something like Silverlight to get 99% of the benefits of a desktop application (in terms of the ability to run code on the client)
Web applications can be made available remotely much more easily than desktop applications (a lot of companies have remote workers these days, and setting up a VPN is a pain if all you want to do is access payroll, or whatever)
But at the end of the day, it's all about the right tool for the job. Web applications don't help when you want to write plugins for Office (Word, Outlook, etc), they don't help if you have to control custom hardware (POS terminals, etc - although you could write that into the server in some cases...), and probably a few more cases as well.

We have some Flex apps that communicate with XML-based web services; they are pretty close to old-school client/server apps, except that rather than speaking SQL they speak a custom XML language and render SOAP responses.

We currently develop and deploy numerous client/server applications annually. The development is simple and automated, and we are not limited in the database technologies we can deploy. Client/server deployments are faster for calculations, form updates and reporting; web/cloud-based applications are less responsive than an application running on a client station (thick client).
This comes down to the distribution of CPU load. Whereas a server-side application requires the server to perform all calculations, a client-side application can run them on the local machine. As any system gets more complex, the moments a user has to wait for results increase, and those moments are expensive because they involve paid employees. Across an organization, they add up to a great many "man hours" over a year.
The problems with updates are solved within our development tool set. Just as your favorite browser notices when the version you are running is not the most recent, we embed that same check in our client/server applications. In fact, we don't give users a choice about updating: since updates often require database changes, we force the update to happen before the user is allowed to run the software.
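By way of illustration, the startup gate can be as simple as the following sketch (the endpoint URL, version strings and updater hook are hypothetical stand-ins for illustration, not our actual tooling):

```typescript
// Hypothetical forced-update check, run before the main window opens.
// Node 18+ (global fetch) is assumed; the endpoint is made up.
const CLIENT_VERSION = "2.4.1";

function isOlder(a: string, b: string): boolean {
  // Compare dotted version strings segment by segment.
  const as = a.split(".").map(Number);
  const bs = b.split(".").map(Number);
  for (let i = 0; i < Math.max(as.length, bs.length); i++) {
    const x = as[i] ?? 0;
    const y = bs[i] ?? 0;
    if (x !== y) return x < y;
  }
  return false;
}

async function downloadAndApplyUpdate(version: string): Promise<void> {
  // Placeholder: fetch the installer and hand off to the platform updater.
  console.log(`Updating to ${version}...`);
}

async function ensureUpToDate(): Promise<void> {
  const res = await fetch("https://updates.example.com/min-client-version");
  const { minVersion } = (await res.json()) as { minVersion: string };
  if (isOlder(CLIENT_VERSION, minVersion)) {
    // No choice offered: schema migrations must land before
    // the app is allowed to touch the database.
    await downloadAndApplyUpdate(minVersion);
    process.exit(0);
  }
}

ensureUpToDate().catch(console.error);
```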
To improve the visibility of the information contained in our custom client/server systems, we offer custom-developed web sites for specific applications, such as field dispatch or customer-support forums, integrated into the desktop client/server applications. From my perspective, I see a complete integration of client/server and responsive web applications taking a stronger position in the years to come.


How can I make a browser extension payments system? [closed]

Today I found in my inbox an email from Google announcing that the CWS payments API is deprecated.
I'm working on a Chrome extension that I want to release with in-app payments support, to let the user purchase a license that unlocks the full features. I was counting on the CWS native payments API, so Google's decision to deprecate it is very bad news.
At the moment I've found a nice WordPress plugin that manages licensing, and I'm thinking of using it to build a license backend. But I'm not sure about it, because it's mainly focused on WordPress themes and plugins, so implementing it on the client side for an extension would require some workarounds.
How will you manage in-app purchases and licensing for your Chrome extensions or Electron apps?
Alright, so as I am in the same situation as you are, I did a little bit of research. Here is a summary of my findings and comments on the matter.
There are three things to think about before you get started with the implementation:
The type of payment processing service you want to use;
The way you want to limit features for the free version (and for multiple tiers of plans);
The security of your users' information throughout your extension.
Let's go through each of these one at a time.
1. Type of payment processing
There are two main types of service providers that will allow you to collect payments in your extension. Payment processing platforms are the first type: they allow you to process payments and will generate receipts, but they won't manage the different taxes and regulations of different countries. If you operate solely in one country, or in a few countries where taxes and regulations are the same, this won't affect you.
However, if you have users around the world, especially in Europe, implementing the rules to handle all of the different taxes and regulations can get really complicated and messy. But you have to do it, otherwise you put yourself in a situation where you are at risk of getting fined. That is where the second type comes in: the merchants of record. These are companies that will charge the users on your behalf, removing all of the complexities of taxes and regulations from your plate. They're essentially acting as a reseller of your products. Of course, they take a small cut from your revenue to pay for the weight that they're taking off your shoulders and putting onto their own.
Payment processing platforms will be cheaper (e.g. 2.9% + $0.30 per transaction for Stripe), while merchants of record take a bigger cut (e.g. 5% + $0.50 for Paddle). However, if you deal internationally, paying the extra 2.1% (plus $0.20) is likely worth it, just because it saves you a lot of time and development work.
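To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch using the example rates above (they are just this answer's examples; check the providers' current pricing pages before relying on them):

```typescript
// Fee comparison using the illustrative rates quoted above.
const processorFee = (amount: number) => amount * 0.029 + 0.3; // Stripe-like
const morFee = (amount: number) => amount * 0.05 + 0.5;        // Paddle-like

const price = 20; // a $20 licence
console.log(processorFee(price).toFixed(2)); // "0.88"
console.log(morFee(price).toFixed(2));       // "1.50"
// The ~$0.62 extra per sale is what buys you the tax/VAT handling.
```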
It's important to note, however, that merchants of record are unlikely to take on a brand new project, especially for Chrome extensions. That's because the revenue those extensions generate on average is pretty low, and often not really worth it for them. Still, I suggest you hit up a few of them before deciding to go the classic payment processing way, just in case you get in touch with a salesperson who sees potential in your project and is willing to take you on.
Here are a few merchants of record:
Cleverbridge
2Checkout (offers both MoR and basic payment processing services)
Paddle (does not support new Chrome extensions at the moment)
FastSpring (does not support Chrome extensions anymore, as of 2021)
Here are a few payment processing platforms:
Stripe
PayPal (from my experience, PayPal is a lot less developer-friendly than Stripe)
2. Limiting features for free or tiered plans
The way features are limited for non-paying users will differ from one extension to the other.
If the features you want to limit in your extension already rely on a backend, to fetch or process data for example, it would make sense to implement the limitations on the server side. You would simply pass the user's ID, which could be stored in chrome.storage, to each request made to the backend. In addition to that, you could also disable the related elements on the client side, such as hiding or greying out buttons, tabs or fields, to make it clear to the user that those features are locked. You'll want to make sure the limitations are in place on the backend as well however, because otherwise a user could just inspect your extension and enable premium features without paying.
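As a rough sketch of that flow in a Manifest V3 extension (the endpoint, the 402 convention and the UI helper are all illustrative assumptions, not a real API):

```typescript
// Sketch: a premium feature gated on the backend. The user ID was saved
// at sign-in, e.g. chrome.storage.local.set({ userId: "..." }).
async function runPremiumExport(): Promise<void> {
  const { userId } = await chrome.storage.local.get("userId");

  const res = await fetch("https://api.example.com/feature/export", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId }),
  });

  if (res.status === 402) {
    // The backend says this plan doesn't include the feature. Grey out
    // the button too, but never rely on the UI check alone.
    showUpgradePrompt();
    return;
  }
  const result = await res.json();
  // ...render the export result in the popup...
}

function showUpgradePrompt(): void {
  console.log("This feature requires a paid plan."); // hypothetical UI helper
}
```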
If your extension mostly or only operates on the client-side, then you will have to render the interface conditionally, based on the user's plan. The scripts or interfaces that will be added will most likely have to be returned by a backend, as pretty much anything that is done only on the client-side could potentially be inspected and exploited. In that case, any backend technologies or platforms you are most familiar with can probably be used to set things up.
Keep in mind that most of the payment processors and MoRs listed above have APIs and guides on how to implement them securely in apps and websites. However, if you know WordPress well and can set up secure communication between your WordPress site and your extension, go ahead. If you want to use an online service like Zapier to link existing authentication and licensing services together, go ahead and do that!
There could be a lot more detail in this section - there is a ton of material to cover - so I suggest you look for articles and tutorials online to guide you through this process if you don't have much experience in the matter.
3. Security
This section won't be long, but it is a very important one. No matter which payment processing platform you decide on or how you limit access to features in your extension, it is crucial to make sure that your users' information can never fall into the hands of another user. That includes reverse engineering and exploits of your system.
The more things you decide to handle yourself, the more risk there is, especially if you are not experienced. Keep that in mind when making your decision(s).
That's all for me. I hope that helps a bit!
I know it's probably a lot of information without any detailed "how-to", but without having in-depth knowledge of your product and situation, it is impossible to say what you should do exactly.
P.S.
If that can offer any guidance, here's what I will be doing for my own extension. Seeing as it's already very reliant on a PHP backend, I will add a few features to the backend in order to communicate with the Paddle API. So all of the limitations will be implemented on the backend, and I will add messages and visual indicators on the frontend to inform the free users of what they can and cannot do.
[Edit]
I just received a message from Paddle indicating that they do not support new Chrome extensions at the moment. Sorry for the misleading information there.
[Edit: June 2021]
After an update earlier this year, FastSpring has updated their security standards, which makes it unusable within Chrome extensions. After I enquired, their support agents informed me that they do not support Chrome extensions anymore (and that it was only "accidentally" supported before).

In which domains are message oriented middleware like AMQP useful?

What problems does MOM (message-oriented middleware) solve? Scalability? Integration?
In which domain are they typically used and in which domains are they typically not used?
For example, say, is Google using such a solution for its main search engine or to power Gmail?
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Does it make any sense when you're writing a web app where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers?
Does it make sense for desktop apps that need to communicate with a server?
Or is it 'only' for big enterprise stuff, where you typically have a happy mix of countless different systems that need to communicate in one way or another?
I'm a bit confused as to what they're useful for, and I think that with examples of where they're appropriate and where they're not, I could better understand their use.
This is a great question.
The main uses of messaging are: scaling, offloading work, integration, monitoring, event handling, routing, networking, push, mobility, buffering, queueing, task sharing, alerts, management, logging, batch, data delivery, pubsub, multicast, audit, scheduling, ... and more. Basically: anything where you need data but don't want to make a database request. (Caching is another, longer story).
Another way of looking at this is to notice that many applications used to be built by assuming that users (people) would perform actions that would be fulfilled by executing a transaction on a database (including reads, writes). But today, many actions are not user-initiated. Instead they are application-initiated. For example "tell me when the book that I want to buy is in stock". The best way to solve this class of problems is with messaging of some sort. Whether you call it middleware or web push or real time salad dressing does not matter. It's all messaging.
When you enable applications to initiate or react to events, then it is much easier to scale because your architecture can be based on loosely coupled components. It is also much easier to integrate those components if your messaging is based on a stable, scalable, serviceable tool, preferably using open standard APIs and protocols.
I hope this helps. We try to maintain a list of useful links about messaging here
Please get in touch with questions and comments on any of this; we are dead easy to find.
To address your specific questions:
In which domain are they typically used and in which domains are they typically not used?
Like databases, messaging systems crop up everywhere.
For example, say, is Google using such a solution for its main search engine or to power Gmail?
Google uses a lot of home grown technology, but a lot of their open source contributions and known use cases suggest that messaging is (or should be) central to some of the main services.
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Very much so.
An example use case is scaling web page requests. When the user makes a web request, the web server puts it onto a queue for background processing. This means that the web server can keep working while the request is processed. It also means that the web server does not need to know how the request is handled, making system maintenance, upgrade and rollback much simpler because the main parts are 'decoupled'.
So, anyway, the web request gets processed by a back end service, or possibly by many services, eg 'look up book titles', 'draw shopping cart', 'get advertisement', 'check user account'... Finally all the results get put onto another queue, ready for collection and user response by the web server. Typically the system will include a timeout of around 100ms so that any late requests just get thrown away. The user sees anything that got processed in the time interval. This is one reason why some large ecommerce sites have pages that appear to load in stages.
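If it helps to see the shape of that in code, here is a minimal sketch using RabbitMQ through the amqplib Node package (queue names and payloads are made up, and the web tier and worker would normally run in separate processes):

```typescript
import * as amqp from "amqplib";

async function main(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("page.requests");
  await ch.assertQueue("page.results");

  // Web tier: enqueue the work and go back to serving other users.
  ch.sendToQueue(
    "page.requests",
    Buffer.from(JSON.stringify({ requestId: "r-42", action: "look-up-book-titles" }))
  );

  // Back-end worker: consume, process, publish the result. The web tier
  // would collect from "page.results" until its ~100ms deadline expires.
  await ch.consume("page.requests", (msg) => {
    if (!msg) return;
    const job = JSON.parse(msg.content.toString());
    const result = { requestId: job.requestId, titles: ["..."] }; // real work here
    ch.sendToQueue("page.results", Buffer.from(JSON.stringify(result)));
    ch.ack(msg);
  });
}

main().catch(console.error);
```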
There are many more use cases...
Does it make any sense when you're writing a Webapp where you control the server-side and have an homogenous environment (say tens of Amazon EC2 instances all running Linux + Java JVMs) there and where the clients are, well, Web browsers?
Definitely. If you have an unknown, or unbounded, number of users, server side instances, and application latencies, then it makes sense to use messaging, even if just as a scalable substrate for non-blocking RPC.
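As a sketch of that non-blocking RPC substrate, here is the classic reply-queue-plus-correlation-id pattern over RabbitMQ (the request queue name is illustrative):

```typescript
import * as amqp from "amqplib";
import { randomUUID } from "crypto";

// RPC over messaging: the caller never blocks a thread waiting;
// it resolves a promise when the correlated reply arrives.
async function rpcCall(payload: object): Promise<string> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // Exclusive, server-named queue to receive our replies on.
  const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });
  const correlationId = randomUUID();

  const reply = new Promise<string>((resolve) => {
    ch.consume(
      replyQueue,
      (msg) => {
        if (msg?.properties.correlationId === correlationId) {
          resolve(msg.content.toString());
        }
      },
      { noAck: true }
    );
  });

  ch.sendToQueue("rpc.requests", Buffer.from(JSON.stringify(payload)), {
    replyTo: replyQueue,
    correlationId,
  });
  return reply;
}
```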
Does it make sense for desktop apps that need to communicate with a server?
In lots of cases. One very common case is when the server pushes events to the desktop app, e.g. game events, tweets, price feeds in finance, system alerts...
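For the push case, a fanout exchange is the usual shape: every connected desktop client binds its own throwaway queue and receives every event. A sketch (all names illustrative):

```typescript
import * as amqp from "amqplib";

// Desktop-client side of a server-push feed, e.g. price ticks.
async function subscribeToFeed(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("price.feed", "fanout", { durable: false });

  // Each client gets its own exclusive queue bound to the exchange.
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "price.feed", "");

  await ch.consume(
    queue,
    (msg) => {
      if (msg) console.log("tick:", msg.content.toString());
    },
    { noAck: true }
  );
}

subscribeToFeed().catch(console.error);
```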
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless of different systems that needs to communicate in a way or another?
Definitely not only for those 'legacy integration' cases but they are important too. At RabbitMQ, the biggest customers we have in terms of pure scale or message volume are cloud providers and big web application providers.
I will offer just one answer, from prior experience: take a look at the middleware employed by big companies. Middleware has one purpose - to glue disconnected systems (written in disparate languages) together so that they can interact with one another and streamline the business process. Entera, which I have experience with, creates a middle layer in which a Unix box running processes written in C interacts with the mainframe system (DB2, COBOL) via a front end written in PowerBuilder (I am not naming the company!).
From that description, Entera is a middleware that hosts a number of things: smooth integration of the flow of data regardless of endian format, and the ability for different languages to talk to the middleware broker (a broker is a CORBA- or DCE-like process, conforming to The Open Group standards, that listens on a particular port), specified by an IDL, which makes a process appear to be local. If you understand the terminology used in Remoting under Microsoft's .NET Framework, you are not far off the mark! The middleware generates stubs that are linked in at compile time, and it manages creating the process, hosting it off a port, and multi-threading at run time. Modern front ends (.NET, Java, PowerBuilder, even the unspeakable VB6... ok, VB.NET for the purists out there) can open a connection to the specified port on a particular IP address and, using the generated stubs, interact with it directly.
Obviously, from what was described, you can see how legacy systems can have new life breathed into them, and hence how the process can scale. The major downside is the cost factor, which can run into thousands of dollars. Big companies that use mainframes as their back-end processing systems for billing/invoicing, and that generate huge revenue, can obviously afford such an expensive product - to them it would seem like throwing pennies into a pool of water - because middleware that prolongs the business process can extend the business a good number of years into the future without the worry of a 'legacy' tag attached to it.
Incidentally, I carried this out as part of my thesis for my BSc in Information Systems, which covered this commercial front end. There was an open source version of the middleware available on SourceForge, called FreeDCE, but development efforts have declined or stopped.
Edit:
@cocotwo: That is exactly what middleware does - as you said, it is a plumbing tool. Message-oriented middleware is not really heard of in that setting, AFAIK, because I would imagine the processes (functions) need to be callable as if they were locally visible within the application domain of the front end, to make them easy to interact with.
Using messages may have advantages over RPC calls in that messages are queued in a safe holding area in the event that a network disconnection occurs - there may be some data caching going on in that respect to allow the front end to continue regardless. That would be useful for things like 'updating the status of a particular billing/invoice number' - a one-way data write to the back end via the middleware.
Ok, big companies have advanced systems infrastructure, with technicians around the clock to ensure smooth delivery of the data flow, so that has to be factored in. The company I worked with had an IBM Global Support contract to fulfil in order to ensure maximum uptime of 99% with six nines after the decimal point, with hot-swapping/balanced clusters/mirroring systems in place...
Whereas with RPC, if a disconnection occurs, the front end has to be restarted, or has to handle the disconnection event. It really depends on whether the message-queueing middleware handles each message in real time and passes results back to the front end immediately...
This is where the two (message-queueing and RPC-based middleware) each have their strengths and weaknesses - and also the cost-mitigation factors such as support, maximum uptime, development effort and training. That last one is a biggie here, as middleware is really proprietary (despite following The Open Group layouts/standards) and complex to set up and glue together via scripts.
Good answers and discussion here. Our consulting team has two preferred "messaging" solutions: RabbitMQ, and NXTera, a high-speed RPC middleware that is the contemporary version of the Entera mentioned above. My partners and I have developed several solutions using RabbitMQ; it is the best tool available in that space right now. Additionally, I happen to work for the company that makes NXTera/Entera.
From experience I can clearly say that both of these products meet the need for reliability and low maintenance, as discussed above. There are situations where a messaging service like RabbitMQ is the right choice - where publish/subscribe, certified delivery, queueing or store-and-forward are required.
In other cases, RPCs (remote procedure calls) are the best and fastest solution for transactional and distributed processing in enterprise or cloud-based applications. When it is right to use an RPC, but SOAP/.NET (yes, these are RPC implementations) are too slow, expensive or complex, a lightweight high-speed RPC middleware like NXTera/Entera is the right choice for us.
There is some use-case overlap between RPC middleware and message-oriented middleware, and where there is, you can use either successfully. But both are strong and dependable choices.
The large companies I work with use both RPC and MoM side by side. As for Internet companies, Google (Protocol Buffers) and Facebook (Thrift) show that RPCs have a role to play in modern web and cloud-based development.

Balancing level of integration with ease of adding new software in an intranet: portals, cms, etc?

Quick background: I'm the portal admin for our medium-sized company. Currently, our intranet policy is to try to integrate everything into our intranet portal (which we use mostly as a CMS, with a handful of applications integrated as well). This means that all of our software appears to the end user to come from one site, which is good. But it also means that we have to modify just about every piece of software we want to incorporate into the website to fit the portal. The disadvantage is that the components of our portal (the CMS, the blogging, the forums, etc.) are not best-of-breed, and to be quite honest, they are pretty bad compared to their free and open source counterparts (WordPress, phpBB and MediaWiki are examples that come to mind). Because the users are forced to use these subpar tools, they aren't happy.
We are currently looking at the other end of the spectrum, where the pieces of software in our intranet aren't integrated, but we are able to use best-of-breed free software. We would be able to roll out new services to our company much more rapidly, but the downside is that the services wouldn't be integrated. A user's profile in WordPress (Movable Type, in our case) is not connected to their profile in the other applications, for example. The software overlaps, finding information is more difficult, and users aren't happy either.
How does your company balance the ability to rapidly integrate new tools with the desire to present a single coherent interface to the user? Do you pick one enterprise platform and force yourselves to stay within its boundaries, or do you attempt to provide cohesion between many disparate tools?
Unfortunately, every tool needs separate analysis, and very often the difficulty depends on many different factors (technology, frameworks, design). But the most important considerations are the integration level and the integration points (identity, interface).
Edit:
btw, a good idea would be to spend some time on prototyping and evaluating potential solutions.

How to leverage an Open Source Project commercially? [closed]

Assume you have been involved in an open source (GPL'ed) project that has been around for as long as 5-10 years and has been fairly successful during that time, despite a good handful of commercial/proprietary alternatives.
Now, you've come to realize that the long-term contributors would like to leverage the project commercially, possibly even to make a living or start a company based on it, so that they can work on it exclusively without depending on other, unrelated work.
So, what are some of the viable and recommended steps to turn an open source/GPL project into a commercial "success" (in the sense of self-sufficiency), so that long term contributors may preferably be paid to work on the project, without affecting the open source nature of the project itself?
In other words, what are generally some of the more common revenue-creating mechanisms for open source software, and how can these be successfully introduced/implemented - also, what prerequisites/conditions apply?
I saw a company a few years back that took a handful of OSS spam and virus filters, built a web interface to administer them all at once, put it on a 1U server, and sold it as a network security appliance.
It was a nice product for mid sized companies that wanted a single solution for all spam and virus filtering, that auto-updated itself and was easy to administer.
Technically they were just selling the server and the web admin tool; all the OSS components were freely available if you wanted to spend the time setting them all up individually.
You should think in terms of the "product halo," which refers to all of the related items and services surrounding a product that are not the product itself. For example, MySQL is open source and freely downloadable, but its product halo could include services like installation, customization, consulting, training, etc. Zend contributes heavily to PHP and offers the Zend Framework, but they also have a number of commercial products surrounding those offerings. ActiveState creates the Komodo IDE, with an open source version and a commercial version that extends it. Or take Linux... or any number of other examples. A book that you might find interesting on the topic is Wikinomics.
I think the main issue is the business model adopted by the project owners and those who want to turn the project into revenue. It will depend on what kind of project it is - an end-user product, say, or a software API. In the case of end-user projects, Software as a Service seems a very good choice as a business model.
Look for examples and case studies of successful projects, such as Apache, Firefox and SugarCRM...
Focusing on specific niches is also very important.

What will we do after Access? [closed]

Microsoft seems hell-bent on deprecating the swiss-army-knife of database tools. What else comes close for facading/file-swapping/cloning/name-your-acronym-connecting arbitrary database servers/spreadsheets/CSVs/flat files?
What weird kinds of functionality have you squeezed out of Access? And what else is there to take its place?
Access is not a DBMS. Or at least it's not just a simple DBMS. It's a very good RAD environment, a simple way to create SQL code graphically, and a capable front end to fully fledged DBMSs.
Neither SQL Server (Express or MSDE) nor Oracle, MySQL, etc. will ever replace it until they come integrated with a simple programming language, a Crystal Reports-like facility, and a way for beginners to get around without having to learn SQL.
At my first professional job I developed a very big system completely in Access. Front end for the clients, admin front for me, reports and monitoring for management, permissions per user, automatic tasks run at certain times, etc. I came to learn a lot of its flaws and strengths as a result.
I've seen marvelous apps done with it, as well as pieces of crap. I still use it for personal projects and ain't ashamed of it (for instance, a Sudoku player, or a Karnaugh mapping implementation). There's an MVP who's created a Paint clone completely in Access, though I believe that's extreme.
Access's pearls: it's nice to easily test a database design idea and have sketch forms, reports, etc. created for you. If you change a column's name (or even a table's, though that fails sometimes), it's nice to see all references to it change to the new name automatically. The "sub-form" control rocks - I longed for it in VB6. And the "Thunder" button for repeated filtering on tables is great; I wish I had something like that in SSMS!
The problem with replacing Access - and replacing Access is the problem that stops me, in the vast majority of cases, from recommending a move to Ubuntu or SUSE desktop to my business clients - is not that Access is widely used for its database facilities. It's not, except for the most Mickey Mouse of user-written departmental applications, which are relatively trivial to re-code. The problem is the medium-sized applications whose data was migrated long ago to the corporate SQL Server.
These are a nightmare. They're often badly written (I've acquired a fair few to administer over the years) and encapsulate reams of business logic. Recoding them in anything is generally quoted at a couple of man-months at best - usually two or three times that - and it's unusual for a department of the size these are found in to have the budget to support that. Moreover, although the arrival of AJAX and good desktop-like controls means that this is at least now possible in theory, in practice these apps are often massively integrated with the rest of the MS Office desktop and virtually impossible to disentangle without users seeing a drop in usability in the short to medium term - which is a show-stopper in itself.
I really do not know what the solution is, apart from slowly creating new systems with other methods and hoping for the gradual demise of the existing apps. Trouble is, I think Access could well be the COBOL of the 1990s - it'll be around forever supporting legacy apps, because they're too costly to rewrite from scratch.
As an aside, does anyone else coming from a non-Access, traditional Win32 coding background find that the standard of coding in even professionally written Access apps is generally below average? Although superficial (but important) stuff like formatting and variable names is generally fine, I find over and over again that program structuring is poor. I know this may often be because these apps have grown like Topsy, and VBA really isn't conducive to good coding anyway, but even allowing for those factors, things generally seem worse than one might expect.
I think the easy answer is: nothing. Access is commonly used because it is the only option and it is extensible. There is simply nothing else out there that is installed on nearly every business machine in the world the way Access is.
If you are looking for an alternative, Oracle Application Express is a fairly powerful web-based tool that can run on Oracle XE. It is a potential alternative to Access, but it does not support master-detail tables as well as Access does.
There is a continuum of developers in the world, rather than hard and fast boundaries. People range from business managers to IT professionals. I consider myself an advanced amateur developer, somewhere between the two. As such, I use MS Access at work to organise a large amount of data in a small architectural office, including timesheets, financials and architectural specifications. Sure, the application now is a mass of stinking p** that has grown over almost five years.
I've been searching for something better than Access for ages. I can create simple apps in VB.NET, but the learning curve from VBA is huge. I've looked at all sorts of options. Often you need Crystal Reports to get any kind of reporting capability, or the IDE is non-intuitive, or linking a field to a data object takes ten minutes each time, or there is no integration with other Office products at all. The boss is not going to pay for something that costs a bomb, either. I'd love to get away from Access, but nothing I've looked at gets anywhere near ticking all the boxes.
The nice thing about Access is that it's an answer to big-IT bloat. It comes with MS Office, so it's already approved for use on locked-down computers, and I don't have to struggle for weeks or months to get an application approved through various departments, account for coding hours, and do all the testing for an application I can whip up in an afternoon with Access. Sure, SQL Server would be nice to use, but it's not worth the headache.
I doubt Microsoft will kill off Access. With Access 2007's integration with SharePoint, and the rapid growth of SharePoint, Access may in fact have a resurgence as an offline and reporting tool for SharePoint web sites.
I don't think MS has any intention whatsoever of getting rid of Access. They may transform it into more of an end-user tool than a programmer's tool, but it is never going away. Consider the forking of the Jet database engine into the traditional Jet 4 version that ships with every copy of Windows (because Active Directory uses Jet 4 as its data store) and the version owned by the Access development group (the ACE, with its ACCDB file format, which is, de facto, Jet 4.5 or maybe Jet 5) - that is not the behavior of a company about to abandon the product.
Access is a hugely popular and useful application and functions in a whole host of levels within any number of organizations, large and small.
Why is there no open-source alternative to Access?
Because it's way too hard to create such a complex piece of software that does so many different things well.
My cousin is a serious FileMaker guy. He seems to be doing great and has grown a small firm around it. Apparently FileMaker is a cross-platform Mac/PC system for rapid app development...
Maybe something like that will rise up with the business power-user/RAD set?
Microsoft may have a history of intentionally killing off database systems like this. I listened to a .NET Rocks interview one time with Les Pinter, where he claimed that he once heard a top Microsoft exec say that every copy of FoxPro that sold cost Microsoft thousands in lost SQL royalties. And where is FoxPro today? Officially, it was end-of-lifed in March 2007. So how did it go from prominence to demise? Well, Les says that Microsoft acquired it and ran it into the ground on purpose.
I am not usually big on conspiracy theories, but this does resonate with Microsoft's track record from that era.
Anyway, trivia aside, I believe there will be more RAD-style database tools... They empower non-developers and allow developers to solve certain types of problems very quickly. I have an aversion to using them for large projects, and that aversion, unfortunately, cascades - small projects tend to grow over time. So as a result I only use them for the very tiniest things.
As for the long-term consequences... well, I have seen scenarios where they didn't scale well, and all those fragmented solutions started to look a lot like technical debt. It is actually possible to hook Access up to a SQL Server back end, which solves a lot of the problems.
Probably the biggest/weirdest thing I did with Access was writing an EDI system from scratch. For those of you who have worked first-hand with EDI, you know what I'm talking about. What a silly idea that was. My problems here had more to do with VBA than Access though -- I remember just really needing interfaces and not having them.
I also used it for code generation back before things like Codesmith were available. It generated business objects (CRUD and some other basics) for ASP Classic. That actually worked awesome.
In my experience, Excel is even more widely used inside corporations. We're just now doing a project where we convert ~60,000 Excel documents (with 4-12 sheets each) to SharePoint and InfoPath forms. ;)
Microsoft would like us to move to Office Business Applications - essentially hooking up the Office apps to databases. Add SharePoint into the mix and there is a lot of possibility. And plenty of licensing fees for MS as well.
I have seen Access used to integrate and front-end GIS and health data. It blew me away how well that app was coded and documented.
As for Mark, Access was my first approach to databases, and I found it powerful at the time. It has some nice features, like generating SQL from "query by example". Its form features and its ability to print in various formats (a sheet of labels, for example) were nice too.
On the downside, it is proprietary, and each new version was incompatible with the previous one: if you load a database made with Access 97 into Access 2000, you can no longer load it with the older version...
Although I don't do much personal database work (lists of addresses, mostly), for such work I would use either OpenOffice's database tool (not tried yet) or a good old open source database (MySQL and SQLite come to mind as lightweight options) with a GUI front end - SQuirreL SQL Client, for example - and probably JasperReports as the reporting front end.
Not as integrated as Access, and with a steeper learning curve, but somehow more flexible.
Then again, I am sure we could find some simple good old non-relational database for the simplistic uses I had at the time. :-)
I welcome the day when Access breathes its last breath and joins the likes of Clippy.
Access is well-intentioned, but it has become a crutch. Even in large companies with able IT staff, Access applications can run rampant, making it painful to know the global landscape of products that must be maintained. Linked Access databases that point at other data sources, unmaintained Access applications, and just sheer flexibility are the issues, in my opinion.
I think that Access is actually too powerful, too flexible, and too extensible for its own good. In Microsoft's well-intentioned attempt to bring rapid development to the desktop database realm, it really has opened a Pandora's box. Look at it from another perspective, too. Assume that a company has a few applications that are written in Access. The developer who wrote them leaves. These applications are just important enough that they still need to be used, but not important enough that IT gets the approval to port them to a more technologically capable platform.
Now, the situation is that if no one on the team knows Access, it becomes a requirement for the new developer. This means you might have to pass on the developer who is the most technically well-rounded and the best fit if he does not have legacy chops. I speak from experience on this. We are down to two legacy Access applications, and we are trying feverishly to make the case for incorporating their functionality either into related, code-based projects or into new projects of their own. I have one developer with Access "chops", and I am not going to base a candidate search on whether someone knows Access in the event that he leaves.
As far as the weirdest thing I've seen squeezed into Access...
I am a police dispatcher for a smaller university, and we (like almost every agency) use a CAD (computer aided dispatch) and RMS (record management system) system.
Our previous CAD/RMS software was built ENTIRELY in Access. You opened Access and, through an ugly GUI, entered calls for service - everything. Officers wrote reports through the same interface.
It worked great at first, and then as the database size grew, it became extremely slow and difficult to use. This is what happens when the state makes you go with the lowest bidder on a project...
Now we use a CAD/RMS solution that is browser-based, backed by MS SQL.
I don't think that Access is going away anytime soon. The beta of Office 2010 is out with an updated Access included, and the Microsoft blogs are hyping the features of Access 14 (the version after 2010), which include improved Access Projects (.ADPs) with better support for SQL Server 2005/2008 and better .NET integration.
If I were to look for a new integrated database development system providing front- and back-end features, Oracle APEX would be the main contender. Front ends are web-based, requiring no runtime on the client; the whole system is free to download and install (Express Edition); and, given a few years, the entry barrier for new users will hopefully be reduced so that it is something laymen can dabble in.
Access is just migrating towards a single user on a desktop, or a few users on a shared database file without much security. If you want to take it to a slightly higher level, use Access as a front end to SQL Server.
Well, now it seems Access 2010 is looking to get its hooks into SharePoint in an attempt to "web-enable" the Access application. There are even hosting sites catering to this technology. Maybe all those who were concerned Access couldn't scale can fear no more?
Access definitely has both pros and cons; it's just another tool to use but not abuse. Every adult job I've ever had ran on Windows, so Access or something like it will always exist. I feel sorry for the places that are stuck in Access quicksand or lost in Excel hell. But are we forgetting that all of that can be corrected - and, better yet, prevented - with a badass BI team and proper training?
PostgreSQL, MySQL, FileMaker, <insert name of database that is not Access here>, Excel, custom parsers, natural language importers, Perl just because it is a swiss army knife, grep awk sed, m4, the old versions of Access before the demise of Access, ...
Weird functionality? Rather than the normal myriad of ways to access Access, I use SQL statements to access Access. The SQL statements I use work with other databases as well as Access - weird, I know.
Like many, I have used and abused Access over the years, and always felt a little dirty doing it... I felt a little better about it when I came across this post by Rob Conery recently:
http://blog.wekeroad.com/blog/hacking-your-vote/
I would never have dreamed of using Access in a voting system. Scary.
FileMaker is a good database to shift to from MS Access.
It is a cross-platform (Mac/PC) database. It has a Web Viewer, through which you can connect to the web world; for example, charts, maps, etc. can be shown in this Web Viewer.
FileMaker is easy for beginners to use. You can also explore the scripting mechanism to achieve data manipulation.
The latest FileMaker 10 has several interesting new features. My vote is for FileMaker.
I believe FileMaker Pro will probably become a new standard if people ever figure out it exists.
FMP has all of the same features and shortcomings as Access, plus you can actually make a real client/server setup if you know what you're doing.
In a single file you can define your forms, reports, tables, etc. It is also cross-platform, running on Windows or Mac, and can be adapted to the web too. All by design.
Coming from the "real" SQL servers to FileMaker Pro was really hard mentally, but once I got the hang of it I found it pretty amazing. Now, as a database it's nothing special, but as a database application development system that "normal" people can use, it really shines.
If you PLAN on a network setup, I suggest taking the time to learn how to separate the storage database from the application database up front. Otherwise, upgrades require lots of data export/import, which can take a while or be almost impossible if your tables change significantly.
I've built a call center application that automatically handled incoming phone-number lookup and automatically dialed regular POTS phones, using FMP on NT. That was about 6 years ago, so I imagine it's improved since then.
I've only used Access when I wished Excel could do a "left inner join". Otherwise, MS has done a fair job of making their C#/SQL offering simple (and free) to use for lightweight RDB projects.