According to http://sourceforge.net/projects/c3p0/files/bin/, it seems that the last stable version of c3p0 was released in 2007.
Isn't it an abandoned project?
We use it heavily and it works fine, but I am a little bit afraid that it will no longer get fixes and support, and that new versions of Hibernate, JDBC drivers and databases, and eventually the transition to Java 7, will make it work worse.
i've been in grad school in an unrelated field since 2007, which has kicked my butt and rendered c3p0 maintenance slow at best. c3p0 is not formally abandoned, but until i'm out of this program, it de facto has been. (the update from ~1.5 yrs ago is a pretty big deal, and i've gotten no bad feedback on the changes, so i advise using that. but it was a one-off, not the long sequence of release and feedback that i'd normally go through before calling something a stable release.)
i do intend to bring c3p0 forward when i am done, which will hopefully be this summer. but i mostly feel terrible about the long lapse in maintenance, and understand apologetically that many users have felt compelled to seek alternatives.
c3p0 development has started up again, with a new release (c3p0-0.9.2-pre2).
We switched to BoneCP because we had some IOExceptions under high load that we could not fix. After switching to BoneCP everything ran fine. We are running a very high-volume site with peaks of 3000 dynamic page views per second, and I can really recommend BoneCP.
In 2020 I would recommend HikariCP. It is very fast, stable and has excellent documentation.
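For reference, a minimal sketch of wiring up a HikariCP pool; the JDBC URL, credentials and pool size below are placeholder values to adapt to your own database:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class PoolExample {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("app_user");                             // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);                              // tune to your workload

        // The pool is a regular javax.sql.DataSource, so it can be handed to plain
        // JDBC code or configured as the DataSource for an ORM such as Hibernate.
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            System.out.println("Connection valid: " + conn.isValid(2));
        }
    }
}
```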
We're developing a web site. One of the development tools we're using has an alpha release of its next version available, which includes a number of features we really want to use (i.e. they'd save us from having to implement thousands of lines of code to do pretty much exactly the same thing anyway).
I've done some initial evaluations on it and I like what I see. The question is, should we start actually using it for real? That is, beyond just evaluating it, should we use it for our development and rely on it?
As alpha software, it obviously isn't ready for release yet... but then nor is our own code. It is open source, and we have the skills needed to debug it, so we could in theory actually contribute bug fixes back.
But on the other hand, we don't know what the release schedule for it is (they haven't published one yet), and while I feel okay developing with it, I wouldn't be so sure about using it in production. So if it isn't ready before we are, it may delay our own launch.
What do you think? Is it worth taking the risk? Do you have any experiences (good or bad) of similar situations?
[EDIT]
I've deliberately not specified the language we're using or the dev-tool in question in order to keep the scope of the question broad, as I feel it's a question that can apply to pretty much any dev environment.
[EDIT2]
Thank you to Marjan for the very helpful reply. I was hoping for more responses though, so I'm putting a bounty on this.
I've had experience contributing to an open source project once, as you said you hope to do. They ignored the patch for a year (they have customers to attend to, of course, although they don't sell the software but the support). After a year, they rejected the patch with no alternative solution to the problem, and without a sound rationale for doing so. It was just out of their scope at that time, I guess.
In your situation I would try to fix one or two of their not-so-high-priority, already reported bugs and see how responsive they are, and then decide, because your ability to meet your deadlines will be tied to theirs. If you end up having to maintain your own patched copy of their artifacts, that's guaranteed pain.
In short: not only evaluate the product, evaluate the producers.
Regards.
My personal take on this: don't. If they don't come through for you in your time scale, you're stuck and will still have to put in the thousands of lines yourself and probably under a heavy time restriction.
Having said that, there is one way I see you could try and have your cake and eat it too.
If you see a way to abstract it out, that is, to insulate your own code from the library's (for example using the adapter or facade patterns), then go ahead and use the alpha for development. But determine beforehand the latest date, according to your release schedule, by which you would have to start developing your own thousands-of-lines version behind the adapter/facade. If the alpha hasn't turned into an RC by then, grin and bear it and develop your own.
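To make that concrete, here is a minimal sketch in Java; all the names (AlphaReportEngine, ReportRenderer, and so on) are made up for illustration, not taken from any real library:

```java
// Stand-in for the third-party alpha library (hypothetical API).
class AlphaReportEngine {
    String renderReport(String reportId) {
        return "alpha-rendered:" + reportId;
    }
}

// Your own abstraction; the rest of the codebase depends only on this interface.
interface ReportRenderer {
    String render(String reportId);
}

// Adapter over the alpha library; only this class knows its API.
class AlphaLibraryRenderer implements ReportRenderer {
    private final AlphaReportEngine engine = new AlphaReportEngine();

    @Override
    public String render(String reportId) {
        return engine.renderReport(reportId);
    }
}

// Fallback you write yourself if the alpha never reaches a release candidate.
class HomeGrownRenderer implements ReportRenderer {
    @Override
    public String render(String reportId) {
        // your own "thousands of lines" would live behind this same interface
        return "home-grown:" + reportId;
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        // Swap in HomeGrownRenderer here if the alpha doesn't make it in time.
        ReportRenderer renderer = new AlphaLibraryRenderer();
        System.out.println(renderer.render("monthly-sales"));
    }
}
```

The point of the sketch is that the swap happens in one place; the rest of the application never imports the alpha library directly.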
It depends.
For open source environments it depends more on the quality of the release than on the label (alpha/beta/stable) it carries. I've worked with alpha code that is rock solid compared to alleged production code from other producers.
If you've got the source, you can fix any bugs yourself. With closed source (usually commercially supported), you could never release production code built on a beta product, because it's unsupported by the vendor who has the code, so you can't fix it.
So in your position I'd be assessing the quality of the alpha version and then deciding if that could go into production.
Of course all of the above doesn't apply to anything even remotely safety critical.
It is just a question of managing risks. In open source, an alpha release can mean a lot of different things. You need to be prepared to:
handle API changes;
provide bug fixes and workarounds;
test stability, performance and scalability yourself;
track changes much more closely, and decide whether to adopt them yet;
track the progress they are making and their responsiveness to patches/issues.
You do use continuous integration, don't you?
I'm curious what your preferences and thoughts are on the two approaches: doing as little testing as possible behind the scenes, rolling out as many new features as possible as quickly as possible, and effectively testing on the production site; or troubleshooting features to hell until they're bulletproof, and only then releasing them to the public.
Perhaps a middle ground would be more appropriate. Your "brand" will suffer a great deal if either:
the software you release is a steaming pile of dung (as in the former case); or
the software is not released in a timely fashion (as in the latter case).
In both those cases, you won't be very likely to stay in business for long.
The shop I operate in recognises the fact that software will have some bugs in it. All high severity bugs must be fixed before release and all low severity bugs must have a plan in place for fixing after release.
Software maintenance (basically bug fixing and answering customer questions) is an important part of our development process.
In addition, the "cost" of fixing a bug increases as the discovery of said bug moves away from the developer and towards the customer.
Fixing a bug I find during unit testing involves only me, though it can affect others if my stuff is delayed.
Finding a bug during system test means other phases are definitely delayed since the code has to come back and be changed and unit tested again before once again being promoted to system test.
Finding a bug after your software is live is a whole other world of pain: it involves communications with customers, multiple managerial reporting lines all wanting to leave an impression of their boot in your rear end, and putting any bug fix through all the phases again (or risking adverse effects otherwise). A particularly nasty place in the ninth circle of hell is reserved for those developers who, in fixing their bug, introduce yet another one.
Rolling out code to a production server with "as little testing as possible" to get it live quicker is setting yourself up for a life of pain. What you're really suggesting is getting your users to test your system for you; that would be a beta program, but even before you get there you should have performed a good level of testing and be confident that the app works, otherwise you're not going to keep many users for long.
From a developer perspective I would only be happy releasing code that I am confident is working as planned. From a user perspective I wouldn't want to be using an app that kept falling over, no matter how early in the development cycle it is.
If it's not ready, then don't release it.
It depends rather on the desires of the management than on those of the customers. Given a choice of "you can have it working, or you can have it Friday", the average target-and-goal-loving manager will prefer to have it Friday.
If you actually have a choice, please leave it until it works. You'll save yourself and everyone else a deal of time and trouble.
Time(do it right) < Time(do it again) + Time(correct database) + Time(explain and apologise)
(Fundamental law of software engineering.)
You should test and review the code during development, before the feature is even finished.
You should test the whole feature for functionality before moving to production.
You should release a small number of features often, so that you get feedback on the feature. Even if the feature works perfectly, it may still not be exactly what the user wants, or you find that something can be improved when the feature is used in practice.
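As a small illustration of testing a feature before it moves to production, a feature-level check can live in an automated test suite that runs on every build. The DiscountCalculator below is a made-up example, and the tests use JUnit 5:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical feature under test: a simple discount calculator.
class DiscountCalculator {
    double priceAfterDiscount(double price, double discountPercent) {
        return price * (1.0 - discountPercent / 100.0);
    }
}

class DiscountCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(90.0, calc.priceAfterDiscount(100.0, 10.0), 0.0001);
    }

    @Test
    void zeroDiscountLeavesPriceUnchanged() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(42.0, calc.priceAfterDiscount(42.0, 0.0), 0.0001);
    }
}
```

Tests like these are cheap to run before every release, which is what makes the "small features, often" cadence sustainable.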
It depends on the pain levels, and expectations, of your customers and how well your customer facing staff can manage their, ahem, 'feedback'.
If your customers are expecting to go quickly to high volume mass production, on a very tight schedule with fierce competition, with what you're delivering them (think consumer electronics like mobile phones) then they won't thank you at all for any surprise. They'll be very scared of having to recall hundreds of thousands of units for an upgrade.
Perhaps you're delivering to someone who's also doing research, a university department or similar, who may bend your delivery to fit a purpose that it's not intended for. They don't mind, may even expect, problems and are happy to find a way through. They may well be excited by the features and forgive you the bugs as long as they find you're listening to their feedback.
The most skillful customer-facing staff I worked with were able to judge how long it would take the customer to notice the deficiencies in the deliveries we were providing, how long it would take us engineers to plug the gaps, and realise that by the time the customers noticed the problem we'd have a patch. The customer gets an early delivery so the contract is secure, is not too inconvenienced by the bugs, and is happy with the support; all in all a happy world. It's a tricky call though: if you don't release anything until it's perfect you'll never have a customer, as someone will always undercut you; yet release something too early and disappoint, and you're going to be replaced when the opportunity arises. Get your judgement of the patch development time wrong and your customer will be unhappy.
In short it's something about open communication, something about bluff, deceit and deception, and a whole lot about judgement to know which to do and when.
I think it depends on your userbase somewhat. For example, I choose to use cutting edge and less stable features on my linux box. But I think in general, and especially in web development where generating pageviews is usually a high priority, you want to go with what works for most people, and that's stability.
If you are performing a beta or alpha test with a handful of people, sure. However, if this is code that is meant to be used by the general public or your business, then no. Buggy code reflects poorly on the programmer, and I know that when something crashes or behaves unexpectedly it tends to annoy people.
Therefore, I would much rather release polished, thought out code that may not have as many bells and whistles than code that gives people a poor experience.
One footnote, however, is that you must know when enough is enough. Every programmer can spend eons going over every line of code, saying "Well, if I move this to here, I can get a .001% speed boost", or anything along the same line of thinking. Believe me, I have this problem as well, and tend to obsess. The skill to say something is "good enough" is a hard one to learn, but it is absolutely necessary, in my opinion.
Imagine you have a program written e.g. in Java 1.4 or C# 1.0. Of course this program doesn't use generics and other language features that were introduced with later versions.
Now some non-trivial changes have to be made, and of course you already have a newer IDE/compiler, so you could use Java 1.6 or C# 3.5 respectively. Would you use this opportunity to upgrade to the latest language features, i.e. use generic containers and get rid of many casts, etc.? Or would you leave it as it is and use the new features only for new parts? Or even stick with the features of the version originally used, to maintain a level of consistency?
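As a trivial sketch of the kind of change being discussed, here is the same method written in Java 1.4 style and again with generics and the enhanced for loop:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class GenericsUpgrade {
    // Java 1.4 style: raw collections and explicit casts.
    static int totalLengthOld(List names) {
        int total = 0;
        for (Iterator it = names.iterator(); it.hasNext(); ) {
            String name = (String) it.next(); // cast required; a wrong element type only fails at runtime
            total += name.length();
        }
        return total;
    }

    // Java 5+ style: generics and the enhanced for loop, no casts, compile-time type checking.
    static int totalLengthNew(List<String> names) {
        int total = 0;
        for (String name : names) {
            total += name.length();
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("alpha");
        names.add("beta");
        System.out.println(totalLengthOld(names) + " == " + totalLengthNew(names));
    }
}
```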
My basic rule for code maintenance: If it ain't broke, don't fix it.
It's just one specific form of refactoring, so all the advice on when and how to do that applies.
One consideration here is your present and future ability to attract and retain developers to perform maintenance and support on an application built with outdated technology. Many seasoned developers worked with those older versions of Java and C# when they were current, so it isn't that we have no experience with it. But if we spend, say, a year working in .Net 1.0, we will be forgetting what we now know about subsequent versions. And while we are spending that year working in old technology, all of our friends and competitors are honing their skills on the latest technology. We will never be able to make that year up.
In addition to falling behind, we will find it intensely frustrating not to be able to use the functionality in the later versions that we are now comfortable using.
So, younger developers will not have had experience in older technology (.Net 1.0 was released in January 2002). Older developers have been there and don't want to go back.
On the other hand, if you stay back in the older version, you will be comfortable with your skillset. You won't have to spend a lot of time learning about newer technologies. And perhaps best of all, you will have job security.
As far as possible, leave working code alone. It may seem like the code should all be using the latest programming secret sauce, but there is a lot to be said for tried and tested code.
I would only consider rewriting code that must be heavily modified in any case to add a new feature, and even then I would only change the programming metaphor if it will speed up the writing of the new feature.
Otherwise you will constantly be debugging code which was previously working, and using up energy that could have gone into improving the product.
It depends. Does it need to be compiled by old compilers? Do you have enough time/money to change everything?
In theory, it's a simple balance between costs and benefits, so you should only do the rewrite if the benefits outweigh the costs.
The problem with that is that it's almost impossible to measure the real costs (not only of doing the work, but of not doing other things that might contribute more). Generally speaking, the benefits can't really be measured at all: you can only guess at how much more difficult something would be if you'd stayed with the old code.
That leaves us with little chance of deciding anything based on rational measurement, so we fall back on a simple rule of thumb: leave old code alone until your only choices are to rewrite it or abandon it completely.
I've got an app that is just over a year old but still evolving; it will likely revolve more and more around a growing database schema in the near to mid term. The app was written using LinqToSql, which has met our needs fairly well. We have run into a few pain points but have worked around them. I would currently consider this app to have fairly minimal database demands.
Now that the EntityFramework appears to be the ORM Microsoft is pushing people towards, I'm wondering if it isn't inevitable that we will want to migrate in that direction.
I have heard a lot of good and bad about EntityFramework, so I am wondering if I would be better off taking the plunge now or waiting for v2.0 when VS10 arrives. Any thoughts? Will I lose any functionality if I do it now?
If I were to decide to migrate, can anyone point me at some resources on what is involved?
Thanks!
Re wait or change now? Personally, I'd wait for VS2010 as a minimum (see Do you think it’s advantageous to switch to Entity Framework?), and until the beta comes out I can't check whether the things I use in L2S that EF lacks have been implemented yet.
The "resources" part of the question may be a dup of How to move from Linq 2 SQL to Linq 2 Entities?
My coworkers and I were having a discussion about this yesterday. It seems that no matter how well we prepare and no matter how much we test and no matter what the client says immediately before the site becomes public, initial site launches almost always seem to be somewhat rocky. Some clients are better than others, but often things that were just fine during testing suddenly go horribly wrong when the site becomes public.
Is this a common experience? I'm not just talking about functionality breaking down (although that's often a problem as well). I'm also talking about sites that work exactly the way we wanted them to, but suddenly are not satisfactory to the client when it's time to make the site public. And I'm talking about clients that have been familiar with the site during most of the development process. Meaning, the public launch is definitely not the first time they've seen the site.
If you've dealt with this problem before, have you found a way to improve the situation? Or is this just something that will always be somewhat of a problem?
Don't worry. This is completely and entirely normal and happens with every piece of software. Everything that can go wrong will go wrong, and the most volatile entity in the development process, the client, will be the cause of these things.
You could do all the Requirements Gathering in the world, write a 100 page Proposal, provide screenshots and updates to the project hourly and the client will still not approve. On a personal note, I feel that the Internet is one of the worst mediums for this, as designs are a lot more free-flowing nowadays and the client will always have a certain picture in his/her mind; one that won't look like the finished product.
I find that a bulletproof contract with defined stages and sign-off sheets is the best way to handle such a situation. Assuming that your work is contracted, you should ensure that at each stage the client is shown the work and is required to approve each and every change made. At least that way, if the client wants something changed, you can tell them that they've already signed off on that section and the additional work will cost them extra (also defined within the contract).
Not only did this approach work for me, it made the client stop and think about what he/she REALLY wanted. Luckily for me many of my clients are already tech-oriented, so they understand that these things can take time, but those that haven't a clue about Web Development expect things to be perfect within a couple of days. As long as you make sure that everything is covered in the contract, the client will think about what they want and won't pester you with issues afterwards.
Of course, anything you can do in regards to Quality Control would be fantastic and help the project move along nicely. Also ensure that some form of methodology is planned out before the project and that this methodology is known by the client(s). Often changes in fundamental areas can be costly and many clients do not seem to realise that a small change can require many things to be changed.
Yes, saw this several times on our projects (human beings are fickle).
Something that helps us in these situations is a good PM/Account Manager that can handle the customer, which makes things a little bit bearable on the technical level.
Web site launches are usually fairly smooth for us. Of course, we do extensive validation including code inspections, deployments to proto-servers (identical to our production servers), and mountains of documentation.
After every launch, we have a meeting to discuss what went well and what didn't so that we can make adjustments to our overall process and best-known-methods documents.
As for clients that change their minds at the last minute... sigh... we minimize that by having them sign off on the beta version. That way, there is no disagreement when the project is launched. If there is a disagreement, there is always a next release.
For what it's worth, the last site launch I did went off without a hitch. Now, it wasn't a high-traffic site, and there were some bugs that I did eventually fix, but there wasn't anything troubling on the day of the actual launch.
This was an ASP.NET/C# site. It wasn't terribly large or complicated, but it wasn't trivial either. Probably the most notable thing is that it was 100% designed, implemented, and tested by myself, from the database schema all the way up to the CSS. It was also my first time using ASP.NET. There were plenty of bumps in development but by the time I launched it I was pretty familiar with them and so knew what to expect.
I think the lesson to be learned from this is to have a good design up-front, solid implementation skills, and good testing, and a new site doesn't have to be a nightmare. There's at least a possibility of a trouble-free launch.
I wouldn't limit your statement to just web sites. I have worked on a lot of projects over the years and there are always details that get "discovered" when going live. No amount of testing removes all the fun things that can happen.
One thing I will say is that what you learn in the first couple of hours of a new system going "on-line" is way more valuable than all the stuff learned during development. It's show time when the really cool problems and scenarios appear. Learn to love them and use these times as a learning point for the next time. Then each time it will be just as fun!
We used to have this problem a lot, but much less recently.
Partly for us it is about firmer project management and documenting the specification (as suggested in other answers here), but I believe more of the difference came from:
Expectation management - getting the client to accept that iterative changes are still to be expected after launch, that this is normal and not to worry about it
Increasing authority - we are now a well established (13 years) web developer and we can speak with a lot of expertise
Simply being more experienced - we can now predict in advance most of the queries that are likely to come up, and either resolve them, mitigate them or bring them to the client's attention so they don't sting us on the day
Plus, we rarely do big fanfare launches - a soft launch makes things much less stressful.
My experience is that web site launches are almost always rocky. I've only had two exceptions to this common truth. The first was a site developed for a small business run by one person. This went smoothly because, well, there was only one person to please, so it was fairly easy to track what they wanted. The other was a multi-million dollar website launched by a Fortune 500 company. This happened to go smoothly because there were 2 PMs and a small army of consultants there to manage the needs of the customer. That, coupled with a month of straight application load testing and a 1,000-user beta launch, meant that when the site finally went "live", I was able to get a full night's sleep (which is fairly uncommon). Neither of these situations constitutes the norm, though. Of course, there's nothing better than several thousand beta testers hitting your site to help find those contingencies that you never thought of.
I'm sure you can figure out the kinds of errors that always sneak in. For example, is it due to rather superficial testing, e.g. randomly clicking around and checking whether things appear to be right?
In order to improve, I propose something along the following lines:
Create documents/checklists that specify all testing procedures.
Get regular people to test, not just the folks who built the application.
Set up a staging environment which closely resembles production.
Post-launch, analyze what went wrong and why it went wrong.
Maybe get external QA to check on your procedures.
Now, all those suggestions are of course very obvious, but building them into your launch procedures will take time.
In general this really is an ongoing process which will help you and your colleagues to improve. And also be happier, because fixing bugs in production just makes you age rapidly. ;-)
Keep in mind, you won't be done the first time. Documents are heavy, which is why people don't read them. People are also lazy and don't follow the procedures. This means that you always have to analyze what happened, then go back and improve the procedures.
If you have the opportunity, I'd also spend some time looking at why nothing went wrong with another launch and comparing that to the usual ones.