Case Study in Bugzilla - MySQL

I wish to conduct a case study in Bugzilla, where I would ideally like to find out statistics such as:
The number of Memory Leaks
The percentage of bugs which are performance bugs
The percentage of semantic bugs
How can I search through the Bugzilla database for software such as the Apache HTTP Server or the MySQL database server to generate such statistics? I would like an idea of how to get started.

I finally figured it out and am going to show my approach here. It's more or less a manual process. Hopefully this helps others who might be doing similar case studies:
Selecting the bugs:
I went to bugs.mysql.com and searched for all bugs which were marked as resolved and fixed. Unfortunately, for MySQL you cannot select specific components. I filtered on an arbitrary time range (2013-2014) and saved all of these bugs to a CSV file (opened in Excel).
Classifying and filtering the bugs:
I manually went through the bugs, skipping the ones which clearly belonged to the documentation component, were installation or compilation failures, or required restarts.
Then I read each report and checked whether it actually suggested a bug fix, and whether that fix was semantic (i.e. change limits, add a condition check, make sure some if-condition correctly handles an edge case, etc. - most were along similar lines). I followed a similar process for performance, resource-leak (both CPU resources and memory leaks fall into this category), and concurrency bugs.
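For anyone wanting to semi-automate the first pass over such a CSV export, below is a minimal Python sketch of a keyword pre-filter that could precede the manual read-through. The file name, the column names ("Summary", "Description"), and the keyword lists are assumptions; adjust them to match whatever the actual export contains.

```python
import csv
from collections import Counter

# Hypothetical keyword lists - tune these to the project being studied.
SKIP_KEYWORDS = ["documentation", "install", "compile", "build fail", "restart"]
CATEGORY_KEYWORDS = {
    "memory/resource leak": ["memory leak", "leak", "valgrind", "out of memory"],
    "performance": ["slow", "performance", "regression", "latency", "high cpu"],
    "concurrency": ["deadlock", "race condition", "mutex", "lock wait"],
}

def classify(text):
    """Return the first category whose keywords appear in the bug text, else 'semantic/other'."""
    text = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "semantic/other"

counts = Counter()
with open("mysql_bugs_2013_2014.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row.get("Summary", "") + " " + row.get("Description", "")
        if any(k in text.lower() for k in SKIP_KEYWORDS):
            continue  # skip documentation/installation/compilation/restart reports
        counts[classify(text)] += 1

total = sum(counts.values()) or 1
for category, n in counts.most_common():
    print(f"{category}: {n} bugs ({100.0 * n / total:.1f}%)")
```

This only produces a first cut of the statistics; each bug report still needs to be read to confirm the classification, as described above.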

Related

How to monitor new warnings in code generated by error checking tools?

Do generic tools exist for keeping track of warnings in code?
Some static-analysis tools generate a large number of false-positive warnings, so changing the code isn't desirable. Disabling individual warnings isn't always a practical option either *.
Do tools exist that take a list of warning locations in a file (which could be generated by static-analysis tools) and can be run on a regular basis to detect the introduction of new warnings?
Even though diffing the outputs works on a basic level, it would be more useful if, for example, changes to line numbers could be tolerated without re-raising the warnings to the developer's attention every time the file is modified.
* While annotations can suppress these in some situations, it's not always practical - for example, when there are thousands of warnings or when multiple error checkers are being used. In other cases, the tools that report the errors don't support annotations to disable individual warnings.
Many up-to-date analysis tools can set a baseline that separates technical debt from new warnings. See, for example, the article "How to introduce a static code analyzer in a legacy project and not to discourage the team", which explains such a mechanism:
To quickly start using static analysis, we suggest that PVS-Studio users apply the mass warning suppression mechanism. The general idea is the following. Imagine the user has started the analyzer and received many warnings. Since a project that has been developed for many years is alive, still evolving and bringing in money, most likely there won't be many warnings in the report indicating critical defects. In other words, the critical bugs have already been fixed in more expensive ways or with the help of feedback from customers. Thus, everything that the analyzer now finds can be considered technical debt, which is impractical to try to eliminate immediately.
You can tell PVS-Studio to consider all these warnings irrelevant for now (to postpone the technical debt for later) and not to show them any more. The analyzer creates a special file where it stores information about the as-yet-uninteresting errors. From then on, PVS-Studio will issue warnings only for new or modified code. By the way, it's all implemented in a very smart way: if an empty line is added at the beginning of a file, the analyzer will recognize that nothing has really changed and will remain quiet. You can put the markup file in the version control system. Even though the file is large, that's not a problem, as there's no need to upload it very often.
The tool has the feature you are talking about. Firstly, there is a suppression mechanism for uninteresting warnings; you may mark all of the warnings, or just selected ones, as uninteresting. Secondly, the tool stores not line numbers but hashes of the flagged lines and of nearby lines. This information allows it to avoid re-issuing warnings on old code while the file is being edited.
I’m not sure if there is a third-party tool that can do all this. But I suggest paying attention to SonarQube.

How can I investigate these mystery Django crashes?

A Django site (hosted on Webfaction) that serves around 950k pageviews a month is experiencing crashes that I haven't been able to figure out how to debug. At unpredictable intervals (averaging about once per day, but not at the same time each day), all requests to the site start to hang/timeout, making the site totally inaccessible until we restart Apache. These requests appear in the frontend access logs as 499s, but do not appear in our application's logs at all.
In poring over the server logs (including those generated by django-timelog) I can't seem to find any pattern in which pages are hit right before the site goes down. For the most recent crash, all the pages that are hit right before the site went down seem to be standard render-to-response operations using templates that seem pretty straightforward and work well the rest of the time. The requests right before the crash do not seem to take longer according to timelog, and I haven't been able to replicate the crashes intentionally via load testing.
Webfaction says it isn't a case of overrunning our allowed memory usage, or else they would have notified us. One thing to note is that the database is not being restarted (just the app/Apache) when we bring the site back up.
How would you go about investigating this type of recurring issue? It seems like there must be a line of code somewhere that's hanging - do you have any suggestions about a process for finding it?
I once had some issues like this, and it basically boiled down to my misunderstanding of thread safety within Django middleware. The Django middleware class is, I believe, a singleton that is shared among all threads, and those threads were thrashing the values set on a custom middleware class I had. My solution was to rewrite my middleware so it didn't use instance or class attributes that changed, and to switch the critical parts of my application away from threads entirely in my uWSGI server, as threading seemed to be an overall performance downside for my app. Threaded uWSGI setups seem to work best when you have views that complete at different rates (some long-running views and some fast ones).
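To illustrate the kind of bug described above (this is a minimal sketch, not the poster's actual code), middleware instances are created once and shared across request-handling threads, so storing per-request state on `self` lets concurrent requests overwrite each other's values:

```python
def log_activity(user):
    """Hypothetical helper - stands in for whatever the middleware does with the value."""
    print("activity for", user)

# Broken: Django creates one middleware instance that is shared by every thread,
# so per-request state stored on `self` can be overwritten by a concurrent request.
class CurrentUserMiddleware:
    def process_request(self, request):
        self.user = request.user              # thread A can overwrite thread B's value

    def process_response(self, request, response):
        log_activity(self.user)               # may see another request's user under load
        return response

# Safer: keep per-request state on the request object itself (or in thread-local
# storage) instead of on the shared middleware instance.
class SafeCurrentUserMiddleware:
    def process_request(self, request):
        request._current_user = request.user

    def process_response(self, request, response):
        log_activity(request._current_user)
        return response
```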
Since you can't really describe what the failure conditions are until you can replicate the crash, you may need to force the situation with ab (ApacheBench). If you don't want to do this to your production site, you might replicate the site on a subdomain. Warning: ab can beat the ever-loving crap out of a server, so RTM. You might also want to give the WF admins a heads-up about what you are going to do.
Update for comment:
I was suggesting using the exact same machine so that the subdomain name was the only difference. Given that you used a different machine, there are a large number of subtle (and not-so-subtle) environmental things that could keep the error from manifesting. If the new machine is OK, and if you are willing to walk away from the problem without actually solving it, you might simply make it your production machine and be happy. Personally, I tend to obsess about stuff like this, but then again I'm also retired and have plenty of time to play with my toes. :-)

MS Access: 100% CPU usage

We have an MS Access 2003 ADP application with SQL Server. Sometimes, without any apparent reason, this application starts consuming 100% of CPU time (50% on a dual-core CPU system). This is what Windows Task Manager and other process monitoring/analysis tools are showing, anyway. Usually, the only way to stop such CPU thrashing is to restart the application.
We still don't know how to trigger this problem at will. But I have a feeling that it usually happens when some of the forms get closed by a user.
NB: Recently we noticed that one of the forms consistently makes CPU usage rise to 100% whenever it gets minimized. Most of the time CPU usage goes back to normal when that form is "un-minimized". Perhaps it's a different problem, but we'd like to uncover this mystery, too. :)
Googling for a solution to this problem didn't yield very good results. The most frequent theory is that MS Access gets into some sort of waiting-for-events loop which is practically harmless, performance-wise, because the thread running that loop has very low priority. This doesn't seem to help us because in our case (a) it certainly does hurt the system's performance and (b) it's still unclear what exactly makes Access get into such a "bad state" and how to avoid it.
I've run into this CPU usage problem in the past, but I don't remember whether we ever discovered a solution or it just went away at some point.
In your post, you didn't mention reviewing the VBA. I'd recommend looking for a loop that under certain conditions becomes an endless loop.
I wonder if it is a hangover from this problem that Access used to have in the "old" days:
http://support.microsoft.com/kb/160819
Whilst the article does say it is fixed in versions >= 2000, it still might be something.

HTML localStorage setItem and getItem performance near 5MB limit?

I was building out a little project that made use of HTML localStorage. While I was nowhere close to the 5MB limit for localStorage, I decided to do a stress test anyway.
Essentially, I loaded data objects into a single localStorage object until it was just slightly under that limit, and made requests to set and get various items.
I then timed the execution of setItem and getItem informally using the JavaScript Date object and event handlers (bound get and set to buttons in HTML and just clicked =P).
The performance was horrendous, with requests taking between 600 ms and 5,000 ms, and memory usage coming close to 200 MB in the worse cases. This was in Google Chrome with a single extension (Google Speed Tracer), on Mac OS X.
In Safari, it's basically >4,000ms all the time.
Firefox was a surprise, having pretty much nothing over 150ms.
These were all done in basically an idle state - no YouTube (Flash) getting in the way, not many tabs (nothing but Gmail), and no applications open other than background processes + the browser. Once a memory-intensive task popped up, localStorage slowed down proportionately as well. FWIW, I'm running a late-2008 Mac: 2.0 GHz Duo Core with 2 GB DDR3 RAM.
===
So the questions:
Has anyone done any benchmarking of localStorage get and set for various key and value sizes, and on different browsers?
I'm assuming the large variance in latency and memory usage between Firefox and the rest is a Gecko vs. WebKit issue. I know that the answer could be found by diving into those code bases, but I'd definitely like to know if anyone else can explain relevant details about the implementation of localStorage in these two engines that would explain the massive difference in efficiency and latency across browsers.
Unfortunately, I doubt we'll be able to solve it, but the closest one can get is at least understanding the limitations of the browsers in their current state.
Thanks!
Browser and version become a major issue here. The thing is, while there are so-called "WebKit-based" browsers, they add their own patches as well. Sometimes those make it into the main WebKit repository, sometimes they do not. As for versions, browsers are always moving targets, so this benchmark could look completely different if you use a beta or nightly build.
Then there is the overall use case. If your use case is not the norm, the issues will not be as apparent, and they're less likely to get noticed and addressed. Even if there are patches, browser vendors have a lot of issues to address, so there's a chance a fix is slated for another build (again, nightly builds might produce different results).
Honestly, the best course of action would be to discuss these results on the appropriate browser mailing list / forum if the issue hasn't been addressed already. People will be more likely to do testing and see if their results match.

How can I support the support department better?

With the best will in the world, whatever software you (and I) write will have some kind of defect in it.
What can I do, as a developer, to make things easier for the support department (first line, through to third line, and development) to diagnose, workaround and fix problems that the user encounters.
Notes
I'm expecting answers which are predominantly technical in nature, but I expect other answers to exist.
"Don't release bugs in your software" is a good answer, but I know that already.
Log as much detail about the environment in which you're executing as possible (probably on startup).
Give exceptions meaningful names and messages. They may only appear in a stack trace, but that's still incredibly helpful.
Allocate some time to writing tools for the support team. They will almost certainly have needs beyond either your users or the developers.
Sit with the support team for half a day to see what kind of thing they're having to do. Watch any repetitive tasks - they may not even consciously notice the repetition any more.
Meet up with the support team regularly - make sure they never resent you.
If you have at least a part of your application running on your server, make sure you monitor logs for errors.
When we first implemented a daily script that greps for ERROR/Exception/FATAL and sends the results by email, I was surprised how many issues (mostly tiny) we hadn't noticed before.
This helps you notice some problems yourself before they are reported to the support team.
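For reference, a daily check like the one described above can be as small as the sketch below. The log path, the sender and recipient addresses, and the assumption of a local SMTP relay are all placeholders.

```python
import glob
import re
import smtplib
from email.message import EmailMessage

PATTERN = re.compile(r"ERROR|Exception|FATAL")
LOG_GLOB = "/var/log/myapp/*.log"          # placeholder path
MAIL_FROM, MAIL_TO = "logwatch@example.com", "devs@example.com"

hits = []
for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as f:
        hits.extend(f"{path}: {line.rstrip()}"
                    for line in f if PATTERN.search(line))

if hits:
    msg = EmailMessage()
    msg["Subject"] = f"Daily log check: {len(hits)} suspicious lines"
    msg["From"], msg["To"] = MAIL_FROM, MAIL_TO
    msg.set_content("\n".join(hits[:500]))  # cap the body size
    with smtplib.SMTP("localhost") as s:    # assumes a local SMTP relay
        s.send_message(msg)
```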
Technical features:
In the error dialogue for a desktop app, include a clickable button that opens up an email and attaches the stack trace and log, including system properties.
On an error screen in a webapp, report a timestamp (including nanoseconds), an error code, the PID, etc., so server logs can be searched.
Allow log levels to be dynamically changed at runtime (a sketch follows this list). Having to restart your server to do this is a pain.
Log as much detail about the environment in which you're executing as possible (probably on startup).
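To make the runtime log-level point above concrete, here is a minimal sketch using Python's standard logging module. The logger name is a placeholder, and the choice of SIGUSR1 as the trigger is just one convenient option on Unix-like systems; an admin HTTP endpoint would work equally well.

```python
import logging
import signal

logger = logging.getLogger("myapp")        # placeholder logger name
logging.basicConfig(level=logging.INFO)

def toggle_debug(signum, frame):
    """Flip between INFO and DEBUG without restarting the process."""
    new_level = logging.DEBUG if logger.getEffectiveLevel() > logging.DEBUG else logging.INFO
    logger.setLevel(new_level)
    logger.warning("log level changed to %s", logging.getLevelName(new_level))

# `kill -USR1 <pid>` now switches verbosity on the running server.
signal.signal(signal.SIGUSR1, toggle_debug)
```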
Non-technical:
Provide a known issues section in your documentation. If this is a web page, then it can correspond to a triaged bug list from your bug tracker.
Depending on your audience, expose some kind of interface to your issue tracking.
Again, depending on audience, provide some forum for the users to help each other.
Usability solves problems before they are a problem. Sensible, non-scary error messages often allow a user to find the solution to their own problem.
Process:
watch your logs. For a server side product, regular reviews of logs will be a good early warning sign for impending trouble. Make sure support knows when you think there is trouble ahead.
allow time to write tools for the support department. These may start off as debugging tools for devs, become a window onto the internal state of the app for support, and even become power tools for future releases.
allow some time for devs to spend with the support team; listening in on customer support calls, going out on site, etc. Make sure that the devs are not allowed to promise anything. Debrief the dev after doing this - there may be feature ideas there.
where appropriate, provide user training. An impedance mismatch can cause the user to perceive problems with the software, when the real issue lies in the user's mental model of it.
Make sure your application can be deployed with automatic updates. One of the headaches of a support group is upgrading customers to the latest and greatest so that they can take advantage of bug fixes, new features, etc. If the upgrade process is seamless, stress can be relieved from the support group.
Similar to a combination of jamesh's answers, we do this for web apps
Supply a "report a bug" link so that users can report bugs even when they don't generate error screens.
That link opens up a small dialog which in turn submits via Ajax to a processor on the server.
The processor associates the submission with the script being reported on and its PID, so that we can find the right log files (we organize ours by script/PID), and then sends e-mail to our bug tracking system.
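A minimal server-side sketch of that kind of processor is below. Flask and the field names are assumptions rather than the poster's actual stack, the script/PID log layout is only echoed as a note, and the mail call mirrors the log-check script earlier in this thread.

```python
import smtplib
from email.message import EmailMessage
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/report-bug", methods=["POST"])
def report_bug():
    # Field names are hypothetical - the Ajax dialog would send whatever you define.
    script = request.form.get("script", "unknown")
    pid = request.form.get("pid", "unknown")
    description = request.form.get("description", "")

    msg = EmailMessage()
    msg["Subject"] = f"User bug report: {script} (pid {pid})"
    msg["From"], msg["To"] = "bugreports@example.com", "bugtracker@example.com"
    msg.set_content(f"Script: {script}\nPID: {pid}\n\n{description}")
    with smtplib.SMTP("localhost") as s:    # assumes a local SMTP relay
        s.send_message(msg)
    return jsonify(status="ok")
```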
Provide a known issues document
Give training on the application so they know how it should work
Provide simple concise log lines that they will understand or create error codes with a corresponding document that describes the error
Some thoughts:
Do your best to validate user input immediately.
Check for errors or exceptions as early and as often as possible. It's easier to trace and fix a problem just after it occurs, before it generates "ricochet" effects.
Whenever possible, describe how to correct the problem in your error message. The user isn't interested in what went wrong, only how to continue working:
BAD: Floating-point exception in vogon.c, line 42
BETTER: Please enter a dollar amount greater than 0.
If you can't suggest a correction for the problem, tell the user what to do (or not to do) before calling tech support, such as: "Click Help->About to find the version/license number," or "Please leave this error message on the screen."
Talk to your support staff. Ask about common problems and pet peeves. Have them answer this question!
If you have a web site with a support section, provide a hyperlink or URL in the error message.
Indicate whether the error is due to a temporary or permanent condition, so the user will know whether to try again.
Put your cell phone number in every error message, and identify yourself as the developer.
Ok, the last item probably isn't practical, but wouldn't it encourage better coding practices?
Provide a mechanism for capturing what the user was doing when the problem happened: a logging or tracing capability that can give you and your colleagues the data (what exception was thrown, stack traces, program state, what the user had been doing, etc.) needed to recreate the issue.
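As one concrete (and purely illustrative) way to get that kind of capture in a Python application, a global exception hook can dump the stack trace along with a small breadcrumb buffer of recent user actions; the buffer size and file name here are assumptions.

```python
import logging
import sys
import traceback
from collections import deque

logging.basicConfig(filename="crash.log", level=logging.INFO)
recent_actions = deque(maxlen=50)          # breadcrumb buffer of recent user actions

def record_action(description):
    """Call this from UI or request handlers as the user works."""
    recent_actions.append(description)

def crash_hook(exc_type, exc_value, exc_tb):
    logging.error("Unhandled exception:\n%s",
                  "".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    logging.error("Last user actions: %s", list(recent_actions))
    sys.__excepthook__(exc_type, exc_value, exc_tb)   # still show the normal traceback

sys.excepthook = crash_hook
```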
If you don't already incorporate developer automated testing in your product development, consider doing so.
Have a mindset for improving things. Whenever you fix something, ask:
How can I avoid a similar problem in the future?
Then try to find a way of solving that problem.