Corrupt Chrome extensions and user retention [closed] - google-chrome

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
I have a decently popular Chrome extension. Yesterday I accidentally released a corrupted version of it and didn't catch the mistake for 10 hours. Within those 10 hours, most users received the update, and based on my Google Analytics reports I lost about half of them overnight (I was getting about 600 pageviews every 30 minutes; now I only get 285). When I discovered the mistake I quickly reverted to an older, working version, but now, about 30 hours after the update that fixes the corruption, my pageviews are still the same.
My questions are:
Have I lost all those users or has it simply not updated for them yet?
If an extension is corrupt does it still check for updates or do the users have to press repair?
Any insight would be fantastic. As you can imagine, losing half your users overnight because of one line of code is difficult to process.

What could have happened?
Well, the situation isn't entirely clear, but based on your information there are a few possible scenarios:
Your users have uninstalled your extension because of the corrupt version. In this case (the worst one), it's practically impossible to bring those users back, unfortunately.
Your "corrupted" version had issues with the update handling. For example, issues in the chrome.runtime.onInstalled listener that checks for the update and adds new features, or issues in the Analytics code. This means that:
Your extension worked fine before the update.
It has been updated with a broken update handling function/method.
The new update (the rollback to a working version) didn't solve anything, because your already-corrupted extension is now unable to apply updates and/or send pageviews to Analytics.
Your users disabled your extension in an attempt to narrow down the issue (very uncommon; an edge case).
Your users didn't get the new update yet (which after thirty hours is also pretty uncommon).
What could you do?
Again, let's split the situation up:
In the first case, you cannot really do anything, unfortunately. That was a bad mistake! Learn from it and always test a thousand times before pushing updates.
If you suspect the second case, you should test your corrupted version on your machine, perhaps using Chrome Canary to speed things up. This obviously requires that you have your previous versions stored somewhere; if you don't, things get much harder, so for the future: always keep backups of your previous versions. Installing the old version, manually updating to the corrupted one, and finally updating to the latest one can really help you understand what's going on. Meticulously check the update method and see if something is wrong.
Note: if you're not listening to chrome.runtime.onUpdateAvailable and manually calling chrome.runtime.reload() to update your extension immediately, a Chrome restart may be necessary for it to update.
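As an illustration, here is a minimal sketch of such a listener. The chromeApi parameter is my addition so the logic can be exercised outside the browser; in a real background script you would simply pass the global chrome object.

```javascript
// Sketch: auto-apply a pending extension update as soon as Chrome reports
// one, instead of waiting for the next browser restart.
// `chromeApi` is injected only so this can be run outside Chrome;
// in a real background script you would call installAutoUpdate(chrome).
function installAutoUpdate(chromeApi) {
  chromeApi.runtime.onUpdateAvailable.addListener((details) => {
    // details.version is the version waiting to be applied.
    console.log('Applying pending update to version ' + details.version);
    chromeApi.runtime.reload(); // restarts the extension with the new version
  });
}
```

Without something like this, the downloaded update sits pending until Chrome restarts the extension.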
Just wait. It's uncommon for this to be the cause, but waiting is also the only thing you can do in such cases.
Well, same as case #3.
If an extension is corrupt does it still check for updates or do the users have to press repair?
Well, there's no such thing as a "corrupted extension": Chrome will always check for updates (at least if the user hasn't disabled them via chrome://flags), even if your extension is just a bunch of SyntaxErrors. Don't worry about this.
Extreme fix
If you're not sure what to do, a redesign of your extension and a drastic purge of all the to-dos and bad practices is always a good thing. Just back up your previous version first, and start working on the 2.0 beta! Updating and improving the extension may well bring in more users than merely fixing an existing issue. Personally, I have almost always seen installations increase after performing drastic restyles and rewriting the code better from scratch.
I hope you find the problem and bring your users back ASAP.
As you can imagine, losing half your users overnight because of one line of code is difficult to process.
Yes, I can imagine. As an extension developer, I suffered from similar mistakes myself when I was learning update handling. So... well, break a leg! I wish you the best.

Related

Can't delete disks on Google Cloud: not in ready state

I have a "standard persistent disk" of size 10GB on Google Cloud running Ubuntu 12.04. Whenever I try to remove it, I encounter the following error:
The resource 'projects/XXX/zones/us-central1-f/disks/tahir-run-master-340fbaced6a5-d2' is not ready
Does anybody know about what's going on? How can I get rid of this disk?
This happened to me recently as well. I deleted an instance but the disk didn't get deleted (despite the auto-delete option being active). Any attempt to manually delete the disk resource via the dev console resulted in the mentioned error.
Additionally, the progress of the associated "Delete disk 'disk-name'" operation was stuck on 0%. (You can review the list of operations for your project by selecting compute -> compute engine -> operations from the navigation console).
I figured the disk resource was "not ready" because it was locked by the stuck operation, so I tried deleting the operation itself via the Google Compute Engine API (the dev console doesn't currently let you invoke the delete method on operation resources). It goes without saying that deleting the operation proved impossible as well.
At the end of the day, I just waited for the problem to fix itself. The following morning I tried deleting the disk again and the operation succeeded, so it looks like the lock had been lifted in the meantime.
As for the cause of the problem, I'm still clueless. It looks like the delete operation was stuck for whatever reason (probably related to some issue or race condition in the data center's hardware/software infrastructure).
I think this probably isn't considered a valid answer by SO's standards, but I felt like sharing my experience anyway, as I had a really hard time finding any info about this kind of Google Compute Engine problem.
If you happen to ever hit the same or similar issue, you can try waiting it out, as any stuck operation will (most likely) eventually be canceled after it has been in PENDING state for too long, releasing any locked resources in the process.
Alternatively, if you need to solve the issue ASAP (which is often the case when it affects a resource critical to your production environment), you can try:
Contacting Google Support directly (only available to paid support customers)
Posting in the Google Compute Engine discussion group
Sending an email to gc-team(at)google.com to report a production issue
I believe your issue is the same as one that was solved a few days ago.
If those steps don't resolve your issue, you can follow Andrea's suggestion or create a new issue.
Regards,
Adrián.

Slightly Slow Approval to the Gallery, non-removal of existing script

So, I had to make a minor bug fix to all of my scripts: I didn't realize there was a limit on the amount of data you can push into the Cache (BTW Google, I'm pretty sure this isn't documented anywhere).
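For what it's worth, a common workaround for a per-entry cache size cap is to split large values into chunks. The sketch below is purely illustrative: the function names and CHUNK_SIZE are my own assumptions, not any documented API. If the cache in question is Apps Script's CacheService, the cache argument would be e.g. CacheService.getScriptCache(); any object with put(key, value) and get(key) works.

```javascript
// Hypothetical workaround for a per-entry cache size cap: split a large
// string across several entries. CHUNK_SIZE is an assumption; check the
// actual documented limit for your cache before relying on it.
const CHUNK_SIZE = 100 * 1024;

function putChunked(cache, key, value) {
  const count = Math.max(1, Math.ceil(value.length / CHUNK_SIZE));
  for (let i = 0; i < count; i++) {
    cache.put(key + ':' + i, value.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
  }
  cache.put(key + ':count', String(count)); // record how many chunks exist
}

function getChunked(cache, key) {
  const count = Number(cache.get(key + ':count'));
  if (!count) return null; // never stored
  let out = '';
  for (let i = 0; i < count; i++) out += cache.get(key + ':' + i);
  return out;
}
```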
Anyhow, my three-line fix resulted in my having to resubmit a bunch of scripts. Typically this isn't a big deal; Google is usually super awesome about approving them (usually the next business day). Unfortunately, however, they seem to be taking more time on this occasion. That became a problem because I had to give a presentation today, and I had just assumed they would be approved by now (I fudged it and showed a spreadsheet with the script already installed).
So I guess my main question here is: could there be a more graceful upgrade process? It sometimes doesn't make sense to have a script removed from the gallery while it is waiting for approval.
Thanks!
Ben
I opened an issue regarding this a while ago (nearly 2 years now). You probably want to star it to keep track of updates.
As for the approval process, it is not "reliable", as you have seen. I've had scripts that took 3 months to be re-approved, and then the next upgrade took only a couple of days.

Time to wait before forking open source software? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I'm working on an application and I needed an API wrapper for it. I noticed that most of the API calls I needed weren't implemented, so I went ahead with adding them in. There are a few bugs that need fixing which I'm planning to fix as well.
My problem is that development of the wrapper is almost non-existent at the moment. A bug submitted with a patch in October 2009 has been ignored so far.
I've emailed the main developer about committing my changes or submitting them somewhere, since the homepage says he's the person to contact about this sort of thing. I've also asked about it on the discussion board, with no response.
My question is: how long should I wait for a response before forking this wrapper? It's one of only two open source wrappers for this API and it's listed on the API doc's page. I hate to see it go without improvements.
So, how long should I wait? What's normal for this kind of thing?
In case it matters: the licence is Simplified BSD
UPDATE:
The original developer finally responded; so I didn't end up forking. Apparently he was just very busy with work.
A good (relevant) article to read for anyone coming across this question: http://dashes.com/anil/2010/09/forking-is-a-feature.html
And thanks to everyone for your answers!
You can fork any time you want. I was in a similar situation once: when I informed the project admin that I was going to fork, I got a response, and the fork turned out to be unnecessary :P
BTW, I wrote to the SourceForge crew (the project was hosted on SF) and forking was their advice.
Perhaps I am a little late, but I would like to answer on the level of definition.
The term forking (branching away) refers to a split between groups and development in different directions. In this case no clear branching away can be seen, so there was not actually a fork of the product; the action was clearly an alteration (extension) for a personal need. And if a product undergoes alterations and the result is returned to the group it came from, that is not properly called forking either. By definition, open source encourages you to alter.
It depends on whether you plan to maintain your fork. If you do, the chances are it will become a better project than the original; otherwise, maybe wait a couple of weeks. Still, even if you released today, there's nothing to stop the original project from merging your changes, so the community as a whole benefits either way.
There's no protocol; just call your fork something else and give the original project plenty of kudos for the original work.
Forks happen all the time; a fork isn't necessarily a 'divorce' from the original maintainers. Just happy coding.
Your additional calls might be useful for someone else, but then again they might not.
Does the project have a publicly known mailing list or bug tracker, and if so, is it feasible to submit a patch there? Also, if you can't become a developer, you could become a maintainer at one of the popular Linux distros (submit a Gentoo bug or a Launchpad entry).
If such actions make no sense, just fork.
It sounds like you've done the right thing and tried to stay within the existing project, so now it is appropriate to fork.
If nothing else, forking is a more forceful action than most alternatives: if you fork and still don't get the original developer's attention, you can at least be satisfied that you did the right thing. Of course, once you've forked there's no real reason there can't be some convergence at a later stage.

Bitbucket reliable? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I understand this question is on the edge of being acceptable for Stack Overflow, but still, I feel it is worth asking.
I started using bitbucket.org a couple of days ago, attracted by the Mercurial hosting, one free private repository, a wiki, and an issue tracker. Just what I needed for my project.
I have to say, the features offered and the website's interface look great, and I haven't had any problems with Mercurial-related things so far. However, after these couple of days I am doubting whether I should move somewhere else while it is still easy (I haven't advertised the wiki page yet, etc.), because I keep running into minor and major issues:
Over these few days, I've noticed a lot of site slowdowns and a couple of timeouts
I find the wiki rather limited in features (apparently it is based on Creole wiki markup, which I'd never heard of before). It does not allow, for example, right-aligning of images, borderless tables, etc. (well, maybe it does, but the documentation doesn't say).
I've noticed some bugs in the wiki (a TOC-generation macro issue was reported over a year ago, but still not fixed)
I've tried making my wiki public by changing the settings in the Admin panel, but it doesn't work.
some more wiki issues (inserting images is awkward, creating a new page isn't very obvious, internal linking has its issues as well, ...)
the sort order in the newsfeed was wrong when I pushed a multi-commit changeset
It's very nice (and brave!) that they have a publicly accessible issue tracker for Bitbucket, but seeing a list of over 500 open issues (28 pages * 20 issues per page) doesn't give the impression they are taken care of as well as they could be. At least some issues could have been moved to a 'will-not-consider' state. I'm afraid my bug report about the private/public wiki page will still be in there a year from now.
The blog has a lot of posts about 'downtimes'
Now, I don't want to be too hard on the people/company running Bitbucket, since it isn't clear to me whether it is practically run by a single person (in which case it is truly amazing) or by a well-run company (in which case it is not :-). Perhaps they have some growing pains... It is hard for me to tell.
So what I am looking for here is other people's experiences with Bitbucket, and advice on whether I should hold out and wait for things to improve (are the chances good?). Or not.
Jesper from Bitbucket here.
We're a pretty small team. In fact, most of the time, it's mainly me who does sysadmin/coding. This leaves very little time to develop new things, and sometimes, it doesn't even allow me to keep everything running smoothly (slowdowns/short outages always happen when I sleep.)
I realize this won't work in the long run, and something needs to be done. Therefore, I have decided to hire a bunch of people, mainly developers, but also a dedicated sysadmin and 1 or 2 UI guys (to make things prettier/more functional.) I'm currently wading through applications, and there are a lot of promising applicants in there.
Regarding stability, I've also provisioned two (much) larger instances from Amazon, where we do our hosting; we're throwing more money at this. I'm migrating a bunch of users/repositories to these larger instances today, and immediately after that we will focus on making things faster as well.
The question was asked in 2010, but I think it needs a slightly more up-to-date answer. I've been using Bitbucket for a few months now, and as far as I can tell it is an amazing git hosting system. You are provided with an issue tracker, a wiki, unlimited public/private repositories, team collaboration, etc. Also, I have not yet encountered any downtime or slowness. On top of all of this, Bitbucket has an amazing UI, making navigating through source code and branches remarkably easy.
I would definitely recommend using this, and SourceTree.
I have not tested Bitbucket with really massive commits.
We have been using Bitbucket HG for about six months, and I have little doubt that we will move to a different VCS. It merges things badly, makes mistakes on complex commits, and hurts our productivity. I don't know which parts are HG vs. Bitbucket, and I don't even have time to find out. Of course, this is happening at the worst time: we have a do-or-die deliverable in two weeks.
I've been using Bitbucket for a few years: one year at my past employer and two at my present employer.
It generally works fine without any problems. However, about once a month there is some slowness. This particular week there were outages spanning multiple workdays, where things were slow or we were unable to push our code changes for about an hour here or there.
So to summarize: most of the time it is reliable, but occasionally (about one day a month on average) it is not.

How can I support the support department better?

With the best will in the world, whatever software you (and I) write will have some kind of defect in it.
What can I do, as a developer, to make things easier for the support department (first line, through to third line, and development) to diagnose, workaround and fix problems that the user encounters.
Notes
I'm expecting answers which are predominantly technical in nature, but I expect other answers to exist.
"Don't release bugs in your software" is a good answer, but I know that already.
Log as much detail about the environment in which you're executing as possible (probably on startup).
Give exceptions meaningful names and messages. They may only appear in a stack trace, but that's still incredibly helpful.
Allocate some time to writing tools for the support team. They will almost certainly have needs beyond either your users or the developers.
Sit with the support team for half a day to see what kind of thing they're having to do. Watch any repetitive tasks - they may not even consciously notice the repetition any more.
Meet up with the support team regularly - make sure they never resent you.
If you have at least a part of your application running on your server, make sure you monitor logs for errors.
When we first implemented a daily script that greps for ERROR/Exception/FATAL and emails the results, I was surprised how many issues (mostly tiny) we hadn't noticed before.
This helps because you notice some problems yourself before they are reported to the support team.
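A sketch of the scanning half of such a script; the scheduling (e.g. cron) and the email step are left out, and the pattern matched is the one described above.

```javascript
// Sketch: pull out log lines that mention ERROR, Exception, or FATAL so
// they can be collected into a daily report.
function findProblemLines(logText) {
  return logText
    .split('\n')
    .filter((line) => /ERROR|Exception|FATAL/.test(line));
}
```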
Technical features:
In the error dialogue for a desktop app, include a clickable button that opens an email and attaches the stack trace and log, including system properties.
On an error screen in a web app, report a timestamp (including nanoseconds), an error code, the PID, etc., so server logs can be searched.
Allow log levels to be dynamically changed at runtime. Having to restart your server to do this is a pain.
Log as much detail about the environment in which you're executing as possible (probably on startup).
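The dynamic-log-level point can be sketched as a tiny logger whose threshold can be changed while the process runs. The names here are illustrative, not any particular library.

```javascript
// Sketch: a logger whose level can be raised or lowered at runtime
// (e.g. from an admin endpoint), so no restart is needed to get debug output.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

function makeLogger(initialLevel) {
  let threshold = LEVELS[initialLevel];
  return {
    setLevel(level) { threshold = LEVELS[level]; },
    log(level, message) {
      if (LEVELS[level] >= threshold) {
        console.log('[' + level.toUpperCase() + '] ' + message);
        return true;  // emitted
      }
      return false;   // filtered out
    },
  };
}
```

In a real server you would expose setLevel through an authenticated admin endpoint or a signal handler.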
Non-technical:
Provide a known-issues section in your documentation. If this is a web page, it could correspond to a triaged bug list from your bug tracker.
Depending on your audience, expose some kind of interface to your issue tracking.
Again, depending on audience, provide some forum for the users to help each other.
Usability solves problems before they are a problem. Sensible, non-scary error messages often allow a user to find the solution to their own problem.
Process:
watch your logs. For a server-side product, regular log reviews are a good early-warning sign of impending trouble. Make sure support knows when you think there is trouble ahead.
allow time to write tools for the support department. These may start off as debugging tools for devs, become a window onto the internal state of the app for support, and even become power tools for future releases.
allow some time for devs to spend with the support team: listening to customers on a support call, going out on site, etc. Make sure the devs are not allowed to promise anything. Debrief the dev afterwards; there may be feature ideas in there.
where appropriate, provide user training. An impedance mismatch can make the user perceive problems in the software when the real mismatch is with the user's mental model of it.
Make sure your application can be deployed with automatic updates. One of the headaches of a support group is upgrading customers to the latest and greatest so that they can take advantage of bug fixes, new features, etc. If the upgrade process is seamless, stress can be relieved from the support group.
Similar to a combination of jamesh's answers, we do this for web apps
Supply a "report a bug" link so that users can report bugs even when they don't generate error screens.
That link opens up a small dialog which in turn submits via Ajax to a processor on the server.
The processor associates the submission to the script being reported on and its PID, so that we can find the right log files (we organize ours by script/pid), and then sends e-mail to our bug tracking system.
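A sketch of the payload such a "report a bug" dialog might submit, so the server can associate the report with the right script and log file. The field names are my own illustration, not the actual system described above.

```javascript
// Sketch: build the bug-report payload the dialog submits via Ajax.
function buildBugReport(scriptName, pid, userDescription) {
  return {
    script: scriptName,                 // which script the report is about
    pid: pid,                           // lets the server find the right log
    description: userDescription,       // what the user typed in the dialog
    reported_at: new Date().toISOString(),
    // In a browser this would capture the current page URL:
    page_url: typeof location !== 'undefined' ? location.href : null,
  };
}
```

On the client, this object would then be POSTed to the server-side processor, which forwards it to the bug-tracking system.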
Provide a known-issues document
Give training on the application so they know how it should work
Provide simple concise log lines that they will understand or create error codes with a corresponding document that describes the error
Some thoughts:
Do your best to validate user input immediately.
Check for errors or exceptions as early and as often as possible. It's easier to trace and fix a problem just after it occurs, before it generates "ricochet" effects.
Whenever possible, describe how to correct the problem in your error message. The user isn't interested in what went wrong, only in how to continue working:
BAD: Floating-point exception in vogon.c, line 42
BETTER: Please enter a dollar amount greater than 0.
If you can't suggest a correction for the problem, tell the user what to do (or not to do) before calling tech support, such as: "Click Help->About to find the version/license number," or "Please leave this error message on the screen."
Talk to your support staff. Ask about common problems and pet peeves. Have them answer this question!
If you have a web site with a support section, provide a hyperlink or URL in the error message.
Indicate whether the error is due to a temporary or permanent condition, so the user will know whether to try again.
Put your cell phone number in every error message, and identify yourself as the developer.
Ok, the last item probably isn't practical, but wouldn't it encourage better coding practices?
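The validate-early and BAD/BETTER advice above can be sketched as a small helper that returns the corrective message instead of a technical one (names are illustrative):

```javascript
// Sketch: validate user input immediately and phrase the failure as a
// correction the user can act on, not as a technical diagnosis.
function validateDollarAmount(input) {
  const amount = Number(input);
  if (!Number.isFinite(amount) || amount <= 0) {
    return { ok: false, message: 'Please enter a dollar amount greater than 0.' };
  }
  return { ok: true, value: amount };
}
```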
Provide a mechanism for capturing what the user was doing when the problem happened: a logging or tracing capability that can give you and your colleagues the data (what exception was thrown, stack traces, program state, what the user had been doing, etc.) needed to recreate the issue.
If you don't already incorporate developer automated testing in your product development, consider doing so.
Have a mindset for improving things. Whenever you fix something, ask:
How can I avoid a similar problem in the future?
Then try to find a way of solving that problem.