I release Chrome extension updates on a regular basis, and I have observed that a lot of machines are still running old versions of the extension even days after an update.
I tried logging the version number on update for our users. Here are the stats, starting from the day the update was released:
Days since update | % of users updated on that day
0                 | 34
1                 | 28
2                 | 12
3                 | 7
4                 | 3
5                 | 3
6                 | 1.7
7                 | 1.2
The extension is published through the Google Developer Dashboard. I requested no additional permissions since the last version.
I have the following questions:
Is this normal?
Google says that apps/extensions get auto-updated within an hour of an update's release, once the browser is closed/restarted.
Does that mean roughly 40% of my users don't even restart Chrome within 2 days?
Is there a way to force the update onto all machines on the same day?
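You can't force Chrome to fetch the update faster, but once Chrome has downloaded it, the extension itself can swap it in without waiting for a browser restart. A minimal sketch of that wiring, with the `chrome` object injected so it can be exercised outside the browser (in a real background script you would pass the global `chrome`):

```javascript
// Sketch (background script): apply a pending extension update as soon as
// Chrome reports one, instead of waiting for a full browser restart.
// `chromeApi` is a parameter only so this can be tested with a stub; in a
// real extension it is the global `chrome` object.
function applyUpdatesImmediately(chromeApi) {
  chromeApi.runtime.onUpdateAvailable.addListener(function (details) {
    // Chrome has already downloaded the new version at this point;
    // reloading the extension restarts it on the new code right away.
    console.log('Updating to version ' + details.version);
    chromeApi.runtime.reload();
  });
}
```

In a real background page this is just `applyUpdatesImmediately(chrome);`. Note that `chrome.runtime.reload()` restarts the extension and drops any in-memory state, so persist anything important before calling it.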
I'm not an advanced programmer.
I have a web app in Google Apps Script that users use to punch in and punch out their working hours. I have 5 setInterval functions that update their hours every minute.
The 1st screenshot is from the dev version, where I'm the only user; the 2nd screenshot is from the prod version. (I've hidden the function names.)
As you can see in the 1st screenshot, the functions run exactly once every minute. But in the 2nd screenshot, since the app is shared with several users, they are triggered many times within a minute. This obviously increases the load on the app, and it takes longer to process each request because the functions are already running several times a minute.
My questions:
Is there a solution where I could limit the executions to run only once per minute, no matter how many users are actively using the app?
Will deploying the functions as a library and calling it from the web app reduce the number of executions?
Dev version
Prod version
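One common way to get the once-per-minute behavior above is a shared timestamp gate: every client's setInterval still fires, but only the first caller in any given minute actually does the work. A sketch of the gate logic; in Apps Script the shared `store` would be backed by something like CacheService or PropertiesService (an assumption on my part), while a plain object stands in here so the logic can be tested anywhere:

```javascript
// Sketch: run a shared piece of work at most once per minute, no matter
// how many clients ask for it. `store` is the shared state (in Apps Script,
// back it with CacheService/PropertiesService); `now` is injectable for
// testing and defaults to the wall clock.
function makeOncePerMinuteGate(store, now) {
  now = now || function () { return Date.now(); };
  return function maybeRun(work) {
    const last = store.lastRun;
    if (last !== undefined && now() - last < 60 * 1000) {
      return false; // another caller already ran the work this minute
    }
    store.lastRun = now();
    work();
    return true;
  };
}
```

Each client's setInterval callback would call `maybeRun(updateHours)`; only one call per minute does the expensive update, and the rest return immediately.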
If I have hundreds of triggers set to run on a specific day of the week within the same one-hour window (let's say Monday between 8 am and 9 am), will they be executed in a way that puts the least burden on the server?
If not, what determines the actual execution time?
Is there a limit on the number of triggers per hour/day? What is it? (In Quotas for Google Services I found only a total runtime for triggers.)
Thanks!
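One way to sidestep the hundreds-of-triggers question entirely is to consolidate: a single weekly trigger that loops over all users, so only one execution runs in that Monday window instead of hundreds of overlapping ones. A sketch, where `processUser` is a hypothetical stand-in for whatever each individual trigger currently does:

```javascript
// Sketch: one weekly trigger that processes every user in a loop, instead
// of hundreds of near-simultaneous per-user triggers. `processUser` is a
// hypothetical placeholder for the existing per-user work.
function runAllMondayJobs(users, processUser) {
  const failures = [];
  for (const user of users) {
    try {
      processUser(user);
    } catch (e) {
      // Keep going: one failing user should not abort the whole batch.
      failures.push(user);
    }
  }
  return failures; // callers can log or retry these
}
```

Installed as the body of a single time-driven trigger, this counts as one execution against the trigger quotas rather than hundreds, at the cost of that one execution running longer.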
I have 61 websites on a live Magento instance. 10 or so are no longer used, but we get about a thousand orders per week and have been running on Magento for over 2 years.
I went to delete one of the websites in Manage Stores, and there is a little drop-down for creating a DB backup that I left set to Yes, thinking at first glance that it meant backing up the reference in the DB.
It has been about an hour now and our sites are virtually down: no one can place orders, and the admin is inaccessible.
Is there a safe way to stop this process?
Is there a way to at least trace progress, to see if there is some sort of SQL lock? I do not want to wait for hours only to find out that it will never finish. I've read around here that the built-in Magento backup tools are not good at all.
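One way to see whether the backup is still making progress is to look at the database directly. A sketch, assuming you can reach MySQL with the credentials from `app/etc/local.xml`:

```sql
-- List every running statement, with how long each has been running.
SHOW FULL PROCESSLIST;

-- Show which tables are currently locked or in use.
SHOW OPEN TABLES WHERE In_use > 0;
```

If the process list shows one long-running statement steadily moving through tables, the backup is progressing. `KILL <id>` will stop it, but only consider that for a read-only (dump-style) statement; killing a statement mid-write is riskier.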
I have a WordPress/WooCommerce store and I am trying to clean things up a bit, as it is very sluggish on the backend.
While investigating, I found that my wp_options table alone is over 360 MB in size (I don't know what normal is, but that seems large). Doing random spot checks, it seems to be almost entirely full of WooCommerce sessions like
_wc_session_119a59e205553cc7d91bbf19b0b64768, plus wc_max_related entries that have no expiration.
I used WooCommerce -> System Status -> Tools to delete all expired WC transients.
I also installed the Transients Manager plugin and deleted all expired transients, but it only removed about 300 entries. It still reports 7,300 transients (http://i.stack.imgur.com/GXmNw.jpg).
That seems like a lot considering I have only had about 30 customers in the last 2 or 3 days, and I am concerned that it is slowing my admin panel. Is it safe to delete all wc_sessions at a time when no one is on the site? If so, do I do that via 'Clear all sessions' in WooCommerce -> System Status -> Tools? I don't want to delete customer orders or anything like that, but my understanding is that these are just open carts, etc.
You did not include which version of WooCommerce you are running, but there are usually 2 reasons why there are more customer sessions than expected:
CRON tasks not working
Bots visiting the site and creating multiple sessions
A customer session is stored for a period of 48 hours in WooCommerce.
Remedies for your situation are posted in this Stack Overflow question:
woocommerce generating more sessions than users
UPDATE: With WooCommerce 2.5 woocommerce-large-sessions has been merged into core.
For some of my clients, CRON tasks work and we blocked bots from the site's add-to-cart, yet wc_sessions are still out of control. I found this plugin, which was created because storing wc_sessions in the wp_options table caused trouble with options caching.
The plugin moves wc_sessions out of wp_options into its own table and implements its own hourly cleanup.
Plugin: https://github.com/kloon/woocommerce-large-sessions
I just started using this so I will come back to confirm this is working.
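If you want to verify directly how much of wp_options the sessions account for, you can query the database. A sketch, assuming the default `wp_` table prefix; take a backup before deleting anything:

```sql
-- Count WooCommerce session rows in wp_options.
SELECT COUNT(*) FROM wp_options WHERE option_name LIKE '_wc_session_%';

-- Remove all session rows. These are open carts and guest sessions only;
-- orders live in separate tables and are not touched. Run during a quiet
-- period, since logged-in shoppers will lose their current carts.
DELETE FROM wp_options WHERE option_name LIKE '_wc_session_%';
```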
I am running Mercurial on my SourceForge project. I am updating the repo using TortoiseHg on Windows. Whenever I update files, their commit times are always off by a few hours. For example, I just updated a file about 5 minutes ago, and it says it was updated 6 hours ago. The file I updated about 6 hours ago says it was updated about 30 minutes ago.
What could be causing this?
Probably due to a time zone difference between you and the SourceForge servers, where one or both of you is reporting local time.