I was very cautious about adding a service worker to my PWA that would cache all my files. I tried to implement a system that would always call the server to get a "version" file so that when that "version" file updated, the cache would be cleared.
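Roughly the pattern I was aiming for, reconstructed here as a sketch (the cache name and version-file path are placeholders, not my real code):

    // In the service worker: check a version file with a network-only request;
    // if the version on the server changed, drop the cached files.
    const CACHE_NAME = 'app-files';      // placeholder cache name
    const VERSION_URL = '/version.txt';  // placeholder version endpoint

    self.addEventListener('activate', (event) => {
      event.waitUntil((async () => {
        try {
          // { cache: 'no-store' } makes the check always reach the server.
          const res = await fetch(VERSION_URL, { cache: 'no-store' });
          const version = await res.text();
          const meta = await caches.open('meta');
          const hit = await meta.match(VERSION_URL);
          const old = hit ? await hit.text() : null;
          if (old !== version) {
            await caches.delete(CACHE_NAME); // stale files: clear them out
            await meta.put(VERSION_URL, new Response(version));
          }
        } catch (e) {
          // Offline or server unreachable: keep the existing cache.
        }
      })());
    });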
However, something didn't work correctly, and now the clients no longer call the server at all, since they have the files they need. This is perfect for offline use! But those clients will never call the server again, so when I update the site to fix the problem (which I have done), they do not get the update!
Any suggestions on how I can connect with those clients again?
The easiest thing for you to do is deploy a change to your service worker code. In that version, clear your cache and remove the buggy code.
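A minimal sketch of what that recovery worker could look like (it assumes nothing about your original cache names; it just deletes them all):

    // Recovery service worker: wipe every cache and take over open pages,
    // so clients stop being served the broken files.
    self.addEventListener('install', (event) => {
      self.skipWaiting(); // activate without waiting for old tabs to close
    });

    self.addEventListener('activate', (event) => {
      event.waitUntil(
        caches.keys()
          .then((names) => Promise.all(names.map((n) => caches.delete(n))))
          .then(() => self.clients.claim()) // control existing pages now
      );
    });

    // Deliberately no fetch handler: requests go straight to the network.

This works because the browser re-checks the service worker script itself on navigation (ignoring HTTP caching on it after at most 24 hours), even when the old worker serves everything else from cache, so the replacement worker will be picked up.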
Don't worry, this happens a lot when you start working with service worker caching. :)
So I came across a strange issue: as soon as I hit "send Web Push notification" in the OneSignal dashboard, the MySQL server dies: memory usage goes to 100% and the CPU goes nuts. Soon after, it crashes and then MySQL resumes normal operation.
I must mention that this happens even if the images used in the notification are not hosted on the same server. Even if nobody actually clicks the notifications for the first 5 minutes, it still crashes.
Our list has about 11,000 subscribers.
What could be the issue? I just don't know what to try anymore. I tried upping max_connections and other my.cnf settings according to mysqltuner.pl. No luck.
This is happening on a Magento 1.9 store with the following specs: 24 GB RAM, 240 GB SSD, 12-core 2 GHz CPU, CentOS 7, running Apache with Redis, PHP 5.5.
UPDATE: Fixed by revising the OneSignal settings and the way notifications are sent, plus enabling skip-name-resolve in my.cnf.
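For reference, the name-resolution part is a single directive in the [mysqld] section; the rest of the file is unchanged:

    [mysqld]
    skip-name-resolve

With this set, MySQL skips the reverse DNS lookup it would otherwise perform for every new connection, which matters when thousands of push-notification callbacks open connections at once. (Note that with skip-name-resolve enabled, grants must be defined by IP address rather than hostname.)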
I have reported this issue to OneSignal several times, but each time they suggest going for a better hosting service.
I think the problem is with the OneSignal WordPress plugin. I have a dedicated cloud server from Vultr, but the site still goes down right after a new push notification.
You will have no problem if you send the push notification directly from the OneSignal dashboard.
Because all my requests fell on deaf ears, I no longer use OneSignal.
I've written a Chrome extension and a companion native messaging host. I don't have any issues with the host failing to start or crashing, but I would like to be able to restart it when the extension updates. I can't find anything in the documentation or elsewhere regarding this. Is it even possible, or does the browser need to be restarted? Due to the nature of the extension, I'd like to avoid restarting the browser if possible.
Documentation can be found here, but it's not exactly robust.
https://developer.chrome.com/extensions/nativeMessaging
Upon further investigation, I have found that restarting the native host application manually is not required; Chrome does this itself when the extension updates. However, the update breaks the ability to send messages to the native host application from content scripts that have already been loaded, which was causing the issue I was seeing. Pages can be reloaded to fix messaging.
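For anyone hitting the same thing, here is a sketch of how a content script might detect the dead context and prompt a reload (the exact error text varies across Chrome versions, and sendToHost / showReloadBanner are hypothetical helpers):

    // Content script: after the extension updates, this (old) script's
    // connection to the extension runtime is invalidated, so messaging fails.
    function showReloadBanner() {
      // Placeholder UI; a real extension would show something friendlier.
      if (confirm('The extension was updated. Reload this page?')) {
        location.reload();
      }
    }

    function sendToHost(message) {
      try {
        chrome.runtime.sendMessage(message, function (response) {
          if (chrome.runtime.lastError) {
            // Receiving end is gone; the extension was likely updated.
            showReloadBanner();
          }
        });
      } catch (e) {
        // Newer Chrome versions throw "Extension context invalidated" instead.
        showReloadBanner();
      }
    }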
How does one keep OpenShift gears up-to-date? For example, updates to:
The Linux kernel
Important components/libraries like libc
Apache
Apache modules like mod_wsgi
Python
Python packages
Does OpenShift automatically update these and then restart the gear (or reboot the node)? Or does OpenShift send email notifications and the end-user can restart the gear during maintenance windows? What is the model?
What got me thinking about this was the remote-code-execution bug in Ruby on Rails back in January, which everyone had to patch immediately.
This FAQ seems to suggest that some upgrades happen automatically, but it isn't clear whether this applies only to the OpenShift-specific code, or also to other components like the kernel, Apache, etc.
I can tell you from my experience that changes to the OpenShift system are not always automatic. They made a change about 10 days ago, and I'm still tracking down what they did so I can make my app run correctly again. As far as I know, there was no email sent. I did find a blog post covering some of the major changes, but not all of them. Of course, they introduced at least one bug that I know of. YMMV.
My experiences over the last few weeks have been the following:
Last week there seemed to be an unannounced reboot of the server. I detected this by logging from a custom action hook. I didn't receive any email about it and I didn't see any notice at https://twitter.com/openshift_ops or https://openshift.redhat.com/app/status.
This week, there was the Heartbleed OpenSSL vulnerability and it seems like some gears were restarted. I didn't receive any email about it, Twitter didn't show anything, but there was information on the status page.
Let's say I'm building a "secure" offline HTML5 app which must run locally in the web browser without needing to download more files from the server. Let's say I connect to the server initially with the web browser over HTTPS (TLS) and download the HTML, JavaScript and CSS required to run locally. I can reasonably assume that this first download of the files is done securely, as it is a brand-new server that no one else knows about yet. All the files get stored in the HTML5 Offline Application Cache. Now I have everything I need to run the application locally and shouldn't depend on the server for anything else.
Now, every time I run the app, the application will use the HTML5 Offline Web Application Manifest to check whether there are any updates for the app to download from the server. Potentially this could be a problem. If an attacker has now targeted my server and performs a MITM attack on the connection, they could alter the application manifest, causing an update to be triggered and therefore making the client download new JavaScript and HTML. This would easily compromise the security of the application, as the application relies on the integrity of those files.
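For reference, the manifest is just a plain-text file referenced via <html manifest="...">; something like this (file names are placeholders):

    CACHE MANIFEST
    # v1.0.0 -- changing even this comment is enough to trigger an update
    index.html
    app.js
    style.css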
What are some possible options to prevent this? Can we do any of these:
1) Completely disable or block updates from the server after all files have been downloaded. Then if the manifest is changed on the server, or the attacker serves up a new manifest, the client ignores the new manifest and keeps using its local copy of the files.
2) Detect whether the manifest has been changed, an update event has been triggered, or the browser is downloading new files, and notify the user that this has occurred. If it's not expected by the user, it would indicate an attack. I understand that the spec lists 'downloading', 'updateready' and 'checking' events. Is there a way for the JavaScript to detect that those events have been fired? (See the sketch after this list.)
3) Store a version value or cryptographic hash of the files inside the browser's local storage. Then on page startup, if the files have changed unexpectedly, we can throw up an alert in the web browser notifying the user.
4) Perhaps use some sort of cache header that forces the browser to cache the files indefinitely. In other words, a kind of hack to make it ignore new manifest files that are sent by the server. This sounds like it could probably work, as there are lots of issues that can cause the application not to update even when the manifest file is changed.
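For option 2, this is roughly the kind of detection I have in mind (a sketch, untested; I'm assuming these events fire on window.applicationCache as the spec describes):

    var appCache = window.applicationCache;

    // These events fire during the update cycle described in the spec.
    ['checking', 'downloading', 'updateready'].forEach(function (name) {
      appCache.addEventListener(name, function () {
        // Any unexpected update cycle could indicate a tampered manifest.
        console.warn('AppCache event: ' + name);
        if (name === 'updateready') {
          alert('Warning: the application files were updated unexpectedly.');
        }
      }, false);
    });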
Thank you in advance.
I just uploaded a Wordpress site from my local machine to a Bluehost shared server. Ran fine locally, but now it is loading very slowly (107 seconds for home page). Bluehost tech support ran GTMetrix site analyzer and came back with "it's the CSS in your theme". They say nothing is wrong with the server.
I definitely need to clean up my CSS, but I didn't think it could have such a large impact on load times. Am I wrong?
Looking at the resource load times with Chrome's developer tools makes it immediately clear that it's your main document that is responsible for the delay, not the CSS or anything else.
Therefore we have to assume it's something in your own code that causes the delay. Since it worked fine when run locally, the most likely scenario is that your code is trying to connect to some server (perhaps a database?) that it cannot connect to, and the delay is due to the connection finally timing out.
Recommendation: double-check the places where you make connections to any external resource, and especially the credentials used when you do so. Is your host authorized to make these connections the same way your local development machine is? If you are connecting by IP, are those IPs accessible from your host?
The problem is loading
http://ad.doubleclick.net/adi/N5192.395082.LOT18.COM/B5529584;sz=300x250;ord=[timestamp]?
which is probably an ad script. That request alone takes 59 seconds for me; the rest is fast.
From the Chrome dev tools (Network tab): http://screencast.com/t/8DdtXeEv
The solution: turn off your ads.
You can use the Quick Cache plugin, which will speed up your site without compromise.
http://wordpress.org/extend/plugins/quick-cache/