I'm a web developer and often run scripts to fix things that might time out due to server or browser settings. In the past, Chrome would just spin as long as it took until the script was done, even if that took an hour. But they changed things, and now Chrome imposes its own cutoff if the server doesn't respond fast enough, while the server continues to execute the script.
Now, this is annoying. It forces me to log events to a file rather than just dump them to the screen, but the worst part is that Chrome thinks it is a great idea to try reconnecting to the URL after it times out. That then starts executing the same script again, which is probably still running.
The issue here is that I often create scripts to run ONCE and never again, and if the script is run more than once, it could completely destroy things.
Say I create a script to remove the first 4 characters from each field in a 1 million row database. Running the script via Chrome would eventually time out, and then Chrome would run the script again several times without letting you know. Suddenly, the data that was already reduced is being reduced again, destroying it.
This is a serious concern that was never an issue before because Chrome wouldn't automatically try to reload a page that failed to load. So, I'm looking for a way to disable this new feature and stop Chrome from automatically reloading on a failed page load. It displays an error page saying "Click here to reload", but it completely ignores the user and decides to reload whether you click it or not.
I just ran a script to copy files from an EC2 instance to an S3 bucket as part of some cleanup, but I see from the logs that it actually ran 4 times before I closed the tab - even though I never asked it to reload. That meant it copied these same files 4 times. Fortunately, in this case, it just wasted S3 access, since it overwrote the existing files.
Yes, I realize that there are many ways of preventing the script from running more than once, from flock to renaming the file immediately after executing it. The issue is speed. These fix scripts are not intended to be full-blown applications complete with all the bells and whistles; they are meant to be a fast way to apply a fix. I would rather change a setting in Chrome to disable this new behavior so that I can continue to work as I have for over 10 years.
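For reference, here is a minimal sketch of the kind of run-once guard mentioned above, written as a Node/TypeScript script since the language of the fix scripts isn't stated (PHP offers the same pattern via flock() or an equivalent lock file); the lock file path is made up for illustration.

```typescript
// Minimal run-once guard (a sketch, not the poster's actual code).
// LOCK_FILE is an illustrative path; adjust for your environment.
import { openSync, closeSync } from "fs";

const LOCK_FILE = "/tmp/fix-script.lock";

function acquireLock(): boolean {
  try {
    // "wx" creates the file but fails if it already exists, so a second
    // invocation (e.g. triggered by a browser auto-reload) bails out.
    closeSync(openSync(LOCK_FILE, "wx"));
    return true;
  } catch {
    return false;
  }
}

if (!acquireLock()) {
  console.error("Fix script has already run (or is still running); aborting.");
  process.exit(1);
}

// ...one-time fix logic goes here; the lock file is deliberately never removed...
```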
This is referring to an auto reload, and I'm not calling it a "refresh" because the page never loaded in the first place. This has nothing to do with the millions of questions about refreshes, which are all I find when I try to search for this problem.
This will probably resolve the issue:
Go to chrome://flags/
Set the flag "Enable Offline Auto-Reload Mode" (or "Offline Auto-Reload Mode") to Disabled
Set the flag "Only Auto-Reload Visible Tabs" to Disabled
Relaunch the browser
Now I have a page with the error ERR_CONNECTION_RESET that no longer reloads itself automatically.
I have just moved a WP installation from one hosting provider to another. Everything went fine except for one problem with the new installation. Please note that I have moved from a regular VPS to a fairly powerful and fast dedicated machine.
The thing is that now the website is slower than it was on the previous server. It takes 6-7 seconds to load a page, and according to Chrome's Dev Tools network panel, there is a 3-4 second wait to get the first response byte (TTFB), which is insane.
I have tried the following with no success:
Review database for anomalies
Disable all plugins (and delete them)
Disable the template (and delete it)
With these last two actions, I lowered the loading time to 5-6 seconds, which is a lot for a small site (a few hundred posts and 50-60 pages) with no comments enabled. I still have the 3-4 second TTFB.
After that, I installed the Query Monitor plugin and found out that, on every page load, WP performs hundreds of database queries (ranging from 400 to 800) and, in some cases, even 1500. OMG!
Honestly, I am quite lost here. I mean, on one hand I have this strange database behavior I cannot really understand. And on the other hand, I cannot help wondering how it was faster on the previous & slower server.
By the way, I have moved from MySQL to MariaDB, which should be even faster. Indexes are preserved when dumping and importing the database. I am lost. :(
Any help is greatly appreciated. Apologies for my English (not my native language), and please let me know if there is some important information missing. I will be glad to provide any information that helps me/us troubleshoot this.
Thanks in advance!
I think you should optimize your MySQL config (my.cnf on Linux or my.ini on Windows). To spot problems in MySQL, you can try running the MySQLTuner script: https://github.com/major/MySQLTuner-perl.
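For illustration, these are the kinds of my.cnf settings MySQLTuner usually flags; the values below are placeholders to be sized to the server's RAM, not recommendations for this particular machine.

```ini
# Illustrative my.cnf fragment (values are placeholders, size them to your RAM)
[mysqld]
innodb_buffer_pool_size = 2G   # usually the biggest win for InnoDB-heavy sites like WP
innodb_log_file_size    = 256M
tmp_table_size          = 64M
max_heap_table_size     = 64M  # should match tmp_table_size
table_open_cache        = 2000
max_connections         = 100
```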
I'm working on an assignment for school. My files for the website are stored on a remote server, which I access via VPN and a remote server connection on macOS.
When I modify my HTML files, the changes aren't reflected immediately; sometimes they only show up after a day or two (in fact quite randomly: it can be an afternoon or an hour).
It's a bit problematic when you try to have long coding sessions. Sometimes one page updates but the others don't.
I'm not having any problems with my PHP files; they update immediately.
I've tried several things without any changes:
Emptying the cache
Trying on different web browsers
Disconnecting from the server and VPN
Waiting :)
System info:
macOS 10.12.2
Safari 10.0.2
Thanks for the help. I personally think it's a problem with the server, but I won't be able to change that; hopefully it's something I can fix.
- I have an HTML page with a textbox element that has an autocomplete feature.
- The autocomplete list is filled from a MySQL table called X.
- A user can open this page in multiple browsers or windows at the same time.
- The user is able to add new records or update existing records in table X from the same page.
Now, as the user adds new records, I want the other windows or browsers to detect that a change happened in the table and refresh the autocomplete list so the new records are visible there too.
How can I achieve this?
I am thinking of checking whether the table has changed on every keypress in the textbox, but I am afraid that will slow down the page.
The other solution I was considering: can I apply a trigger in this case?
I know this is used a lot; for example, you can open your Gmail account in multiple browsers or windows, and if you edit anything, you will see the change in the rest.
I appreciate your help, as I have searched a lot about this but couldn't find a solution.
This is a very broad question with many, many answers, and it also depends on your database back end. Among the many options, a couple are noteworthy. If you use a message bus of some sort in the back end, you can push your change to the DB and then to the bus, and your web client can consume it from there so it knows to refresh. The other is to use a trigger (if you're using MSSQL) to push the change, via a CLR assembly you created, to an MSMQ queue and consume it from there; that reduces constant polling of the DB. Personally, I always use the bus for this kind of thing, but it depends on your setup.
A SQL trigger wouldn't help here - that's just for running logic inside the DB. The issue is that you don't have a way to push changes down to the client (except perhaps WebSockets or something, but that would probably be a lot of work), so you would have to resort to polling the server for updates. Doing so on key press might be excessive - perhaps on focus and/or periodically (every minute?). To lessen the load, you could have the client make the request using some indicator of the state it last successfully fetched, and have the server only return changes (deletions and insertions; an update would be a combination of the two), so that rather than the full list every time, only a delta is sent.
Within a single browser you may be able to incorporate local storage as well, but that won't help across multiple browsers.
Another option would be to not store the autocomplete options locally and always fetch from the server (on key press). Typically you would not send the request when the input length is less than some threshold (say, 3 characters) to try to keep the result size reasonable. You can also throttle the key press event so that multiple presses in quick succession get combined into only one request sent, and also store and cancel any outstanding asynchronous requests before sending a new one. This approach will guarantee you always get the most current data from the database, and while it will add a degree of latency to the autocomplete in my experience it is rarely an issue.
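As a rough sketch of the throttle-and-cancel approach described above (browser-side; the /autocomplete endpoint, element id, and response shape are assumptions, not anything from the question):

```typescript
// Sketch of a debounced, cancellable autocomplete lookup (endpoint and ids are hypothetical).
const MIN_LENGTH = 3;    // don't query for very short inputs
const DEBOUNCE_MS = 250; // combine rapid key presses into one request

let timer: number | undefined;
let pending: AbortController | undefined;

const input = document.querySelector<HTMLInputElement>("#autocomplete-box")!;

input.addEventListener("input", () => {
  const term = input.value.trim();
  if (timer !== undefined) window.clearTimeout(timer);
  if (term.length < MIN_LENGTH) return;

  timer = window.setTimeout(async () => {
    pending?.abort();                 // cancel any request still in flight
    pending = new AbortController();
    try {
      const res = await fetch(`/autocomplete?term=${encodeURIComponent(term)}`, {
        signal: pending.signal,
      });
      const options: string[] = await res.json();
      renderOptions(options);
    } catch (e) {
      if ((e as Error).name !== "AbortError") console.error(e);
    }
  }, DEBOUNCE_MS);
});

function renderOptions(options: string[]): void {
  // Replace the suggestion list; the actual rendering depends on your markup.
  console.log(options);
}
```

Cancelling the in-flight request before sending a new one also avoids stale responses arriving out of order and overwriting newer results.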
Previously, when covering events, I've typed my reports into an HTML file and FTP'd it to the server. If I lose connectivity or the laptop crashes, I've got the last saved copy.
We just switched our event coverage area to a database holding Textile-formatted entries entered via a web form textarea.
Is it at all possible to create a mechanism whereby a local copy of the textarea is saved, so I can keep working during a connectivity failure? I'm using Windows on the laptop. I guess the low-tech way would be to type in a word processor and just keep pasting into the web form.
You could take a look at the local storage features the browsers have to offer.
Local storage is a nice idea; cookies might also be a solution (one that even works in older browsers) if the texts are not too long.
However, I'd use server-side backups (created automatically via AJAX requests) and only fall back to local backups if there's no connection to the server. You can never be sure local backups will persist when the browser or even the whole system crashes.
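A small sketch of that idea, assuming a textarea with id "report" and a /backup endpoint on the server (both names are invented for the example):

```typescript
// Sketch: back up a textarea to the server periodically, falling back to
// localStorage when the request fails. The element id and endpoint are assumptions.
const AUTOSAVE_MS = 30_000;
const textarea = document.querySelector<HTMLTextAreaElement>("#report")!;

// Restore any local backup left over from a previous crash or offline session.
const saved = localStorage.getItem("report-draft");
if (saved !== null && textarea.value === "") {
  textarea.value = saved;
}

async function backup(): Promise<void> {
  const text = textarea.value;
  try {
    await fetch("/backup", { method: "POST", body: text }); // server-side copy
    localStorage.removeItem("report-draft");                // server copy succeeded
  } catch {
    localStorage.setItem("report-draft", text);             // offline: keep it locally
  }
}

setInterval(backup, AUTOSAVE_MS);
textarea.addEventListener("blur", backup);
```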
There is an action in the admin section of a client's site, say Admin::Analytics (which I did not build but have to maintain), that compiles site usage analytics by performing a couple dozen rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled, but the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350% CPU on the quad-core production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, using the production data, it loads in about 10-12 seconds (and utilizes ~150% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report how long these requests took, which averages out to a few minutes apiece (although they seemed to have caused mysqld to stall for much longer than that, and in one instance prompted me to ask our host to restart mysqld just to get our site back up).
I suppose I can try upping the log level in production to get info on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
In particular, query_reviewer and newrelic have been life-savers for me.
I appreciate all the help with this, but what turned out to be the fix was to add a couple of indexes to the Analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second both on my dev box and on prod!