My website sometimes doesn't load for users and says "This website can't be reached" - MERN

I've recently deployed a MERN app using AWS and everything seems to be working. However, I've just found out from some people testing the site that sometimes they can't get onto the site. They just get a "This website can't be reached" message.
Does anyone know why this is happening for them? It's never happened for me, but apparently it happens to them quite a lot. Could it be something to do with the browser and version they're using?
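No answer was posted in this thread, but "This website can't be reached" generally means the browser failed before any HTTP happened: either DNS resolution or the TCP connection. A minimal Python sketch (the hostname you pass is a placeholder for your own domain) that separates the two failure modes when a tester reports the error:

```python
import socket

def diagnose(host, port=443, timeout=5):
    """Report whether DNS resolution and a TCP connection succeed."""
    result = {"dns_ok": False, "tcp_ok": False}
    try:
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        result["dns_ok"] = True
    except socket.gaierror:
        # DNS failure: check the domain's A/CNAME records and TTLs
        return result
    addr = infos[0][4]
    try:
        with socket.create_connection(addr[:2], timeout=timeout):
            result["tcp_ok"] = True
    except OSError:
        # connect failure: check security groups, load balancer, server process
        pass
    return result

# Example with a placeholder domain:
# diagnose("example.com")
```

If DNS resolves but the connection fails intermittently, the usual AWS suspects are a crashed Node process without a restart policy, or a security group / load balancer health-check issue; if DNS itself fails for some users, it is often propagation or a flaky nameserver.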

New Developer Portal - Deprecated content detected

Someone who left our company some time ago started upgrading our APIM developer portal from the legacy version to the current version. The changes were never published. I'm going to pick this back up again, and I'm seeing an alert message when switching between pages. The message says:
Deprecated content detected. Your developer portal's content is based off the pre-production version of default content. Learn more about the problems it may cause and how to switch to the production version of content.
It contains a link to a bookmark that no longer exists on a page that still does:
https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-developer-portal#preview-to-ga
I'd like to update the default content before publishing changes. Can anyone point me to information on how to accomplish this? It looks like the new portal has been released for a couple of years now, so I guess I'm not surprised that the bookmark and associated content have been removed. Hopefully the information is still available somewhere and I just haven't been able to find it!
Any assistance would be greatly appreciated!
Sorry, I think I didn't ask my question very well and I've also learned more since posting it.
What I learned is that we are not using the legacy portal at all. The new portal was deployed, but was not functioning correctly. The error I was seeing was being displayed in the new portal design interface and publishing updates did not seem to correct the issue.
What I ended up doing was using the Migrate script in the api-management-developer-portal repo on GitHub to "publish" the version of the dev portal from one of our other APIM instances into the one that was broken. This essentially fixed the issue and got it to a point where we could work with it again.
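For anyone picking this up later: the step above used the migrate script in the `scripts.v3` folder of the `api-management-developer-portal` GitHub repo. The exact parameter names vary between versions of the script, so treat the following as an illustrative sketch with placeholder values, not a verified command line; check the script's own `--help` output or the repo README for your checkout:

```shell
# Clone the portal repo and run the migrate script.
# Parameter names below are illustrative placeholders -- consult the
# script's --help / README for the exact set in your version.
git clone https://github.com/Azure/api-management-developer-portal.git
cd api-management-developer-portal/scripts.v3
npm install

node ./migrate \
  --sourceSubscriptionId "<source-subscription-id>" \
  --sourceResourceGroupName "<source-rg>" \
  --sourceServiceName "<working-apim-instance>" \
  --destSubscriptionId "<dest-subscription-id>" \
  --destResourceGroupName "<dest-rg>" \
  --destServiceName "<broken-apim-instance>"
```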
Thanks!

My webpage takes too much time to reload

I have three websites hosted on shared hosting. The websites load fast the first time, but when you try to reload a page, it can take up to 40 seconds. I reinstalled the databases, themes, and plugins; I even deleted the whole site and reinstalled it again, but nothing changed. I also tried different browsers and ran some PageSpeed and GTmetrix analyses, but it still performs badly on reload. I opened a ticket with my hosting provider and they said there's nothing wrong with their servers. I'm using a LiteSpeed server with PHP 7.2. There are plenty of system resources, so that's not the problem. As a newbie, what could be the problem?
Here's my website https://yazilimcilarinmolayeri.com
Note: if I made any English mistakes, please forgive me. I'm from Turkey.
I tried disabling Cloudflare and it worked. I don't know, maybe Cloudflare's caching system and the plugin's cache overlapped. So yeah, my problem is solved. Thank you for all your help, guys.
By the way, I activated Cloudflare again and it's working well for now.
Although your website is working fine right now, this usually happens when you have cached resources, though I'm not sure why.
The browser recognizes that a resource is cached and checks with the server whether it needs to be updated. This check can sometimes take too long (again, I'm not sure why), and the browser sometimes won't time out the request, meaning the page is still waiting for the response to be received.
As @Buttered_Toast mentioned, try clearing your cache; that should fix it.
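The revalidation flow described above can be sketched as follows. This is a simplified model (names are illustrative) of what the server does with a conditional request: the first load gets a full response, while a reload sends the cached validator back, and if the origin, or a proxy like Cloudflare in front of it, is slow to answer that check, the page appears to hang:

```python
def respond(request_headers, current_etag, body):
    """Simplified conditional-GET handling: return 304 if the client's
    cached copy (identified by If-None-Match) is still current."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""   # cheap: client reuses its cached copy
    return 200, body      # full response must be sent again

# First load: no validator yet, so a full 200 response.
status, _ = respond({}, '"v1"', b"<html>...</html>")
# Reload: browser revalidates its cached copy against the server.
status2, _ = respond({"If-None-Match": '"v1"'}, '"v1"', b"<html>...</html>")
```

When two cache layers disagree about validators (for example a plugin cache behind Cloudflare), this revalidation round-trip is exactly where a reload can stall.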

Phishing Detected! warning in Chrome

I have encountered the "Phishing Detected" warning in Chrome on my dev site. Interestingly, I don't encounter the same warning in Firefox or Safari even though, as far as I can tell, they use the same phishing database (although Safari's preferences say "google safe browsing service is unavailable"). I also don't encounter the warning on the same page of the production sites.
It first popped up on a new account verification page I created which amongst other things asked users to confirm their PayPal account with the GetVerifiedStatus API. This requires only name and email.
I have also encountered the warning on a configuration page which asks for the PayPal email address which the user wishes to receive payments to.
Neither page requests a password or any other data that would be considered a secret.
As you might gather, I have zeroed in on a potential false positive on the PayPal portion of the content, as if perhaps I were phishing for PayPal information beyond the payer's email address. There has been no malicious code injection or anything of the sort. Even when I've removed all content from the page, the warning is still present.
I reported the first incorrect detection to Google and intend to do the same for the second incident, but what I really want to clear up is:
What content can lead to this warning?
How can I avoid it in the future?
How can I get some info from the "authorities" on which URLs are blocked? (Webmaster Tools is not showing warnings for the dev site.)
How can I flush my local cache of "bad sites" in case I want to re-test?
Clearly, having a massive red alert presented to a user on a production site would be disastrous, and there is a (perhaps deliberate) lack of information about how this safe-browsing service actually works.
I have been developing a website for a banking software developer and ran into the phishing warning as well. Unlike you, I had no PayPal associations in any of my code, and not even any data collection besides a simple contact form. Here are some things I managed to figure out while resolving my false-positive warnings.
1) The warning in Chrome (red gradient background) comes from a detection method built into the Chrome browser itself; it does not need to check any blacklist to show the warning. In fact, Google themselves claim this is one of the methods by which they discover new potentially harmful sites. When your site is actually on the blacklists, you get a different red warning screen with diagonal lines in the background. This explains why you only see the warning in Chrome.
2) What actually triggers this warning is, obviously, kept somewhat hidden. I could not find anything to help me debug the content of my site. You have pretty much done this already, but for anybody else in need of help: I had to isolate parts of my site to see what was triggering the warnings. Given the nature of the site I was working on, it turned out to be the combination of words and phrases in the content itself (e.g. Banking Solutions, Online Banking, Mobile Banking). Alone they did not trigger anything, but when loaded together Chrome would do its thing. So I'm not sure what your triggers are, or even what the full list of possible triggers is. Sorry...
3) I found that simply quitting Chrome completely and restarting it resets the "cache" of whether it has previously detected a page. I closed Chrome hundreds of times while getting to the bottom of my warnings.
That's all I have; I hope it helps.
Update: My staging area was accessed via an IP address. Once I moved the site to a domain instead, all the warnings stopped in Chrome.
I experienced the same today while creating an SSL test report for my web server customers. What I had there was simply something like this:
"Compare the SSL results of our server to the results of a well-known bank and its Internet banking service". I just wanted to show that the banking site had a B grade whereas ours had an A-.
I had two images from SSL Labs (one with the results for my server, the other with the results for the bank). No input fields, no links to any other site, and definitely no wording about the name of the bank.
One h1, two h2 titles, two paragraphs, and two images.
I moved the HTML to the page and opened it in my Chrome browser. The web server log showed that a Google service had loaded the page 20 seconds after my first preview. Nobody else had seen it yet. The phishing-site warning reached me (the webmaster) in less than an hour.
So it seems to me that the damn browser is making the decision and reporting to Google which then automatically checks and blocks the site. So the site is being reported to Google by Google tools, the trial is run by Google and the sentence is given by Google. Very, very nice indeed.

Chrome basic auth and digest auth issue

I'm having some issues with Chrome canceling some HTTP requests and I'm suspecting cached authentication data to be the cause. Let me first write down some important factors about the application I'm writing.
I was using Basic Authentication scheme for some time to guard several services and resources in my web app.
In the meantime I was using/testing the app heavily using Chrome with my main Google Account fully synced. Most frequently I was using my name - "lukasz" - as the username in Basic Auth.
Recently I have switched my application to use Digest Authentication.
Now, some of the HTTP requests I'm making are failing with status=failed for no apparent reason. It only happens when I'm using the user "lukasz"; if I enter some other unique username, there is no problem.
I looked everywhere in the backend and frontend and couldn't find the issue in our code. I can easily reproduce this with user "lukasz" each time. So I reverted my code to Basic Auth (while not touching the rest of the app) and the problem was gone.
That led me to think there was something wrong with cached passwords. So I cleared the cache in Chrome, but that didn't help. After several hours of analyzing the issue, I decided to make sure I was running a fresh instance of Chrome, so I reinstalled it (deleting the disk data along the way). TADAAA! The problem was gone and I couldn't reproduce it anymore.
Then I synchronized my Google Account with this newly installed Chrome and after a short while the requests to my app started failing again!! So I took a deeper look at this (cleaning profile data from disk and redoing all the steps) and indeed it looks like the problem starts as soon as my account is synced with cloud!
Yes, I know it sounds dodgy. It sounds ridiculous. It sounds stupid. But I am almost sure that those two problems are somehow related (failing requests and account sync).
My idea is this: Chrome somehow remembered that I was using "lukasz/my-pass" with Basic Auth for certain services. After I switched to Digest Auth the same combination of credentials (lukasz/my-pass) is now acting funny. Perhaps under the hood Chrome still thinks that this is Basic Auth and cancels requests when it learns otherwise?
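The hypothesis makes more sense when you compare what the two schemes actually send. A Basic credential is a static, replayable header, while a Digest response is computed from a server-supplied nonce, so a cached Basic value replayed against a Digest challenge is useless. A rough sketch of both (the Digest form is the simple no-qop variant from RFC 2617, with MD5 for illustration; values are made up):

```python
import base64
import hashlib

def basic_header(user, password):
    """RFC 7617 Basic credential: the same bytes on every request."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def digest_response(user, password, realm, nonce, method, uri):
    """RFC 2617 Digest response (no-qop form): depends on the nonce."""
    md5 = lambda s: hashlib.md5(s.encode()).hexdigest()
    ha1 = md5(f"{user}:{realm}:{password}")  # hash of the secret
    ha2 = md5(f"{method}:{uri}")             # hash of the request
    return md5(f"{ha1}:{nonce}:{ha2}")

hdr = basic_header("lukasz", "my-pass")
resp = digest_response("lukasz", "my-pass", "app", "abc123", "GET", "/api")
```

Because the Digest value changes with every nonce, a browser that silently replays a remembered Basic header (or mixes up which scheme a cached credential belongs to) will produce requests the server rejects, which fits the symptoms above.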
UPDATE:
I did some low-level debugging with chrome://net-internals/ and it appears the problem occurs while reading a cache entry. This seems to confirm my initial assumption.
I did some investigation and found this article. Apparently, always adding a "Last-Modified" header to my HTTP response solved the issue in Chrome (I'm still having some problems in FF, but that's off topic).
However, it still doesn't solve my issue entirely. Why were the requests failing in the first place?
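For reference, always emitting a Last-Modified header is a one-liner in most stacks. A minimal sketch using only Python's standard library to format the HTTP-date (the hook where you attach response headers will differ by framework, and the helper name here is made up):

```python
from email.utils import formatdate

def with_last_modified(headers, timestamp=None):
    """Return a copy of the response headers with an RFC 7231
    HTTP-date Last-Modified header added.

    Giving the cache an explicit validator like this leaves Chrome
    with something concrete to revalidate against, instead of an
    ambiguous cache entry.
    """
    headers = dict(headers)
    # usegmt=True yields e.g. "Wed, 21 Oct 2015 07:28:00 GMT";
    # timestamp=None means "now".
    headers["Last-Modified"] = formatdate(timestamp, usegmt=True)
    return headers

hdrs = with_last_modified({"Content-Type": "application/json"})
```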
You could try using incognito mode and see what happens. It may give you some hints without having to clear the cache or re-installing Chrome.
Also take a look at How to clear basic authentication details in chrome

SQL injection attack from localhost on live server

This is quite complex to explain, but I keep getting injection-attack reports from another website just by clicking on a link. Oddly, it seems Google Chrome is the one that generates them.
To elaborate, I have this site: http://byassociationonly.com and I have this site: http://dev.byassociationonly.com/example (can't name site as its a client site).
Whenever I click on any of the links on http://byassociationonly.com in Google Chrome on my machine, none of them work and I get an injection-attack report (I'm using a plugin, WordPress Firewall, that sends me email notifications when something like this happens).
The notification I receive is this: http://cl.ly/image/2U111T0m2X35
I just don't understand this error at all; I've never had a problem before.
I've even removed the code within that page it's referencing, which is from single.php, yet the problem still exists. I thought there were conflicts with my MAMP servers running locally, but even when they are switched off the problem persists, and localhost:8888 isn't referenced anywhere in wp_config.
However if I do this within Firefox, I don't get any notifications at all and the links work fine.
Has anybody got any ideas on how to identify where the problem lies, and solutions to fix it?
As requested, here's the code on the single.php page that the error is referring to: http://pastebin.com/QKqtLXQi
Did you recently install any Chrome extensions? I have run into a similar problem before, and after hours of troubleshooting it turned out to be an extension blocking some stuff. Given that it works fine in FF, it feels like an isolated issue.