Running Chrome Version 96.0.4664.110 (Official Build) (64-bit) under Win10.
Recently, through user error, we deleted approximately 100 articles from our customer knowledge base, which is hosted by Zendesk. They weren't deleted one at a time, but all at once, by deleting the entire "brand" containing the info for our product.
It's uncertain, at best, whether Zendesk will recover these valuable articles, so I've been trying to find cached copies per the link below. I don't know enough about caching to understand whether this "data" is stored on my hard drive, in the cloud, or both.
https://www.webnots.com/how-to-view-content-of-cached-page-when-it-is-not-accessible/
I've tried the above solutions (fortunately, I found some URLs from the deleted articles) but to no avail. In fact, options 6 and 7 did not work at all. And it would be nearly impossible to determine the article IDs (which are part of the article URLs) for 100 pages.
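One angle I haven't fully exhausted yet is scripting a check against the Wayback Machine for the URLs I do have. A rough sketch in PHP, using archive.org's public availability API (the article URLs below are placeholders for the real ones):

```php
<?php
// Check the Wayback Machine for archived copies of known article URLs
// via the public availability API: https://archive.org/wayback/available
$articleUrls = [
    'https://example.zendesk.com/hc/en-us/articles/115001234567', // placeholder IDs
    'https://example.zendesk.com/hc/en-us/articles/115001234568',
];

foreach ($articleUrls as $url) {
    $api = 'https://archive.org/wayback/available?url=' . urlencode($url);
    $response = json_decode(file_get_contents($api), true);

    if (!empty($response['archived_snapshots']['closest']['url'])) {
        $snapshot = $response['archived_snapshots']['closest'];
        echo "FOUND   {$url}\n    -> {$snapshot['url']} (captured {$snapshot['timestamp']})\n";
    } else {
        echo "MISSING {$url}\n";
    }
    sleep(1); // be polite to the API
}
```

Any snapshot it finds can be opened in a browser and the HTML copied into a new article.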
At this point, this process feels like a scavenger hunt: now and then I'll find a promising page or URL, but I've yet to find a complete article.
Perhaps I need to approach this problem from another angle? Can anyone offer some additional suggestions? And if I'm lucky enough to find a usable copy of an article, how do I move it from the cache into a live web page?
Many thanks,
Steve
I'm experiencing the same issue from time to time across a number of WordPress sites, so I figure it's worth starting a conversation, as Google doesn't seem to have much to say about this.
On a number of sites, at different times, using ACF Pro, certain field data seems to go missing from the front end. Simply resaving the post in the admin and then refreshing the front end resolves the display issue, per post.
As this is happening in hundreds of posts on a brand-new site this time, it's worth seeing if anyone has had this and what they've done to resolve it. Hunting down and resaving each post amongst thousands from the admin panel would be too much effort.
After we got in touch with the developer of the plugin, they pointed to a caching plugin as the issue. I haven't had any problems since disabling that caching.
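Even with the caching plugin disabled, posts that were already affected still needed a resave, and that part can be scripted instead of done by hand. A rough sketch, assuming WP-CLI is available and that 'post' is the affected post type (adjust as needed):

```php
<?php
// resave-posts.php — run with: wp eval-file resave-posts.php
// Re-saves every published post so save_post hooks (including ACF's) fire again.
$ids = get_posts([
    'post_type'      => 'post',    // placeholder: set to the affected post type
    'post_status'    => 'publish',
    'posts_per_page' => -1,
    'fields'         => 'ids',
]);

foreach ($ids as $id) {
    // Passing only the ID leaves the content untouched but re-runs all save
    // hooks, which is exactly what a manual resave in the admin does.
    wp_update_post(['ID' => $id]);
    WP_CLI::log("Resaved post {$id}");
}

WP_CLI::success(count($ids) . ' posts resaved.');
```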
I normally leave a website alone once it is finished and just check on it every now and then. I wasn't paying attention to my Google Analytics, and then on April 18-20 there was a significant drop in the number of users and the bounce rate became 100%.
I'm a web developer by profession, and I didn't spot anything wrong.
The site doesn't seem to be hacked, and just before I reinstall everything (files and DB), I wanted to ask whether you can find anything wrong with it.
Site in question
http://www.graciesingapore.com/
Regards,
Jairus
The site loads a lot of different JavaScript and CSS files; maybe you can try a caching plugin like WP-Rocket or WP Fastest Cache to bundle and cache the files.
Ask some questions...
When you put the site live, was the performance good?
If it was what changed in-between then and now?
Was it an automated core/plugin/theme update that slowed down the site?
What happened to traffic after 20th April?
Did bounce rate decrease and visitors increase?
What do your server logs show for the 18th to the 20th?
etc etc...
Your site does load a ton of resources, and on the programs page there are two images over 1MB each. Try running WP Smush over your media folder.
But it's a lie.
I am facing this issue with my installer:
When downloaded from our websites in Chrome, the installer .exe is flagged as malicious software.
To overcome this problem we have so far tried changing domain names, but that's not a permanent solution.
Even after signing my .exe with a Thawte certificate, the flag is still there.
I have scanned my exe with all popular AVs and there is nothing malicious in it.
How can I get rid of this chrome flag?
The webmaster site doesn't do anything to help with the false positive on installers. All it does is tell you the file is potentially malware, without giving you any way of appealing or asking for a review of their findings.
With Firefox, Chrome, and others using this data, you would think Google would provide a way to appeal. It is ironic that the company that started the false-positive initiative with Microsoft is the worst offender in creating false positives.
You may be able to get around the issue by supplying your site's URL to "Google Webmaster Tools". You don't even have to supply a URL for every single "malicious" file; an overall, top-level URL for your site (or, for your little corner of Blogger.com) seems to be adequate.
If you've got a Google account, just log in and go to this URL. There's a prominent textbox with an "Add Site" button next to it that does the trick. This worked for me, in a matter of minutes (and I don't have a "certificate", other than the one I got for winning a raft race in Pre-K).
Oh, and I too have experience working in heuristics, as part of my degree. "Heuristics" are really just what ordinary, unpretentious people call "rules of thumb"!
It may be that Chrome is using heuristic analysis to determine that this file is "malicious". That is to say, it is basically reasoning: "Because this file possesses these qualities, we believe it to be malicious".
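To make that concrete, heuristic scanning boils down to scoring rules of thumb. A toy PHP sketch; the features, weights, and threshold are invented for illustration and are not Chrome's actual criteria:

```php
<?php
// Toy heuristic classifier: each rule of thumb adds to a suspicion score.
// The rules, weights, and threshold are purely illustrative.
function suspicionScore(array $file): int {
    $score = 0;
    if (!$file['signed'])             $score += 3; // unsigned binaries are riskier
    if ($file['packed'])              $score += 2; // packed/obfuscated executables
    if ($file['downloads'] < 100)     $score += 2; // rarely seen files have no reputation
    if ($file['requestsAdminRights']) $score += 1;
    return $score;
}

$installer = ['signed' => true, 'packed' => true, 'downloads' => 12, 'requestsAdminRights' => true];

// A signed but rarely downloaded, packed installer can still cross the line.
echo suspicionScore($installer) >= 4 ? "flag as potentially malicious\n" : "allow\n";
```

Note that in a model like this a valid signature merely lowers the score; it doesn't veto the verdict, which matches the behaviour you're seeing.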
A few years ago, someone got hold of a root certificate authority and proceeded to issue genuine security certificates for sites that asked for personal data (bank usernames and passwords, etc.). Because the certificates were actually genuine, browsers did not warn users, and data was stolen until that authority was revoked a few days later.
So simply having an .exe with a certificate (a genuine one, which of course you have) won't suffice, in Chrome's mind, for the above two reasons.
I'm sorry I can't tell you how to get rid of this issue (or at least alleviate it somewhat), but I thought it'd be helpful for you to have some possible reasons as to why it is occurring.
**EDIT: Sources to back up my claims:** http://news.techworld.com/security/3266817/online-fraudsters-issue-fake-security-certificates-for-google-yahoo/ and http://www.bbc.co.uk/news/technology-14819257
I also have experience working in heuristics, as part of my programming degree.
We have inherited a website of around 30,000 pages (actual content), each with a unique title and rich content. Whatever we try, Google seems to not like listing the new site and visits have dropped by 80% (vs. old site & domain).
The website was redeveloped recently and changed domain at that point too, which hasn't helped in working out what is happening, and this marked the drop in visitors. The old domain was registered in Oct 2005, the new in 2009, so both have some age to them. We recently submitted a change-of-address notice in Webmaster Tools (7th Dec), possibly too soon to see any effect yet.
The older CMS was hard to redirect from, so we have a very large .htaccess file (1MB). Is there a limit to how large this file should be with redirects? I could perhaps code something in PHP to handle the 30,000 redirects programmatically, but the URLs in the old CMS were pretty strange, using comma separation and other symbols. I have used header checkers, and the correct 301s are being returned.
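If the .htaccess route does hit a limit, the PHP fallback could look roughly like this. A sketch, assuming the old-to-new mapping is exported to a two-column CSV (redirects.csv is a hypothetical name) and unmatched requests fall through to a 404 or to WordPress:

```php
<?php
// redirect.php — reached via a rewrite rule that sends unresolved
// old-style URLs here. redirects.csv holds: old path, new URL.
$map = [];
foreach (file('redirects.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    [$old, $new] = str_getcsv($line);
    $map[$old] = $new;
}

// The odd comma-separated old URLs don't matter here: the path is only
// ever used as an exact-match lookup key.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (isset($map[$path])) {
    header('Location: ' . $map[$path], true, 301); // permanent redirect
    exit;
}

http_response_code(404); // or hand off to WordPress here instead
```

Reading a 30,000-line map per request is still likely cheaper than Apache re-parsing a 1MB .htaccess on every hit, and the array could be cached (e.g. with APCu) if profiling ever shows a problem.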
We also submitted a sitemap with 25,000 pages via Webmaster Tools, of which it listed 11! There were no errors, and as I say, the page content is rich with descriptive titles.
Google can see 68,000 pages in Webmaster Tools, but the number actually listed is only 175, so the problem seems quite significant, and the others remain 'unselected'. The curve of the 'unselected' seems to mirror the effort we have put into getting the site listed, yet the pages are not being indexed.
Site: http://bit.ly/VKYClf
(The older site was the same name but hyphenated)
I have researched a lot, but all steps so far have been fruitless, and the number of pages listed hangs around the 170 mark.
Can you think of any specific steps worth taking to identify any factors preventing the site from listing?
Thanks in advance and happy to provide more information on anything.
EDIT: In case it helps anyone else: the website is built on WordPress but uses custom query vars to generate lots of pages on the fly. Since WP 2.9, a canonical tag is added automatically, but the two were not playing well together, and the canonical was pointing at whatever WP could find with that ID. I have now removed it, and hopefully things are moving forward.
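For anyone untangling the same conflict: the removal can be done from the theme rather than by editing core. A sketch, where my_page_id and build_canonical_url() are hypothetical stand-ins for the real custom var and for whatever maps it back to the page's one true URL:

```php
<?php
// In the theme's functions.php: stop WordPress 2.9+ emitting its own
// canonical tag, then print one derived from the custom query var instead.
remove_action('wp_head', 'rel_canonical');

add_action('wp_head', function () {
    $var = get_query_var('my_page_id'); // hypothetical custom var
    if ($var) {
        // build_canonical_url() is a placeholder for the site's own mapping logic.
        echo '<link rel="canonical" href="' . esc_url(build_canonical_url($var)) . '" />' . "\n";
    }
});
```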
It's going to take a while to get fully indexed if you just submitted 25,000 URLs; it may take months.
I would recommend you log into Google Webmaster Tools and go to Health > Fetch as Google, where you can submit individual URLs to crawl; Google should respond within 24 hours. If your page still does not show up in the index, you have major problems that I would not be able to assist with. You get 500 fetches a week. If you want your site indexed right away, it may be your only shot.
I would like to synchronize our library management system (OLIB) with our Amazon account in order to keep track of our purchased eBooks. Is that even possible? The closest I've got is Amazon's Product Advertising API:
https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
However, it seems to work only with Amazon's product listings; there is no mention of how to access my own account to manage my purchased Kindle eBooks.
Short answer, no. There's a slightly longer answer at a related question, https://stackoverflow.com/a/12905025/1251108.
In general Amazon does not seem interested in exposing customer purchase history.
Yes, it is, but not officially. You can probably build something on top of the work I have done. So far I have exposed functionality for:
listing bought content
listing personal documents (stuff you have emailed or Instapaper deliveries)
deleting items
You can find my code on GitHub - published just a few minutes ago :) Be aware that this works by emulating a browser, and as such will be very fragile to changes in the UI. But Amazon has not changed anything since I complained to them several years ago, so I would not worry too much ...
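To give a feel for what "emulating a browser" involves, the pattern is: one session with a cookie jar, a single login, then fetching and scraping the account pages. A very rough PHP/cURL sketch; every URL, form field, and selector below is a placeholder, since the real ones have to be lifted from Amazon's login flow and can change without notice:

```php
<?php
// Browser-emulation pattern: shared cookie jar, log in once, scrape HTML.
// All endpoints, field names, and markup below are illustrative placeholders.
$ch = curl_init();
curl_setopt_array($ch, [
    CURLOPT_COOKIEJAR      => '/tmp/amazon_cookies.txt',
    CURLOPT_COOKIEFILE     => '/tmp/amazon_cookies.txt',
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERAGENT      => 'Mozilla/5.0', // look like an ordinary browser
]);

// 1. Log in. The real form also carries hidden tokens that must first be
//    scraped from the login page itself.
curl_setopt($ch, CURLOPT_URL, 'https://www.amazon.com/ap/signin'); // placeholder
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'email'    => 'you@example.com',
    'password' => 'secret',
]));
curl_exec($ch);

// 2. Fetch the content-management page and pull out the item titles.
curl_setopt($ch, CURLOPT_HTTPGET, true);
curl_setopt($ch, CURLOPT_URL, 'https://www.amazon.com/hz/mycd/myx'); // placeholder
$html = curl_exec($ch);
curl_close($ch);

$dom = new DOMDocument();
@$dom->loadHTML($html); // suppress warnings from real-world HTML

foreach ($dom->getElementsByTagName('div') as $div) {
    if ($div->getAttribute('class') === 'title') { // hypothetical markup
        echo $div->textContent . "\n";
    }
}
```

The same session-and-cookie approach covers the delete and personal-documents calls as well; only the URLs and the parsing differ.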