I upgraded my website to WordPress 3.4 and it caused an enormous amount of damage to my site. Half of my posts weren't accessible and returned 404s, and pages 3 and 4 of my post archive 404'd as well.
I backed up before upgrading (thank god I had a gut feeling there'd be headaches) using PressBackup. After restoring, I can finally see the posts that were missing before, but there's still a problem: pages 3 and 4 still don't work, e.g. http://www.winvenue.com/page/3/. Interestingly, all the posts that disappeared were from pages 3 and 4.
I'm not sure why I got all these issues, and it's really annoying because this is an active website with hundreds of readers. I'd really like to get this fixed; any help is really appreciated. Thanks
Without knowing about your specific setup, there are some general things you can try.
I'd check the database to see if the posts are really there. If they are, see whether the ones that show differ in any way from the ones that don't.
Then disable all plugins to see if any of them conflict with the new version. If the site works without plugins, turn them back on one by one to see which one(s) cause problems.
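If you have shell access and WP-CLI installed, the deactivate/reactivate cycle is quick to script. A hypothetical session (the plugin name is made up):

```shell
# Deactivate everything, then check whether the site works
wp plugin deactivate --all
# Reactivate plugins one at a time, testing the site after each
wp plugin activate some-plugin-name
```

This is just a convenience; doing the same from the wp-admin Plugins screen works equally well.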
Restoring from backup probably doesn't include the .htaccess file, which is responsible for the permalinks.
Try regenerating your .htaccess file via Settings -> Permalinks -> Save, or recreate it manually.
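If you do recreate it manually, these are the standard rewrite rules WordPress writes for a site at the web root (adjust RewriteBase if the site lives in a subdirectory):

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```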
Try setting the following field under Settings -> Reading (wp-admin/options-reading.php):
Blog pages show at most [5] posts
It could be that your pagination thinks there are 2 posts per page when there are actually 4 (for example) which would cause this effect.
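To see how this produces 404s: the number of archive pages follows from the posts-per-page setting, so if the pagination links were generated with a different per-page value than the current query uses, the later page URLs point past the last real page. A quick sketch of the arithmetic (the numbers are made up):

```python
import math

def page_count(total_posts: int, per_page: int) -> int:
    """Number of paged archive pages for a given posts-per-page setting."""
    return math.ceil(total_posts / per_page)

total = 35
print(page_count(total, 5))   # links generated at 5 per page expect 7 pages
print(page_count(total, 10))  # a query at 10 per page only serves 4 pages
# Requests for /page/5/ through /page/7/ would then 404.
```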
I've also experienced the same issue when upgrading, and a Google search suggests others have as well. Try searching for a draft version of the missing pages; WordPress usually saves revisions automatically while you're typing. Also check the Trash, you never know. You may also need to revert to an older backup file that still contains the missing pages.
Plugins may change your permalink rewrite rules, so try deactivating all of them. Then reset your permalinks: go to Settings > Permalinks and save without changing anything. Check your site; if it's back to normal, it must be one of the plugins.
If that doesn't work, before reactivating the plugins, switch to the default theme (Twenty Eleven) and see if it works with that.
From what I can see, this kind of problem most often comes from misconfigured internal rewrite rules.
EDIT: Have you tried not using pagination?
You can also try to debug by disabling the canonical redirect, adding:
remove_action( 'template_redirect', 'redirect_canonical' );
to your functions.php. This will disable the internal URL redirection.
update your permalink settings
In pre-v9.5 websites I used realurl.
When you changed the site title from titleV1 to titleV2, the page could be accessed at domain.tld/titlev1.html AND domain.tld/titlev2.html.
The realurl path for titlev1 was marked as Moved Permanently (and redirected) to titlev2, and an expiry date for the v1 URL was sent to the browser (+30 days).
My editors don't care much about SEO and the rest. They copy pages, delete the content, move them around the page tree; nobody cares about the slug, which is often default-titlexx.html.
With realurl this was less painful than the native URL handling in TYPO3.
I could not find any documents/websites discussing the problem.
Am I missing some option to set? How do you solve this?
I've tried training the editors to double-check the slug, but that only works for about two days; then I get the next error report: the page is broken because a page has moved...
Thanks for any help with this issue!
There are some differences between TYPO3 9 and 10. In 10 you have additional functionality which handles this:
when changing a slug, TYPO3 offers to change the slugs of the subpages as well - the editor gets a choice to do this or not
redirects are created automatically, and here you also have the choice to create redirects for subpages as well.
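In the v10 site configuration this behaviour can be tuned per site. If I remember the option names correctly, it looks roughly like this (check the v10 docs before relying on it):

```yaml
# typo3conf/sites/<identifier>/config.yaml (TYPO3 v10)
settings:
  redirects:
    autoUpdateSlugs: true       # update subpage slugs when a parent slug changes
    autoCreateRedirects: true   # create redirect records for the old URLs
    redirectTTL: 30             # days until auto-created redirects expire
    httpStatusCode: 307         # status code used for auto-created redirects
```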
You might also want to look at the extensions available for TYPO3 9 which try to enhance the slug handling, e.g. sluggi. (I have not used this myself, but it looks like it may be helpful.)
There are also a number of open issues about Slug handling and redirects.
What I would recommend
Think about updating to 10. There are not that many breaking changes, and you might find the update quite painless. Or look at the slug extensions.
Familiarize yourself with the slug and redirect handling in v10. (Yes, the documentation may be incomplete, but a lot of it is self-explanatory. Just test changing slugs on some pages with subpages, move a page around, and the functionality should reveal itself.)
Using permissions (possibly also workspaces) you can restrict what editors may do. You might also want to restrict editing to some pages (e.g. editlock, permissions). If they can't handle something responsibly, maybe they should only have restricted access. (I realize this is not so easy to solve, we have the same problem and defining very fine-grained permissions makes it impossible to handle at some point. Also, the editors should be able to create new pages).
Sorry, I don't have all the answers right now. To be honest, our site is a little messed up in this respect at the moment, because we used TYPO3 9 for a while without handling this problem. At some point the URLs of subpages start to deviate, and then it is difficult to get things cleaned up.
What is better now, though: with realurl the editors often did not fill out the URL segment, and then the URL changed every time the page title changed. This is handled in a better way now, I think, since you explicitly have to define the URL.
In my situation, a WordPress install, we use the core enqueue functionality for styles and scripts, with the version-number parameter (which adds a GET param after the filename) for cache busting. We bump this on changes to the linked file, as per normal. This is all well and good and technically working.
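For context, the enqueue-with-version pattern described above looks roughly like this (the handle, file path, and version constant are hypothetical; this is a sketch, not our exact code):

```php
// In the theme's functions.php
define( 'MYTHEME_ASSET_VER', '1.4.2' ); // bumped on every change to the file

add_action( 'wp_enqueue_scripts', function () {
    wp_enqueue_style(
        'mytheme-main',
        get_stylesheet_directory_uri() . '/css/main.css',
        array(),
        MYTHEME_ASSET_VER // appends ?ver=1.4.2 to the URL for cache busting
    );
} );
```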
My issue is that our host sets an expires header for html files for 10 days, so the html ends up in the browser cache. The html includes the link tag, which includes the old version number, which means that they get the old CSS/JS.
When we encounter this in testing, we just hit Ctrl-Shift-R and all is well, but I would prefer not to be asking our users to clear their cache every time we make a change.
My 'Nuke it from Orbit' solution would be to ask them not to cache HTML at all, but this seems like a Bad Idea(tm). Is there a good method for busting the browser's HTML cache from our end? I feel like this should be a common issue and a solved problem, but maybe I'm just googling the wrong terms here, because everything I've seen so far basically says 'change the URLs', which seems an even more extreme solution. (Take that, accumulated SEO ranking!)
So recently I've discovered that my MediaWiki pages are not functioning correctly. For example, I edited MediaWiki:ipbreasons-dropdown in an attempt to add extra ban reasons to the dropdown.
The wiki recognizes the edit, even showing a link and diff in RecentChanges, but for some reason the extra dropdown item never shows.
The same is happening with MediaWiki:Grouppage-staff. Obviously this is a huge problem. Does anyone know a way I can fix this without reinstalling MediaWiki?
Sounds like a LocalisationCache problem. There are no magic wands for such issues; you need to debug a bit, e.g. by issuing wfMessage( 'ipbreasons-dropdown' ) in eval.php. If the message doesn't contain what you expect, go over the documentation for the localisation cache again; it might be something simple like file permissions.
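For example, from the wiki's root directory (interactive session; the output depends on your wiki):

```shell
$ php maintenance/eval.php
> echo wfMessage( 'ipbreasons-dropdown' )->text();
```

If the message turns out to be stale, forcing a rebuild with `php maintenance/rebuildLocalisationCache.php --force` is worth a try.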
Just updated Pycharm, and now it won't recognise any of my HTML tags. Do you also have this issue, or have I messed with some settings? A few days ago I changed a few of the HTML settings, but can't really remember what I did...
So all of the yellow-marked tags are not recognised by PyCharm anymore? I have no idea what I've done to cause this, unless it's an update issue, but I searched online and could not find others with the same problem.
Had the same problem. Reading through this bug report I tried the following:
File | Invalidate caches
Worked like a (py)charm ;)
Delete your .idea folder, and then restart PyCharm.
If you can refine your post, we'll be able to help you: hover over a highlighted tag with your mouse and read the warning in the PyCharm status bar. You can also do this by pointing at the warning indicators on the right, in front of each line. Here are some things you can check:
settings/code style/html: bring it back to defaults
settings/inspections: bring them back to defaults
settings/file types: choose HTML and check the registered patterns, they may be broken; you should find *.htm, *.html, *.sht, *.shtm, *.shtml
you could also uninstall and reinstall the HTML Tools plugin.
I have a website hosted on IIS to do some testing. However, whenever I change the HTML files in the website directory and refresh the page in my browser (Chrome), nothing changes. Do I have to force the server to update so it sees the new changes, or is there something else going on?
I think that's not a server-related problem. (Of course, you can try restarting the server, or the system, if nothing else helps.)
Try the following:
Clear your cookies and browsing history.
Then force-refresh the page by hitting F5 / Ctrl+F5 / Ctrl+R.
Check with another browser.
AFAIK you don't need to force any IIS reset or anything of that kind. As the other comments and answers already suggested something else is probably going on:
browser cache
perhaps IIS is not serving the files you're changing (a duplicate perhaps)
... etc
Try some Rubber Duck Debugging to find the problem, helped me out more than once with this kind of "This should just work, why doesn't it?" problem.
I've been using IIS for over a decade and it is very good about recognizing changes in your content and serving the latest. You don't have to refresh it. Some files like web.config or global.asa are special and when they are changed IIS will automatically restart the site for you.
Mime types like html, txt, gif, and jpeg are assumed by proxies and browsers to be very static and are cached aggressively in those layers (vs asp, jsp, etc).
This superuser question talks about refreshing in Chrome -- apparently it's not always simple.
If, however, you want to give IIS a kick the easiest way is with the command line:
iisreset
I doubt it will fix your problem but it might make you feel better :)
This could be the browser cache (and yes, sometimes Chrome is too smart). As you can see from the other answers here, their solutions can help. However, I'd like to point out a possible problem with each, and then give my favorite solution.
Clearing browser history: nobody likes it; it's pretty annoying to have to clear it every time.
Force refresh with F5 or Ctrl+F5: sometimes this doesn't work.
Checking with another browser: you can face the same problem the next time you make a change.
My favorite solution: if your URL is 'http://localhost/page1.html', request it as 'http://localhost/page1.html?fake=xxxxx'. The xxxxx can be anything, and you can change it whenever you want. This fakes a different URL for the browser, even though it is actually the same resource.
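The URL-faking trick above is easy to script. A small sketch in Python of how such a URL can be built (the parameter name 'fake' is arbitrary; the server simply ignores it):

```python
import time
from urllib.parse import urlsplit, urlunsplit

def bust_cache(url: str) -> str:
    """Append a throwaway 'fake' query parameter so the browser treats
    the URL as new and refetches it instead of using its cached copy."""
    parts = urlsplit(url)
    fake = f"fake={int(time.time())}"  # any changing value works
    query = f"{parts.query}&{fake}" if parts.query else fake
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

print(bust_cache("http://localhost/page1.html"))
```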