I have been searching for a few days, and it is hard to find specifics for MediaWiki.
Does anyone know of an extension or report that will show internal links to nonexistent pages?
We host a wiki where users have created some excellent content, but going through each page by hand to find links to pages that have not been created yet is next to impossible due to the amount of content.
Any help or ideas would be greatly appreciated...
Go to your Special:SpecialPages list and you will find
Wanted categories
Wanted files
Wanted pages
Wanted templates
which are (more or less) dynamic lists of all redlinks in your wiki.
If you only want to see a list of dead links for a specific page or a group of pages, you can use DynamicPageList to run a linksfrom query with openreferences = yes. Go here for more info: http://semeb.com/dpldemo/index.php?title=DPL:Manual_-_DPL_parameters:_Criteria_for_page_selection#linksfrom
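For example, to list everything linked from a single page, redlinks included, you could put something like this on a wiki page ("Main Page" is just a placeholder; check the manual linked above for the exact parameters your DPL version supports):
<DPL>
linksfrom = Main Page
openreferences = yes
</DPL>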
I'm sorry, I do not know how to word that title better. I have tried searching Google, but my terminology isn't helping my results.
Let me explain the context. When you're on the homepage of a news website or blog, like www.homepage.co.uk/, and you click an article, it takes you somewhere like www.homepage.co.uk/2017/article/. How do they make the 2017 appear? Because if you remove the /article/ from the URL, it takes you to an archive of all the articles from that year. I don't understand; is there a process to this?
When I click a link in my website it goes to: www.website.co.uk/link
I want to have that 2017/link/ in the URL so visitors can find the archive for that year, just like on those websites.
How do I do this?
I am sorry if I am not explaining this very well.
I understand changing my filenames to "2017/article.html" might work, but I do not believe that is the correct way of doing it.
Thanks a lot for your time and suggestions!
You're asking about a couple of things. One is the taxonomy of the site. Taxonomy, if you don't know, is the "shape" of your site, or how it is organized. News sites, for instance, are usually organized by date and perhaps topic (Health and Leisure, Politics, Entertainment, etc.).
The other part of your question concerns what you might call RESTful "hacking" of URLs. One of the tenets of REST is that URLs (URIs, to be accurate) are supposed to be hackable. A news site might have /2017/10/10 to display all articles for Oct 10; remove the last "10" and you get all the articles for October so far.
If you are not using a site platform that does this for you, you will have to maintain that taxonomy yourself and manually write all the links. Systems such as Drupal and Joomla, among others, will translate your taxonomy into automatically maintained links. When editing a page on one of these platforms, you typically refer only to the system's internal name of the page (which could be a shortened version of the article's title in the above example), and the underlying engine takes care of reconstructing the URL for you (in case the page moves, or its tags/taxonomy changes).
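If you want to see how this works at the code level on a plain PHP host: there are no real /2017/ folders at all. The server sends every request to a single script (a "front controller"), which inspects the path and decides what to show. A minimal sketch, assuming a rewrite rule already routes all requests to this file (the echo lines stand in for real template code):
<?php
// index.php - route "pretty" URLs like /2017/article/ and /2017/
$path  = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
$parts = ($path === '') ? [] : explode('/', $path);

if (count($parts) === 2 && ctype_digit($parts[0])) {
    // e.g. /2017/my-article/ -> a single article from that year
    echo "Article '{$parts[1]}' from {$parts[0]}";
} elseif (count($parts) === 1 && ctype_digit($parts[0])) {
    // e.g. /2017/ -> the archive page for the whole year
    echo "All articles from {$parts[0]}";
} else {
    echo "Homepage";
}
This is exactly the plumbing that systems like Drupal and Joomla hide from you.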
This is a big topic, and I encourage you to do some further reading:
http://searchcontentmanagement.techtarget.com/feature/Building-a-website-taxonomy-in-eight-steps
https://www.drupal.org/docs/7/organizing-content-with-taxonomies/organizing-content-with-taxonomies
By "dynamic links", I mean a list of links that will constantly be updated.
To illustrate my question: I have a website that I am constantly writing new articles for. I currently have about 10 articles. If someone reads article #5, there is a list of links to all 10 articles in the right panel of the page. As I update the site and article #1 becomes out of date, I'd like to replace article #1 with article #11. Rather than updating the links within every article (so 10 times), is there a way to update the links once and have the change appear on every page simultaneously? Could I create an iframe for this?
Thanks for any and all help!
What's your goal? Do you want to learn to be a web developer? Or are you mostly concerned with getting your articles published?
If you want to be a web developer, I'd recommend steering clear of large CMS systems like WordPress or Drupal. Those are great products, but you want to learn the basics first. I think starting a PHP tutorial is the way to go.
If you just want to publish your articles, I'd recommend you find a nice place to create a blog. There are so many to choose from. It all depends on how much you want to spend.
Feel free to ask follow-up questions. Web development sounds simple, but it's really a complex topic. I can't imagine what it must be like starting out these days with so many choices and competing technologies.
One way to do it would be to use server-side includes (see Wikipedia). They work like this:
<!--#include file="some-content.html" -->
or
<!--#include virtual="some-folder/some-content.html" -->
The difference is that file="" finds a file relative to the current page, whereas virtual="" finds it from the domain root. Either way, this method can use any regular text file as a source. The actual insertion of the content is done by the server (hence the name), so its contents are parsed as regular HTML, and all CSS applies to it as if the file were part of your page. I don't know about compatibility across different hosts, but if your web server supports it, this is probably the easiest way to go.
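If your host runs PHP rather than SSI, the equivalent is a one-line PHP include (the file name here is hypothetical; the idea is the same as the SSI version above):
<?php include __DIR__ . '/includes/article-links.html'; ?>
Either way, you edit the shared file once and every page that includes it updates at the same time.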
What would be the best way to handle this situation?
The company has two types of products, and therefore two separate web pages, one to serve each:
products-professional.html
products-consumer.html
The company changes structure and now does not want to list the products separately; the new page is:
products.html
According to Google Webmaster Tools, some sites have links to our old pages. I've added redirects on the old pages to point them towards the new page, but the errors still show in Google Webmaster Tools. I don't want errors.
You should:
monitor GWT errors and add missing redirects
try to contact as many of the users linking to the old URLs as possible and ask them to fix their links
Since the second point is hard to achieve 100%, your redirects have to be bulletproof, and even then Google can find some weird URLs from a year ago and report errors.
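One thing worth double-checking on the redirects themselves: make sure they are permanent (301) redirects rather than temporary (302) ones, or Google will keep revisiting the old URLs. Most people set this up with a redirect rule in the server configuration; if your old URLs happen to be served through PHP, the equivalent sketch is:
<?php
// Served in place of an old product page: a 301 status tells
// crawlers the move is permanent, then we stop executing.
header('Location: /products.html', true, 301);
exit;
Even with correct 301s, expect Webmaster Tools to take a few crawl cycles before the errors disappear.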
I want to find the number of pages of a website. Usually what I look for is a sitemap, but I just encountered a site which does not have one, so I am out of ideas for how to find its total number of pages. I tried to Google the URL, but that did not help much. Is there any other way to find out how many pages a website has?
Thanks in advance.
Ask Google "site:yourdomain.com"
This gives you all indexed pages.
Or use the free tool "Xenu". It crawls the whole site, but it won't find pages which have no internal links pointing to them. You can also export a sitemap with it.
I was about to suggest the same thing :) If this is a website you own, you can also add it to Google Webmaster Tools. It will show you lots of things about your site, including the number of links, pages, search terms, etc. It's very useful and free of charge.
I have found a better solution myself. You can go to Google Advanced Search and restrict the search results to your domain name. Leave everything else empty. It would give you the list of all pages cached by Google.
You could also try A1 Website Analyzer. But with any link-checker software, you will have to make sure you configure it correctly to obey or ignore (whatever your needs are) robots.txt, noindex, and nofollow instructions. (A common source of confusion, in my experience.)
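If you would rather script the crawl yourself than use Xenu or A1 Website Analyzer, here is a bare-bones sketch of the same idea in PHP. The domain is a placeholder, it only follows absolute and root-relative links, and a polite crawler should also throttle its requests and honor robots.txt:
<?php
// crawl.php - count pages reachable by following internal links
$host  = 'www.example.com';              // placeholder domain
$queue = ["http://$host/"];
$seen  = [];

while ($queue && count($seen) < 500) {   // hard cap as a safety net
    $url = array_shift($queue);
    if (isset($seen[$url])) { continue; }
    $seen[$url] = true;

    $html = @file_get_contents($url);    // skip pages that fail to load
    if ($html === false) { continue; }

    // collect every href and keep only links on the same host
    preg_match_all('/href\s*=\s*["\']([^"\'#]+)/i', $html, $m);
    foreach ($m[1] as $link) {
        if (substr($link, 0, 2) === '//') {
            $link = 'http:' . $link;     // protocol-relative link
        } elseif ($link[0] === '/') {
            $link = "http://$host$link"; // root-relative link
        }
        $p = parse_url($link);
        if (is_array($p) && ($p['host'] ?? '') === $host) {
            $queue[] = $link;
        }
    }
}
echo count($seen), " pages found\n";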
The reason I am keen to do this is that we have a wiki which works great, but I would like to store help pages for an internal application in the wiki and link to those pages directly from the app. We wouldn't have concerns about people seeing the non-article stuff (i.e. the help pages) when browsing the rest of the wiki, but for things to be streamlined when viewed from the application, I thought it would be ideal to give those pages a simplified skin which I would design.
I have already found out that useskin= can be appended to URLs (e.g. as is done on the Preview Skin page within the User Preferences pages), but following any links will revert you to your normal chosen skin.
Is there perhaps some way to adjust the skin so that all the links contain useskin=? (I think this might have issues, since you appear to need the full page name for useskin to work, e.g. ..../w/index.php?title=blah....&useskin=cologneblue as opposed to the short URLs.)
If this isn't a smart way to go, I could consider different approaches. (I run the box the wiki is on and could perhaps create a distinct wiki, although there might be disadvantages to this, such as needing to combine the user tables; and maybe it would still pick up the user's preferred skin unless I re-coded things.)
Any sensible suggestions gratefully received! Let me know if there's any more info you might need or if I need to clarify any points about my objective.
[I did submit this on the MediaWiki.org Support Desk page, but it got no response... I hope my question isn't that bad!!]
You could put all your content in its own namespace, then set the skin for that namespace using this extension (I've used it, it works well enough):
http://www.mediawiki.org/wiki/Extension:SkinPerNamespace
If you don't want to lock them all into a single namespace, you can also use the SkinPerPage extension to mark the pages individually.
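For reference, the setup ends up being a few lines in LocalSettings.php. The sketch below is from memory and uses a hypothetical custom namespace for the help pages, so confirm the exact variable names against the extension's documentation for your MediaWiki version:
wfLoadExtension( 'SkinPerNamespace' );
// A hypothetical custom namespace to hold the app's help pages:
define( 'NS_APPHELP', 3000 );
define( 'NS_APPHELP_TALK', 3001 );
$wgExtraNamespaces[NS_APPHELP] = 'AppHelp';
$wgExtraNamespaces[NS_APPHELP_TALK] = 'AppHelp_talk';
// Serve that namespace with the simplified skin:
$wgSkinPerNamespace = [ NS_APPHELP => 'cologneblue' ];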
Why not change the default skin to the skin you want?
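If that's acceptable, it is a one-line change in LocalSettings.php (though note that logged-in users can still pick a different skin in their preferences, which may be exactly the behavior you were trying to avoid):
$wgDefaultSkin = 'cologneblue';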