Is it possible to search subpages by subpagename? - MediaWiki

I'm trying to see if there is a way to have CirrusSearch index subpages by their subpagename and not just the full page name. The reason is that these subpages are hard to find when just using the little search bar in the top right corner of MediaWiki.
For example, a page called folder1/myTool is only found when the search starts with some part of folder1; I'd like it to be findable by searching for myTool directly.

You could try setting $wgCirrusSearchCompletionProfiles = 'fuzzy-subphrases' in your LocalSettings.php.
On mediawiki.org, try searching for 'portal': you can see it working there, as 'VisualEditor/Portal' shows up in the results. You can see all the config options in use on mediawiki.org here: https://www.mediawiki.org/w/api.php?action=cirrus-config-dump&formatversion=2
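For illustration, that suggestion as it would appear in LocalSettings.php (a sketch only; the setting name is taken from the answer above, so verify it against the CirrusSearch documentation for your MediaWiki version):
// LocalSettings.php -- after the CirrusSearch extension setup, e.g.
// wfLoadExtension( 'CirrusSearch' );
// Switch the completion suggester profile so the subpage part of a title
// (the "myTool" in "folder1/myTool") matches in the search bar's autocomplete.
$wgCirrusSearchCompletionProfiles = 'fuzzy-subphrases';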

How to continue a title if that section bleeds onto another page in Google Docs using Apps Script?

I am working with a Google Docs file with already populated sections and their section headers. I read the file and replace some text. The issue is that, depending on the size of the replacement text, the section content might move to the next page. In that case I want the section title to re-appear at the start of the next page.
For example:
The first page section title is - Global History
And if that section goes onto the next page, the title should be - Global History (Cont.)
The main issue I am having is that I can't find the end of a page. I tried looking for page breaks:
var paras = editBody.getParagraphs();
for (var i = 0; i < paras.length; i++) {
  // findElement returns a RangeElement, or null when nothing matches
  if (paras[i].findElement(DocumentApp.ElementType.PAGE_BREAK) != null) {
    Logger.log('page break');
  }
}
But this did not work: even though my document already has 5 pages, it returned without finding a single page break.
Finding the end of a page in a Google document is quite complex, as neither the Google Apps Script Document Service nor the Google Docs API has a method for this, so you would have to calculate the end of each page yourself from the page margins, line spacing, font sizes, etc.
A simpler approach might be to estimate how many pages each section will require and set the continuation section titles in advance, marking those titles with the "Add page break before" option; see the sketch below.
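A minimal Apps Script sketch of that idea (the function name, the child index, and the heading level are all assumptions; the Document Service has no direct equivalent of the "Add page break before" checkbox, so this inserts an explicit page break instead):
function insertContinuationTitle(childIndex, titleText) {
  var body = DocumentApp.getActiveDocument().getBody();
  // Force a new page at the estimated overflow point...
  body.insertPageBreak(childIndex);
  // ...then put the continuation title at the top of that new page
  var cont = body.insertParagraph(childIndex + 1, titleText + ' (Cont.)');
  cont.setHeading(DocumentApp.ParagraphHeading.HEADING1);
}
// e.g. insertContinuationTitle(12, 'Global History');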

"activeStyle" attribute always applying on links to pages\index.js

I'm pretty new to Gatsby/React and web development in general, so this may be a very simple fix, but I can't figure out what the problem could be.
I'm currently working on my header and making links to each of the pages on my website and am having some trouble with the "activeStyle" attribute. So before describing specifics here is a simplified version of what I am trying to do:
<Link to="/" activeStyle={{color: 'gold'}}>Home</Link>
When I place this link on a page other than home it will still highlight the link gold even though it isn't actually the active page. However, if I use the same exact code but instead link to the /about page, it will work correctly and the link will only be gold if I am on the about page. Am I missing something?
I attempted to set the link to="/index", but Gatsby threw an error at me saying that "/index" does not exist and gave a list of the pages on my site, one of which was "/". I honestly can't think of what's going on with this.
Thanks!
Link doesn't have an activeStyle prop. Instead of using Link you should use NavLink. It has the following props:
<NavLink>
  activeClassName: string
  activeStyle: object // seems you are looking for this one
  exact: bool
  strict: bool
  isActive: func
  location: object
The react-router v4 docs might be useful for you.
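For instance, a minimal sketch (assuming react-router-dom v4; the Header component here is hypothetical). Note the exact prop, which is what keeps "/" from being treated as active on every route:
import React from 'react';
import { NavLink } from 'react-router-dom';

// "exact" makes "/" active only on the home page itself,
// not on every route whose path starts with "/"
const Header = () => (
  <nav>
    <NavLink exact to="/" activeStyle={{ color: 'gold' }}>Home</NavLink>
    <NavLink to="/about" activeStyle={{ color: 'gold' }}>About</NavLink>
  </nav>
);

export default Header;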

What do content addresses look like in Umbraco?

I was trying to access content through previews. At first this was fine with both preview and non-preview views, but I moved some of my code to another branch and noticed issues. I remembered seeing http://localhost:63761/1120 work, but now I'm not sure if this is the correct form of address for the content under node 1120 to appear. Is there something I need to check?
Postfixing your URL with an id is a quick way to look up the content of a node.
For example, the following URL works in my environment, but is not user or search engine friendly: https://localhost:44392/1141
When I look up the node in my Umbraco backoffice: https://localhost:44392/umbraco#/content/content/edit/1141
Navigate to the Properties tab and look for "Link to document"; that's the user-friendly URL for the node.
If I understand your question properly, the URLs should look like the ones below.
Non-preview mode url -
http://localhost:63761/umbraco#/content/content/edit/1120
Preview mode url -
http://localhost:63761/umbraco/preview/?id=1120#?id=1120
Thanks

Groovy Project (html parsing, file downloading, file creating)

I am considering starting a project so that I can learn more and keep the things I have learned thus far from getting rusty.
A lot of the project will be new to me, so I thought I would come here and ask for advice on what to do and how to go about doing it.
I enjoy Photoshop and toying around with it, so I thought I would mix my project with something like that. So I decided my program will do something along the lines of grabbing new Photoshop resources and putting them in their own folders on my computer (from deviantART for now).
For now I want to focus on a page like this:
http://browse.deviantart.com/resources/applications/psbrushes/?order=9
I'm not fluent in reading HTML source, so it is a bit hard to see exactly what is going on.
But lets say I am on that page and I have the following options chosen:
Sorted by Popular
Sorted by All Time
Sorted by 24 Items Per Page
My goal is to individually go to each thumbnail and grab the following:
The Author
The Title
The Description
Download the File (create folder based on title name)
Download the Image (place in folder with the file above)
Create text file with the author, title, and description in it
I would like to do that for each of the 24 items on the page and then go to the next page and do the same. (I am thinking of just going through the first five pages, as I don't have much interest in trying out brushes that aren't popular.)
So, I'm posting this for a sense of direction and perhaps some help on how to parse such a page to get what I'm looking for. I'm sure this project will keep me busy for a while, but I'm hoping it will become useful in teaching me things.
Any help and suggestions are always appreciated.
EDIT
Each page is made up of 24 of these:
<div class="tt-a" usericon="http://a.deviantart.net/avatars/s/h/shad0w-gfx.gif" collect_rid="1:19982524">
<span class="shad0w" style="background-image: url ("http://sh.deviantart.net/shad0w/x/107/150/logo3.png");">
<a class="t" title="Shad0ws Blood Brush Set by ~Shad0w-GFX, Jun 28, 2005" href="http://Shad0w-GFX.deviantart.com/art/Shad0ws-Blood-Brush-Set-19982524?q=boost%3Apopular+in%3Aresources%2Fapplications%2Fpsbrushes&qo-0">Shad0ws Blood Brush Set</a>
My assumption is, I want to grab all my information from the:
<a class="t" ... >
Since it contains the title, author, and link to where the download url and large image is located.
If this sounds correct, how would one go about getting that info for each object on the page (24 per page)? I would assume by using CyberNeko. I'm just not exactly sure how to get down to the level where the <a class="t"> element is located, and how to do that for each of them on the page.
EDIT #2
I have some test code that looks like this:
import com.gargoylesoftware.htmlunit.BrowserVersion
import com.gargoylesoftware.htmlunit.WebClient

def client = new WebClient(BrowserVersion.FIREFOX_3)
client.javaScriptEnabled = false
def page = client.getPage("http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset=0")
// select every <a class="t"> anchor on the page
def divs = page.getByXPath("//html/body/div[2]/div/div/table/tbody/tr/td[2]/div/div[5]/div/div[2]/span/a[@class='t']")
divs.each { println it }
The XPath is correct, but it prints out:
<?xml version="1.0" encoding="UTF-8"?><a href="http://Shad0w-GFX.deviantart.com/art/Shad0ws-Blood-Brush-Set-19982524?q=boost%3Apopular+in%3Aresources%2Fapplications%2Fpsbrushes&qo=0" class="t" title="Shad0ws Blood Brush Set by ~Shad0w-GFX, Jun 28, 2005">Shad0ws Blood Brush Set
Can you explain what I need to do to just get the href out of there? Is there a simple way to do it with HtmlUnit?
Meeting the requirements you've listed above is actually pretty easy. You can probably do it with a simple Groovy script of about 50 lines. Here's how I would go about it:
The URL of the first page is
http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset=0
To get the next page, simply increase the value of the offset parameter by 24:
http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset=24
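For example, a quick sketch that builds the URLs for the first five pages (the scope you mentioned), stepping the offset by 24 each time:
// print the browse URLs for result pages 1-5
def base = 'http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset='
(0..4).each { page ->
    println "${base}${page * 24}"
}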
So now you know how to construct the URLs for the pages you need to work with. To download the content of this page use:
def pageUrl = 'http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset=0'
// get the content as a byte array
byte[] pageContent = new URL(pageUrl).bytes
// or get the content as a String
String pageContentAsString = new URL(pageUrl).text
Now all you need to do is parse the elements you're interested in out of the content and save them to files. For the parsing, you should use an HTML parser like CyberNeko or Jericho.
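As for the EDIT #2 question about extracting the href: with HtmlUnit, the nodes returned by that getByXPath call are HtmlAnchor objects, which expose the attribute directly. A sketch, reusing the divs list from the test code above:
// each item matched by a[@class='t'] is an HtmlAnchor
divs.each { anchor ->
    def href  = anchor.getHrefAttribute()  // link to the deviation's page
    def title = anchor.asText()            // e.g. "Shad0ws Blood Brush Set"
    println "$title -> $href"
}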

WordPress > setting permalink option via script buggy?

My theme's custom options panel has the following code...
/* initialize the site options */
if (get_option('permalink_structure') == "") {
    update_option('permalink_structure', '/%postname%/');
}
This checks the permalink option setting; the WP default is "", which triggers the site.com/?p=x handler. This way, if the user has not yet changed permalinks from the default, my script sets them to post name for them. Or at least that's what I thought...
However, I've had a few folks who have my template tell me that upon first install, they were getting 404 errors on pages.
Apparently, the workaround is to physically navigate to the Permalinks page and just click "Save Changes" (even though, when you first hit this page, the permalink comes up as if it's correctly entered into the "custom" field).
Anyone know why this happens? Is there perhaps another setting in the db that determines the permalink, in addition to what happens when update_option() is called as in the above code?
Well, this probably happens because you're updating the value in the database (permalink_structure) while .htaccess remains the same, which is why mod_rewrite isn't engaged and users are getting 404 errors on pages.
I believe WordPress also adds rewrite rules to .htaccess to enable permalinks when you click "Save Changes" in the admin panel. Let me dig in and find out what WP is doing exactly.
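For reference, the rules WordPress normally writes to .htaccess when pretty permalinks are enabled look like this (the canonical block; your install may differ slightly):
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress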
EDIT.
Ok, here is the code that is doing what you're trying to accomplish:
<?php
if (get_option('permalink_structure') == "")
{
    // Include the files responsible for updating .htaccess
    require_once(ABSPATH . 'wp-admin/includes/misc.php');
    require_once(ABSPATH . 'wp-admin/includes/file.php');
    // Prepare the WordPress rewrite object in case it hasn't been initialized yet
    global $wp_rewrite;
    if (empty($wp_rewrite) || !($wp_rewrite instanceof WP_Rewrite))
    {
        $wp_rewrite = new WP_Rewrite();
    }
    // Update the permalink structure
    $permalink_structure = '/%postname%/';
    $wp_rewrite->set_permalink_structure($permalink_structure);
    // Recreate the rewrite rules and write them out to .htaccess
    $wp_rewrite->flush_rules();
}
$wp_rewrite does not appear to have any effect. Users still have to manually click "Save Changes" on the Permalinks screen.
I suppose I will run Firebug on that page during the update to see what's getting set that update_option() is apparently missing.
This would appear to be a bug in update_option() when the option being updated is permalink_structure.
Anyone disagree?