Existing MediaWiki content in a Plone site

I have an existing MediaWiki site that I want to port into Plone. In the interim, is there a way to have Plone read from the wiki articles and present them within the context of my Plone site? I'd like to have a Plone page reference the URLs of MediaWiki articles and display them as if they were part of the Plone site.

You can use MediaWiki's parse API to get the rendered HTML from your wiki and embed that HTML in your other CMS: either dynamically on the server side, with a frame (less ideal), or with a scripted import that runs periodically (depending on how often your wiki content changes).
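A minimal sketch of the dynamic server-side option, assuming your wiki exposes its API at the usual /w/api.php path and using Python's requests library (the endpoint and page name below are placeholders):

```python
# Fetch a wiki article's rendered HTML through MediaWiki's parse API.
# API_URL is a placeholder; adjust it to your wiki's api.php location.
import requests

API_URL = "https://wiki.example.org/w/api.php"

def get_article_html(page: str) -> str:
    params = {
        "action": "parse",   # render the page to HTML
        "page": page,
        "prop": "text",      # only the page body, no metadata
        "format": "json",
    }
    data = requests.get(API_URL, params=params, timeout=10).json()
    return data["parse"]["text"]["*"]  # classic JSON shape: HTML under "*"

html = get_article_html("Main_Page")
```

The returned fragment could then be injected into a Plone page; note that links inside the HTML will still point at the wiki unless you rewrite them.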

Is it possible to add a Gatsby blog on a subdomain to a pre-existing Gatsby site?

My question is:
I want to create a Gatsby site (the main section of the website). I also want a Gatsby-run blog with Netlify CMS (which is also where the site will be hosted) on the subdomain blog.site.com.
Is this possible? What would I need to research/know to make this happen? For the domain, I will use Google Domains, if that information is needed.
You just need to create two different Gatsby sites:
one for the main section of your site
one for your blog
On Gatsby's side, you don't need any extra configuration (not even pathPrefix); each is just a regular site. The configuration must be done on the server (Netlify) by adding each site to its own custom domain.
Regarding the Google domain, you will only need to add the proper DNS records for each site. You may find this article insightful: https://medium.com/@jacobsowles/how-to-deploy-a-google-domains-site-to-netlify-c62793d8c95e

Restrict a page in Bolt CMS to a set of users

I'm working with Bolt CMS and I would like to create a new section on our website that is only accessible by a set of authenticated users. These users can only view the pages within this section of the site. Anonymous users would not see this section of the website in the menu, nor would they be able to navigate to it.
I see where I can create content types and assign roles to the types via permissions.yml, but how does that translate to authenticated page views? Is this possible without custom coding?
Backend permissions do not carry over to the front end; the two are very deliberately kept separate.
There is an extension (bolt/members) being developed for the upcoming Bolt v3 that will implement this functionality, but again kept separate from the backend permissions.

#REDIRECT pages showing in MediaWiki search

I have several pages on a MediaWiki installation that use redirects. According to the MediaWiki Redirect documentation:
After making a redirect at a page, you can no longer get to that page by using its name or by any link using that name; and they do not show up in wiki search results, either.
However, all my redirects are showing in search results.
I've read the page above and tried searching for this issue, but not gotten anywhere. What could be causing this?
I'm using MediaWiki 1.23.5 with the Vector skin. The search engine used is the vanilla search included with MediaWiki.
The default search in MediaWiki includes redirect pages, and unfortunately this cannot be configured. The solution is to use another search engine. Wikimedia's wikis use Lucene and are currently being migrated to Elasticsearch (via the CirrusSearch extension); there, redirect pages are not shown by default.
There are also other full-text search engines available.
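If switching the search backend isn't an option, one workaround (client-side only, so it does not change the built-in search page itself) is to query the wiki through its API and filter out redirects yourself; the endpoint below is a placeholder:

```python
# Search via the MediaWiki API, then drop results that are redirects.
# prop=info marks redirect pages with a "redirect" member in the response.
import requests

API_URL = "https://wiki.example.org/w/api.php"  # placeholder endpoint

def search_without_redirects(term: str) -> list:
    hits = requests.get(API_URL, params={
        "action": "query", "list": "search",
        "srsearch": term, "format": "json",
    }, timeout=10).json()["query"]["search"]
    titles = [hit["title"] for hit in hits]
    if not titles:
        return []
    pages = requests.get(API_URL, params={
        "action": "query", "prop": "info",
        "titles": "|".join(titles), "format": "json",
    }, timeout=10).json()["query"]["pages"]
    redirects = {p["title"] for p in pages.values() if "redirect" in p}
    return [t for t in titles if t not in redirects]
```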

Generating a web site from xsn files

As we all know, InfoPath Forms Services on a SharePoint server generates a web page each time we publish an InfoPath form template to the server.
Here is the question: how does SharePoint do that? Is there any way for us to do it programmatically via some API provided by Microsoft?
What I actually need to do is get all the HTML, JS, CSS, etc. files and apply some operations, like deleting certain divs or inserting some HTML into a particular page. I have come up with two ways to do this:
Generating the web page via the SharePoint API and applying those operations at the same time
Extracting the web page files from the IIS server and applying those operations
I am totally new to this kind of work. All I know is that when we right-click on a web page in the browser and choose to save it, the browser fetches the files it needs to render the page, making it possible to browse the page offline.
Tools like HTTrack and WinWSD seem to work fine for extracting HTML files from online web pages, but not so well for JS and CSS files.
Now I am trying to dig into the Chromium project for some kind of inspiration, although whether it will help is hard to say.
Any kind of advice will be appreciated.
Best Regards,
Jordan
InfoPath .xsn files are just ZIP files with a different extension. You can rename the extension to .zip and extract the files. You will find a number of files that make up the form; the two main ones are the .xml and .xsl files. The .xsl contains the HTML to generate when applied to the XML.
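As a quick sketch, Python's standard zipfile module can open an .xsn directly without even renaming it (form.xsn is a placeholder path):

```python
# Open an InfoPath .xsn as a ZIP archive and extract the form's files.
import zipfile

with zipfile.ZipFile("form.xsn") as xsn:  # placeholder filename
    for name in xsn.namelist():
        print(name)  # e.g. manifest.xsf, template.xml, view1.xsl, ...
    # Pull out the XML template and the XSL views that render it to HTML
    xsn.extractall(path="extracted")
```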

Integrate pages

I need to integrate a page from another site into my own. What are the ways to do it, apart from <object> and <iframe>? What if the site does not provide an RSS feed?
If the site doesn't provide an API to access its contents and you don't want to use iframes, your only option is site scraping: have a server-side script send an HTTP request to the remote site, fetch the HTML, parse it, and pass the result to your views. Obviously this could raise copyright concerns, so you should make sure you have read the remote site's policy about it. It is also extremely fragile: it might break the moment the remote site changes its structure, because you will be relying on HTML parsing.
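A minimal scraping sketch along those lines, assuming the requests and beautifulsoup4 packages are installed (the URL and CSS selector are placeholders):

```python
# Server-side scraping: fetch a remote page and extract one fragment
# for embedding. Fragile by nature: breaks if the remote markup changes.
import requests
from bs4 import BeautifulSoup

def fetch_fragment(url: str, selector: str) -> str:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    node = soup.select_one(selector)  # selector decides what to keep
    return str(node) if node else ""

print(fetch_fragment("https://example.com/page", "div#content"))
```

In practice you would also cache the fetched fragment server-side rather than hitting the remote site on every request.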