MkDocs site is opened incorrectly - mkdocs

I've got a mkdocs.yml file which looks like this:
site_name: blabla
pages:
- One page: page2.md
- Second page: page2.md
- Navigation: Navigation.md
When I open it, the URL looks like http://10.2.0.8/blabla/master/Navigation.md and doesn't work; I get a 404 Not Found
nginx/1.14.0 (Ubuntu) error. If I delete the .md at the end of the URL it works fine.
Locally, however, it opens as http://127.0.0.1:8000/Navigation/
Does anyone know what the problem is?

As described in the official MkDocs documentation, the intended behavior is that http://10.2.0.8/blabla/master/Navigation.md does not exist; the page lives at http://10.2.0.8/blabla/master/Navigation/, with http://10.2.0.8/blabla/master/Navigation as a shortcut for it.
What might have gone wrong in your case is the deployment of the HTML & CSS files. Assuming that your Markdown sources are in ~/blabla, running mkdocs build --clean within that directory will create a subdirectory ~/blabla/site/, which you then have to deploy to the corresponding web directory, say /var/www/html/blabla/. Under Linux, I suggest rsync -r --delete-before ~/blabla/site/* /var/www/html/blabla/. In other words: the issue might simply be that you have not deployed the whole site, or have deployed it to a different place.
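A minimal sketch of that build-and-deploy flow, assuming the source path and nginx web root used in this answer (both are assumptions; adjust them to your own server):
# build the static site from the Markdown sources; output lands in ./site/
cd ~/blabla
mkdocs build --clean
# copy the generated site into the directory nginx actually serves
# (destination path is an assumption taken from the example above)
rsync -r --delete-before ~/blabla/site/* /var/www/html/blabla/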

How to use docson widget with Sphinx on ReadTheDocs

I'm working on the documentation (https://global-coffee-data-standard.readthedocs.io) of my JSON schema (https://raw.githubusercontent.com/andrejellema/GlobalCoffeeDataStandard/master/schema/global-coffee-data-standard.schema.json).
I have the basics working (thanks to a lot of help from this forum), but now I would like to include the docson widget to display my schema more nicely (https://global-coffee-data-standard.readthedocs.io/en/latest/explanation.html#id13).
I've read this page https://threesixtygiving-standard.readthedocs.io/en/latest/_static/docson/README/ and I'm wondering how to install docson locally, but more importantly on ReadTheDocs.
Do I need to run npm i docson locally? If so, which files do I commit to my _static folder so ReadTheDocs can work with it as well?
Or can I put some magic in conf.py to let Sphinx handle it?
EDIT
I tried adding the docson files to my _static folder, and it seems to work when I add this code to my reST file:
<script src="_static/docson/js/widget.js" data-schema="https://raw.githubusercontent.com/andrejellema/GlobalCoffeeDataStandard/master/schema/global-unique-id.json"></script>
But only when I add just one docson widget. When I add more I get this error in the console:
only one instance of babel-polyfill is allowed.
So I'm assuming this is not the correct workflow.
What is the correct workflow to add multiple docson widgets to my page?

How to include only a single folder in Bamboo build plan

I need Bamboo to build the project automatically when a file in the "api" subfolder changes. When a file in any other subfolder changes, the Bamboo build plan shouldn't run.
Folder structure:
project
- api
- ui
- core
In the Plan Configuration's Repositories tab, from the "Include / exclude files" dropdown, I have selected the following option:
Include only changes that matches the following pattern
and I have tried the following patterns:
.*/api/.*
api/
api/*
api\/*
api/**
/api/*
but the build plan isn't running. With the "Include / exclude files" dropdown set to None, the build plan runs (but it also runs when a file changes in any other subfolder).
I can't split the project up into different repositories.
What pattern should I use, or is there another solution for this?
The pattern that ended up working was
api/.*
Supposedly it's a regular expression matched from the root of the checkout, although I have not used this feature myself. Here are some of Atlassian's examples:
https://confluence.atlassian.com/display/BAMBOO052/_planRepositoryIncludeExcludeFilesExamples?_ga=2.91083610.1778956526.1502832020-118211336.1443803386
What you might try is letting it check out the whole thing without the include filter set, and not letting it delete the working directory. Look on the filesystem and verify the path from the root of the working directory, then test your regex against the whole path relative to that working directory.
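As a quick way to sanity-check the pattern, something like the following could help (the changed-file paths are hypothetical, and grep is only a rough stand-in for Bamboo's own matcher):
# test which hypothetical changed-file paths the include pattern api/.* would match
printf '%s\n' 'api/v1/users.json' 'ui/app.js' 'core/main.c' | grep -E '^api/.*'
# only api/v1/users.json is printed, so only changes under api/ should trigger the plan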

Create versioned documentation archive

I am in the process of rebuilding an API documentation site for an open source project where we want to keep an archive of previous releases. I am wondering how I can configure Jekyll to generate the right hierarchy.
We have the following directory layout in our current /docs folder (which we would like to reuse in Jekyll somehow):
current/
v1/
v2/
v3/
Whenever we release a new version the current folder gets copied to a new folder (say v4). The contents of each folder is something like this:
introduction.md
testing.md
api-foo.md
api-bar.md
I'd like these to be available under the URLs domain.com/v3/testing/, domain.com/current/testing/, etc. I see that I could probably employ collections to do this, with one collection per version. To do this I see myself auto-updating the _config.yml as part of a build script (I made an example doing this here), but I am not sure how to progress from here, or whether using collections for this is the wrong approach ...
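A rough sketch of what that release step could look like with one collection per version. These are assumptions rather than a tested setup: Jekyll expects collection source folders to be underscore-prefixed, and the append only works if collections: is the last block in _config.yml:
# snapshot the current docs as a new versioned collection (version name is hypothetical)
VERSION=v4
cp -r _current "_$VERSION"
# register the new collection so Jekyll publishes it under /v4/<page>/
cat >> _config.yml <<EOF
  $VERSION:
    output: true
    permalink: /$VERSION/:name/
EOF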
This is too brief an update to be of real quality, but I thought I would mention that we solved this in the end in the Sinon project. Check out the repo at GitHub (sinonjs/sinon) and see the docs folder as well as the scripts called from package.json.
Feel free to improve on this answer by editing it and adding content and links.

Magento Layoutviewer (Alan Storm) not working after installation

Magento version: 1.9.2.4
I am currently working through this tutorial, and am trying to install the Layoutviewer module.
I followed the link on the page to where I could get the layout viewer, and then used the manual install guide on this page to install it.
The module is being detected by Magento, and is listed in the Disable Modules Output section (it is enabled).
The directory tree for the module is as follows:
magento1
app
code
local
Magentotutorial
Layoutviewer
I have also made sure that the config file's name and contents are 100% correct.
When I try to use the module (http://127.0.0.1/magento1/helloworld/index/index/?showLayout=page) it doesn't work; it just shows me the screen as it was before.
Is there anything I could be missing, or did I perhaps install the module incorrectly?
Edit
I have already found this previous question that is basically identical to mine, but it's very old so I don't want to comment on it - it did not help me solve the problem.
Problem resolved:
I had placed the Layoutviewer in the Magentotutorial directory, but it was supposed to be in its own (Alanstormdotcom) directory.
Both of these solutions worked:
Move the module to the correct directory (see the sketch below), or
Replace all references to Alanstormdotcom/alanstormdotcom with Magentotutorial/magentotutorial
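A sketch of the first fix, assuming the directory tree shown in the question (the Magento root path is a placeholder):
# move the module from the Magentotutorial namespace into its own Alanstormdotcom namespace
cd /path/to/magento1
mkdir -p app/code/local/Alanstormdotcom
mv app/code/local/Magentotutorial/Layoutviewer app/code/local/Alanstormdotcom/Layoutviewer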
I found this while Googling for the same problem. My problem was that I put Storm's module in /community/Alanstormdotcom/. It won't work from there; it must be in local (/local/Alanstormdotcom/).

How to download HTTP directory with all files and sub-directories as they appear on the online files/folders list?

There is an online HTTP directory that I have access to. I have tried to download all sub-directories and files via wget. But the problem is that when wget downloads sub-directories, it downloads the index.html file which contains the list of files in that directory, without downloading the files themselves.
Is there a way to download the sub-directories and files without a depth limit (as if the directory I want to download were just a folder which I want to copy to my computer)?
Solution:
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
Explanation:
It will download all files and subfolders in ddd directory:
-r : recursively
-np : not going to upper directories, like ccc/…
-nH : not saving files to hostname folder
--cut-dirs=3 : but saving it to ddd by omitting first 3 folders aaa, bbb, ccc
-R index.html : excluding index.html files
Reference: http://bmwieczorek.wordpress.com/2008/10/01/wget-recursively-download-all-files-from-certain-directory-listed-by-apache/
I was able to get this to work thanks to this post utilizing VisualWGet. It worked great for me. The important part seems to be to check the -recursive flag (see image).
Also found that the -no-parent flag is important, otherwise it will try to download everything.
You can use lftp, the Swiss army knife of downloading. If you have bigger files you can add --use-pget-n=10 to the command:
lftp -c 'mirror --parallel=100 https://example.com/files/ ;exit'
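For example, the same mirror command with that segmented-download option added (the URL is the placeholder from above):
lftp -c 'mirror --parallel=100 --use-pget-n=10 https://example.com/files/ ;exit'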
wget -r -np -nH --cut-dirs=3 -R index.html http://hostname/aaa/bbb/ccc/ddd/
From man wget
‘-r’
‘--recursive’
Turn on recursive retrieving. See Recursive Download, for more details. The default maximum depth is 5.
‘-np’
‘--no-parent’
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded. See Directory-Based Limits, for more details.
‘-nH’
‘--no-host-directories’
Disable generation of host-prefixed directories. By default, invoking Wget with ‘-r http://fly.srk.fer.hr/’ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.
‘--cut-dirs=number’
Ignore number directory components. This is useful for getting a fine-grained control over the directory where recursive retrieval will be saved.
Take, for example, the directory at ‘ftp://ftp.xemacs.org/pub/xemacs/’. If you retrieve it with ‘-r’, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the ‘-nH’ option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where ‘--cut-dirs’ comes in handy; it makes Wget not “see” number remote directory components. Here are several examples of how ‘--cut-dirs’ option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this option is similar to a combination of ‘-nd’ and ‘-P’. However, unlike ‘-nd’, ‘--cut-dirs’ does not lose with subdirectories—for instance, with ‘-nH --cut-dirs=1’, a beta/ subdirectory will be placed to xemacs/beta, as one would expect.
No Software or Plugin required!
(only usable if you don't need recursive depth)
Use a bookmarklet. Drag this link into your bookmarks, then edit it and paste in this code:
javascript:(function(){ var l=document.links; var ext=prompt("Select extension for download (all links containing it will be downloaded).", ".mp3"); for(var i=0; i<l.length; i++) { if(l[i].href.indexOf(ext) !== -1){ l[i].setAttribute("download",l[i].text); l[i].click(); } } })();
then go to the page (from which you want to download files) and click that bookmarklet.
wget is an invaluable resource and something I use myself. However, sometimes there are characters in the address that wget identifies as syntax errors. I'm sure there is a fix for that, but as this question did not ask specifically about wget, I thought I would offer an alternative for those people who will undoubtedly stumble upon this page looking for a quick fix with no learning curve required.
There are a few browser extensions that can do this, but most require installing download managers, which aren't always free, tend to be an eyesore, and use a lot of resources. Here's one that has none of these drawbacks:
"Download Master" is an extension for Google Chrome that works great for downloading from directories. You can choose to filter which file-types to download, or download the entire directory.
https://chrome.google.com/webstore/detail/download-master/dljdacfojgikogldjffnkdcielnklkce
For an up-to-date feature list and other information, visit the project page on the developer's blog:
http://monadownloadmaster.blogspot.com/
You can use this Firefox addon to download all files in an HTTP directory.
https://addons.mozilla.org/en-US/firefox/addon/http-directory-downloader/
wget generally works in this way, but some sites may have problems and it may create too many unnecessary HTML files. To make this easier and to prevent unnecessary file creation, I am sharing my getwebfolder script, which is the first Linux script I wrote for myself. This script downloads all contents of a web folder entered as a parameter.
When you try to download an open web folder with wget that contains more than one file, wget downloads a file named index.html. This file contains the file list of the web folder. My script converts the file names written in the index.html file into web addresses and downloads them cleanly with wget.
Tested on Ubuntu 18.04 and Kali Linux; it may work on other distros as well.
Usage:
extract the getwebfolder file from the zip file provided below
chmod +x getwebfolder (only for the first time)
./getwebfolder webfolder_URL
such as ./getwebfolder http://example.com/example_folder/
Download Link
Details on blog