Local document is different from hosted one for math formulas - read-the-docs

I keep my documents for Read the Docs in a GitHub repository. Before pushing to the repo, I always run Sphinx locally to produce the HTML. When I am satisfied, I push my changes to the GitHub repo and Read the Docs builds the documentation.
In a recent change I have used a lot of math. My document looks fine when produced locally, but the Read the Docs build looks different, especially in the formatting of the alignment tabs (&).
Is there a way to produce the document in the same way Read the Docs does, in order to avoid such surprises?
Thanks for any help.
Helmut
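One way to chase this down (a hedged sketch, not Read the Docs' official recipe): make the local build use the same math renderer and the same Sphinx version as the hosted one. Selecting the renderer explicitly in conf.py, and pinning Sphinx to your local version in a requirements file that Read the Docs is configured to install, removes two common sources of divergence:

    # conf.py -- select the math renderer explicitly so local and hosted
    # builds agree: sphinx.ext.mathjax renders formulas in the browser,
    # sphinx.ext.imgmath bakes them into images at build time.
    extensions = [
        'sphinx.ext.mathjax',  # pick one renderer and use it everywhere
    ]

Differences around alignment tabs (&) frequently come down to the two sides running different renderer or Sphinx versions, so the version pin matters as much as the extension choice.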

Related

How to configure GitHub Pages to skip a file that breaks the Jekyll build?

I have some systems that automatically write .md to _posts/.
As you know, the HTML begins below the YAML front matter; in these cases it is arbitrary HTML coming from external sources, which sometimes breaks the build. This all happens via the GitHub API and at some scale, so manually deleting a broken post is not an option.
Is there a way to configure or programmatically interact with Jekyll or GitHub Pages to ignore files or commits that break the build?
If it helps: this is best effort, and stability is more important than integrity. Each post is added in a separate commit. Of course the Jekyll build does not necessarily run on each commit, but in any case it would be fine to just drop all posts, for example by rolling back all commits since the last successful build.
The _drafts folder might be of use to you if you have posts that are unfinished. See the documentation.
In terms of ignoring posts that break the build, is there no way to locally test the Markdown-to-HTML conversion before pushing to GitHub Pages?
You can use a Continuous Integration (CI) service with GitHub (most have free plans).
The idea is to:
Commit your automatic extracts in a development branch.
Push this development branch to GitHub.
Have the Continuous Integration service build the branch and take action depending on the build result.
Depending on CI build success/error, you can choose to (a sketch of such a gate follows the list):
merge into master for publication if everything went OK
send you a mail if a branch build fails
make your site say hello world if the result is 42
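A minimal sketch of such a gate in Python, assuming the CI job has git and Jekyll available and that the branches are named development and master (all names here are placeholders, not part of any CI service's API):

    # ci_gate.py -- build the development branch with Jekyll and merge
    # it into master only if the build succeeds (best-effort publishing)
    import subprocess
    import sys

    def run(*cmd):
        """Run a command and return the completed process."""
        return subprocess.run(cmd, capture_output=True, text=True)

    build = run("bundle", "exec", "jekyll", "build")
    if build.returncode == 0:
        # publish: GitHub Pages rebuilds whatever lands on master
        run("git", "checkout", "master")
        run("git", "merge", "development")
        run("git", "push", "origin", "master")
    else:
        # leave master untouched and surface the error in the CI log
        sys.stderr.write(build.stderr)
        sys.exit(1)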

Tool for creating a static page showing folder structure apache-like (for use in GitLab)

I have some files stored on a web server, which serves them using Apache's default directory index (a plain listing of folders and files).
Now I'm planning to move all these folders to a GitLab repository. The problem is that GitLab is slow (please do not suggest GitHub, I have certainly checked that alternative), so I don't want to use GitLab's default way of showing files. Instead, I want something similar to the Apache way of showing files, as a static page, to use as a GitLab page, so that I would be able to see the structure of my repo and download files, but fast.
I wouldn't mind having to run a command to produce this static HTML page before each commit I make, but it would be good to know whether such a tool exists; otherwise I would have to create it (using Python, for example), though of course I would prefer an already created wheel over reinventing my own.
Thank you.
You could consider generating a page in Markdown with your Git repo (repository) structure as content, and then publishing it on the wiki side of your (here GitLab) project.
See "Is there a way to represent a directory tree in a Github README.md?" as an example, except you don't want to use your own repo's README.md but rather the wiki, in order to keep the repo and its representation (on the wiki) independent.
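If no ready-made wheel turns up, a small script along these lines could generate the Markdown tree before each commit (a sketch; the output file name and the decision to skip dot-directories are assumptions):

    # make_tree.py -- walk the working copy and emit a Markdown list
    # mirroring the directory structure, one bullet per file or folder
    import os

    def markdown_tree(root=".", out="tree.md"):
        lines = ["# Repository contents", ""]
        for dirpath, dirnames, filenames in os.walk(root):
            # skip hidden directories such as .git
            dirnames[:] = sorted(d for d in dirnames if not d.startswith("."))
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            indent = "  " * depth
            label = os.path.basename(dirpath) if rel != "." else "."
            lines.append(f"{indent}- **{label}/**")
            for name in sorted(filenames):
                lines.append(f"{indent}  - {name}")
        with open(out, "w") as f:
            f.write("\n".join(lines) + "\n")

    if __name__ == "__main__":
        markdown_tree()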

Mercurial common/local files

I've been an hg user for a couple of years and I'm happy about that!
I have to start a project of a kind I have never done before.
The idea is to develop a piece of software with a batch mode and a GUI.
So there will be sources common to both the batch and GUI modes, but each one will also contain specific sources.
Basically, I would like my coworkers to be able to clone the GUI version, work on it and commit changes.
Then, I'd like to be able to merge their changes to the common files into the batch version.
How can I deal with that?
I've been reading a bit on this topic, and I would really appreciate any help!
Thank you.
binoua
As the creator of subrepos, I strongly recommend against using subrepos for this.
While subrepos can be used for breaking up a larger project into smaller pieces, the benefits of this are often outweighed by the additional complexity and fragility that subrepos involve. Unless your project is going to be really large, you should just stick to one project repo for simplicity.
So what are subrepos for, then? Subrepos are best for managing collections of otherwise independent projects. For instance, let's say you're building a large GUI tool that wraps around an existing SCM. I'd recommend you structure it something like this:
scm-gui-build/     <- master build repo with subrepos:
  scm-gui/         <- independent repo for all the code in your GUI tool
  scm/             <- repo for the third-party SCM itself
  gui-toolkit/     <- a third-party GUI toolkit you depend on
  extensions/      <- some third-party extensions to bundle
    extension-foo/
Here you do all your work in a plain old repo (scm-gui), but use a master repo at a higher level to manage building/packaging/versioning/tagging/releasing the whole collection. The master scm-gui-build repo is just a thin wrapper around other normal repos, which means that if something breaks (like one of the repo's URLs goes offline) you can keep working in your project without problems.
(see also: https://www.mercurial-scm.org/wiki/Subrepository#Recommendations)
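For the record, wiring the layout above together takes a .hgsub file in the master repo that maps each subdirectory to its source; a minimal sketch with placeholder URLs:

    scm-gui = https://example.com/hg/scm-gui
    scm = https://example.com/hg/scm
    gui-toolkit = https://example.com/hg/gui-toolkit
    extensions/extension-foo = https://example.com/hg/extension-foo

Committing that file in scm-gui-build is what turns the listed paths into subrepos.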

Mercurial (Hg) and Binary Files

I am writing a set of Django apps and would like to use hg for version control. I would like each app to be independent of the others, so each app may have a directory for static media containing images that I would not want under version control. In other words, the binary files would not all be in one central location.
I would like to find a way to clone the repository that would include copies of the image files. It would also be great if, when I did a merge, there were some sort of warning when an image file exists in one repo but not the other.
Currently I use a Python script to find images and other binary files that are in one repo but not the other. But a lot of people must face this problem, so there must be a more robust and elegant solution.
One other thing... for reasons I do not want to go into, usually one of my repos is on a Windows machine and the other is on Linux, so a cross-platform solution would be nice.
Since Mercurial 2.0 the largefiles extension is included in the main distribution. That extension keeps and manages large files outside of the "normal" repository in a way that gives you the benefits of a DVCS without the drawback of exponential growth in size and processing time.
Other extensions that work along similar lines are SnapExtension and BigFilesExtension. However, those two are not distributed with Mercurial (you have to get them manually).
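To illustrate the largefiles workflow (the file name is a placeholder; since the extension ships with Mercurial 2.0, enabling it in your hgrc is enough):

    # in ~/.hgrc or the repo's .hg/hgrc
    [extensions]
    largefiles =

    # then mark binaries as large when adding them
    hg add --large static/media/header.png
    hg commit -m "track header image as a largefile"

The large files live outside the normal history and are fetched as needed, so clones stay small, and this works the same on Windows and Linux.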
Mercurial can track any kind of file; for binary files, if something changes then the whole file gets replaced, not just the changes.
On getting a warning if one repo doesn't contain a file: that's kind of the point of a DVCS, the repos are related but autonomous. You could always check and see what files were added during a sync or merge operation.
The current Mercurial book (by Bryan O'Sullivan) says that Mercurial stores diffs also for binary files. How efficient this is obviously depends on the nature of the changes to the binary files.

How can I retrieve only a subdirectory from a Mercurial Repository?

I'm trying to sell our group on using Mercurial as a source repository rather than VSS. In the process of updating our build scripts, I'm running into an issue trying to retrieve files from the Hg repository.
Our builds are automated with NAnt and currently work for local builds or builds from VSS (ie, pull the source as needed from VSS). I'm trying to update them to work with Mercurial as well.
Basically, when I'm working with single files, I don't have any issues since I can just use NAnt's 'get' task (after getting the appropriate revision hash) to retrieve the individual file.
The problem that I'm having is when I need to work with a directory (and subdirectories) of files that aren't at the root of the repository. I can't seem to figure out the proper commands to retrieve/copy a subdirectory from the repository to my 'working' directory for the builds. I've spent basically the whole afternoon trying to figure out how to do this with the Mercurial executables (so I can use a NAnt 'exec' task) and have hit a wall, so I figured I'd try posting here.
Can someone confirm whether this is possible, and provide some suggestions as to how I might be able to do this? I realize that Mercurial tracks changes by files and not directories, but it seems odd to me that this isn't available out of the box (from what I can tell).
If it's just not possible, the only workarounds I see are either maintaining NAnt fileset lists of expected files to work with (ugh!), or cloning the entire repository to a temporary directory and then copying the files from that source as needed (this feels like a kludge to me).
I realize that I could simply create another repository for the directory that I want to work with, but I'd prefer not to go that route, since I think it would increase the complexity of what I'm trying to do by a significant amount (I would have to apply this a large number of times for all of the different libraries that we build).
Mercurial doesn't let you get only part of a repository. You have to get the whole tree. It's much more whole-repo focused than svn is.
You could try segmenting your repository into multiple repos and managing them using the subrepos feature. Then you can pull the subdirectories independently.
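If the temporary-clone workaround ends up being the route, hg archive can at least narrow what gets exported, which keeps the build script fairly clean (the URL, revision, and paths below are placeholders):

    hg clone --noupdate https://example.com/hg/project tmp-clone
    hg -R tmp-clone archive --rev stable --include "path:libs/mylib" build-src

archive honours --include/--exclude patterns, so only the requested subdirectory lands in build-src, even though the clone itself is still the whole repository.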