How do I get the source code of the HTML template used by Read the Docs?
I got the Sphinx package from GitHub, but I don't know Python or how to proceed with that package.
I simply want the HTML and CSS files of that template; I will then modify them according to my requirements.
If you want to use the Read the Docs theme locally with Sphinx, you can clone/fork the code from the GitHub repository below.
Source: https://github.com/snide/sphinx_rtd_theme
If you're also building your docs using readthedocs.org, you'll need to add the following to your conf.py to avoid issues with the RTD build process:
import os

# on_rtd is whether we are on readthedocs.org; this line of code was
# grabbed from docs.readthedocs.org
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
if not on_rtd:  # only import and set the theme if we're building docs locally
    import sphinx_rtd_theme
    html_theme = 'sphinx_rtd_theme'
    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
Otherwise, readthedocs.org uses their theme by default, so no need to specify it.
Related
I am using the Sphinx autodoc and napoleon extensions to generate the documentation for my project (Qtools). This works well on my local machines. I am using Sphinx 3.1.2 (or higher). However, when I build the documentation on Read the Docs (RTD), only text added directly to the reStructuredText files that form the source of the documentation is processed. The docstrings that are supposed to be pulled in by autodoc do not appear in the HTML documentation generated by RTD. So for example in docs\source\section2_rsdoc.rst I have:
Response spectra
================
The response spectrum class
---------------------------
.. autoclass:: qtools.ResponseSpectrum
:members:
Response spectrum creation
--------------------------
.. autofunction:: qtools.calcrs
.. autofunction:: qtools.calcrs_cmp
.. autofunction:: qtools.loadrs
See also :func:`qtools.convert2rs` (converts a power spectrum into a response spectrum).
This results in:
Response spectra
The response spectrum class
Response spectrum creation
See also qtools.convert2rs (converts a power spectrum into a response spectrum).
In other words, all directives are apparently ignored, and hyperlinks to other functions are not added. I have examined several basic guidance documents such as this one, but I cannot figure out what I am doing wrong. RTD builds the documentation without any errors or warnings. In RTD advanced settings I have:
Documentation type: Sphinx HTML
Requirements file: requirements.txt
Python interpreter: CPython 3.x
Install Project: no
Use system packages: no
Python configuration file: blank
Enable PDF build: no
Enable EPUB build: no
I haven't touched any other settings.
In conf.py I have tried the following variations of line 15: sys.path.insert(0, os.path.abspath('.')), sys.path.insert(0, os.path.abspath('../..')) and the current sys.path.insert(0, os.path.abspath('../../..')). None of those made any difference.
I would be grateful for any help!
RTD builds the documentation without any errors or warnings
This is slightly incorrect. As you can see in the build logs, autodoc is emitting numerous warnings like this one:
WARNING: autodoc: failed to import class 'ResponseSpectrum' from module 'qtools'; the following exception was raised:
No module named 'qtools'
This has happened for all your variations of sys.path.insert, as you can see in some past builds.
Trying to make it work this way is tricky, since Read the Docs does some magic to guess the directory where your documentation is located, and also the working directory changes between commands.
Instead, there are two options:
Locate the directory containing conf.py (see How do you properly determine the current script directory?) and work out the package path relative to it (see the sketch after this list).
Invest some time into making your code installable using up-to-date Python packaging standards, for example putting all your sources inside a qtools directory, and creating an appropriate pyproject.toml file using flit.
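A minimal sketch of the first option, assuming the qtools package sits two levels above conf.py (the number of '..' components is a guess; adjust it to your actual layout):

import os
import sys

# Resolve the package path relative to conf.py itself rather than the
# working directory, which changes between Read the Docs build commands.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')))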
I'm working on the documentation (https://global-coffee-data-standard.readthedocs.io) of my JSON schema (https://raw.githubusercontent.com/andrejellema/GlobalCoffeeDataStandard/master/schema/global-coffee-data-standard.schema.json)
I have the basics working (thanks to a lot of help from this forum), but now I would like to include the docson widget to display my schema more beautifully (https://global-coffee-data-standard.readthedocs.io/en/latest/explanation.html#id13).
I've read this page https://threesixtygiving-standard.readthedocs.io/en/latest/_static/docson/README/ and I'm wondering how to install docson locally, but more importantly on Read the Docs.
Do I need to run npm i docson locally? If so, which files do I commit to my _static folder so Read the Docs can work with it as well?
Or can I put some magic in conf.py to let Sphinx handle it?
EDIT
I tried adding the docson files to my _static folder and it seems to work when I add this code to my ReST file:
<script src="_static/docson/js/widget.js" data-schema="https://raw.githubusercontent.com/andrejellema/GlobalCoffeeDataStandard/master/schema/global-unique-id.json"></script>
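(For reference, a raw script tag like this goes into a reStructuredText file through the raw directive; a minimal sketch, using the same widget path as above:)

.. raw:: html

   <script src="_static/docson/js/widget.js" data-schema="https://raw.githubusercontent.com/andrejellema/GlobalCoffeeDataStandard/master/schema/global-unique-id.json"></script>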
But only when I add just one docson widget. When I add more I get this error in the console:
only one instance of babel-polyfill is allowed.
So I'm assuming this is not the correct workflow.
What is the correct workflow to add multiple docson widgets to my page?
I am developing a web component using Polymer v3, and need to include some custom elements defined in legacy Polymer 2 components in the template HTML of my new component.
Since HTML imports are no longer supported in Polymer 3, what approach should I take to include them? If I were using Polymer 2 I could just add the following in my component's HTML file:
<link rel="import" href="../my-legacy-component.html">
I have tried adding the above link into the template HTML of my component, but it appears that doesn't work. I have also tried various import commands to reference the JS files inside the legacy component directly, but received various inscrutable JS errors so I'm not sure if that is the correct way to go either.
I can't believe there isn't a simple way to do this - would the Polymer team really introduce a new version of the library that is completely incompatible with all the components created using older versions?
Have you tried using polymer-modulizer?
Modulizer performs many different upgrade tasks, like:
Detects which .html files are used as HTML Imports and moves them to .js
Rewrites <link rel="import"> in HTML to import statements in JS.
Removes "module wrappers" - IIFEs that scopes your code.
Converts bower.json to package.json, using the corresponding packages on npm.
Converts "namespace references" to the proper JS module import, ie: Polymer.Async.timeOut to timeOut as imported from #polymer/polymer/lib/util/async.
Creates exports for values assigned to namespace references, i.e., Foo.bar = {...} becomes export const bar = {...}.
Rewrites namespace objects - an object with many members intended to be used as a module-like object, to JS modules.
Moves Polymer element templates from HTML into a JS template string.
Removes <dom-module>s if they only contain a template.
Moves other generic HTML in the document into a JS string and creates it when the module runs.
More on GitHub.
I ran into the same problem with the js-yaml module earlier. I don't have enough reputation to comment yet, so I'll write it down here.
Run sudo npm install -g js-yaml. This installs the package the tool needs.
Then, at the root of your project, run modulizer --import-style name --out . to convert your component from Polymer 2 to Polymer 3. The --import-style name option tells the tool to use package names instead of paths, and --out tells it which directory to write the converted files to.
After that, if no errors appear, try serving it with polymer serve --module-resolution=node. Since we are now using node modules, we have to provide the --module-resolution=node option.
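In shell form, the whole sequence from above (run from the project root):

$ sudo npm install -g js-yaml
$ modulizer --import-style name --out .
$ polymer serve --module-resolution=node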
I am setting far-future expires headers for my CSS/Javascript so that the browsers don't ever ask for the files again once they get cached. I also have a simple versioning mechanism so that if the files change, the clients will know.
Basically I have a template tag and I do something like
<script type="text/javascript" src="{{ MEDIA_URL }}{% versioned "javascript/c/c.js" %}"></script>
which will become
<script type="text/javascript" src="http://x.com/media/javascript/c/c.min.js?123456"></script>.
The template tag opens a file javascript/c/c.js.v where it finds the version number and appends it to the query string. The version is generated by a shell script (run manually for now; I will probably add a pre-commit hook) which checks whether the file has changed (using git diff).
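A minimal sketch of such a tag, assuming the .v convention described above (the names are illustrative, and the .js to .min.js swap is left out):

import os

from django import template
from django.conf import settings

register = template.Library()

@register.simple_tag
def versioned(path):
    # Read the version from the sibling "<path>.v" file written by the
    # shell script, and append it as a cache-busting query string.
    version_file = os.path.join(settings.MEDIA_ROOT, path + '.v')
    try:
        with open(version_file) as f:
            version = f.read().strip()
    except IOError:
        return path  # no version file yet; fall back to the plain path
    return '%s?%s' % (path, version)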
This is all working fine, EXCEPT:
I want to implement the same kind of versioning for images as well. But images can be referenced from CSS - which is a static file (served by nginx) - so no template tag there.
What is a better approach for file versioning?
Alternatively, I am thinking about replacing the template tag with a middleware which changes all links before returning the response. That is better than the template tag, which can be mistakenly omitted. But still doesn't solve the issue of images referenced from CSS.
Also, I'm aware that having the version as part of the query string might cause trouble with certain proxies not caching the file, so I am considering making the version part of the filename instead, for example javascript/c/c.123456.js.
Note: It looks like there is no way to solve this issue using Django (obviously - since I don't even serve the CSS through Django). But there has to be a solution, perhaps involving some nginx tricks.
Stylesheet Assets
For your stylesheet referenced assets, you're much better off using Sass & Compass. Compass has a mixin that will automatically add version query parameters on the end of static assets referenced within a stylesheet. The version number only changes when you rebuild the stylesheet (which is trivial with compass watch while you develop locally).
Template Assets
For other files, I would actually use a post-pull hook of some kind that rewrites a python module whose sole purpose is to contain the current version.
/var/www/aweso.me/
    ./files/
    ./private-files/
    ./static/
    ./project/
        ./manage.py
        ./fabfile.py
        ./.gitignore
        ./base/
            ./__init__.py
            ./wsgi.py
            ./settings/
                ./__init__.py
                ./modules/
                    ./__init__.py
                    ./users.py
                    ./email.py
                    ./beta.py
                    ./redis.py
                    ./haystack.py
                ./version.py
                ./default.py
                ./local.py
                ./live.py
Your post-pull hook would create:
/var/www/aweso.me/project/base/settings/version.py
which would contain the latest (or previous) git commit hash:
__version__ = "0763j34bf"
Then, with a simple from .version import __version__ as ApplicationVersion in your settings.live, your template tag can use from settings import ApplicationVersion to write that query parameter as the cache buster.
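A minimal sketch of such a hook, assuming a git checkout and the layout above (the hook mechanism and exact paths are illustrative):

#!/usr/bin/env python
# Illustrative post-pull hook: write the current commit hash into
# version.py so templates can use it as a cache buster.
import subprocess

commit = subprocess.check_output(
    ['git', 'rev-parse', '--short', 'HEAD']).decode().strip()

with open('/var/www/aweso.me/project/base/settings/version.py', 'w') as f:
    f.write('__version__ = "%s"\n' % commit)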
We are using this simple template tag to generate version numbers based on file modification time:
import os
import posixpath
import stat
import urllib

from django import template
from django.conf import settings
from django.contrib.staticfiles import finders

register = template.Library()

@register.simple_tag
def staticfile(path):
    normalized_path = posixpath.normpath(urllib.unquote(path)).lstrip('/')
    absolute_path = finders.find(normalized_path)
    if not absolute_path and getattr(settings, 'STATIC_ROOT', None):
        absolute_path = os.path.join(settings.STATIC_ROOT, path)
    if absolute_path:
        return '%s%s?v=%s' % (settings.STATIC_URL, path, os.stat(absolute_path)[stat.ST_MTIME])
    return path
For pre-1.3 Django there is an even simpler version of this tag:
@register.simple_tag
def staticfile(path):
    file_path = os.path.join(settings.MEDIA_ROOT, path)
    url = '%s%s?v=%s' % (settings.MEDIA_URL, path, os.stat(file_path)[stat.ST_MTIME])
    return url
Usage:
<link rel="stylesheet" href="{% staticfile "css/style.css" %}" type="text/css" media="screen" />
I will add another step to my pre-commit script to replace all direct links in the minified CSS with links to versioned files.
It seems there is no better way to do it. If you think of one, let me know and I'll consider marking that answer as accepted.
Thanks for your comments!
This might help as well: http://www.fanstatic.org/
I think a simple solution might be:
Write your CSS files as Django templates.
Write a Django command to render your CSS templates and store them somewhere accessible (see the sketch below).
Call this command in your deployment script.
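A minimal sketch of such a command, assuming the CSS templates are resolvable by the template loaders under a css-templates/ directory (the module path, directory names, settings, and context values are all illustrative):

# myapp/management/commands/rendercss.py -- hypothetical module path
import os

from django.conf import settings
from django.core.management.base import BaseCommand
from django.template.loader import render_to_string


class Command(BaseCommand):
    help = 'Render CSS templates into plain CSS files that nginx can serve'

    def handle(self, *args, **options):
        src = os.path.join(settings.BASE_DIR, 'css-templates')  # assumed location
        out_dir = os.path.join(settings.STATIC_ROOT, 'css')
        os.makedirs(out_dir, exist_ok=True)
        for name in os.listdir(src):
            # 'css-templates/<name>' must be on the template loader path
            rendered = render_to_string('css-templates/' + name,
                                        {'version': '123456'})  # your version value
            with open(os.path.join(out_dir, name), 'w') as f:
                f.write(rendered)

The deployment script would then call python manage.py rendercss before the web server starts serving the freshly rendered files.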
I've seen the suggestion of using Mylyn WikiText to convert wiki pages to HTML from this question, but from reading the front page of the site alone I'm not sure it's what I'm looking for; I'll look into it further. I would prefer a Trac plugin, though, so I could initiate the conversion from within the wiki options, but the plugins at Trac-Hacks all export single pages only, whereas I want to dump all formatted pages in one go.
So, is there an existing Trac plugin or stand-alone application that will meet my requirements? If not, where would you point me to start looking at implementing that functionality myself?
You may find some useful information in the comments for this ticket on trac-hacks. One user reports using the wget utility to create a mirror copy of the wiki as if it was a normal website. Another user reports using the XmlRpc plugin to extract HTML versions of any given wiki page, but this method would probably require you to create a script to interface with the plugin. The poster didn't provide any example code, unfortunately, but the XmlRpc Plugin page includes a decent amount of documentation and samples to get you started.
If you have access to a command line on the server hosting Trac, you can use the trac-admin command like:
trac-admin /path/to/trac wiki export <wiki page name>
to retrieve a plain-text version of the specified wiki page. You would then have to convert the wiki syntax to HTML yourself, but there are tools available to do that.
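For example, to dump a single page to a file (the page name is illustrative; use your own Trac environment path):

trac-admin /path/to/trac wiki export WikiStart WikiStart.txt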
For our purposes, we wanted to export each of the wiki pages individually without the header/footer and other instance-specific content. For this purpose, the XML-RPC interface was a good fit. Here's the Python 3.6+ script I created for exporting the whole of the wiki into HTML files in the current directory. Note that this technique doesn't rewrite any hyperlinks, so they will resolve absolutely to the site.
import os
import xmlrpc.client
import getpass
import urllib.parse


def add_auth(url):
    host = urllib.parse.urlparse(url).netloc
    realm = os.environ.get('TRAC_REALM', host)
    username = getpass.getuser()
    try:
        import keyring
        password = keyring.get_password(realm, username)
    except Exception:
        password = getpass.getpass(f"password for {username}@{realm}: ")
    if password:
        url = url.replace('://', f'://{username}:{password}@')
    return url


def main():
    trac_url = add_auth(os.environ['TRAC_URL'])
    rpc_url = urllib.parse.urljoin(trac_url, 'login/xmlrpc')
    trac = xmlrpc.client.ServerProxy(rpc_url)
    for page in trac.wiki.getAllPages():
        filename = f'{page}.html'.lstrip('/')
        dir = os.path.dirname(filename)
        dir and os.makedirs(dir, exist_ok=True)
        with open(filename, 'w') as f:
            doc = trac.wiki.getPageHTML(page)
            f.write(doc)


__name__ == '__main__' and main()
This script requires only Python 3.6+. Download it, save it to an export-wiki.py file, then set the TRAC_URL environment variable and invoke the script. For example, on Unix:
$ TRAC_URL=http://mytrac.mydomain.com python3.6 export-wiki.py
It will prompt for a password. If no password is required, just hit enter to bypass. If a different username is needed, also set the USER environment variable. Keyring support is also available but can be disregarded.