I'm using Read The Docs for a project. Everything seems to be working well online. However, when I download the HTML for offline use, I find that the documentation is all crammed into a single HTML file (index.html). Is it possible to download the documentation so that it has the same look and feel as the online docs with separate, linked pages?
I tried changing the documentation type from the RTD Admin > Settings page between the three options (Sphinx Html, Sphinx HtmlDir, and Sphinx Single Page HTML), but none of these seem to visibly change either the online content or the downloaded HTML structure.
Python's documentation, generated with Sphinx, does have separate HTML files. Yet Read the Docs's own documentation also downloads as a single HTML file.
Am I missing something, or is this a limitation of Read The Docs?
My Read The Docs site is here: http://kiva.readthedocs.org/en/docs/
My GitHub Repository is here: https://github.com/nealkruis/kiva/tree/docs/docs
HTTrack does a good job of mirroring Read the Docs sites; for example, to mirror the Scrapy 1.0 docs:
httrack 'http://doc.scrapy.org/en/1.0/' -O scrapy-1.0
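For the Kiva docs linked in the question, the equivalent command would presumably be:
httrack 'http://kiva.readthedocs.org/en/docs/' -O kiva-docs
where -O names the local directory HTTrack mirrors into.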
Answer from ericholscher on #readthedocs IRC:
correct, we build our downloadable HTML as a single page by default
there's currently no setting to change that
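A workaround, assuming the repository follows the standard Sphinx layout (a docs/ directory containing conf.py), is to build the multi-page HTML yourself from the same source:
pip install sphinx
sphinx-build -b html docs/ docs/_build/html
The html builder produces the same separate, linked pages as the online version; open docs/_build/html/index.html to browse them offline.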
Related
I have a set of pre-generated HTML documentation files (provided via an external mechanism). These are fully standalone in their own right, but I'd like to integrate them into an existing portal.
Ideally, I'd like the existing site to take care of the (common) layout and simply embed the existing HTML into this layout. I've been trying to get it to work for the last few hours, to no avail.
Problems I've encountered (no specific order):
The pre-generated content already contains html/body/etc. tags (as mentioned, it is standalone documentation in its own right).
Redirection is no use, as it bypasses the view mechanism, losing the common layout.
I'm not really sure how to proceed, as I seem to have exhausted my googling ability on this matter. I'd appreciate any tips or pointers on concepts or terminology surrounding what I'm trying to do - I'm happy to do the leg work investigation as required.
Have you tried putting your HTML files in the wwwroot folder and, in the Configure method of Startup.cs, adding the line app.UseStaticFiles();?
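A minimal sketch of that suggestion, assuming an ASP.NET Core project with the documentation copied under wwwroot:
using Microsoft.AspNetCore.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Serve the pre-generated documentation as-is from wwwroot,
        // e.g. wwwroot/docs/index.html is served at /docs/index.html.
        app.UseStaticFiles();
    }
}
Note that files served this way bypass the view engine entirely, so this gives you the standalone pages but not the portal's common layout.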
If you really need to serve a static *.html file, you can read it in as a string:
var htmlString = System.IO.File.ReadAllText(@"path/to/your.html");
and then pass it to the view and render it with @Html.Raw(), but I do not recommend that approach. It is better to create a partial view and simply render it with @Html.Partial() (official docs).
So there is a lot out there about creating anchors in markdown, and creating internal table-of-contents-type anchors in a notebook. What I need though is the ability to access an anchor in my notebook on Github from an external source, e.g.:
https://github.com/.../mynotebook.ipynb#thiscell
I've got a number of interactive tutorials hosted this way, and a single manual from which I want to be able to link to sections of those notebooks. I can add the anchor tags to markdown cells just fine, using:
<a id='thiscell'></a>
but when I try using the link as I wrote above, it just loads the notebook at the top, as if there was no reference to an anchor.
GitHub renders notebooks using a separate domain, render.githubusercontent.com, and integrates the output in a nested frame. This means that any anchors on the GitHub URL won't work, because the framed document is a different URL entirely.
Moreover, the framed content is not easily re-usable, as the result is a cached rendering of the notebook with a limited lifetime. You can't rely on it sticking around for later linking!
So if you need to be able to link to sections in a notebook, you'd be far better off using the Jupyter notebook viewer service, https://nbviewer.jupyter.org/. It supports showing notebooks from any public URL including GitHub-hosted repositories and GitHub gists. You can also just enter your GitHub user name (or username/repository) for quick access.
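Assuming an anchor id of thiscell as in the question, the nbviewer link would then look like:
https://nbviewer.jupyter.org/github/<user>/<repo>/blob/<branch>/mynotebook.ipynb#thiscell
Because nbviewer serves the rendered notebook as a regular page rather than in a frame, the browser resolves the fragment against the page itself and scrolls to your anchor.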
This notebook viewer is far more feature-rich than the one GitHub uses. GitHub kills all embedded JavaScript, and strips almost all HTML attributes. Any embedded animations are right out. But the Jupyter nbviewer service supports those directly out of the box.
E.g. compare these two notebooks on nbviewer:
https://nbviewer.jupyter.org/github/mjpieters/adventofcode/blob/master/2018/Day%2020.ipynb
https://nbviewer.jupyter.org/github/mjpieters/adventofcode/blob/master/2018/Day%2021.ipynb
with the same notebooks on GitHub:
https://github.com/mjpieters/adventofcode/blob/master/2018/Day%2020.ipynb
https://github.com/mjpieters/adventofcode/blob/master/2018/Day%2021.ipynb
The first one contains an animation at the end, the second has a complicated table made easier to read by use of some HTML styling and anchor links.
I had the same problem. As a workaround, I delegated the rendering of my notebook to http://nbviewer.jupyter.org. It's just a matter of providing the notebook's public GitHub URL and clicking Go!
Of course, the internal links still don't work on GitHub, but I now have a functioning notebook somewhere on the web, which is what I actually wanted in the first place.
I hope this applies to your case too.
I am trying to edit the following page: http://tktruck.com/contact.aspx in order to get rid of the cat photos.
Apparently there is no contact.aspx file in the FTP, so I am having trouble figuring out how to edit this page's content.
Some additional information:
I have access to the back-end (FTP files). I have searched the FTP for contact.aspx, and I cannot find the file. I have tried searching the entire website for tags with the appropriate sources, as well. I found some code with the image tags, and removed those tags. When I uploaded the code to the server, the images were still there (and still are).
Does anyone know what I have to do to edit an aspx file, or at least have an idea on how to remove these photos?
You need to get access to the server on which the website is hosted. Note that with ASP.NET sites the URL /contact.aspx does not have to correspond to a physical contact.aspx file: URL routing or rewriting can map it to another handler, and the markup may be compiled into the site's DLLs or pulled from a database. That would explain both why the file does not appear over FTP and why editing the image tags you found had no visible effect.
I managed to capture the behavior of a complex web site in a webarchive. I would now like to turn that webarchive into a set of nested directories of HTML. Yet when I tried, both with Waf and with a commercial application bought on the Apple App Store, all I get is the nested directories with the HTML page at the bottom, and no images, no CSS, and no working links.
If you are interested the webarchive document is at:
http://www.miafoto.it/it/GiroMilano.webarchive
while the poor result of the extraction is at:
http://www.miafoto.it/it/Giromilano/Pagine/default.aspx
and the empty directories above.
In addition to the different look, the webarchive reproduces the same behavior as the official web site (when a listbox value is selected and the button is pushed), while the extracted version produces a page with no contents, loading itself rather than the official page.
As you may see, the webarchive is over 1 MB while the extraction is just a little over 1 KB.
What is wrong, and how can I carry out this apparently trivial task with usable results?
Thanks,
textutil -convert html example.webarchive
Be careful: the converted HTML (and its files) is created in the same folder as the webarchive!
Also, I had to open the .html in a text editor and fix the "file:///image.tiff" links (replacing "file:///" with "") so they point to relative paths; see the sed sketch below.
Also, not all browsers display .tiff images.
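A one-liner for that link fix-up, assuming macOS's BSD sed and a converted file named example.html (keep a backup, since -i edits in place):
sed -i '' 's|file:///||g' example.html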
I find that this WebArchiveExtractor.app works on my Mac (Mojave):
https://robrohan.github.io/WebArchiveExtractor/
I got around the issue by finding all the parameters being submitted by the page and submitting them from my script as well, ignoring the webarchive entirely.
To save HTML pages on a Mac, I use Chrome: download and install it, then save your page as HTML. Safari saves web pages in webarchive format, which for me is very hard to deal with.
Using the HTML5 File API, I am able to read text and XML files without any problems. I tried to read a .docx/.doc file with the same code, and it did not work. In my Chrome extension I need to open a .doc/.docx file in editable mode in Google Chrome. I would really like to know all the possible ways to achieve this. I found some extensions, like Google Docs Viewer, but they open files in preview mode. Please help me with this.
A .doc file is binary, and a .docx is a ZIP file containing a whole collection of XML files that make up a Word document, so neither can easily be read by your straight XML reader.
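You can see this for yourself from a shell; assuming a file named example.docx, the pieces are plainly visible inside the ZIP container:
unzip -l example.docx                     # list the XML parts in the archive
unzip -p example.docx word/document.xml   # print the main document body (XML)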
I don't think there are any native extensions or bits of code for Chrome to edit DOC or DOCX files, so you'd have to write your own - presumably, that's what the extension you're considering would do. You can use the Google docs viewer as a jumping off point - there's no difference between "preview mode" and "edit mode" other than one writes back to the file and the other doesn't. And you'd need to add the controls to modify the document on screen, which may be the larger hurdle.
If you can give some detail on where exactly you're stuck, that might help the community point you towards a solution, but a general "nothing does this for me" is likely to result in a little less help.
Good luck!
You can use jQuery for this.
You can use TypeWith.me, which is built with jQuery and lets you import/export .docx, .doc, .pdf, etc. files; check TypeWith.me and Private Pad.
Since it is open source, you can reuse its jQuery code for your own purposes.