One common use case when writing documentation is to include examples of command output. Some tools also produce ANSI (colored) output, so there is a real need to show output in its original colors.
Still, I was not able to get command output into code blocks in MkDocs, something that worked quite well with Sphinx via the command-output extension.
Any idea on how this can be achieved? I really want to avoid the screenshot route.
I want to make a button in HTML that would do the equivalent of entering a specific command into the Chrome console. How would you go about doing this?
Before you read this, know that using such methods, and especially eval(), is not recommended at all due to a massive range of problems, including major security holes and performance issues (please read "everything wrong with eval" and the eval docs as well). Therefore, please consider other methods, or share more details so we can suggest a better solution.
For your case, most code should be executable with the example below, and the output will be shown in the console. (document.querySelector('button') can be replaced with your desired code.)
// Evaluate an arbitrary code string and log its result
// (Function() is still as dangerous as eval() here).
function exec(code) {
  console.log(Function(`return (${code})`)());
}

<button onclick="exec(`document.querySelector('button')`)">Test</button>
As far as I know, the code above is extremely vulnerable.
I have an external site from which I want to download a zipped CSV file. Currently, I'm downloading it unzipped, saving it to disk, then unzipping it, saving the unzipped file to disk, and then reading the unzipped file with the CSV reader. Several useless steps in the process can be trimmed out, and I went on my way to do so.
This amazing answer helped me get going. I tried to use the first option linked there (GZIPInputStream), but I get a "Not GZIP format" error, so I suppose I have to go with the second option.
This is my current code, and it does what I want it to do:
(defn download-zipped-stream! []
  (:body (clj-http.client/get "www.example.com" {:as :stream})))

(with-open [stream (java.util.zip.ZipInputStream. (download-zipped-stream!))]
  (.getNextEntry stream) ; position the stream at the first zip entry
  (doall (clojure.data.csv/read-csv (clojure.java.io/reader stream) :separator \;)))
I literally got to this by trial and error. There are mainly three things I'd like to change / understand about this code.
Ideally, I would want to break my code into two parts: one to download and unzip the content, returning a stream; the reason being that I want to decide later whether to read it as a CSV directly or write it to disk (I don't want to lose this option because, during development, it is much easier to read a pre-downloaded CSV file than to download the big content every single time). It turns out that if I try to access the stream outside of the with-open call, I get a "stream closed" error (which, from what I understand, makes total sense).
In the above code, I have to call this .getNextEntry, or I get an empty list. As someone who is striving to write functional code, this bothers me because, from what I can understand, I'm dealing with state here: my stream object looks mutable, which is something I really don't want. Isn't there a way to work around this step and simply not have it there?
I tried to call the read-csv method directly on the stream object, but read-csv doesn't really know how to handle ZipInputStreams, apparently. Seeing this, I simply and hopefully threw an io/reader call in between, and it worked. I don't know if this is the best approach, though. Is it correct?
I'm quite new to Clojure, and I'm completely clueless about Java in general, so, as you can see, my knowledge of those stream objects is pretty limited. I tried to read something about them on the Java side, but I quit because I was not sure how much of it would be useful for someone learning Clojure, so any pointers are also appreciated.
I think you are on the right track. Suggestions to consider:
Consider using wget to manually download the *.csv.gz file to your local disk. Then, just open that local file instead of using clj-http.client/get.
I haven't played much with ZipInputStream, but if using .getNextEntry() seems to be required, just go with it.
The examples for read-csv show using a Reader to give access to the input file, so this is the expected behavior (see the sketch after this list).
This template project shows how I like to organize a Clojure project & source code. Be sure to peruse the list of documentation provided.
Don't forget to utilize cljdoc.org for looking up Clojure library API docs. For example, see the API docs for data.csv.
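To address the stream-closed problem, one option is to fully realize the rows before with-open closes the stream. A minimal sketch, reusing download-zipped-stream! from the question (the name fetch-csv-rows! is mine):

(defn fetch-csv-rows!
  "Download, unzip, and fully realize the CSV rows while the stream is open."
  []
  (with-open [stream (java.util.zip.ZipInputStream. (download-zipped-stream!))]
    (.getNextEntry stream)
    ;; doall forces the whole lazy seq before with-open closes the stream
    (doall (clojure.data.csv/read-csv (clojure.java.io/reader stream) :separator \;))))

The caller can then decide whether to spit the realized rows to disk or consume them directly, without ever touching a closed stream.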
Update
You may also want to review this answer.
Use https://github.com/techascent/tech.ml.dataset, optionally with https://scicloj.github.io/tablecloth/index.html (a dplyr-like API for TMD).
It also has the advantage of being extremely fast and able to handle datasets that can't fit in memory, and it talks SQL, Arrow, et al. Join the conversation about it here:
https://clojurians.zulipchat.com/#narrow/stream/151924-data-science/topic/tech.2Eml.2Edataset
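As a rough sketch of what loading the CSV could look like with TMD (assuming the tech.v3.dataset namespace of recent releases; the file name is a placeholder):

(require '[tech.v3.dataset :as ds])

;; ->dataset reads CSV directly from a file path or URL;
;; :separator sets the delimiter
(def rows (ds/->dataset "data.csv" {:separator \;}))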
Objective
Scrape an HTML table from the Warframe wikia.
Background
I am trying to get the information from a table on the Warframe wikia, the Mods List table. To achieve this objective I read the HTML-parser on Node.js topic and concluded that using YQL was my best option.
Code
By using the Google Chrome dev tools and two Chrome extensions called CSS and XPath checker and XPath Helper, I was able to pinpoint the exact location of the table I am looking for with the following XPath query:
//*[@id="mw-content-text"]/div[33]/div/div[1]/table/tbody
Now, Chrome says this is the correct path, and the plugins I am using suggest it as well.
Problem
The problem is that when I use YQL, the result in JSON is something utterly and completely different from the table I am expecting. In fact, it returns a different table together with miscellaneous data.
I am baffled as to why this is happening. The wikia is a simple HTML page with little to no dynamic information whatsoever, so I really can't understand why I am getting erroneous results.
What could the problem be?
Unfortunately, YQL does not work properly with pages that are loaded over time, as is the case with the wikia.
So, even though the XPath is correct, when Yahoo makes the first (and only) request, it receives incomplete HTML and never completes it.
To fix the issue, I decided instead to parse the HTML locally in my Node.js server using the request and cheerio npm packages.
The first package downloads the full page HTML, and the second parses it for the information I am looking for.
An effective solution that, instead of relying on a third-party tool, moves all the work to my server.
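The core of the approach looks something like this sketch (the URL and the table selector are placeholders, not the exact ones I used):

const request = require('request');
const cheerio = require('cheerio');

// download the full page HTML, then parse it with cheerio
request('http://warframe.wikia.com/wiki/Mods', (error, response, html) => {
  if (error) return console.error(error);

  const $ = cheerio.load(html); // jQuery-like API over the downloaded HTML

  // placeholder selector: narrow it down to the Mods List table
  $('#mw-content-text table tr').each((i, row) => {
    const cells = $(row).find('td').map((j, cell) => $(cell).text().trim()).get();
    if (cells.length > 0) console.log(cells);
  });
});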
Hope this helps someone in the future!
I work for a middleware company. We would like to integrate Cppcheck into our build system to help prevent errors and issues in our code. Our codebase is big, and it's distributed across several modules (each module in a different folder). These modules have many dependencies between them.
When running cppcheck, we want to run it only once over the whole code to give the tool the whole view. However, some modules are not related to the core ones, and we want to exclude those modules from the analysis. Besides, we have implemented APIs for different languages, so, for example, we have some C++ modules that we would like to analyze separately from the C modules.
We have basically two options: 1) call cppcheck with a list of the modules that we want to analyze, or 2) call cppcheck from the top-level folder of the code and use -i options to ignore all the modules that shouldn't be analyzed (simplified versions of both are sketched below).
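For reference, the two invocations look roughly like this (module names are placeholders; cppcheck writes its XML results to stderr):

# Option 1: list the modules to analyze explicitly
cppcheck --xml --xml-version=2 moduleA moduleB 2> report.xml

# Option 2: run from the top-level folder and exclude modules with -i
cppcheck --xml --xml-version=2 -i unrelatedModule -i cApiModule . 2> report.xml

# In both cases the HTML report is then generated from the XML
cppcheck-htmlreport --file=report.xml --report-dir=html --source-dir=.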
Both approaches worked fine up to the point of creating the XML report. The problem appears when calling cppcheck-htmlreport. We observed that no index.html or stats.html were generated. Besides, only some of the results appearing in the XML were translated into HTML reports. For many results, the HTML pages were not generated.
A memory problem can be ruled out; we already verified this. Besides, it's not that the tool starts creating HTML reports from the XML results consecutively and then stops. What actually happens is that the HTML reports skip around: the HTML report for error number 1 in the XML is created, then maybe the next one is number 5, and so on.
We called cppcheck-htmlreport with the --source-dir option pointing to the top-level folder of the code. I think the problem may be caused by this. I tried to call cppcheck just from the top-level folder, with no -i options, and then the HTML reports were generated without issues. So it looks like the XML created by using -i options cannot be correctly understood by cppcheck-htmlreport.
Is there a way to provide -i options to cppcheck-htmlreport as well? I think this could solve the problem...
I have also noticed that the problem only seems to appear when many modules and a lot of code are analyzed. When analyzing only a few modules, the HTML report was correct, even though we still called cppcheck-htmlreport providing the top-level folder as --source-dir.
Is this a known issue in cppcheck HTML generator? Is there any way to solve this?
Any advice is very much appreciated.
Thanks,
Sonia
I want to be able to compare the results I get from running OCR on the same document three times. Are there any tools out there that I can use to make this happen?
I would like to compare the three documents and, based on which characters are the same 3/3 times or 2/3 times, create a fourth document with the output of this decision. I am using ABBYY FineReader, which has given me great results, but I am trying to do everything I can to get to 100%.
I know Microsoft Word has a "compare documents" function, and I would like to be able to do this type of analysis on a larger scale with a robust algorithm.
Any ideas?
Thanks for your time!
If the output is a simple text file, you could use the diff command and a simple shell script to compare them. You could probably then use a slightly more complicated shell script to parse through the output file and create a final document.
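For example, here is a minimal line-level sketch (a true character-level 2-of-3 vote would take more work; run1.txt through run3.txt are assumed to be the three OCR outputs):

# merge three OCR runs by majority vote, line by line
paste run1.txt run2.txt run3.txt | while IFS=$'\t' read -r a b c; do
  if [ "$a" = "$b" ] || [ "$a" = "$c" ]; then
    echo "$a"    # at least two runs agree with the first
  elif [ "$b" = "$c" ]; then
    echo "$b"    # runs two and three agree
  else
    echo "$a"    # no majority; fall back to the first run
  fi
done > merged.txt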