Looking for a way to exclude files from geninfo/genhtml HTML output

We are trying to use geninfo and genhtml (an alternative to gcovr, see here) to produce an HTML page from the coverage data provided by gcov.
geninfo creates lcov tracefiles from gcov's *.gcda files
genhtml generates HTML files from the above tracefiles
However, the end result includes not only our code, but also files from /usr/include.
Does anyone know of a way to exclude these?
I tried looking at the man page but could not find anything: http://linux.die.net/man/1/geninfo

If you're just looking to ignore files from /usr/include, a better option is probably "--no-external", which is intended for exactly this purpose.
lcov --no-external -d $(BLD_DIR) --capture -o .coverage.run

You can use the lcov -r option to remove those files you aren't interested in.
lcov -r <input tracefile> /usr/include/\* -o <output tracefile>
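For reference, a minimal end-to-end sketch of the whole pipeline using the -r approach (the ./build directory and the tracefile names here are assumptions; adjust them for your tree):
lcov --capture --directory ./build --output-file coverage.info
lcov -r coverage.info '/usr/include/*' -o coverage.filtered.info
genhtml coverage.filtered.info --output-directory coverage-html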


Format HTML automatically from command line (Similar to eslint)

Is it possible to format HTML automatically with a tool, similar to the way eslint formats JavaScript? Why does it seem that there aren't many customizable options you can integrate into your development pipeline?
I would like to format HTML in the following way, automatically, with a command run from the terminal:
<input
class="input-style"
placeholder="Replace me!"
/>
So, for example, I could run npm run html-lint and it would fix the syntax in HTML files and warn about cases it can't fix.
js-beautify also works on HTML.
npm install js-beautify
js-beautify --type html file.html
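To apply it to a whole tree of files in place, a sketch (the -r flag rewrites each file; the path and extension are assumptions):
find . -name '*.html' -exec js-beautify --type html -r {} \;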
Note that all this beautifying increases the file size substantially. The indentation is great for revision and editing, not so much for hosting. For that reason you might find html-minifier equally useful.
I personally think tidy is a fantastic option for tidying up HTML files. Check out Tidy.
Maybe what you are looking for is Prettier. It also supports a CLI, and you can even create a config for it; see the complete documentation here: Prettier CLI.
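A sketch of wiring Prettier up as the npm run html-lint command from the question (the script name comes from the question; the src path is an assumption):
npm install --save-dev prettier
Then add a script to package.json:
"scripts": {
  "html-lint": "prettier --write \"src/**/*.html\""
}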
I hope this helps.
I Googled for "Package json pretty print html" and got the following:
https://www.npmjs.com/package/pretty
(It's not clear whether this can be included in package.json)
There's also this (appears to be a command-line tool):
https://packagecontrol.io/packages/HTML-CSS-JS%20Prettify

LCOV to exclude entire packages from code coverage analysis

I'm using LCOV as my graphical means of code coverage, to tell me how much of my code I've tested. However, it's including folders of code which I don't care about, and that's making my coverage lower than it should actually be.
Is there a way to exclude entire directories, so that I can ignore a bunch of .cpp files I don't care about? I know about --remove, but it doesn't seem to work for this purpose. I want to exclude all folders matching this pattern:
Src/GeneralSubSystems/GA/ except for Iterators
Here are the directories in question:
Src/GeneralSubSystems/GA/Iterators (I want to include this one, but exclude everything else)
Src/GeneralSubSystems/GA/Converters
Src/GeneralSubSystems/GA/Utils
Src/GeneralSubSystems/GA/Models
Src/GeneralSubSystems/GA/Collapse
Src/GeneralSubSystems/GA/Interview
Src/GeneralSubSystems/GA/Misc1
Src/GeneralSubSystems/GA/Misc2
Src/GeneralSubSystems/GA/Misc3
Src/GeneralSubSystems/GA/Misc4
Src/GeneralSubSystems/GA/Misc5
Here is my current usage:
lcov --gcov-tool /usr/bin/gcov --capture --no-checksum --directory /jenkins/workspace/TCONVENGINE-INSTRUMENTED-BUILD/TCONV/Main/targs/Src --directory /jenkins/workspace/TCONVENGINE-INSTRUMENTED-BUILD/TCONV/Main/targs/Component --output-file ${WORKSPACE}/tps_coverage.info
lcov --remove ${WORKSPACE}/tconv_coverage.info '*/ThrdPrty/*' '*/Src/Low/*' '*/Src/TCCP-C/*' '*/Src/Tool/*' '*/zinAttInterviewDisassembler.*' '/usr/*' -o ${WORKSPACE}/tconv_coverage.info
genhtml --prefix /jenkins/workspace/TCONVENGINE-INSTRUMENTED-BUILD/TCONV/Main --title "TCONV Engine Coverage Analysis" --output-directory ${WORKSPACE}/lcov --num-spaces 3 ${WORKSPACE}/tps_coverage.info
Any help or assistance would be much appreciated. Thanks in advance, everyone.
It might help to add two backslashes before * in the remove list.
E.g. instead of
'Src/GeneralSubSystems/GA/Utils/*'
use
'Src/GeneralSubSystems/GA/Utils/\\*'
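If escaping alone doesn't solve it, one approach that should handle "everything under GA/ except Iterators" is to remove the whole subtree and then merge the Iterators coverage back in from the original capture. A sketch with placeholder tracefile names (the wildcard patterns assume absolute paths, as in your own --remove call):
lcov --remove coverage.info '*/Src/GeneralSubSystems/GA/*' -o coverage.trimmed.info
lcov --extract coverage.info '*/Src/GeneralSubSystems/GA/Iterators/*' -o iterators.info
lcov -a coverage.trimmed.info -a iterators.info -o coverage.final.info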

Doxygen RTF disable Index (at end of document)?

Using doxygen 1.8.4 on Ubuntu 12.04
Generating for C/C++ source into an RTF file.
I'd like to disable the generation of the Index at the end of the document.
There are many hits for DISABLE_INDEX, but that is the index at the top of HTML pages, not the main index at the end of the file. I've also searched the configuration documentation for "index", and none of the hits seem to be about that particular index.
Update: This is also set to NO:
ALPHABETICAL_INDEX = NO
I looked in the DoxygenLayout file and there doesn't seem to be anything specific about the Index section. There are sub-indexes for namespaces, classes, and files, but nothing that I can see for the Index itself. I'm not even sure whether the DoxygenLayout file is used for RTF files, because of this comment:
<!-- Navigation index tabs for HTML output -->
Any help or pointers will be greatly appreciated!
TIA
John
Well, this isn't an answer for this exact case, but it does show how to disable the generation of indexes and the table of contents for LaTeX/PDF output.
set GENERATE_LATEX=YES and MAKEINDEX_CMD_NAME = echo
run doxygen to generate the latex file refman.tex
cd into the output directory named in LATEX_OUTPUT (typically "latex")
copy the Makefile to another directory and edit it:
remove all calls to "pdflatex refman" except the first
remove the loop
remove all calls to "echo refnam.idx"
It will look something like:
all: refman.pdf

pdf: refman.pdf

refman.pdf: clean refman.tex
	pdflatex refman

clean:
	rm -f *.ps *.dvi *.aux *.toc *.idx *.ind *.ilg *.log *.out *.brf *.blg *.bbl refman.pdf
cd into the output directory again and invoke the modified Makefile
cd latex
make -f ../../doc/Makefile
take a look at refman.pdf. The Table of Contents is gone, the Index is gone.
Caveat: this works for LaTeX output but does not work for RTF.
For my project, I've converted back to using LaTeX, so this is a solution for me...

Want to convert all <tt> tags to <code> in a large hierarchy of HTML files

I have nearly 100 HTML files that use the <tt> tag to mark up inline code, which I'd like to change to the more meaningful <code> tag. I was thinking of doing something on the order of a massive sed -i 's/<tt>/<code>/g' command, but I'm curious whether there's a more appropriate industrial mechanism for changing tag types across a large HTML tree.
The nicest thing you can do is to use XMLStarlet:
xml ed -r //tt -v code file.html
It is freakishly powerful. See http://xmlstar.sourceforge.net/ and http://www.ibm.com/developerworks/library/x-starlet.html
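To run the rename across the whole tree in place, a sketch (one caveat: xmlstarlet only parses well-formed XML/XHTML, so plain HTML may need tidying first):
find . -name '*.html' -exec xmlstarlet ed --inplace -r '//tt' -v code {} \;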
If you are in a Linux environment, then sed is a very easy, short, and fast way to do it.
Corrected command:
SAVEIFS=$IFS
IFS=$'\n'
for f in $(find . -name "*.htm*"); do sed -i 's/tt>/code>/g' "$f"; done
IFS=$SAVEIFS
Some text editors or IDEs also allow you to do a search and replace across directories, with a filter on filename.
For one-time tasks like this I use UltraEdit on Windows. UE has a find-and-replace-in-files function that works great for this: I point it at the top of the directory tree containing the files I want to change, tell it to process sub-directories, give it the extension of the files I want to change, tell it what to change and what to change it to, and go.
If you have to script this on Linux, then I think the sed solution or a Perl/PHP script will work great.

What to use to check html links in large project, on Linux?

I have a directory with more than 1000 .html files, and would like to check all of them for bad links, preferably from the console. Can you recommend a tool for such a task?
You can use wget, e.g.
wget -r --spider -o output.log http://somedomain.com
At the bottom of the output.log file it will indicate whether wget has found broken links. You can parse that using awk/grep.
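For example, a rough one-liner to pull that summary out of the log (the exact wording of wget's report can vary between versions):
grep -i -A 10 'broken link' output.log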
I'd use checklink (a W3C project)
You can extract links from HTML files using the Lynx text browser. Bash scripting around this should not be difficult.
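A rough sketch of that idea, dumping each page's link list with Lynx and probing each URL with curl (the awk filter and the curl flags are assumptions to adapt):
lynx -dump -listonly page.html | awk '/^ *[0-9]+\./ {print $2}' | while read -r url; do
    curl -s -o /dev/null -w "%{http_code} $url\n" "$url"
done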
Try the webgrep command line tools or, if you're comfortable with Perl, the HTML::TagReader module by the same author.