astyle: problems excluding files and directories using the "--exclude" option

I recently hit a usage problem with astyle that I have been unable to figure out. I am not sure if this is a bug, or I am simply using the astyle tool incorrectly.
I am attempting to use the "--exclude" option to omit files and directories from processing, but continue to get an "unmatched" exclude error and astyle terminates:
bwallace$ ls -l foo.c
-rw-r--r-- 1 bwallace 1767304860 22 Aug 1 21:36 foo.c
bwallace$ astyle ./foo.c --exclude=./foo.c -v
Artistic Style 2.04 08/03/2014
Exclude (unmatched) ./foo.c
Artistic Style has terminated
When I pass the "-i" (ignore exclude errors) option, astyle processes the file as expected. Hence, it seems to be a problem with the "--exclude" handling.
bwallace$ astyle ./foo.c --exclude=./foo.c -v -i
Artistic Style 2.04 08/03/2014
Exclude (unmatched) ./foo.c
Unchanged ./foo.c
0 formatted 1 unchanged 0.00 seconds 2 lines
Is this a bug? Am I using astyle incorrectly? Any help would be appreciated.

Excluding a directory is done using simple substring ("contains") matching rather than matching actual directory paths. I was having the same issue and figured it out by looking at the astyle source.
Adding a lot of options is a bit tedious. I've found it's easiest to create an options file. There are instructions on the astyle website about where to put it.
To exclude multiple files or directories you need to have multiple "--exclude" options in the file:
--exclude=dir/subdir1
--exclude=dir/subdir2
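For example, a complete options file might look like this (a sketch; the file name astyle.conf and the chosen style are placeholders, and astyle can also pick up ~/.astylerc by default):
--style=allman
--exclude=dir/subdir1
--exclude=dir/subdir2
Then point astyle at it:
astyle --options=astyle.conf --recursive "src/*.c"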

Try this: astyle "*.c" --exclude=foo.c - that should do the trick.
The . in your exclude statement is one of the issues. Using a wildcard for Astyle's input ("*.c") also seems to be required.
This is definitely weird behaviour on Astyle's side.
An unmatched exclude flag results in an "exclude error" and AStyle terminates.
When you add --ignore-exclude-errors, AStyle continues despite this error. I usually add this flag to my options files.
For the record - I'm using AStyle 3.1, so it could be that this improved in the meantime.

Admin import - Group not found

I am trying to load multiple CSV files into a new DB using the neo4j-admin import tool on a machine running Debian 11. To try to ensure there are no collisions in the ID fields, I've given every one of my node and relationship files its own ID space.
However, I'm getting this error:
org.neo4j.internal.batchimport.input.HeaderException: Group 'INVS' not found. Available groups are: [CUST]
This is super frustrating, as I know that the INVS group definitely exists. I've checked every file that uses that ID space and they all include it. Another strange thing is that there are more ID spaces than just the CUST and INVS ones. It feels like it's trying to load relationships before it finishes loading all of the nodes for some reason.
Here is what I'm seeing when I search through my input files:
$ grep -r -h "(INV" ./import | sort | uniq
:ID(INVS),total,:LABEL
:START_ID(INVS),:END_ID(CUST),:TYPE
:START_ID(INVS),:END_ID(ITEM),:TYPE
The top one is from my $NEO4J_HOME/import/nodes folder, the other two are in my $NEO4J_HOME/import/relationships folder.
Is there a nice solution to this? Or have I just stumbled upon a bug here?
Edit: here's the command I've been using from within my $NEO4J_HOME directory:
neo4j-admin import --force=true --high-io=true --skip-duplicate-nodes --nodes=import/nodes/\.* --relationships=import/relationships/\.*
Indeed, such a thing would be great, but I don't think it's possible at the moment.
Anyway, it doesn't seem to be a bug.
I suppose it is intended behavior and/or a feature not yet implemented.
In fact, the documentation regarding regular expressions says:
Assume that you want to include a header and then multiple files that matches a pattern, e.g. containing numbers.
In this case a regular expression can be used
while the description of the --nodes option says:
Node CSV header and data. Multiple files will be
logically seen as one big file from the
perspective of the importer. The first line must
contain the header. Multiple data sources like
these can be specified in one import, where each
data source has its own header.
So it appears that neo4j-admin import treats --nodes=import/nodes/\.* as a single CSV whose header is the first one it finds, hence the error.
Conversely, with multiple --nodes arguments there are no problems.
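For example, the command from the question could be rewritten with one --nodes / --relationships argument per header file (a sketch; the CSV file names are hypothetical):
neo4j-admin import --force=true --high-io=true --skip-duplicate-nodes \
  --nodes=import/nodes/customers.csv \
  --nodes=import/nodes/invoices.csv \
  --relationships=import/relationships/invoice_customer.csv \
  --relationships=import/relationships/invoice_item.csv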

Extracting CREATE TABLE definitions from MySQL dump?

I have a MySQL dump file over 1 terabyte in size. I need to extract the CREATE TABLE statements from it so I can provide the table definitions.
I purchased Hex Editor Neo, but I'm kind of disappointed I did. I created the regex CREATE\s+TABLE(.|\s)*?(?=ENGINE=InnoDB) to extract the CREATE TABLE clauses, and it seems to work well when tested in Notepad++.
However, the ETA of extracting all instances is over 3 hours, and I cannot even be sure that it is doing it correctly. I don't even know if those lines can be exported when done.
Is there a quick way I can do this on my Ubuntu box using grep or something?
UPDATE
Ran this overnight and the output file came out blank. I created a smaller subset of data and the procedure still doesn't work. It works in regex testers, but grep doesn't like it and yields empty output. Here is the command I'm running. I'd provide a sample, but I don't want to breach my client's confidentiality. It's just a standard MySQL dump.
grep -oP "CREATE\s+TABLE(.|\s)+?(?=ENGINE=InnoDB)" test.txt > plates_schema.txt
UPDATE
It seems not to match when a newline comes right after the CREATE\s+TABLE part.
You can use Perl for this task... this should be really fast.
Perl's .. (range) operator is stateful - it remembers state between evaluations.
What this means is: if your table definition starts with CREATE TABLE and ends with something like ENGINE=InnoDB DEFAULT CHARSET=utf8;, then the one-liner below will do what you want.
perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' INPUT_FILE.sql > OUTPUT_FILE.sql
EDIT:
Since you are working with a really large file and would probably like to know the progress, pv can give you this also:
pv INPUT_FILE.sql | perl -ne 'print if /CREATE TABLE/../ENGINE=InnoDB/' > OUTPUT_FILE.sql
This will show you a progress bar, speed, and ETA.
You can use the following:
grep -ioP "^CREATE\s+TABLE[\s\S]*?(?=ENGINE=InnoDB)" file.txt > output.txt
If you can run mysqldump again, simply add --no-data.
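That invocation would be something like this (the user and database names are placeholders):
mysqldump --no-data -u username -p database_name > schema.sql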
Got it! grep does not support matching across multiple lines. I found a related question helpful and ended up using pcregrep instead.
pcregrep -M "CREATE\s+TABLE(.|\n|\s)+?(?=ENGINE=InnoDB)" test.txt > plates.schema.txt

How to annotate a large file with Mercurial?

I'm trying to find out who made the last change to a certain line in a large XML file (~100,000 lines).
I tried to use the log file viewer of thg, but it does not give me any results.
Using hg annotate file.xml -l 95000 took forever and eventually died with an error message.
Is there a way to annotate a single line in a large file that does not take forever?
You can use hg grep to dig into a file if you have interest in very specific text:
hg grep "text-to-search" file.txt
You will likely need to add the --all switch to get every change that matches, and then limit your results to a specific changeset range with the -r firstchange:lastchange syntax.
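Putting that together, the invocation might look like this (the revision numbers are placeholders):
hg grep --all -r 1000:2000 "text-to-search" file.xml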
I don't have a file on hand of the size you are working with, so this may also have trouble past a certain point, particularly if the search string matches many, many lines in the file. But if you make your search string as specific as possible, you should be able to track it down.

dpkg-shlibdeps: error: no dependency information found for

I'm compiling a deb package and when I run dpkg-buildpackage I get:
dpkg-shlibdeps: error: no dependency information found for /usr/local/lib/libopencv_highgui.so.2.3
...
make: *** [binary-arch] Error 2
This happens because I installed the dependency manually. I know that the problem will be fixed if I install the dependency properly (or use checkinstall), but I want to generate the package anyway because I'm not interested in dependency checking. I know that I can give dpkg-shlibdeps the option --ignore-missing-info, which prevents a failure when dependency information can't be found. But I don't know how to pass this option to dpkg-shlibdeps, since I'm using dpkg-buildpackage, and dpkg-buildpackage calls dpkg-shlibdeps...
I have already tried:
sudo dpkg-buildpackage -rfakeroot -d -B
And with:
export DEB_DH_MAKESHLIBS_ARG=--ignore-missing-info
as root.
Any ideas?
use:
override_dh_shlibdeps:
	dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info
Use this if your rules file doesn't have an explicit dh_shlibdeps call in it. That's usually the case if you have
%:
	dh $@
as the only rule in it. Note that in these snippets you must use a tab, not spaces, in front of the dh_shlibdeps and dh $@ lines.
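Putting the two snippets together, a minimal debian/rules would look like this (a sketch using the dh sequencer):
#!/usr/bin/make -f
%:
	dh $@

override_dh_shlibdeps:
	dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info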
If you want it to just ignore that error, change the debian/rules line from:
dh_shlibdeps
to:
dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info
Yet another way, without modifying any build scripts: just create one file.
You can specify local shlib overrides by creating debian/shlibs.local with the following format: library-name soname-version dependencies
For example, given the following (trimmed) ldd /path/to/binary output
libevent-2.0.so.5 => /usr/lib/libevent-2.0.so.5 (0x00007fc9e47aa000)
libgcrypt.so.20 => /usr/lib/libgcrypt.so.20 (0x00007fc9e4161000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fc9e3b1a000)
The contents of debian/shlibs.local would be:
libevent-2.0 5 libevent-2.0
libgcrypt 20 libgcrypt
libpthread 0 libpthread
The "dependencies" list (third column) doesn't need to be 100% accurate - I just use the library name itself again.
Of course this isn't needed in a sane debian system which has this stuff defined in /var/lib/dpkg/info (which can be used as inspiration for these overrides). Mine isn't a sane debian system.
Instead of merely ignoring the error, you might also want to fix its source, which is usually a missing or incorrect package.shlibs or package.symbols file in the package that contains the shared library triggering the error.
[1] documents how dpkg-shlibdeps uses the package.shlibs and package.symbols files; [2] documents the format of those files.
[1] https://manpages.debian.org/jessie/dpkg-dev/dpkg-shlibdeps.1.en.html
[2] https://www.debian.org/doc/debian-policy/ch-sharedlibs.html
You've just misspelled your export. It should be like this:
export DEB_DH_SHLIBDEPS_ARGS_ALL=--dpkg-shlibdeps-params=--ignore-missing-info
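For completeness, the whole sequence would then be (a sketch; note this is a CDBS-style variable, so rules files that don't use CDBS may ignore it):
export DEB_DH_SHLIBDEPS_ARGS_ALL=--dpkg-shlibdeps-params=--ignore-missing-info
dpkg-buildpackage -rfakeroot -d -B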
dpkg-buildpackage uses make to process debian/rules. In this process it might call dpkg-shlibdeps.
Thus, the proper way to modify a part of the package-building process is to edit debian/rules.
It's hard to give you any more hints without seeing the actual debian/rules.
Finally I did it the brute-force way:
I edited the script /usr/bin/dpkg-shlibdeps, changing this:
my $ignore_missing_info = 0;
to
my $ignore_missing_info = 1;
You can use this:
dh_makeshlibs -a -n
placed exactly after the dh_install call.
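If your rules file uses the dh sequencer instead of explicit dh_* calls, the same effect can be sketched as an override (tab-indented, as above):
override_dh_makeshlibs:
	dh_makeshlibs -a -n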

DIFF utility works for 2 files. How to compare more than 2 files at a time?

So the utility diff works just like I want for 2 files, but I have a project that requires comparing more than 2 files at a time, maybe up to 10. It also requires having all those files displayed side by side. My research has not really turned up anything; vimdiff seems to be the best option so far, with the ability to compare 4 files at a time.
My question: Is there any utility to compare more than 2 files at a time, or a way to hack diff/vimdiff so it can do multiple comparisons? The files I will be comparing are relatively short so it should not be too slow.
Displaying 10 files side-by-side and highlighting differences can be easily done with Diffuse. Simply specify all files on the command line like this:
diffuse 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt
Vim can already do this:
vim -d file1 file2 file3
But you're normally limited to 4 files. You can change that by modifying a single line in Vim's source, however. The constant DB_COUNT defines the maximum number of diffed files, and it's defined towards the top of diff.c in versions 6.x and earlier, or about two thirds of the way down structs.h in versions 7.0 and up.
diff has the built-in options --from-file and --to-file, which compare one operand to all the others.
--from-file=FILE1
Compare FILE1 to all operands. FILE1 can be a directory.
--to-file=FILE2
Compare all operands to FILE2. FILE2 can be a directory.
Note: argument name --to-file is optional.
e.g.
# this will compare foo with bar, then foo with baz .html files
$ diff --from-file foo.html bar.html baz.html
# this will compare src/base-main.js with all .js files in git repo,
# that has 'main' in their filename or path
$ git ls-files :/*main*.js | xargs diff -u --from-file src/base-main.js
Checkout "Beyond Compare": http://www.scootersoftware.com/
It lets you compare entire directories of files, and it looks like it runs on Linux too.
If you're running multiple diffs based off one file, you could try writing a script with a for loop that runs through each directory and runs the diff, as sketched below. Although it wouldn't be side by side, you could at least compare the files quickly. Hope that helps.
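A minimal sketch of that idea (the base file and the glob are placeholders):
for f in dir/*.txt; do
    echo "=== base.txt vs $f ==="   # label each comparison
    diff -u base.txt "$f"           # compare the base file against each candidate
done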
Not answering the main question, but here's something similar to what Benjamin Neil has suggested but diffing all files:
Store the filenames in an array, then loop over the combinations of size two and diff (or do whatever you want).
files=($(ls -d /path/of/files/some-prefix.*))       # Array of files to compare
max=${#files[@]}                                    # Length of that array
for ((idxA=0; idxA<max; idxA++)); do                # iterate idxA from 0 to max-1
    for ((idxB=idxA+1; idxB<max; idxB++)); do       # iterate idxB from idxA+1 to max-1
        echo "A: ${files[$idxA]}; B: ${files[$idxB]}"  # Do whatever you're here for.
    done
done
Derived from @charles-duffy's answer: https://stackoverflow.com/a/46719215/1160428
There is a simple and good way to do this: grep.
Depending on the size of the text, you can copy and paste it into a file, or you can point grep directly at the files. Run grep -ir "pattern" /path for a case-insensitive recursive search, or grep -vir "pattern" /path for an inverted search. This is my way for certification exams.