Going n folders up with a terminal command? - cd

cd .. goes one folder up.
Is there a (one-line) command to go n folders up?

You can certainly define a function to do that:
$ go_up() { for i in $(seq "$1"); do cd ..; done; }
$ go_up 3 # go 3 directories up

I am not aware of any command that does that, but it is easy to create one yourself. For example, just add
cdn() {
    for ((i=0; i<${1-0}; i++)); do
        cd ..
    done
}
to your ~/.bashrc file; after you start a new shell, you can just run
cdn N
and you will move up by N directories.
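For example, an illustrative session (the paths are made up):
user@host:~/a/b/c$ pwd
/home/user/a/b/c
user@host:~/a/b/c$ cdn 2
user@host:~/a$ pwd
/home/user/a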

All right, another really fun answer that truly is a one-liner, to go up 42 parent directories:
cd $(yes ../|head -42|tr -d \\n)
Like gniourf_gniourf's other answer, it's cd - friendly (and it's just a couple of characters longer than the shortest answer).
Replace 42 with your favorite number.
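To see what the command substitution produces, you can run the pipeline on its own (shown with 3 instead of 42 to keep the output short):
$ yes ../|head -3|tr -d \\n
../../../
So the one-liner is nothing more than cd ../../../ extended to the requested depth.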
Now that you understand the amazing power of the wonderful command yes, you can join the dark side and use the evil command eval; while we're at it, we can use the terrible backticks:
eval `yes 'cd ..;'|head -42`
This is so far the shortest one-liner, but it's really bad: it uses eval, backticks and it's not cd - friendly. But hey, it works really well and it's funny!

You can use a single-line for loop:
for i in {1..3}; do cd ../; done
Replace the 3 with your n. For example:
m@mariachi:~/test/5/4/3/2/1$ pwd
/home/m/test/5/4/3/2/1
m@mariachi:~/test/5/4/3/2/1$ for i in {1..3}; do cd ../; done
m@mariachi:~/test/5/4$ pwd
/home/m/test/5/4
...however, I don't think it will be much faster than typing cd and .., then hitting Tab for each level you want to go up! :)

How often do you go up more than five levels? If the answer is "not too often", I suggest you place these cd - friendly aliases in your profile:
alias up2='cd ../..'
alias up3='cd ../../..'
alias up4='cd ../../../..'
alias up5='cd ../../../../..'
Advantages
No bashisms, no zshisms, no kshisms.
Works with any shell supporting aliases.
As readable and understandable as it gets.
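If you ever want more levels without typing them out, a short loop in your startup file can generate the aliases mechanically. A sketch in plain POSIX sh, in keeping with the no-bashisms spirit (the range 2..9 is arbitrary; adjust to taste):
# generate up2 .. up9 as plain 'cd ../..'-style aliases
i=2; p=../..
while [ "$i" -le 9 ]; do
    alias "up$i=cd $p"
    i=$((i+1)); p=$p/..
done
unset i p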

A funny way:
cdupn() {
    local a
    [[ $1 =~ ^[[:digit:]]+$ ]] && printf -v a "%$1s" && cd "${a// /../}"
}
How does it work?
We first check that the argument is indeed a number (a chain of digits).
We populate the variable a with $1 spaces.
We perform the cd where each space in a has been replaced with ../.
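You can watch the two building blocks work in isolation (an illustrative interactive session):
$ printf -v a "%3s"    # a now holds three spaces
$ echo "${a// /../}"   # each space becomes ../
../../../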
Use as:
cdupn 42
to go up to the forty-second parent directory.
The pro of this method is that you'll still be able to cd - to come back to previous directory, unlike the methods that use a loop.
Absolutely worth putting in your .bashrc. Or not.

Fuzzing command line arguments [argv]

I have a binary I've been trying to fuzz with AFL; the only thing is, AFL only fuzzes stdin and file inputs, and this binary takes input through its arguments: pass_read [input1] [input2]. I was wondering if there are any methods/fuzzers that allow fuzzing in this manner?
I do not have the source code, so making a harness is not really applicable.
Michal Zalewski, the creator of AFL, states in this post:
AFL doesn't support argv fuzzing, because TBH, it's just not horribly useful in practice. There is an example in experimental/argv_fuzzing/ showing how to do it in a general case if you really want to.
Link to the mentioned example on GitHub: https://github.com/google/AFL/tree/master/experimental/argv_fuzzing
There are some instructions in the file argv-fuzz-inl.h (haven't tried myself).
Bash-only solution
As an example, let's generate 10 random strings and store them in a file:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 10 > string-file.txt
Next, let's read two lines at a time from string-file.txt and pass them to our application:
exec {handle}< string-file.txt           # open string-file.txt; bash stores the new fd number in $handle
while read -r string1 <&"$handle"; do    # read two strings per iteration
    read -r string2 <&"$handle"
    pass_read "$string1" "$string2" >> crash_file.txt
done
exec {handle}<&-                         # close the descriptor
We then have any crashes stored within crash_file.txt for further analysis.
This may not be the most elegant solution, but perhaps it gives you an idea of some other possibilities if no existing tool fulfills your requirements.
I looked at the AFLplusplus repo on GitHub. Inside AFLplusplus/utils/argv_fuzzing/ there is a Makefile; if you run it, you will get a .so file (a shared library) that you can use to do argv fuzzing, even if you only have the binary. Obviously, you must use AFL_PRELOAD. You can read more in the README.
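For what it's worth, here is a rough invocation sketch based on my reading of that README; the library name argvfuzz64.so and the -Q (QEMU mode) flag for binary-only targets are assumptions you should check against your checkout:
cd AFLplusplus/utils/argv_fuzzing
make
# the target's argv is then rebuilt from AFL's stdin by the preloaded library
AFL_PRELOAD=$PWD/argvfuzz64.so afl-fuzz -Q -i in -o out -- ./pass_read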

.hgignore regex syntax to ignore a specific file (e.g. "core") anywhere

Suppose I have a working directory like this:
1. t.c
2. core
3. multicore
4. test1/core
I want to ignore all "core" files.
If I use "/core$" (4) will get ignored but not (2).
If I use "^core$" (2) will get ignored but not (4)
If I use "core$" (2) and (4) will get ignored but so will (3) which is not what I want.
How do you do this?
planetmaker's answer, "use glob syntax", is simpler and is what I would usually recommend. There is, however, a regexp answer, and a minor flaw in the glob syntax version.
Mercurial uses Python regular expressions, so we have the (alt1|alt2|...) syntax available. Note that these are grouped.¹ We can use (?:...) to avoid grouping when required, but for .hgignore, the grouping is irrelevant, so it is simpler (and much more readable) to just use the parentheses, and I do so where possible below.
We could just write:
^core$
/core$
to ignore the file core with nothing coming before it (first pattern) and to ignore a file with a name like test1/core (second pattern). This is fine, but we can compress it a bit more using the alternation syntax. The leading ^ works even in an alternative within a group, as long as it is still, in effect, leading, so:
(^|/)core$
means the same thing and accomplishes the job using regexp syntax.
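A complete .hgignore for this could then be as small as the following (the syntax line is optional here, since regexp is Mercurial's default):
syntax: regexp
(^|/)core$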
Annoyingly, all of these patterns ignore all files in any directory named core (whether or not we use regexp vs glob syntax):
$ rm core
$ mkdir core
$ touch core/keepme
$ cat .hgignore
syntax: glob
core
$ hg status -A
? .hgignore
? multicore
? t.c
I core/keepme
I test1/core
The problem is that as soon as we say ignore (some pattern that matches a directory named core), if there are files in that directory that are currently untracked, Mercurial ignores them too. You can forcibly add the file—as with Git, once a file is tracked, any ignore-file pattern that matches it becomes irrelevant—but this does not help with additional files we stick into the directory:
$ hg add core/keepme
$ touch core/keep-me-too
$ hg status -A
A core/keepme
? .hgignore
? multicore
? t.c
I core/keep-me-too
I test1/core
Here, regular expressions can prove to be the answer. Python (and Perl) regexps allow "negative lookbehind", i.e., you can say "as long as some pattern does not appear". Hence we can replace the existing .hgignore contents with:
$ cat .hgignore
(?<!^core/).*/core$
and now we have this status:
$ hg status -A
A core/keepme
? .hgignore
? core/keep-me-too
? multicore
? t.c
I test1/core
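Since Mercurial uses Python regular expressions, you can sanity-check such a pattern outside Mercurial; an illustrative session:
$ python3 -c 'import re
pat = re.compile(r"(?<!^core/).*/core$")
for p in ("test1/core", "core/keepme"): print(p, bool(pat.search(p)))'
test1/core True
core/keepme False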
This particular regular expression depends on the wanted core directory being named core at the top level (^core). If we wanted to keep core directories named core (top level) and a/subsys/core, we would write:
(?<!(^core|^a/subsys/core)/).*/core$
as our regular expression.
Constructing these regexps is something of an art form, and rarely worth a lot of effort. Glob syntax is almost always simpler, and as long as it suffices, I prefer it. It was once significantly slower than regexp syntax but this was fixed back around Mercurial 3.1.
¹ Grouped, here, means that in Python code, we may use the .groups() method to obtain the parts of the string matched by these parts of the regular expressions. Non-grouped (?:...) expressions do not affect the way .groups() gathers the parts of the strings. As in the paragraph to which this is a footnote, this is more a concern when writing Python (or Perl, or whatever) code, not when using these patterns in .hgignore or other parts of Mercurial.
Try to give the filename using glob syntax:
syntax: glob
core
It gives:
~/hg-test$ hg st -A
M .hgignore
? multicore
I core
I dir1/core

mysql startup shell problems

Since I've met so many startup errors, I decided to analyze the mysql startup shell script, but there are some code fragments I cannot understand clearly.
version:
mysql Ver 14.14 Distrib 5.5.43, for osx10.8 (i386) using readline 5.1
368 #
369 # First, try to find BASEDIR and ledir (where mysqld is)
370 #
372 if echo '/usr/local/mysql/share' | grep '^/usr/local/mysql' > /dev/null
373 then
374 relpkgdata=`echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'`
375 else
376 # pkgdatadir is not relative to prefix
377 relpkgdata='/usr/local/mysql/share'
378 fi
What's the purpose of line 372? It seems a little weird.
Any help will be appreciated.
At first glance, this is very strange indeed... but here's a solution to this mystery.
372: if echo '/usr/local/mysql/share' | grep '^/usr/local/mysql' > /dev/null
373: then
grep returns true if it matches and false if it doesn't, so this is testing whether the string /usr/local/mysql/share begins with (^) /usr/local/mysql. Output goes to /dev/null because we don't need to see it; we just want to compare it.
"Well," you interject, "that's obvious enough. The question is why?" Stick with me.
If it matches:
374: relpkgdata=`echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'`
Beginning with /usr/local/mysql/share, strip off the beginning /usr/local/mysql, then strip off the beginning / then prepend ./.
So /usr/local/mysql/share becomes ./share.
Otherwise, use the string /usr/local/mysql/share.
375: else
376: # pkgdatadir is not relative to prefix
377: relpkgdata='/usr/local/mysql/share'
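You can verify the substitution branch in any shell; the pipeline really does reduce the absolute path to a relative one:
$ echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'
./share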
"That's all fine, too," I hear you say, "but why go through all these gyrations to (apparently) compare and massage two fixed literal strings?? We already know the answer, so what's up with all the tests and substitution?"
It's a fair question.
My first suspicion was that there was some sort of magic bash hackery going on that I didn't recognize, but no, this code is really all too simple to be something along those lines.
My second suspicion, since this is notably absent from MySQL 5.0.96 (which I am not running but keep on hand for reference), was that this was an abandoned attempt to introduce some new magical behavior into mysqld_safe which was never finished and replaced with actual variables, the testing and massaging of which would have made a lot more sense than doing the same thing to literal strings.
But, no. When the only tool you have is a hammer, everything looks like a nail. What this is, is an example of doing something simple... the hard way. At least that's what it looks like to me. There actually is a somewhat rational explanation. To find the answer, you have to look into the source code (not binary) distribution.
MySQL has a lot of "hard-coded" defaults. This turns out to be an example of these.
In the source file scripts/mysqld_safe.sh, the snippet above looks very different:
if echo '#pkgdatadir#' | grep '^#prefix#' > /dev/null
then
relpkgdata=`echo '#pkgdatadir#' | sed -e 's,^#prefix#,,' -e 's,^/,,' -e 's,^,./,'`
else
# pkgdatadir is not relative to prefix
relpkgdata='#pkgdatadir#'
fi
Ah, source munging. Pattern substitution.
When you're compiling MySQL from source, the file scripts/Makefile contains instructions that use sed to replace things like #prefix# and #pkgdatadir# with the literal values. The same thing, of course, happens when Oracle or the Linux distro maintainers compile their binary distribution from source. These paths get hard-coded into many, many, many places in the code, including this script... resulting in the otherwise incomprehensible comparison of two literal strings that somebody should already have known the answer to.
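Conceptually, the build step does something along these lines (an illustrative sketch, not the actual Makefile rule):
sed -e 's,#prefix#,/usr/local/mysql,g' \
    -e 's,#pkgdatadir#,/usr/local/mysql/share,g' \
    scripts/mysqld_safe.sh > mysqld_safe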
Instead of testing at build time whether one path is an anchored prefix of the other (in which case the relpkgdata value should be expressed relative to the current directory) and modifying this script accordingly, that logical test is deferred until runtime, comparing two literals that were substituted for their placeholders at build time.
I've gone to this amount of detail, not because it will help you troubleshoot, because I suspect it won't. It was, however, just bizarre enough to warrant some further investigation.
If you are having difficulty getting MySQL Server running... well, you shouldn't be, because it's a well-established system and it should work. If /bin/sh on your system isn't a symlink to /bin/bash, you might want to change mysqld_safe's shebang line from #!/bin/sh to #!/bin/bash, but beyond that, I suspect you are sniffing down the wrong rabbit hole by looking at mysqld_safe to get to the bottom of your issue. As convoluted as mysqld_safe is, it can't be said that it isn't time-tested. As they say, "the problem is somewhere else."
If I may, I'll suggest that you familiarize yourself with some of our other communities where you're likely to find the answer you need, particularly Ask Ubuntu, Super User, Server Fault, and Database Administrators. Familiarize yourself with each site's community, scope, and the level of existing expertise that each community expects on the part of those who ask questions there, and search the sites for the specific problem you're encountering. It's very likely someone has seen it and we've fixed it on one of them, if not here on SO.

How can I wrap qrsh?

I'm trying to write a wrapper for qrsh, the Oracle Grid Engine equivalent to rsh, and am having trouble identifying the command given to it. Consider the following example:
qrsh -cwd -V -now n -b y -N cvs -verbose -q some.q -p -98 cvs -Q log -N -S -d2012-04-09 14:02:08 GMT<2012-04-11 21:53:41 GMT -b
The command in this case starts from cvs. My wrapper needs to be general purpose so I can't look specifically for cvs. Any ideas on how to identify it? One thought is to look for executable commands starting from the end backwards, which will work in this case but won't be robust as "cvs" could appear in an option to itself. The only robust option that I can come up with is to fully implement the qrsh option parser but I'm not thrilled about it since it will need to be updated with qrsh updates and is complicated.
One option is to set QRSH_WRAPPER to echo and run qrsh once. However, this then requires two jobs to be issued instead of one, adding latency and wasting a slot.
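A sketch of that discovery step, reusing the options from the question (this assumes QRSH_WRAPPER is honored as documented for your Grid Engine version):
# the submitted job simply echoes the command qrsh was asked to run
QRSH_WRAPPER=/bin/echo qrsh -cwd -V -now n -b y -N cvs -verbose -q some.q cvs -Q log -N
Your wrapper could capture that output once and parse it, at the cost of the extra job noted above.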

DIFF utility works for 2 files. How to compare more than 2 files at a time?

So the utility diff works just like I want for 2 files, but I have a project that requires comparisons with more than 2 files at a time, maybe up to 10. This requires having all those files side by side as well. My research has not really turned up anything; vimdiff seems to be the best so far, with the ability to compare 4 at a time.
My question: Is there any utility to compare more than 2 files at a time, or a way to hack diff/vimdiff so it can do multiple comparisons? The files I will be comparing are relatively short so it should not be too slow.
Displaying 10 files side-by-side and highlighting differences can be easily done with Diffuse. Simply specify all files on the command line like this:
diffuse 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt
Vim can already do this:
vim -d file1 file2 file3
But you're normally limited to 4 files. You can change that by modifying a single line in Vim's source, however. The constant DB_COUNT defines the maximum number of diffed files, and it's defined towards the top of diff.c in versions 6.x and earlier, or about two thirds of the way down structs.h in versions 7.0 and up.
diff has the built-in options --from-file and --to-file, which compare one operand to all others.
--from-file=FILE1
Compare FILE1 to all operands. FILE1 can be a directory.
--to-file=FILE2
Compare all operands to FILE2. FILE2 can be a directory.
Note: the = after the option name is optional; --from-file FILE1 also works.
e.g.
# this will compare foo with bar, then foo with baz .html files
$ diff --from-file foo.html bar.html baz.html
# this will compare src/base-main.js with all .js files in the git repo
# that have 'main' in their filename or path
$ git ls-files :/*main*.js | xargs diff -u --from-file src/base-main.js
Checkout "Beyond Compare": http://www.scootersoftware.com/
It lets you compare entire directories of files, and it looks like it runs on Linux too.
If you're running multiple diffs based off one file, you could try writing a script with a for loop that runs through each directory and runs the diff. Although it wouldn't be side by side, you could at least compare them quickly. Hope that helps.
Not answering the main question, but here's something similar to what Benjamin Neil suggested, diffing all the files:
Store the filenames in an array, then loop over the combinations of size two and diff (or do whatever you want).
files=($(ls -d /path/of/files/some-prefix.*))    # array of files to compare
max=${#files[@]}                                 # length of that array
for ((idxA=0; idxA<max; idxA++)); do             # iterate idxA from 0 to max-1
    for ((idxB=idxA+1; idxB<max; idxB++)); do    # iterate idxB from idxA+1 to max-1
        echo "A: ${files[$idxA]}; B: ${files[$idxB]}"    # do whatever you're here for
    done
done
Derived from @charles-duffy's answer: https://stackoverflow.com/a/46719215/1160428
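For instance, to save a unified diff for every pair instead of just printing the names (the output filenames are illustrative):
for ((idxA=0; idxA<max; idxA++)); do
    for ((idxB=idxA+1; idxB<max; idxB++)); do
        diff -u "${files[$idxA]}" "${files[$idxB]}" > "pair-$idxA-$idxB.diff"
    done
done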
There is a simple and good way to do this: grep.
Depending on the size of the text, you can copy and paste it, or you can redirect the file into the grep command. Use grep -ir /path for a normal search, or grep -vir /path for an inverted one. This is my way for certification exams.