GNU parallel does not recognize user-defined functions

I cannot get GNU parallel to run a custom function that I built.
My function is:
function run_cuffLinks() {
    inputBAM="${HOME}/Analyses/P_miniata/CleanUpPipeline/TH_${1}/${1}.realigned.bam"
    if [[ ! -f $inputBAM ]]; then echo "$inputBAM could not be found"; exit 1; fi
    WORKING_DIR="${HOME}/data/CuffLinks/TH_$1"
    if [[ ! -d $WORKING_DIR ]]; then mkdir -p $WORKING_DIR; fi
    REF="${HOME}/ReferenceSequences/GATK_pmin.scaf.fa"
    if [[ ! -f $REF ]]; then echo "$REF could not be found"; exit 1; fi
    GTF_FILE="${HOME}/ReferenceSequences/genes.sorted.gff3"
    if [[ ! -f $GTF_FILE ]]; then echo "$GTF_FILE could not be found"; exit 1; fi
    cufflinks \
        --output-dir $WORKING_DIR \
        --num-threads 2 \
        --frag-len-mean 100 \
        --GTF-guide $GTF_FILE \
        --frag-bias-correct $REF \
        -L "HH" \
        $inputBAM
}
When I enter:
parallel --no-notice -j+2 run_cuffLinks {} ::: sample1 sample2 sample3
I get the output:
/bin/bash: run_cuffLinks: command not found
/bin/bash: run_cuffLinks: command not found
/bin/bash: run_cuffLinks: command not found
If I include a '$' symbol in front of the function name, I get:
/bin/bash: sample1: command not found
/bin/bash: sample2: command not found
/bin/bash: sample3: command not found
I have also tried the --pipe, --recend, and --rrs options, but without a positive result.
Is GNU parallel not able to process user-defined functions?

You do not write whether you have walked through the tutorial (man parallel_tutorial). It shows that you must export -f the function, and since you do not mention doing that, I believe you may have forgotten it:
export -f run_cuffLinks
parallel ...
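Applied to your command, that would look like this (a sketch, assuming bash and that the function is defined in the current session):
export -f run_cuffLinks
parallel --no-notice -j+2 run_cuffLinks {} ::: sample1 sample2 sample3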
Since version 20180522 you can also use env_parallel:
env_parallel --session
[define functions and variables here that you want parallel to see]
# Use env_parallel like you would parallel
env_parallel run_cuffLinks ...
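For your case that might look like this (a sketch; it assumes env_parallel for bash has been activated by sourcing it in your shell startup, as its man page describes):
env_parallel --session
run_cuffLinks() { ... }   # define the function after --session so it gets picked up
env_parallel --no-notice -j+2 run_cuffLinks {} ::: sample1 sample2 sample3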
PS: Run parallel --bibtex once to avoid needing --no-notice in the future.

Related

variable expansion in subshell in makefile fails

I have a make function defined in an included makefile (myfunction.mk):
.ONESHELL:
define call_script
set +x
mkdir -p $$(dirname $(2))
if [ ! -f $(2) ]; then
echo "" > $(2)
fi
REDIRECT='| tee -a'
echo '>> $(1)'
($(1) ???????? $(2))
RET_CODE=$$?
echo "exit_code is: $$RET_CODE"
if [ ! $$RET_CODE = 0 ]; then
echo "$(3) terminated with error $$RET_CODE"
exit $$RET_CODE
else
if [ ! -z "$(strip $(3))" ]; then
echo "$(3) done"
fi
fi
endef
This function calls a script and appends the result to a log (which is created, along with its folder, if it does not exist). The result of the script is appended only if the make variable whose name is given as the 4th argument ($(4)) is equal to 'yes'.
You call it like this:
include myfunction.mk
OUTPUT_ENABLED ?= yes
target:
$(call call_script, echo "test", reports/mylog.log, "doing test", OUTPUT_ENABLED)
This works for the most part:
If I replace '????????' with '| tee -a', it works.
If I replace '????????' with $(REDIRECT), it fails.
If I replace '????????' with $$REDIRECT, it fails.
Why?
Note: I am running it with /bin/sh, which is a symbolic link to dash.
Note: of course I want to add an ifeq that checks $(4) and replaces | tee -a with &>>.
I'll assume that you use call in a recipe, not flat in your Makefile. There are a few problems with your shell script. First, if you try the following on the command line:
mkdir -p reports
REDIRECT='| tee -a'
echo '>> echo "test"'
(echo "test" $REDIRECT reports/mylog.log)
you'll see that echo considers:
"test" $REDIRECT reports/mylog.log
as its arguments. They are expanded and echoed, which prints:
test | tee -a reports/mylog.log
on the standard output, not the effect you expected, I guess. You could, for instance, use eval. On the command line:
eval "echo "test" $REDIRECT reports/mylog.log"
Which, in your Makefile, would become:
eval "$(1) $$REDIRECT $(2)"
Next you should not quote the third parameter of call because the quotes will be passed unmodified and your script will be expanded by make as:
echo " "doing test" terminated with error $RET_CODE"
Again probably not what you want.
Third, you should avoid useless spaces in the parameters of call because they are preserved too (as you can see above between the first 2 double quotes):
.PHONY: foo
foo:
$(call call_script,echo "test",reports/mylog.log,doing test,OUTPUT_ENABLED)
And for your last desired feature, it would be slightly easier to pass the value of OUTPUT_ENABLED to call instead of its name, but let's go this way:
$ cat myfunction.mk
define call_script
set +x
mkdir -p $$(dirname $(2))
if [ ! -f $(2) ]; then
echo "" > $(2)
fi
if [ "$($(4))" = "yes" ]; then
REDIRECT='| tee -a'
else
REDIRECT='&>>'
fi
echo '>> $(1)'
eval "$(1) $$REDIRECT $(2)"
RET_CODE=$$?
echo "exit_code is: $$RET_CODE"
if [ ! $$RET_CODE = 0 ]; then
echo "$(3) terminated with error $$RET_CODE"
exit $$RET_CODE
else
if [ ! -z "$(strip $(3))" ]; then
echo "$(3) done"
fi
fi
endef
$ cat Makefile
.ONESHELL:
include myfunction.mk
OUTPUT_ENABLED ?= yes
target:
$(call call_script,echo "test",reports/mylog.log,doing test,OUTPUT_ENABLED)
Note that I moved the .ONESHELL: into the main Makefile because it is probably better not to hide it inside an included file. Up to you.
The most problematic issue here is that if you pipe your commands, the exit code is the exit code of the last command in the pipe, e.g. false | tee foo.log will exit with 0 because tee will most probably succeed. Note also that a pipe only redirects stdout, so your log will not contain any stderr messages unless they are explicitly redirected.
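In bash you could recover the first command's status from $PIPESTATUS or enable set -o pipefail; a quick sketch (neither is guaranteed in dash, as noted below):
false | tee foo.log; echo "tee: $?, false: ${PIPESTATUS[0]}"   # prints 0 and 1
set -o pipefail; false | tee foo.log; echo "pipeline: $?"      # now prints 1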
Considering that piping commands influences the exit code, and that $PIPESTATUS is not portable (most notably, it is not supported in dash), I would try to avoid piping commands and use a temporary file for gathering output, i.e.:
$ cat Makefile
# $(1) - script to execute
# $(2) - log file
# $(3) - description
define call_script
echo '>> $(1)'
$(if $(OUTPUT_ENABLED), \
$(1) > $@.log 2>&1; RET_CODE=$$?; mkdir -p $(dir $(2)); cat $@.log >> $(2); cat $@.log; rm -f $@.log, \
$(1); RET_CODE=$$? \
); \
echo "EXIT_CODE is: $${RET_CODE}"; \
if [ $${RET_CODE} -ne 0 ]; then $(if $(3),echo "$(3) terminated with error $${RET_CODE}";) exit $${RET_CODE}; fi; \
$(if $(3), echo "$(3) done.")
endef
good:
$(call call_script,echo "test",reports/mylog.log,doing test)
bad:
$(call call_script,mkdir /root/foo,reports/mylog.log,intentional fail)
ugly:
$(call call_script,bad_command,reports/mylog.log)
Regular call will not create the logs and will stop on errors:
$ make good bad ugly
echo '>> echo "test"'
>> echo "test"
echo "test"; RET_CODE=$? ; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "doing test terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "doing test done."
test
EXIT_CODE is: 0
doing test done.
echo '>> mkdir /root/foo'
>> mkdir /root/foo
mkdir /root/foo; RET_CODE=$? ; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "intentional fail terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "intentional fail done."
mkdir: cannot create directory ‘/root/foo’: Permission denied
EXIT_CODE is: 1
intentional fail terminated with error 1
make: *** [Makefile:19: bad] Error 1
Note that ugly was not built due to failure on bad. Now the same with the log:
$ make good bad ugly OUTPUT_ENABLED=1
echo '>> echo "test"'
>> echo "test"
echo "test" > good.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat good.log >> reports/mylog.log; cat good.log; rm -f good.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "doing test terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "doing test done."
test
EXIT_CODE is: 0
doing test done.
echo '>> mkdir /root/foo'
>> mkdir /root/foo
mkdir /root/foo > bad.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat bad.log >> reports/mylog.log; cat bad.log; rm -f bad.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then echo "intentional fail terminated with error ${RET_CODE}"; exit ${RET_CODE}; fi; echo "intentional fail done."
mkdir: cannot create directory ‘/root/foo’: Permission denied
EXIT_CODE is: 1
intentional fail terminated with error 1
make: *** [Makefile:19: bad] Error 1
$ cat reports/mylog.log
test
mkdir: cannot create directory ‘/root/foo’: Permission denied
Note that this time ugly was also not run. But if run later, it will correctly append to the log:
$ make ugly OUTPUT_ENABLED=1
echo '>> bad_command'
>> bad_command
bad_command > ugly.log 2>&1; RET_CODE=$?; mkdir -p reports/; cat ugly.log >> reports/mylog.log; cat ugly.log; rm -f ugly.log; echo "EXIT_CODE is: ${RET_CODE}"; if [ ${RET_CODE} -ne 0 ]; then exit ${RET_CODE}; fi;
/bin/sh: 1: bad_command: not found
EXIT_CODE is: 127
make: *** [Makefile:22: ugly] Error 127
$ cat reports/mylog.log
test
mkdir: cannot create directory ‘/root/foo’: Permission denied
/bin/sh: 1: bad_command: not found
Personally, I am not a fan of implementing logging this way. It is complicated, and it only logs the output of commands, not the output of make itself, and only for those commands that are explicitly wrapped. I'd rather keep the Makefile clean and simple and just run make 2>&1 | tee log instead to have the output logged.

Bourne shell function return variable always empty

The following Bourne shell script, given a path, is supposed to test each component of the path for existence; then set a variable comprising only those components that actually exist.
#! /bin/sh
set -x # for debugging
test_path() {
path=""
echo $1 | tr ':' '\012' | while read component
do
if [ -d "$component" ]
then
if [ -z "$path" ]
then path="$component"
else path="$path:$component"
fi
fi
done
echo "$path" # this prints nothing
}
paths=/usr/share/man:\
/usr/X11R6/man:\
/usr/local/man
MANPATH=`test_path $paths`
echo $MANPATH
When run, it always prints nothing. The trace using set -x is:
+ paths=/usr/share/man:/usr/X11R6/man:/usr/local/man
++ test_path /usr/share/man:/usr/X11R6/man:/usr/local/man
++ path=
++ echo /usr/share/man:/usr/X11R6/man:/usr/local/man
++ tr : '\012'
++ read component
++ '[' -d /usr/share/man ']'
++ '[' -z '' ']'
++ path=/usr/share/man
++ read component
++ '[' -d /usr/X11R6/man ']'
++ read component
++ '[' -d /usr/local/man ']'
++ '[' -z /usr/share/man ']'
++ path=/usr/share/man:/usr/local/man
++ read component
++ echo ''
+ MANPATH=
+ echo
Why is the final echo $path empty? The $path variable within the while loop was incrementally set for each iteration just fine.
The pipe runs all commands involved in sub-shells, including the entire while ... loop. Therefore, all changes to variables in that loop are confined to the sub-shell and invisible to the parent shell script.
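A minimal demonstration of the effect, assuming a shell such as bash or dash that runs every pipeline component in a sub-shell:
x=outer
echo inner | while read x; do :; done
echo "$x"   # prints 'outer'; the assignment died with the sub-shell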
One way to work around that is putting the while ... loop and the echo into a list that executes entirely in the sub-shell, so that the modified variable $path is visible to echo:
test_path()
{
echo "$1" | tr ':' '\n' | {
while read component
do
if [ -d "$component" ]
then
if [ -z "$path" ]
then
path="$component"
else
path="$path:$component"
fi
fi
done
echo "$path"
}
}
However, I suggest using something like this:
test_path()
{
echo "$1" | tr ':' '\n' |
while read dir
do
[ -d "$dir" ] && printf "%s:" "$dir"
done |
sed 's/:$/\n/'
}
... but that's a matter of taste.
Edit: As others have said, the behaviour you are observing depends on the shell. The POSIX standard describes pipelined commands as run in sub-shells, but that is not a requirement:
Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment.
Bash runs them in sub-shells, but some shells run the last command in the context of the main script, when only the preceding commands in the pipeline are run in sub-shells.
This should work in a Bourne shell that understands functions (and would work in Bash and other shells too):
test_path() {
echo $1 | tr ':' '\012' |
{
path=""
while read component
do
if [ -d "$component" ]
then
if [ -z "$path" ]
then path="$component"
else path="$path:$component"
fi
fi
done
echo "$path" # this prints nothing
}
}
The inner set of braces groups the commands into a unit, so path is only set in the subshell but is echoed from the same subshell.
Why is the final echo $path empty?
Until recently, Bash would give all components of a pipeline their own process, separate from the shell process in which the pipeline is run.
Separate process == separate address space, and no variable sharing.
In ksh93 and in recent Bash (may need a shopt setting), the shell will run the last component of a pipeline in the calling shell, so any variables changed inside the loop are preserved when the loop exits.
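In bash that setting is lastpipe, which only takes effect when job control is off (as in non-interactive scripts); a minimal sketch:
shopt -s lastpipe
x=outer
echo inner | while read x; do :; done
echo "$x"   # now prints 'inner'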
Another way to accomplish what you want is to make sure that the echo $path is in the same process as the loop, using parentheses:
#! /bin/sh
set -x # for debugging
test_path() {
path=""
echo $1 | tr ':' '\012' | ( while read component
do
[ -d "$component" ] || continue
path="${path:+$path:}$component"
done
echo "$path"
)
}
Note: I simplified the inner if. There was no else, so the test can be replaced with a shortcut. Also, the two path assignments can be combined into one using the ${var:+...} parameter substitution trick.
Your script works just fine with no change under Solaris 11, and probably also under most commercial Unices like AIX and HP-UX, because under these OSes the underlying implementation of /bin/sh is provided by ksh. This would also be the case if /bin/sh were backed by zsh.
It likely doesn't work for you because your /bin/sh is implemented by one of bash, dash, mksh, or busybox sh, which all process each component of a pipeline in a subshell, while ksh and zsh both keep the last element of a pipeline in the current shell, saving an unnecessary fork.
It is possible to "fix" your script for it to work when sh is provided by bash by adding this line somewhere before the pipeline:
shopt -s lastpipe
or better, if you want to keep portability:
command -v shopt > /dev/null && shopt -s lastpipe
This will keep the script working under ksh and zsh, but it still won't work under dash, mksh, or the original Bourne shell.
Note that both bash and ksh behaviors are allowed by the POSIX standard.

How to write a bash function to wrap another command?

I am trying to write a function wrapper for the mysql command
If .my.cnf exists in the pwd, I would like to automatically attach --defaults-file=.my.cnf to the command
Here's what I'm trying
function mysql {
if [ -e ".my.cnf" ]; then
/usr/local/bin/mysql --defaults-file=.my.cnf "$@"
else
/usr/local/bin/mysql "$#"
fi
}
The idea is, I want to be able to use the mysql command exactly as before, except that if the .my.cnf file is present, it gets attached as an argument.
Question: Will I run into any trouble with this method? Is there a better way to do it?
If I specify --defaults-file=foo.cnf manually, that should be used instead of .my.cnf.
Your function as written is perfectly fine. This is a touch DRYer:
function mysql {
if [ -e ".my.cnf" ]; then
set -- --defaults-file=.my.cnf "$@"
fi
/usr/local/bin/mysql "$#"
}
That set command puts your .my.cnf argument at the beginning of the command-line arguments.
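A quick illustration of what set -- does to the positional parameters (hypothetical arguments):
set -- -u root mydb                  # "$@" is now: -u root mydb
set -- --defaults-file=.my.cnf "$@"  # prepend, keeping the rest intact
echo "$@"                            # prints: --defaults-file=.my.cnf -u root mydb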
Only if the option is not already present:
function mysql {
if [[ -e ".my.cnf" && "$*" != *"--defaults-file"* ]]; then
set -- --defaults-file=.my.cnf "$@"
fi
/usr/local/bin/mysql "$#"
}
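With either version in your shell startup file, a plain mysql -u root mydb picks up ./.my.cnf automatically when the file is present, while an explicit mysql --defaults-file=foo.cnf mydb is passed through untouched.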

What's a Good Way to Commit All MySQL Settings Needed to Get a Django App Running?

I'm in the middle of making my first django app, and I'd like to commit it to git in such a way that someone can clone it down and start working on it with the least amount of trouble. One of the things I needed to do to get things up and running was to create a new db in my local mysql installation and create a new user there. I'd love to let someone clone things down and have that done automatically for them. Is there a good way to do this?
Use mysql-python and write a Python script to create the database, or adapt my shell script below to suit your needs.
#!/bin/bash
function pre_checks() {
if [[ "$1" -ne 3 ]]; then
echo "Usage: $0 [DATABASE NAME] [USERNAME] [HOST]"
return 1
fi
if ! command -v /usr/bin/mysql >/dev/null 2>&1; then
echo "Mysql is not installed."
return 1
fi
echo -n "Create the database '${2}' and the user '${3}' now? (y/n) "
read ANSWER
case "$ANSWER" in
"y"|"Y")
echo -n "Password for ${3}: "
read -s USER_PW
echo
return 0 ;;
"n"|"N"| *)
echo "Bye."
return 1 ;;
esac
}
function create_db() {
Q1="CREATE DATABASE IF NOT EXISTS ${1} CHARACTER SET utf8;"
Q2="GRANT ALL ON *.* TO '${2}'#'${3}' IDENTIFIED BY '$USER_PW';"
Q3="FLUSH PRIVILEGES;"
Q4="SHOW DATABASES;"
SQL="${Q1} ${Q2} ${Q3} ${Q4}"
echo "Query:"
echo "${SQL}"
echo -n "Run query now? (y/n) "
read ANSWER
case "$ANSWER" in
"y" | "Y" )
/usr/bin/mysql -uroot -p -e "$SQL" || echo "Failure."
;;
"n" | "N" | *)
echo "Bye."
return 1
;;
esac
}
pre_checks "$#" "$1" "$2" && create_db "$1" "$2" "$3"

How do I find files that do not end with a newline/linefeed?

How can I list normal text (.txt) filenames that don't end with a newline?
e.g.: list (output) this filename:
$ cat a.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf$
and don't list (output) this filename:
$ cat b.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf
$
Use pcregrep, a Perl Compatible Regular Expressions version of grep that supports a multiline mode via the -M flag, which can be used to match (or not match) a newline at the end of the last line:
pcregrep -LMr '\n\Z' .
In the above example we search recursively (-r) in the current directory (.), listing files that do not match (-L) our multiline (-M) regex, which looks for a newline at the end of the file ('\n\Z').
Changing -L to -l would instead list the files that do end with a newline.
pcregrep can be installed on macOS with the Homebrew pcre package: brew install pcre
OK, it's my turn; I'll give it a try:
find . -type f -print0 | xargs -0 -L1 bash -c 'test "$(tail -c 1 "$0")" && echo "No new line at end of $0"'
If you have ripgrep installed:
rg -l '[^\n]\z'
That regular expression matches any character which is not a newline, and then the end of the file.
Give this a try:
find . -type f -exec sh -c '[ -z "$(sed -n "\$p" "$1")" ]' _ {} \; -print
It will print filenames of files that end with a blank line. To print files that don't end in a blank line change the -z to -n.
If you are using 'ack' (http://beyondgrep.com) as an alternative to grep, you can just run this:
ack -v '\n$'
It searches for all lines that don't match (-v) a newline at the end of the line.
The best oneliner I could come up with is this:
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
This uses git grep because, in my use case, I want to ensure files committed to a git branch have trailing newlines.
If this is required outside of a git repo, you can of course just use grep instead:
grep -RIl '' . | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
Why do I use grep? Because you can easily filter out binary files with -I.
Then comes the usual xargs/tail combination found in other answers, with the addition that it exits with 1 if a file has no trailing newline, so it can be used in a pre-commit git hook or in CI.
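For example, a minimal .git/hooks/pre-commit sketch built on the same one-liner (a hypothetical hook; remember to make it executable):
#!/bin/bash
# Reject the commit if any staged text file lacks a trailing newline.
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'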
This should do the trick:
#!/bin/bash
for file in `find $1 -type f -name "*.txt"`;
do
nlines=`tail -n 1 $file | grep '^$' | wc -l`
if [ $nlines -eq 1 ]
then echo $file
fi
done;
Call it this way: ./script dir
E.g. ./script /home/user/Documents/ -> lists all text files in /home/user/Documents ending with \n.
This is kludgy; someone surely can do better:
for f in `find . -name '*.txt' -type f`; do
if test `tail -c 1 "$f" | od -c | head -n 1 | tail -c 3` != \\n; then
echo $f;
fi
done
N.B. this answers the question in the title, which is different from the question in the body (which is looking for files that end with \n\n I think).
Most solutions on this page do not work for me (FreeBSD 10.3 amd64). Ian Will's OSX solution does almost always work, but is pretty difficult to follow :-(
There is an easy solution that almost always works too (if $f is the file):
sed -i '' -e '$a\' "$f"
There is a major problem with the sed solution: it never gives you the opportunity to just check (and not append a newline).
Both of the above solutions also fail for DOS files. I think the most portable/scriptable solution is probably the easiest one, which I developed myself :-)
Here is that elementary sh script, which combines file/unix2dos/tail. In production, you will likely need to quote "$f" and fetch the tail output (stored in the shell variable named last) as \"$f\"
if file $f | grep 'ASCII text' > /dev/null; then
if file $f | grep 'CRLF' > /dev/null; then
type unix2dos > /dev/null || exit 1
dos2unix $f
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
unix2dos $f
else
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
fi
fi
Hope this helps someone.
This example
Works on macOS (BSD) and GNU/Linux
Uses standard tools: find, grep, sh, file, tail, od, tr
Supports paths with spaces
Oneliner:
find . -type f -exec sh -c 'file -b "{}" | grep -q text' \; -exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; -print
More readable version
Find under current directory
Regular files
That 'file' (brief mode) considers text
Whose last byte (tail -c 1) is not represented by od's named character "nl"
And print their paths
#!/bin/sh
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print
Finally, a version with a -f flag to fix the offending files (requires bash).
#!/bin/bash
# Finds files without final newlines
# Pass "-f" to also fix those files
fix_flag="$([ "$1" == "-f" ] && echo -true || echo -false)"
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print \
$fix_flag \
-exec sh -c 'echo >> "{}"' \;
Another option:
$ find . -name "*.txt" -print0 | xargs -0I {} bash -c '[ -z "$(tail -n 1 {})" ] && echo {}'
Since your question has the perl tag, I'll post an answer which uses it:
find . -type f -name '*.txt' -exec perl check.pl {} +
where check.pl is the following:
#!/bin/perl
use strict;
use warnings;
foreach (@ARGV) {
open(FILE, $_);
seek(FILE, -2, 2);
my $c;
read(FILE,$c,1);
if ( $c ne "\n" ) {
print "$_\n";
}
close(FILE);
}
This Perl script just opens, one at a time, the files passed as parameters and reads only the next-to-last character; if it is not a newline character, it prints out the filename, otherwise it does nothing.
This example works for me on OSX (many of the above solutions did not)
for file in `find . -name "*.java"`
do
result=`od -An -tc -j $(( $(ls -l $file | awk '{print $5}') - 1 )) $file`
last_char=`echo $result | sed 's/ *//'`
if [ "$last_char" != "\n" ]
then
#echo "Last char is .$last_char."
echo $file
fi
done
Here is another example using a few shell built-in commands, which:
allows you to filter by extension (e.g. | grep '\.md$' filters only the md files)
lets you pipe more grep commands to extend the filter (like an exclusion | grep -v '\.git' to exclude the files under .git)
lets you use the full power of grep parameters for more filters or inclusions
The code basically iterates (for) over all the files matching your chosen criteria (grep), and if the last character of a file ("$(tail -c -1 "$file")") is not a newline, it prints the file name (echo "$file").
The verbose code:
for file in $(find . | grep '\.md$')
do
if [ -n "$(tail -c -1 "$file")" ]
then
echo "$file"
fi
done
A bit more compact:
for file in $(find . | grep '\.md$')
do
[ -n "$(tail -c -1 "$file")" ] && echo "$file"
done
and, of course, the 1-liner for it:
for file in $(find . | grep '\.md$'); do [ -n "$(tail -c -1 "$file")" ] && echo "$file"; done