Sun Grid Engine - print task I am on in my array job

I've been using http://wiki.gridengine.info/wiki/index.php/Simple-Job-Array-Howto as my reference.
How do I print the number of the task I am currently on? So far I have this:
#!/bin/sh
# Grid Engine options (lines prefixed with #$)
#$ -t 1-2
#$ -N test$SGE_TASK_ID.txt
#$ -cwd
#$ -l h_rt=24:00:00
#$ -l h_vmem=10G
# Initialise the environment modules
. /etc/profile.d/modules.sh
echo $SGE_TASK_ID
This gives me nothing.
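A hedged note on a likely cause: Grid Engine does not expand environment variables on `#$` directive lines, so `-N test$SGE_TASK_ID.txt` is taken literally, and the `echo` output goes to the job's `.o` output file rather than your terminal. A minimal sketch under those assumptions (the fallback assignment is only there so the script also runs outside SGE):

```shell
#!/bin/sh
# Assumption: "#$" directives are parsed by qsub itself and do NOT expand
# environment variables, so keep the job name fixed and use $SGE_TASK_ID
# only in the script body.
#$ -t 1-2
#$ -N test
#$ -cwd
SGE_TASK_ID=${SGE_TASK_ID:-1}   # fallback so the sketch also runs outside SGE
# Each array task writes its own file; inside SGE, plain "echo" would land
# in the job's .o<jobid>.<taskid> file instead of the terminal.
echo "Task ${SGE_TASK_ID}" > "test${SGE_TASK_ID}.txt"
```

After the job runs, each task should leave a `test<N>.txt` behind in the working directory.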

clang++ modules: Function exported using a C++20 module is not visible; cannot build an executable that uses the module

I am exporting a function in the following C++20 module. The main program can import the module, but it cannot see the exported function:
f1.hpp
export module f1_module;
export void f1() {
}
f1_demo.cpp
import f1_module;
int main() {
f1();
return 0;
}
The build script is:
#!/bin/bash
mkdir -p ./target
FLAGS="-std=c++20 -stdlib=libc++ -fmodules -fbuiltin-module-map"
clang++ $FLAGS \
-fprebuilt-module-path=./target \
-Xclang -emit-module-interface \
-c \
f1.hpp \
-o ./target/f1.module.o
clang++ $FLAGS \
-fprebuilt-module-path=./target \
-fmodule-file=f1_module=./target/f1.module.o \
f1_demo.cpp \
-o ./target/f1_demo.o
Running bash build.bash gives this error:
ld: error: undefined symbol: f1()
>>> referenced by f1_demo.cpp
>>> /tmp/f1_demo-286314.o:(main)
clang-14: error: linker command failed with exit code 1 (use -v to see invocation)
The exported function f1() is not visible in the main program. How to make it work?
I can provide the -v output if needed.
I am running the above build script in the conanio/clang14 Docker container:
docker run -it --rm -v $(pwd):/sosi conanio/clang14-ubuntu16.04:latest bash
Update: Cannot link
I tried @DavisHerring's suggestion of adding -c to the second command; however, that does not produce an executable. It leaves me with a regular object file and a compiled module file (the outcome of -c is not an executable), but I need an executable, and I cannot link the two together with clang++:
clang++ $FLAGS \
-Xclang -emit-module-interface \
-c \
f1_module.cpp \
-o ./target/f1_module.o
clang++ $FLAGS \
-fmodule-file=f1_module=./target/f1_module.o \
-c \
f1_demo.cpp \
-o ./target/f1_demo.o
clang \
-v \
./target/f1_module.o \
./target/f1_demo.o \
-o ./target/exe.o
output:
"/usr/local/bin/ld" -z relro --hash-style=gnu --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o ./target/exe.e /usr/lib/x86_64-linux-gnu/crt1.o /usr/lib/x86_64-linux-gnu/crti.o /usr/local/lib/clang/14.0.0/lib/linux/clang_rt.crtbegin-x86_64.o -L/usr/local/bin/../lib/gcc/x86_64-linux-gnu/10.3.0 -L/usr/local/bin/../lib/gcc/x86_64-linux-gnu/10.3.0/../../../../lib64 -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/local/bin/../lib -L/usr/local/bin/../lib64 -L/lib -L/usr/lib ./target/f1_module.module ./target/f1_demo.o /usr/local/lib/clang/14.0.0/lib/linux/libclang_rt.builtins-x86_64.a --as-needed -l:libllvm-unwind.so --no-as-needed -lc /usr/local/lib/clang/14.0.0/lib/linux/libclang_rt.builtins-x86_64.a --as-needed -l:libllvm-unwind.so --no-as-needed /usr/local/lib/clang/14.0.0/lib/linux/clang_rt.crtend-x86_64.o /usr/lib/x86_64-linux-gnu/crtn.o
followed by the error:
ld: error: ./target/f1_module.o:1427: unclosed quote
clang-14: error: linker command failed with exit code 1 (use -v to see invocation)
Note: I am running this on a MacBook M1 (arm64) with macOS Monterey 12.4, running clang++ via Docker 4.9.1 (Engine: 20.10.16).
You want to compile the module twice: once to emit a module interface file (which should not have the .o suffix, by the way; the customary suffix is .pcm, for "precompiled module"), and once to emit an object file (with the customary .o suffix).
clang++ -std=c++20 -c f1_module.cpp -o target/f1_module.o # plain old object
clang++ -std=c++20 -Xclang -emit-module-interface \
-c f1_module.cpp -o target/f1_module.pcm # module interface
Now your module (consisting of two files) is ready, and you can use it.
You need to compile the main file against the .pcm file and link the resulting objects against the .o file to produce an executable.
clang++ -std=c++20 -fprebuilt-module-path=./target \
-c f1_demo.cpp -o target/f1_demo.o
clang++ -std=c++20 target/f1_demo.o target/f1_module.o -o target/f1.exe
This is not strictly necessary. With clang, the precompiled module can be used as an object file. The linker, however, won't recognize it, so clang will need to convert the .pcm to a .o and feed the temporary .o to the linker each time you link. This is somewhat of a waste, so you may want to eliminate the conversion step by building a separate .o file as above. If you choose not to, you still need to mention the module file on the link line. The simplified process looks like this:
clang++ -std=c++20 -Xclang -emit-module-interface \
-c f1_module.cpp -o target/f1_module.pcm # compile module once
clang++ -std=c++20 -fprebuilt-module-path=./target \
-c f1_demo.cpp -o target/f1_demo.o # compile main just as before
clang++ -std=c++20 target/f1_demo.o target/f1_module.pcm \
-o target/f1.exe # mention .pcm explicitly
As far as I know (and I don't know very much), there is currently no way to have clang find out automatically which modules to link against.

Singularity container - deepvariant binding directories to $PATH?

I am trying to use a deepvariant singularity container on my HPC.
However, there is something wrong with the way I am using the container and where it is binding.
This is my code:
#!/bin/bash --login
#SBATCH -J AmyHouseman_deepvariant
#SBATCH -o %x.stdout.%J.%N
#SBATCH -e %x.stderr.%J.%N
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH -p c_compute_wgp
#SBATCH --account=scw1581
#SBATCH --mail-type=ALL # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=HousemanA@cardiff.ac.uk # Where to send mail
#SBATCH --array=1-33
#SBATCH --time=02:00:00
#SBATCH --time=072:00:00
#SBATCH --mem-per-cpu=32GB
module purge
module load singularity
module load parallel
# Set bash error trapping to exit on first error.
set -eu
WDPATH=/scratch/$USER/$SLURM_ARRAY_JOB_ID
CONTAINER_FILE=deepvariant_1.3.0.sif
MY_CONTAINER_PATH=/scratch/$USER/containers
CONTAINER=$MY_CONTAINER_PATH/$CONTAINER_FILE
if [ "$SLURM_ARRAY_TASK_ID" == "1" ]
then
mkdir -p $MY_CONTAINER_PATH
[ -f "$CONTAINER" ] || ssh cl1 wget -O $CONTAINER https://wotan.cardiff.ac.uk/containers/$CONTAINER_FILE
mkdir $WDPATH
fi
while [ ! -d $WDPATH ]
do
sleep 10
done
cd /scratch/c.c21087028/
sed -n "${SLURM_ARRAY_TASK_ID}p" Polyposis_Exome_Analysis/fastp/All_fastp_input/List_of_33_exome_IDs | parallel -j 1 "singularity run $CONTAINER --model_type=WES \
-ref=Polyposis_Exome_Analysis/bwa/index/HumanRefSeq/GRCh38_latest_genomic.fna \
--reads=Polyposis_Exome_Analysis/samtools/index/indexed_picardbamfiles/{}PE_markedduplicates.bam \
--output_vcf=Polyposis_Exome_Analysis/deepvariant/vcf/{}PE_output.vcf.gz \
--output_gvcf=Polyposis_Exome_Analysis/deepvariant/gvcf/{}PE_output.vcf.gz \
--intermediate_results_dir=Polyposis_Exome_Analysis/deepvariant/intermediateresults/{}PE_output_intermediate"
The error message I get is:
set: invalid option: "--"
FATAL: "--model_type": executable file not found in $PATH
I find it a bit strange, because I haven't had a problem with any of the other Singularity containers I've used before, so I'm not really sure how to go ahead. I've tried adding in the bit from their manual, singularity run -B /usr/lib/locale/:/usr/lib/locale/,
but I'm still unsure why I would need this step when I haven't needed it previously with my other tools like bwa and samtools.
I also know I can get rid of the parallel bits, as it's not actually running in parallel, so I am aware of that.
I hope this makes sense!
Thank you!
Amy

Making libtool use another file/directory pattern

How can I make libtool not use (in compile mode) the .libs folder for generated PIC object files, but use another one instead, or even change the name of the PIC file? For example: object-file.lo, object-file.o, and object-file-pic.o, all three in one folder.
Standard procedure:
$ libtool --mode=compile gcc -O -o call.o -c called.c
libtool: compile: gcc -O -c called.c -fPIC -DPIC -o .libs/call.o
libtool: compile: gcc -O -c called.c -o call.o >/dev/null 2>&1

Serial section within SGE script

Is it possible to force a 'serial' section within an SGE script?
#$ -S /bin/bash
#$ -N example
#$ -v MPI_HOME
#$ -q all.q
#$ -pe ompi 40
#$ -j yes
#$ -o example.log
$MPI_HOME/bin/mpirun example.exe
# now do some serial commands
grep 'success' example.log
mv example.out /archive
Currently, I split these types of job into two scripts, and make one dependent on the other. It would be much simpler to maintain and schedule if I could keep everything in one script.
You can do this, but the job will hold on to all of its slots while the serial section runs. Because the serial code is invoked directly in the job script rather than via mpirun, it will only run once, on the head node of the job. For quick stuff like your example it doesn't matter much, but if you have a long-running serial section, it is a more efficient use of resources to split the work into two jobs, as you are doing.
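For reference, the two-job split can be wired together with an SGE job dependency via -hold_jid; a minimal sketch, where parallel_part.sh and serial_part.sh are hypothetical script names:

```shell
# Sketch: submit the MPI job, then a serial job that is held until the
# first one finishes, so the 40 slots are freed before the serial work runs.
# "parallel_part.sh" and "serial_part.sh" are hypothetical script names.
submit_both() {
    qsub -N example_mpi -pe ompi 40 parallel_part.sh
    qsub -N example_post -hold_jid example_mpi serial_part.sh
}
# On the cluster, call: submit_both
```

This keeps the two halves in separate scripts but makes the dependency explicit at submission time instead of managing it by hand.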

BATCH: grep equivalent

I need some help with the equivalent of grep -v Wildcard and grep -o in a batch file.
This is my code in shell:
result=`mysqlshow --user=$dbUser --password=$dbPass sample | grep -v Wildcard | grep -o sample`
The batch equivalent of grep (not including third-party tools like GnuWin32 grep) is findstr.
grep -v finds lines that don't match the pattern. The findstr version of this is findstr /V
grep -o shows only the part of the line that matches the pattern. Unfortunately, there's no equivalent of this, but you can run the command and then have a check along the lines of
if %errorlevel% equ 0 echo sample
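Putting those pieces together, a hedged sketch of the shell pipeline translated to batch (it assumes mysqlshow is on PATH and that %dbUser% and %dbPass% are already set):

```batch
@echo off
rem Sketch of the shell pipeline in batch. findstr /V "Wildcard" drops lines
rem containing Wildcard; since findstr has no grep -o equivalent, we just
rem test whether "sample" survives the filter and set the variable from the
rem errorlevel instead of capturing the matched text.
mysqlshow --user=%dbUser% --password=%dbPass% sample | findstr /V "Wildcard" | findstr "sample" >nul
if %errorlevel% equ 0 (set "result=sample") else (set "result=")
echo %result%
```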