I realized that as I run makefiles from my main makefile, if the child makefiles fail, the parent continues and does not return with an error exit code.
I've tried to add the exception handling...but it does not work. Any ideas?
MAKE_FILES := $(wildcard test_*.mak)
compile_tests:
    @echo "Compiling tests.$(MAKE_FILES)."
    @for m in $(MAKE_FILES); do\
        $(MAKE) -f "$$m"; || $(error Failed to compile $$m)\
    done
You cannot use make functions like $(error ...) in your recipe, because all make variables and functions are expanded first, before the shell is invoked. So the $(error ...) fires as soon as make expands the recipe, before any of its commands even start.
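A minimal sketch of this expansion-order behavior (the target name is hypothetical):

demo:
    @echo "you never see this"; $(error fires during expansion, before the shell starts)

Running make demo aborts with the error immediately; the echo is never executed.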
You have to use shell constructs to fail, not make constructs; something like:
compile_tests:
    @echo "Compiling tests.$(MAKE_FILES)."
    @for m in $(MAKE_FILES); do \
        $(MAKE) -f "$$m" && continue; \
        echo Failed to compile $$m; \
        exit 1; \
    done
However, even this is not really great, because the loop stops at the first failure even if you run make with -k. Better is to take advantage of what make does well, which is running lots of things:
compile_tests: $(addprefix tests.,$(MAKE_FILES))

$(addprefix tests.,$(MAKE_FILES)): tests.%:
    $(MAKE) -f "$*"
One note: if you enable -j these will all run in parallel. Not sure if that's OK with you or not.
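If parallel runs are not OK, one option (a sketch) is the .NOTPARALLEL special target, which forces make to run this makefile's targets serially even when -j is given:

# disable parallel execution for every target in this makefile
.NOTPARALLEL: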
I'm a CS student who just learned basic MIPS for class (Patterson & Hennessy + spim), and I'm attempting to find a MIPS debugging solution that allows arbitrary instruction execution during the debugging process.
Attempt with gdb (so you know why not to suggest this)
The recommended MIPS cross-compilation toolchain is qemu and gdb; see the mips.com docs and related Q/A.
gdb's compile code command does not support mips-linux-gnu-gcc as far as I can tell; see the gdb docs ("Relocating the object file") and related Q/A. When attempting to use compile code with mips-linux-gnu-gcc, I get malloc, mmap, and invalid-memory errors (something appears to go wrong with the ad-hoc linking gdb performs), even after filtering out the hard-coded compilation arguments that mips-linux-gnu-gcc doesn't recognize.
Actual question
lldb has a similar command called expression, see lldb docs, and I'm interested in using lldb in conjunction with qemu. The expression command also relies on clang as opposed to gcc, but cross compilation in clang is relatively simple (clang -target mips-linux-gnu "just works"). The only issue is that qemu-mips -g launches gdbserver, and I can find no option for launching lldb-server.
I have read lldb docs on remote debugging, and there is an option to select remote-gdb-server as the platform. I can't find much in the way of documentation for remote-gdb-server, but the name seems to imply that lldb can be compatible with gdbserver.
Here is my attempt to make this work:
qemu-mips -g 1234 test
lldb test
(lldb) platform select remote-gdb-server
Platform: remote-gdb-server
Connected: no
(lldb) platform connect connect://localhost:1234
Platform: remote-gdb-server
Hostname: (null)
Connected: yes
(lldb) b main
Breakpoint 1: where = test`main + 16 at test.c:4, address = 0x00400530
(lldb) c
error: invalid process
Is there a way to either
1. use lldb with gdbserver, or
2. launch lldb-server from qemu-mips instead of gdbserver,
so that I can execute instructions while debugging MIPS code?
Note: I understand that I could instead use qemu system emulation to be able to just run lldb-server on the remote. I have tried to virtualize Debian MIPS, using this guide, but the net installer won't detect my network card. Based on numerous SO Q/A and online forums, it looks like solving this problem is hard. So for now I am trying to avoid whole-system emulation.
YES
Use LLDB with QEMU
LLDB supports the GDB remote protocol that QEMU uses, so you can do the same thing as in the previous section, with a few command changes, since some LLDB commands differ from their GDB counterparts.
You can run QEMU to listen for a "GDB connection" before it starts executing any code to debug it.
qemu -s -S <harddrive.img>
...will set up QEMU to listen on port 1234 and wait for a GDB connection to it. Then, from a remote or local shell:
lldb kernel.elf
(lldb) target create "kernel.elf"
Current executable set to '/home/user/osdev/kernel.elf' (x86_64).
(lldb) gdb-remote localhost:1234
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
frame #0: 0x000000000000fff0
-> 0xfff0: addb %al, (%rax)
0xfff2: addb %al, (%rax)
0xfff4: addb %al, (%rax)
0xfff6: addb %al, (%rax)
(Replace localhost with remote IP / URL if necessary.) Then start execution:
(lldb) c
Process 1 resuming
To set a breakpoint:
(lldb) breakpoint set --name kmain
Breakpoint 1: where = kernel.elf`kmain, address = 0xffffffff802025d0
For your situation (user-mode qemu-mips takes -g <port> to start its gdb stub; -s/-S are the system-emulation spellings):

qemu-mips -g 1234 test

lldb test
(lldb) gdb-remote localhost:1234
Here is mine, for reference:
############################################# gdb #############################################
QEMU_GDB_OPT := -S -gdb tcp::10001,ipv4
# Debug configuration: -S -gdb tcp::10001,ipv4
qemudbg:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
    $(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
    $(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
    $(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
# Connect to the gdb server: target remote localhost:10001
gdb:
    $(GDB) $(KERNEL_ELF)
############################################# lldb #############################################
QEMU_LLDB_OPT := -s -S
LLDB := lldb
qemulldb:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
    $(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
    $(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
    $(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
lldb:
    $(LLDB) $(KERNEL_ELF)
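With a Makefile like this, a typical session (a sketch; variables such as $(QEMU), $(QEMU_ARGUMENT), and $(KERNEL_ELF) are assumed to be defined elsewhere in the build) looks like:

# terminal 1: start QEMU halted and listening for a debugger
# (-s is shorthand for -gdb tcp::1234, -S freezes the CPU at startup)
make qemulldb

# terminal 2: attach LLDB
make lldb
(lldb) gdb-remote localhost:1234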
I am using a .do file that is invoked both from the GUI and from a .tcl script on the command line (vsim -c) for simulation in ModelSim 10.3c:
exec vsim -c -do DoFile.do
What I need is:
If an error happens, ModelSim should quit and return to the .tcl. Otherwise it should simulate the project.
If I put the line onerror { quit -f } in my .do file, the GUI also quits at the first error, which is not convenient.
I didn't manage to use onerror ("warning: onerror command for use within macro") or $error ("unknown variable") inside the .tcl.
I would need further information about your DO and Tcl scripts, but you can use catch around vcom (for compiling VHDL) or vlog (for compiling Verilog/SystemVerilog).
Here is a short example of how to use it:
# set a variable to collect compile errors
set comperror ""
# compile files, accumulating any error messages
if {[catch {vcom -quiet -93 -work work name.vhd} msg]} { append comperror $msg \n }
if {[catch {vcom -quiet -93 -work work name2.vhd} msg]} { append comperror $msg \n }
# ... and further files ...
if {$comperror ne ""} {
    # quit modelsim or do anything else
} else {
    # do simulation or execute further commands
}
You can compile ALL the files, and if an error occurs you can quit. If compilation succeeds you can run your simulation.
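A hedged sketch of what the two branches can look like (quit -code availability depends on your ModelSim version; the design unit name is hypothetical):

if {$comperror ne ""} {
    puts stderr "Compilation failed:\n$comperror"
    quit -force -code 1    ;# non-zero exit status is visible to the caller
} else {
    vsim work.testbench    ;# hypothetical design unit
    run -all
}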
I have found a workaround. I create a flag file from the .tcl and put the following lines into the .do scripts:
if [file exists ../Command_Line_Enable.txt] {
    onerror { quit -f }
}
So, if that file is not generated the GUI will not exit.
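The command-line .tcl just has to create the flag file before launching vsim and clean it up afterwards (a sketch; the file name comes from the snippet above):

# create the flag file so the .do script arms onerror { quit -f }
close [open ../Command_Line_Enable.txt w]
exec vsim -c -do DoFile.do
# remove it again so GUI runs keep the default behaviour
file delete ../Command_Line_Enable.txt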
I'm trying to compile zlib from the command line, and I'm getting this message when using -Wall -Wextra -Wconversion (full cross-compile script is below):
Compiler error reporting is too harsh for ./configure (perhaps remove
-Werror).
Here's the configure test that's generating the line:
cat > $test.c << EOF
int foo() { return 0; }
EOF
echo "Checking for obsessive-compulsive compiler options..." >> configure.log
if try $CC -c $CFLAGS $test.c; then
    :
else
    echo "Compiler error reporting is too harsh for $0 (perhaps remove -Werror)." | tee -a configure.log
    leave 1
fi
It's not clear to me what exactly is being judged too harsh (especially since -Werror is not present). I also don't quite understand what the sample program used in the test is doing, so it's not clear to me what the criterion is for judging the compiler warnings "too harsh".
What is zlib complaining is too harsh?
#! /bin/sh
export PATH="/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:$PATH"
export CC=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang
export CXX=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++
export LD=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld
export AR=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ar
export RANLIB=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib
export CFLAGS="-Wall -Wextra -Wconversion --sysroot="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdks""
export CXXFLAGS="-Wall -Wextra -Wconversion --sysroot="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk""
I had the exact same problem on a newly built machine, and I found the cause was that I didn't actually have the appropriate GNU C compilers installed (reference). Therefore it's complaining that the compiler is too harsh simply because there is no compiler.
Try running:
sudo apt-get install build-essential
and then try running your ./configure again.
My problem was:
cc1: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
Search for details in configure.log.
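If configure.log doesn't make the cause obvious, you can re-run the same probe by hand (a sketch, using whatever CC and CFLAGS your environment exports):

printf 'int foo() { return 0; }\n' > ztest.c
$CC -c $CFLAGS ztest.c
echo "exit status: $?"   # non-zero here is what configure calls "too harsh"
rm -f ztest.c ztest.o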
Mine was failing because it tried to use cc (non-existent) instead of gcc.
This is an old question, but I just had this problem compiling zlib 1.2.11, and to get past it I needed to run configure under sudo:
sudo ./configure --prefix=path
I have two versions of a function in an application, one implemented in CUDA and the other in standard C. They're in separate files, let's say cudafunc.h and func.h (the implementations are in cudafunc.cu and func.c). I'd like to offer two options when compiling the application. If the person has nvcc installed, it'll compile the cudafunc.h. Otherwise, it'll compile func.h.
Is there any way to check in the makefile whether a machine has nvcc installed, and adjust the compiler accordingly?
Thanks a bunch,
You could try a conditional, like
ifeq ($(shell which nvcc),) # No 'nvcc' found
func.o: func.c func.h
HEADERS += func.h
else
func.o: cudafunc.cu cudafunc.h
    nvcc -o $@ -c $< . . .
CFLAGS += -DUSE_CUDA_FUNC
HEADERS += cudafunc.h
endif
And then in the code that will call this function, it can test #if USE_CUDA_FUNC to decide which header to include and which interface to call.
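On the C side, that test might look like this (a sketch; only the header names come from the question):

#if USE_CUDA_FUNC
#include "cudafunc.h"   /* CUDA implementation */
#else
#include "func.h"       /* plain C implementation */
#endif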
This should work, included in your Makefile:
NVCC_RESULT := $(shell which nvcc 2> /dev/null)
NVCC_TEST := $(notdir $(NVCC_RESULT))
ifeq ($(NVCC_TEST),nvcc)
CC := nvcc
else
CC := g++
endif
test:
    @echo $(CC)
For GNU make, the recipe line(s) (after test:) actually start with tab characters.
With the above preamble, you can use conditionals based on the CC variable to control the remainder of your Makefile.
You would change the test: target to whatever target you want to be built conditionally.
As a test, just run the above with make. You should get output of nvcc or g++ based on whatever was detected.
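From there, a sketch of using the detected compiler to pick the right source file (file names from the question; -DUSE_CUDA_FUNC mirrors the other answer and is otherwise an assumption):

ifeq ($(CC),nvcc)
FUNC_SRC := cudafunc.cu
CFLAGS += -DUSE_CUDA_FUNC
else
FUNC_SRC := func.c
endif

func.o: $(FUNC_SRC)
    $(CC) $(CFLAGS) -c -o $@ $<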
I am using Tcl to control a traffic generator. When the traffic has been received, I want to use the tshark command to convert the .pcap file to a .txt file, and then I can do some other work.
But when the exec runs in the program, the following info is printed:
while executing
"exec tshark -Vxr /var/tmp/PCRF/create_req.pcap"
("eval" body line 1)
invoked from within
"eval exec {tshark -Vxr /var/tmp/PCRF/create_req.pcap}"
(file "./tcp_test.tcl" line 7)
The following is the TCL script:
# Radius accounting request start packets
# Version 1.0
# Date 2014/4/16 16:38
puts "\n Begin to decode the capture file\n"
#source /var/tmp/PCRF/convert_pcap.tcl
eval exec {tshark -Vxr /var/tmp/PCRF/create_req.pcap}
puts "\n end of the file decode and the result is rrr\n"
Tcl's exec can throw an error for two reasons. It does so if either:
The subprocess returns a non-zero exit code.
The subprocess writes to standard error.
The first one can be annoying, but it is genuinely how programs are supposed to indicate real errors. It can be a problem if the program also uses it in other ways (e.g., to say that nothing was found, like grep does). However, in this case you get the error message child process exited abnormally, so it is at least easy to figure out what happened.
More problematic is when something is written to standard error; the message written there is used as the error message after newline stripping. Even a bare newline will trigger a failure, and that failure overwrites any chance of getting the real output from the program (and in the case of a bare newline, is highly mysterious). Here's a possible reproduction of your problem:
% exec /bin/sh -c {echo >&2}
% set errorInfo
while executing
"exec /bin/sh -c {echo >&2}"
How to fix this? Well, you can try the -ignorestderr option (if supported by your version of Tcl, which you don't mention):
exec -ignorestderr tshark -Vxr /var/tmp/PCRF/create_req.pcap
Or you can try merging the standard output and standard error channels with 2>@1, so that the error messages are part of the overall output stream (and picked up as a normal result):
exec tshark -Vxr /var/tmp/PCRF/create_req.pcap 2>@1
Or you could use a merge-pipe (needed for older Tcl versions):
exec tshark -Vxr /var/tmp/PCRF/create_req.pcap |& cat
Or you can even direct the subprocess's standard error to the main standard error: the user (or exterior logger) will see the “error” output, but Tcl won't (and the exec won't fail):
exec tshark -Vxr /var/tmp/PCRF/create_req.pcap 2>@stderr
More elaborate things are possible with Tcl 8.6, which can make OS pipes with chan pipe, and you can (in all Tcl versions) run subprocesses as pipelines with open |… but these are probably overkill.
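For completeness, a sketch of the open |… form, which lets you read the output a line at a time (path from the question):

# run tshark as a pipeline; 2>@1 merges its stderr into the readable stream
set chan [open "|tshark -Vxr /var/tmp/PCRF/create_req.pcap 2>@1" r]
while {[gets $chan line] >= 0} {
    puts $line          ;# or parse each decoded line here
}
close $chan             ;# raises an error if tshark exited non-zero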
You don't need eval exec {stuff…}; plain old exec stuff… is precisely equivalent, shorter to write and a bit easier to get correct.
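And if you want the script to keep going and inspect failures itself, wrap the call in catch (a sketch; on a non-zero exit, ::errorCode is set to CHILDSTATUS pid code):

if {[catch {exec tshark -Vxr /var/tmp/PCRF/create_req.pcap} output]} {
    # on failure, $output holds the error message / stderr text
    puts "tshark failed: $output ($::errorCode)"
} else {
    # on success, $output holds the decoded packets
    puts $output
}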