I'm a CS student who just learned basic MIPS for class (Patterson & Hennessy + spim), and I'm trying to find a MIPS debugging setup that lets me execute arbitrary instructions while debugging.
Attempt with gdb (so you know why not to suggest this)
The recommended MIPS cross-compilation toolchain is QEMU plus GDB; see the mips.com docs and related Q&A.
GDB's compile code command does not appear to support mips-linux-gnu-gcc; see the GDB docs ("Relocating the object file") and related Q&A. When I attempt to use compile code with mips-linux-gnu-gcc, I get malloc, mmap, and invalid-memory errors (something appears to go wrong in the ad-hoc linking GDB performs), even after filtering out the hard-coded compilation arguments that mips-linux-gnu-gcc doesn't recognize.
Actual question
LLDB has a similar command called expression (see the LLDB docs), and I'm interested in using LLDB in conjunction with QEMU. The expression command relies on clang rather than gcc, but cross-compilation with clang is relatively simple (clang -target mips-linux-gnu "just works"). The only issue is that qemu-mips -g starts QEMU's built-in GDB server, and I can find no option for launching lldb-server instead.
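As a quick sanity check that the clang side really is that simple, something like the following should produce a MIPS object on the host (a sketch; it assumes the mips-linux-gnu headers/sysroot are installed, and the file names are illustrative):
clang -target mips-linux-gnu -c test.c -o test_mips.o
file test_mips.o   # should report an ELF 32-bit MSB relocatable, MIPS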
I have read lldb docs on remote debugging, and there is an option to select remote-gdb-server as the platform. I can't find much in the way of documentation for remote-gdb-server, but the name seems to imply that lldb can be compatible with gdbserver.
Here is my attempt to make this work:
qemu-mips -g 1234 test
lldb test
(lldb) platform select remote-gdb-server
Platform: remote-gdb-server
Connected: no
(lldb) platform connect connect://localhost:1234
Platform: remote-gdb-server
Hostname: (null)
Connected: yes
(lldb) b main
Breakpoint 1: where = test`main + 16 at test.c:4, address = 0x00400530
(lldb) c
error: invalid process
Is there a way to either
use lldb with gdbserver, or to
launch lldb-server from qemu-mips as opposed to gdbserver
so that I can execute instructions while debugging mips code?
Note: I understand that I could instead use qemu system emulation to be able to just run lldb-server on the remote. I have tried to virtualize debian mips, using this guide, but the netinstaller won't detect my network card. Based on numerous SO Q/A and online forums, it looks like solving this problem is hard. So for now I am trying to avoid whole system emulation.
Yes
Use LLDB with QEMU
LLDB supports the GDB remote protocol that QEMU uses, so you can do the same thing you would do with GDB, with some command modifications, since several LLDB commands differ from their GDB equivalents.
You can have QEMU listen for a GDB connection and wait before it starts executing any code, so you can debug it from the very first instruction.
qemu -s -S <harddrive.img>
...will set up QEMU to listen on port 1234 and wait for a GDB connection to it. Then, from a remote or local shell:
lldb kernel.elf
(lldb) target create "kernel.elf"
Current executable set to '/home/user/osdev/kernel.elf' (x86_64).
(lldb) gdb-remote localhost:1234
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
frame #0: 0x000000000000fff0
-> 0xfff0: addb %al, (%rax)
0xfff2: addb %al, (%rax)
0xfff4: addb %al, (%rax)
0xfff6: addb %al, (%rax)
(Replace localhost with the remote IP address or hostname if necessary.) Then start execution:
(lldb) c
Process 1 resuming
To set a breakpoint:
(lldb) breakpoint set --name kmain
Breakpoint 1: where = kernel.elf`kmain, address = 0xffffffff802025d0
For your situation (note that user-mode qemu-mips takes -g PORT to wait for a debugger, rather than the system emulator's -s -S shortcuts):
qemu-mips -g 1234 test
lldb test
(lldb) gdb-remote localhost:1234
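Once connected, the rest of the session might look like the sketch below. Whether expression can evaluate arbitrary C against QEMU's bare gdbstub, with no lldb-server on the target, is exactly what you would be testing here; the symbol in the last line is hypothetical, just to illustrate the command:
(lldb) breakpoint set --name main
(lldb) continue
(lldb) expression 1 + 1
(lldb) expression (int) some_global   # hypothetical symbol, for illustration only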
Here is mine, for reference:
############################################# gdb #############################################
QEMU_GDB_OPT := -S -gdb tcp::10001,ipv4
# Debug configuration: -S -gdb tcp::10001,ipv4
qemudbg:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
# Connect to the gdb server: target remote localhost:10001
gdb:
$(GDB) $(KERNEL_ELF)
############################################# lldb #############################################
QEMU_LLDB_OPT := -s -S
LLDB := lldb
qemulldb:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
lldb:
$(LLDB) $(KERNEL_ELF)
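With those targets in place, a typical two-terminal workflow looks roughly like this (a sketch using the target and variable names defined above):
# Terminal 1: boot QEMU halted, with its gdbstub listening on the default port 1234 (-s -S)
make qemulldb
# Terminal 2: open LLDB on the kernel ELF and attach to QEMU's gdbstub
make lldb
(lldb) gdb-remote localhost:1234
(lldb) continue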
Related
I'm currently trying to extract SIFT Features with the following package:
https://github.com/Celebrandil/CudaSift
It comes with a CMakeLists.txt, which I modified; here it is:
cmake_minimum_required(VERSION 2.6)
project(cudaSift)
set(cudaSift_VERSION_MAJOR 2)
set(cudaSift_VERSION_MINOR 0)
set(cudaSift_VERSION_PATCH 0)
set(CPACK_PACKAGE_VERSION_MAJOR "${cudaSift_VERSION_MAJOR}")
set(CPACK_PACKAGE_VERSION_MINOR "${cudaSift_VERSION_MINOR}")
set(CPACK_PACKAGE_VERSION_PATCH "${cudaSift_VERSION_PATCH}")
set(CPACK_GENERATOR "ZIP")
include(CPack)
find_package(OpenCV REQUIRED)
find_package(CUDA)
if (NOT CUDA_FOUND)
message(STATUS "CUDA not found. Project will not be built.")
endif(NOT CUDA_FOUND)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -msse2 ")
list(APPEND CUDA_NVCC_FLAGS "-lineinfo;-ccbin;/usr/bin/gcc-7;--compiler-options;-O2;-D_FORCE_INLINES;-DVERBOSE_NOT; -arch=sm_75")
cuda_add_library(cudaSift SHARED
src/cudaImage.cu
src/cudaSiftH.cu
src/matching.cu
src/geomFuncs.cpp
src/mainSift.cpp
)
target_link_libraries(cudaSift ${CUDA_cudadevrt_LIBRARY} ${OpenCV_LIBS})
set(PUBLIC_HEADERS include/cudaImage.h include/cudaSift.h)
set_target_properties(cudaSift PROPERTIES PUBLIC_HEADER
"${PUBLIC_HEADERS}"
)
include(GNUInstallDirs)
install(TARGETS cudaSift
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}"
PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
)
configure_file(cudaSift.pc.in cudaSift.pc @ONLY)
install(FILES ${CMAKE_BINARY_DIR}/cudaSift.pc DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/pkgconfig)
My GPU is a GeForce RTX 2060, driver version 430.5, and after running:
mkdir build && cd build
cmake ..
sudo make -j
sudo make install
sudo ldconfig
in order to build the package. When I then try to run my code, I get the following error:
safeCall() Runtime API error in file </path/to/CudaSift/src/cudaImage.cu>, line 24 : out of memory.
Additional details:
I run the exact same code on another computer, which has a GeForce GTX 1050, only changing -arch=sm_75 to -arch=sm_61 in CMakeLists.txt, and it executes just fine.
From previous questions, I thought this was a compilation problem linked to the -arch=sm_** value, but I changed it and it still doesn't work.
The objects I'm passing to my GPU are images, which I'm sure aren't too big, since the same code works on my other computer, whose GPU has less memory.
UPDATE:
I found the problem; the package was actually compiled properly.
A TensorFlow model was being loaded in the code, and after removing it the error didn't happen again.
I don't know why, though. Maybe it reserved a lot of GPU memory?
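(One thing I have not verified, but that would be consistent with that guess: TensorFlow by default reserves most of the free GPU memory at startup, so asking it to allocate on demand before running might avoid the clash. The launcher name below is hypothetical.)
# TensorFlow 1.14+/2.x: allocate GPU memory on demand instead of grabbing most of it up front
export TF_FORCE_GPU_ALLOW_GROWTH=true
./run_cudasift_pipeline    # hypothetical binary that loads both the model and CudaSift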
I am trying to boot QEMU in a terminal.
qemu-system-x86_64 -hda ubuntu.img -cdrom ubuntu-16.04.6-server-amd64.iso -m 2048 -boot d -nographic
ubuntu.img is a blank disk image to install Ubuntu on and the -cdrom is downloaded from the Ubuntu website.
I see the following output:
WARNING: Image format was not specified for 'ubuntu.img' and probing guessed raw.
Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
Specify the 'raw' format explicitly to remove the restrictions.
warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
And then nothing else.
Is this a fatal error? Do I need to do something else to boot Ubuntu? I can access the monitor (C-a c), but I'm not sure where to go from there.
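(For what it's worth, the first warning can be silenced exactly as it suggests, by stating the image format explicitly; this only addresses the warning, not necessarily the blank output:)
qemu-system-x86_64 -drive file=ubuntu.img,format=raw -cdrom ubuntu-16.04.6-server-amd64.iso -m 2048 -boot d -nographic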
I recently tried to build my https://github.com/eyalroz/cuda-api-wrappers/ library's examples after switching to another Linux distribution on the same machine. Strangely enough, I encountered a linking issue. The command:
/usr/bin/c++ -Wall -std=c++11 -g CMakeFiles/device_management.dir/examples/by_runtime_api_module/device_management.cpp.o -o examples/bin/device_management -rdynamic lib/libcuda-api-wrappers.a -Wl,-Bstatic -lcudart_static -Wl,-Bdynamic -lpthread -ldl -lrt
fails to find the CUDA runtime library, and I get:
CMakeFiles/device_management.dir/examples/by_runtime_api_module/device_management.cpp.o: In function `cuda::device::peer_to_peer::get_attribute(cudaDeviceP2PAttr, int, int)':
/home/eyalroz/src/mine/cuda-api-wrappers/src/cuda/api/device.hpp:38: undefined reference to `cudaDeviceGetP2PAttribute'
collect2: error: ld returned 1 exit status
but if I add -L/usr/local/cuda/lib64 it builds fine. This didn't happen before; it doesn't happen on another machine I've checked, nor does it happen to other targets using the CUDA runtime in the same CMakeLists.txt (like version_managament).
FindCUDA seems to be finding everything, as the value of ${CUDA_LIBRARIES} is /usr/local/cuda/lib64/libcudart_static.a;-lpthread;dl;/usr/lib/x86_64-linux-gnu/librt.so. And the target lines in CMakeLists.txt are:
add_executable(device_management EXCLUDE_FROM_ALL examples/by_runtime_api_module/device_management.cpp)
target_link_libraries(device_management cuda-api-wrappers ${CUDA_LIBRARIES})
as is suggested in answers to other related questions (e.g. here). Why is this happening? Should I "manually" add the -L switch?
Edit: Following #RobertCrovella's suggestion, here are the ld search paths:
$ gcc -print-search-dirs | sed '/^lib/b 1;d;:1;s,/[^/.][^/]*/\.\./,/,;t 1;s,:[^=]*=,:;,;s,;,; ,g' | tr \; \\012 | tr ':' "\n" | tail -n +3
/usr/local/cuda/lib64/x86_64-linux-gnu/5/
/usr/local/cuda/lib64/x86_64-linux-gnu/
/usr/local/cuda/lib/
/usr/lib/gcc/x86_64-linux-gnu/5/
/usr/x86_64-linux-gnu/lib/x86_64-linux-gnu/5/
/usr/x86_64-linux-gnu/lib/x86_64-linux-gnu/
/usr/x86_64-linux-gnu/lib/
/usr/lib/x86_64-linux-gnu/5/
/usr/lib/x86_64-linux-gnu/
/usr/lib/
/lib/x86_64-linux-gnu/5/
/lib/x86_64-linux-gnu/
/lib/
/usr/lib/x86_64-linux-gnu/5/
/usr/lib/x86_64-linux-gnu/
/usr/lib/
/usr/local/cuda/lib64/
/usr/x86_64-linux-gnu/lib/
/usr/lib/
/lib/
/usr/lib/
$ ld --verbose | grep SEARCH_DIR | tr -s ' ;' \\012
SEARCH_DIR("=/usr/local/lib/x86_64-linux-gnu")
SEARCH_DIR("=/lib/x86_64-linux-gnu")
SEARCH_DIR("=/usr/lib/x86_64-linux-gnu")
SEARCH_DIR("=/usr/local/lib64")
SEARCH_DIR("=/lib64")
SEARCH_DIR("=/usr/lib64")
SEARCH_DIR("=/usr/local/lib")
SEARCH_DIR("=/lib")
SEARCH_DIR("=/usr/lib")
SEARCH_DIR("=/usr/x86_64-linux-gnu/lib64")
SEARCH_DIR("=/usr/x86_64-linux-gnu/lib")
Notes:
Yes, I know the CMakeLists.txt there is ugly.
TL;DR:
After the FindCUDA invocation, add the lines:
get_filename_component(CUDA_LIBRARY_DIR ${CUDA_CUDART_LIBRARY} DIRECTORY)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -L${CUDA_LIBRARY_DIR}")
and building should succeed on both systems.
Discussion:
(Paraphrasing #RobertCrovella and myself in the comments:)
OP was expecting that, if the following hold:
FindCUDA succeeds
${CUDA_LIBRARIES} includes a valid full path to either the static or the dynamic CUDA runtime library
the library dependency is indicated using target_link_libraries(relevant_target ${CUDA_LIBRARIES})
... then the CMake-based build he was attempting should succeed on a variety of valid CUDA installations. That is (unfortunately) not the case: while FindCUDA does locate the CUDA library path, it does not actually make your linker search that path. So a failure should actually be expected. The build had worked on OP's old system due to a "fluke", or rather, due to OP having somehow added the CUDA library directory to the linker's search path a priori.
The linking command must be issued with the -L/path/to/cuda/libraries switch, so that the linker knows where to look for the (unspecified-path) libraries referred to by the CUDA-related -l switches (in OP's case, -lcudart_static).
This answer discusses how to do that in CMake for different kinds of targets. You might also want to have a look at man gcc (the GCC manual page, also available here) regarding the -l and -L options, if you are not familiar with them.
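For comparison, the failing command from the question links successfully once the single -L switch is added; this is just the question's link line with the directory FindCUDA reported:
/usr/bin/c++ -Wall -std=c++11 -g CMakeFiles/device_management.dir/examples/by_runtime_api_module/device_management.cpp.o -o examples/bin/device_management -rdynamic lib/libcuda-api-wrappers.a -L/usr/local/cuda/lib64 -Wl,-Bstatic -lcudart_static -Wl,-Bdynamic -lpthread -ldl -lrt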
How do you terminate a run in SBT without exiting?
I'm trying CTRL+C but it exits SBT. Is there a way to only exit the running application while keeping SBT open?
From sbt version 0.13.5 onward, you can add the following to your build.sbt:
cancelable in Global := true
It is defined as "Enables (true) or disables (false) the ability to interrupt task execution with CTRL+C." in the Keys definition
If you are using Scala 2.12.7+ you can also cancel the compilation with CTRL+C. Reference https://github.com/scala/scala/pull/6479
There are some bugs reported:
https://github.com/sbt/sbt/issues/1442
https://github.com/sbt/sbt/issues/1855
In the default configuration, your runs happen in the same JVM that sbt itself is running in, so you can't easily kill them separately.
If you do your run in a separate, forked JVM, as described at Forking, then you can kill that JVM (by any means your operating system offers) without affecting sbt's JVM:
run / fork := true
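For example, on Linux you could locate and kill just the forked run from another shell (PIDs obviously vary; jps ships with the JDK):
jps -l                    # lists running JVMs: the sbt launcher and the forked run
kill <pid-of-forked-run>  # stops the application; the sbt shell stays alive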
I've found the following useful when I have control over the main loop of the application being run from sbt.
I tell sbt to fork when running the application (in build.sbt):
fork in run := true
I also tell sbt to forward stdin from the sbt shell to the application (in build.sbt):
connectInput in run := true
Finally, in the main thread of the application, I wait for end-of-file on stdin and then shutdown the JVM:
while (System.in.read() != -1) {}
logger.warn("Received end-of-file on stdin. Exiting")
// optional shutdown code here
System.exit(0)
Of course, you can use any thread to read stdin and shutdown, not just the main thread.
Finally, start sbt, optionally switch to the subproject you want to run, and run.
Now, when you want to stop the process, close its stdin by typing CTRL-D in the sbt shell.
Consider using sbt-revolver. We use it at our company and it's really handy.
What you're asking for can be done with:
reStart
reStop
No need to configure the build.sbt file.
You can use this plugin by adding:
addSbtPlugin("io.spray" % "sbt-revolver" % "0.9.1")
To your project/plugins.sbt
I realized that, as I run makefiles from my main makefile, if the child makefiles fail, the parent continues and does not return with an error exit code.
I've tried to add some error handling, but it does not work. Any ideas?
MAKE_FILES := $(wildcard test_*.mak)
compile_tests:
	@echo "Compiling tests.$(MAKE_FILES)."
	@for m in $(MAKE_FILES); do\
	$(MAKE) -f "$$m"; || $(error Failed to compile $$m)\
	done
You cannot use make functions like $(error ...) in your recipe, because all make variables and functions are expanded first, before the shell is invoked. So the error function will happen immediately when make tries to run that recipe, before it even starts.
You have to use shell constructs to fail, not make constructs; something like:
compile_tests:
	@echo "Compiling tests.$(MAKE_FILES)."
	@for m in $(MAKE_FILES); do \
	  $(MAKE) -f "$$m" && continue; \
	  echo Failed to compile $$m; \
	  exit 1; \
	done
However, even this is not really great, because even if you run make with -k (keep going) the loop will still stop at the first failing sub-make. Better is to take advantage of what make does well, which is running lots of things:
compile_tests: $(addprefix tests.,$(MAKE_FILES))

$(addprefix tests.,$(MAKE_FILES)): tests.%:
	$(MAKE) -f "$*"
One note: if you enable -j, these will all run in parallel. Not sure whether that's OK with you or not.
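A sketch of how that might be invoked, using the options mentioned above (the test_*.mak files come from the question's wildcard):
make -k -j4 compile_tests   # keep going past failures and run the sub-makes in parallel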