I am using tmux (I connect to Linux with Exceed over SSH).
I added the magical set -g terminal-overrides 'xterm*:smcup#:rmcup#', which enables mouse scrolling, and that works very nicely!
However, I have a problem: when I scroll up with the middle button, for some reason the view soon jumps back to the bottom of the page (as if I had pressed q).
I don't know if it is a refresh issue or something else.
I've included my .tmux.conf file in any case:
# change prefix key to C-a like screen and also C-a-a to send it into
# a session within a session
unbind C-b
set -g prefix C-a
bind-key a send-prefix
# toggle last like screen
bind-key C-a last-window
bind-key C-c new-window
# a readable status line
set -g status-bg blue
set -g status-right "%Y-%m-%d %H:%M:%S"
set -g status-interval 1
# misc tweaks
#set -g display-time 3000
set -g history-limit 5000
#set -g bell-action any
#set -g visual-activity on
#set -g visual-bell on
# Sane scrolling
set -g terminal-overrides 'xterm*:smcup#:rmcup#'
# Below allows scrolling with the mouse (enters normal browsing mode automatically)
#set -g mode-mouse on
set -g status-right "%Y-%m-%d %H:%M:%S"
set -g status-interval 1
This config makes tmux update the status line every second.
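As a side note, the mouse options changed in newer tmux releases: in tmux 2.1 and later, the separate options (including the commented-out mode-mouse above) were consolidated into a single switch. A minimal sketch, assuming tmux 2.1 or newer:
# tmux >= 2.1 only: one option replaces mode-mouse, mouse-select-pane, etc.
set -g mouse on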
The task I want to complete: I need to run a Python package inside a Singularity container, and it is asking to open at least some 9704 files. This is the first I have heard of this, and from searching around it has something to do with the system's ulimit.
What I currently have is the following def file.
I am setting the * hard nofile flag and the * soft nofile flag to 15000. The sed line does edit the conf file, but within the singularity shell my ulimit is still the default 1024.
Bootstrap: docker
From: fedora
%post
dnf -y update
dnf -y install nano pip wget libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh -bfp /usr/local
conda config --file /.condarc --add channels defaults
conda config --file /.condarc --add channels conda-forge
conda update conda
sed -i '2s/#/\n* hard nofile 15000\n* soft nofile 15000\n\n#/g' /etc/security/limits.conf
bash
%runscript
python /Users/lamsal/count_of_monte_cristo/orthofinder_run/OrthoFinder_source/orthofinder.py -f /Users/lamsal/count_of_monte_cristo/orthofinder_run/concatanated_FAs/
I am following the "official" instructions to change the ulimits for a RHEL-based system from IBM's webpage here: https://www.ibm.com/docs/en/rational-clearcase/9.0.2?topic=servers-increasing-number-file-handles-linux-workstations
Is the sed line not the right way to change ulimits for a singularity image?
Short answer:
Change the value on the host OS.
Long answer:
In this instance, running a singularity container is best thought of as any other binary you're executing in your host OS. It creates its own separate environment, but otherwise it follows the rules and restrictions of the user running it. Here, the ulimit is taken from the host kernel and completely ignores any configs that may exist in the container itself.
Compare the output from the following:
# check the ulimit on the host
ulimit -n
# check the ulimit in the singularity container (ulimit is a shell builtin, so run it via sh)
singularity exec -e image.sif sh -c 'ulimit -n'
# docker only cares about container config settings
docker run --rm fedora:latest sh -c 'ulimit -n'
# change your local ulimit
ulimit -n 4096
# verify it has changed
ulimit -n
# singularity has changed
singularity exec -e image.sif sh -c 'ulimit -n'
# ... but docker hasn't
docker run --rm fedora:latest sh -c 'ulimit -n'
To have a persistent fix, you'll need to modify the setting on your host OS. Assuming you're on macOS, this answer should take care of that.
If you don't have root privileges or you're only using this intermittently, you can run ulimit by hand before running singularity. Alternatively, you could use a wrapper script to run the image and set it in there.
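For example, a minimal sketch of such a wrapper (image.sif and the 4096 value are just the ones used in the examples above; raising the soft limit above the host's hard limit would still require root):
#!/bin/bash
# run_with_ulimit.sh -- hypothetical wrapper: raise the open-file limit for
# this shell, then launch the container so it inherits the new value.
ulimit -n 4096
singularity run image.sif "$@"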
How do you tail openshift log files? I issued the following command:
rhc tail myapp
It seems to show the first error line and then stops, but doesn't exit. If I press Ctrl+C it asks whether to stop the batch job or not. How can I display the last few errors and maybe browse page by page? Are there page down / page up shortcuts?
The 'rhc tail' command reads the last few lines of each of your log files and continues to feed subsequent log messages to your console. To view the entire log file, please review:
https://www.openshift.com/faq/how-to-troubleshoot-application-issues-using-logs
You can see the logs by running:
rhc tail -a yourappname -l youremail -p yourpassword
Adding the -a option fixed this issue for me.
rhc tail -a {app_name}
OpenShift places logs in different files, so if you want to get the logs of a specific file you can add -f followed by its path and name.
Example :
rhc tail -f app-root/logs/nodejs.log -a myAppName
You can also ask for a specific number of lines by adding -o "-n 40" to the command; the command below will get the last 40 lines.
Example :
rhc tail -f app-root/logs/nodejs.log -o "-n 40" -a myAppName
You can also download them:
$ scp SHA@APP-DOMAIN.rhcloud.com:/var/lib/openshift/SHA/app-root/\
logs/APP.log ~/upstream.jbossas.log
This also works on Windows, directly in Git Bash.
I'm trying to get CMake to build into a directory 'build', as in project/build, where the CMakeLists.txt is in project/.
I know I can do:
mkdir build
cd build
cmake ../
but that is cumbersome. I could put it in a script and call it, but then it's unpleasant to provide different arguments to CMake (like -G "MSYS Makefiles"), or I would need to edit this file on each platform.
Preferably I would do something like SET(CMAKE_OUTPUT_DIR build) in the main CMakeLists.txt. Please tell me that this is possible, and if so, how? Or is there some other out-of-source build method that makes it easy to specify different arguments?
CMake 3.13 or newer supports the command line options -S and -B to specify source and binary directory, respectively.
cmake -S . -B build -G "MSYS Makefiles"
This will look for the CMakeLists.txt in the current folder and create a build folder (if it does not yet exist) in it.
For older versions of CMake, you can use the undocumented CMake options -H and -B to specify the source and binary directory upon invoking cmake:
cmake -H. -Bbuild -G "MSYS Makefiles"
Note that there must not be a space character between the option and the directory path.
A solution that I found recently is to combine the out-of-source build concept with a Makefile wrapper.
In my top-level CMakeLists.txt file, I include the following to prevent in-source builds:
if ( ${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR} )
message( FATAL_ERROR "In-source builds not allowed. Please make a new directory (called a build directory) and run CMake from there. You may need to remove CMakeCache.txt." )
endif()
Then, I create a top-level Makefile, and include the following:
# -----------------------------------------------------------------------------
# CMake project wrapper Makefile ----------------------------------------------
# -----------------------------------------------------------------------------
SHELL := /bin/bash
RM := rm -rf
MKDIR := mkdir -p
all: ./build/Makefile
	@ $(MAKE) -C build

./build/Makefile:
	@  ($(MKDIR) build > /dev/null)
	@  (cd build > /dev/null 2>&1 && cmake ..)

distclean:
	@  ($(MKDIR) build > /dev/null)
	@  (cd build > /dev/null 2>&1 && cmake .. > /dev/null 2>&1)
	@- $(MAKE) --silent -C build clean || true
	@- $(RM) ./build/Makefile
	@- $(RM) ./build/src
	@- $(RM) ./build/test
	@- $(RM) ./build/CMake*
	@- $(RM) ./build/cmake.*
	@- $(RM) ./build/*.cmake
	@- $(RM) ./build/*.txt

ifeq ($(findstring distclean,$(MAKECMDGOALS)),)
    $(MAKECMDGOALS): ./build/Makefile
	@ $(MAKE) -C build $(MAKECMDGOALS)
endif
The default target all is called by typing make, and invokes the target ./build/Makefile.
The first thing the target ./build/Makefile does is to create the build directory using $(MKDIR), which is a variable for mkdir -p. The directory build is where we will perform our out-of-source build. We provide the argument -p to ensure that mkdir does not scream at us for trying to create a directory that may already exist.
The second thing the target ./build/Makefile does is to change directories to the build directory and invoke cmake.
Back to the all target, we invoke $(MAKE) -C build, where $(MAKE) is a Makefile variable automatically generated for make. make -C changes the directory before doing anything. Therefore, using $(MAKE) -C build is equivalent to doing cd build; make.
To summarize, calling this Makefile wrapper with make all or make is equivalent to doing:
mkdir build
cd build
cmake ..
make
The target distclean invokes cmake .., then make -C build clean, and finally, removes all contents from the build directory. I believe this is exactly what you requested in your question.
The last piece of the Makefile evaluates if the user-provided target is or is not distclean. If not, it will change directories to build before invoking it. This is very powerful because the user can type, for example, make clean, and the Makefile will transform that into an equivalent of cd build; make clean.
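For instance, a typical session with the wrapper in place might look like the sketch below (make test is a hypothetical target here; any goal other than distclean is forwarded the same way):
make            # first run: creates build/, runs cmake .., then builds
make clean      # forwarded as: cd build && make clean
make test       # hypothetical target, forwarded the same way
make distclean  # re-runs cmake quietly, cleans, then removes generated files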
In conclusion, this Makefile wrapper, in combination with a mandatory out-of-source build CMake configuration, make it so that the user never has to interact with the command cmake. This solution also provides an elegant method to remove all CMake output files from the build directory.
P.S. In the Makefile, we use the prefix @ to suppress the echoing of a shell command, and the prefix @- to ignore errors from a shell command. When using rm as part of the distclean target, the command will return an error if the files do not exist (they may have been deleted already using the command line with rm -rf build, or they were never generated in the first place). This return error would force our Makefile to exit. We use the prefix @- to prevent that. It is acceptable if a file was removed already; we want our Makefile to keep going and remove the rest.
Another thing to note: This Makefile may not work if you use a variable number of CMake variables to build your project, for example, cmake .. -DSOMEBUILDSUSETHIS:STRING="foo" -DSOMEOTHERBUILDSUSETHISTOO:STRING="bar". This Makefile assumes you invoke CMake in a consistent way, either by typing cmake .. or by providing cmake a consistent number of arguments (that you can include in your Makefile).
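One possible workaround, sketched below as my own tweak rather than part of the original wrapper, is to forward extra flags through a make variable (CMAKE_ARGS is a hypothetical name), e.g. make CMAKE_ARGS='-DSOMEBUILDSUSETHIS:STRING=foo':
# hypothetical CMAKE_ARGS variable, empty by default
CMAKE_ARGS ?=

./build/Makefile:
	@  ($(MKDIR) build > /dev/null)
	@  (cd build > /dev/null 2>&1 && cmake $(CMAKE_ARGS) ..)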
Finally, credit where credit is due. This Makefile wrapper was adapted from the Makefile provided by the C++ Application Project Template.
This answer was originally posted here. I thought it applied to your situation as well.
Based on the previous answers, I wrote the following module that you can include to enforce an out-of-source build.
set(DEFAULT_OUT_OF_SOURCE_FOLDER "cmake_output")
if (${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
message(WARNING "In-source builds not allowed. CMake will now be run with arguments:
cmake -H. -B${DEFAULT_OUT_OF_SOURCE_FOLDER}
")
# Run CMake with out of source flag
execute_process(
COMMAND ${CMAKE_COMMAND} -H. -B${DEFAULT_OUT_OF_SOURCE_FOLDER}
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR})
# Cause fatal error to stop the script from further execution
message(FATAL_ERROR "CMake has been run to create an out-of-source build.
This error prevents CMake from running an in-source build.")
endif ()
This works, however I already noticed two downsides:
When the user is lazy and simply runs cmake ., they will always see a FATAL_ERROR. I could not find another way to stop CMake from performing any further operations and exit early.
Any command line arguments passed to the original call to cmake will not be passed to the "out-of-source build call".
Suggestions to improve this module are welcome.
I compiled my library (specifically protobuf-2.3.0) using -g -O0 on SunOS 5.10.
A sample line in the make log is this:
/bin/bash ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I.. -D_REENTRANT -pthreads -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare -g -O0 -MT text_format.lo -MD -MP -MF .deps/text_format.Tpo -c -o text_format.lo `test -f 'google/protobuf/text_format.cc' || echo './'`google/protobuf/text_format.cc
libtool: compile: g++ -DHAVE_CONFIG_H -I. -I.. -D_REENTRANT -pthreads -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare -g -O0 -MT text_format.lo -MD -MP -MF .deps/text_format.Tpo -c google/protobuf/text_format.cc -fPIC -DPIC -o .libs/text_format.o
And then, I attached my gdb using the following steps:
1. Run my application (in this case, my web server, which starts up a Java web app that uses the library via JNI during startup).
2. Attach gdb to that process via gdb -p XXX (where XXX is the PID I got from ps).
3. Load my library in gdb using file libprotobuf.so from the gdb prompt.
But I can't see my function names from bt. My GDB backtrace command shows something like this:
(gdb) bt
#0 0xf8f98914 in ?? ()
#1 0xf8f98830 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
I also tried doing steps 1 and 2 only, steps 1 and 3 only, and step 1 followed by gdb libprotobuf.so -p XXX.
Aside from those, I also tried running my JVM in debug mode and adding a breakpoint on the System.loadLibrary(..) call; after stepping over that call, I did the gdb attachment process again... but still nothing.
However, I am able to set breakpoints given function names and list the contents of a function via list. Then again, even though I can place breakpoints on those function names, execution never stops at them (I know it went through those functions because they show up in the JVM hs_err_pid report after every JVM crash).
Any idea why it's not showing me my function names?
The problem is most likely that GDB does not know how to figure out the full executable path for the given PID. If it did know the full path, you wouldn't need to do step #3; GDB would have added it automatically.
You can verify whether GDB deduced the executable name correctly with the (gdb) info file command.
If my guess is correct, help GDB by invoking it like this:
gdb /path/to/java <PID>
That should immediately solve all of your problems.
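A quick sketch of that attach sequence (the java path and PID are placeholders), including the info file check mentioned above plus info sharedlibrary to confirm that libprotobuf.so and its symbols were picked up:
gdb /path/to/java <PID>
# once attached, confirm gdb found the right binary and your library:
(gdb) info file
(gdb) info sharedlibrary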
Additionally, be sure that the executable that uses your library isn't getting stripped somewhere.
I think that is a linking problem. Can you check the command which is executed at link time? Hope this will help.
Sun Grid Engine defaults to csh, and you have to put this: #$ -S /bin/sh into scripts to avoid it. What global configuration setting would change this default?
qconf -sql (to list your queues)
qconf -mq YOUR_QUEUE_NAME
(then change the shell attribute to /bin/sh)
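For reference, a minimal sketch of that sequence (all.q is just a placeholder queue name; the attribute names come from the queue configuration format):
qconf -sql          # list queues
qconf -mq all.q     # edit one; in the editor change the line:
#   shell   /bin/csh
# to:
#   shell   /bin/sh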