The task I want to complete: I need to run a Python package inside a Singularity container, and it is asking to open at least 9704 files. This is the first I have heard of this; from searching around, it has something to do with the system's ulimit.
What I currently have is the following def file.
I am setting the * hard nofile and * soft nofile limits to 15000. The sed line does edit the conf file, but within the singularity shell my ulimit is still the default 1024.
Bootstrap: docker
From: fedora
%post
dnf -y update
dnf -y install nano pip wget libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh -bfp /usr/local
conda config --file /.condarc --add channels defaults
conda config --file /.condarc --add channels conda-forge
conda update conda
sed -i '2s/#/\n* hard nofile 15000\n* soft nofile 15000\n\n#/g' /etc/security/limits.conf
bash
%runscript
python /Users/lamsal/count_of_monte_cristo/orthofinder_run/OrthoFinder_source/orthofinder.py -f /Users/lamsal/count_of_monte_cristo/orthofinder_run/concatanated_FAs/
I am following the “official” instructions for changing the ulimits on a RHEL-based system from IBM’s webpage here: https://www.ibm.com/docs/en/rational-clearcase/9.0.2?topic=servers-increasing-number-file-handles-linux-workstations
Is the sed line not the right way to change ulimits for a singularity image?
Short answer:
Change the value on the host OS.
Long answer:
In this instance, running a singularity container is best thought of as any other binary you're executing in your host OS. It creates its own separate environment, but otherwise it follows the rules and restrictions of the user running it. Here, the ulimit is taken from the host kernel and completely ignores any configs that may exist in the container itself.
Compare the output from the following:
# check the ulimit on the host
ulimit -n
# check the ulimit in the singularity container
singularity exec -e image.sif ulimit -n
# docker only cares about container config settings
docker run --rm fedora:latest ulimit -n
# change your local ulimit
ulimit -n 4096
# verify it has changed
ulimit -n
# singularity has changed
singularity exec -e image.sif ulimit -n
# ... but docker hasn't
docker run --rm fedora:latest ulimit -n
To have a persistent fix, you'll need to modify the setting on your host OS. Assuming you're on macOS, this answer should take care of that.
If you don't have root privileges or you're only using this intermittently, you can run ulimit before running singularity. Alternatively, you could use a wrapper script to run the image and set it in there, as sketched below.
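A minimal wrapper sketch (the image name and limit value are placeholders):
#!/bin/bash
# raise the soft open-files limit for this shell; without extra privileges
# this can only go up to the current hard limit
ulimit -n 15000
# then run the container in the adjusted environment
singularity run image.sif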
On a server running containers with Podman, I just realised there are many containers with "Exited" status and wanted to remove all of them in one go.
How can I do it with Podman?
According to the official documentation there is a specific command for just that purpose:
Remove all stopped containers from local storage:
podman container prune
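If you want to skip the interactive confirmation prompt, prune also accepts a force flag:
podman container prune -f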
After searching for some time I found quick and easy one-liners to get my exited containers cleaned.
One option is:
podman rm -f $(podman ps -a -f "status=exited" -q)
The second option is:
podman ps -f status=exited --format "{{.ID}}" | xargs podman rm -f
A big thanks to dannotdaniel for the second option. This saved me at least an hour. :)
The OpenShift command
rhc tail -a rentapp
works fine.
However
rhc tail -a rentapp > xxx
is stuck forever
I think it has something to do with how the rhc utility detects whether standard output is a terminal or not.
All you have to do is trick rhc into thinking it has a terminal on standard output.
You may try playing with the script utility; I was quite successful with it:
script -q -e -c "rhc tail -a rentapp" /dev/null > xxx
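The same trick works with any tool that changes behaviour when stdout is not a terminal; ls makes a quick demonstration:
# ls prints one file per line when stdout is a pipe
ls | cat
# wrapped in script(1), ls still sees a pseudo-terminal and keeps its columns
script -q -e -c "ls" /dev/null | cat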
How do I set the hostname of an instance in GCE permanently? I can set it via hostname, but after a reboot it is gone again.
I tried to feed in metadata (hostname:f.q.d.n), but that did not do the job, although it should work via metadata (https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/google-startup-scripts).
Does anybody have an idea?
The simplest way to achieve this is by creating a simple script, and that's what I have done.
I have stored the hostname in the instance metadata and then I retrieve it every time the system restarts in order to set the hostname using a cron job.
$ gcloud compute instances add-metadata <instance> --metadata hostname=<new_hostname>
$ sudo crontab -e
And this is the line that must be appended to the crontab:
@reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
After these steps, every time you restart your instance it will have the hostname <new_hostname>.
You can check it in the prompt or with the command: hostname
You need to remove the files /etc/dhcp/dhclient.d/google_hostname.sh and /etc/dhcp/dhclient-exit-hooks.d/google_set_hostname:
rm -rf /etc/dhcp/dhclient.d/google_hostname.sh
rm -rf /etc/dhcp/dhclient-exit-hooks.d/google_set_hostname
It's worth noting that this script is needed in order to run gcloud beta compute instances create with the --hostname flag. If this script is absent on a base image, new VM instances will preserve the source hostname/FQDN!
Edit rc.local
sudo nano /etc/rc.local
Add your line under the rest:
hostname your.hostname.com
Make sure to run the following afterwards so the script is executed:
chmod +x /etc/rc.d/rc.local
Reboot, and profit.
That isn't possible. Please take a look at this answer. The following article explains that the "hostname" is part of the default metadata entries, and it is not possible to manually edit any of the default metadata pairs. As such, you would need to use a script or something else to change the hostname every time the system restarts; otherwise it will automatically get re-synced with the metadata server on every reboot.
You can find information on startup scripts for GCE in this article. You can visit this one for info on how to apply the script to an instance.
You can also create a simple startup-script to do the job:
$ gcloud compute instances add-metadata <instance-name> --zone <instance-zone> --metadata startup-script='#! /bin/bash
hostname <hostname>'
Notice that if you already have a startup-script, you need to append the command below to the existing startup-script; otherwise you will replace the whole startup-script:
$ hostname instance-name
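For illustration, the combined startup-script might then look like this (the existing commands are placeholders):
#! /bin/bash
# ... your existing startup-script commands ...
hostname instance-name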
I managed to set the hostname on GCE running CentOS.
Source: desantolo.com
Click EDIT on your instance
Go to "Custom metadata" section
Add hostname + your.hostname.tld (change "your.hostname.tld" to your actual hostname)
run curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google"
run sudo env EDITOR=nano crontab -e to edit crontab
add line @reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
On your keyboard Ctrl + X
On your keyboard hit Y
On your keyboard hit Enter
run reboot
after system rebooted, run hostname and see if your changes applied
Good luck!
If anyone finds that this solution does not work for them on a GCE instance, then I suggest trying exit hooks as described by Google Support.
In fact, some Linux distributions such as CentOS and Debian use the dhclient-script script to configure the network parameters of the machine. This script is invoked from time to time by dhclient, the dynamic host configuration protocol client, which provides a means for configuring one or more network interfaces using the DHCP protocol, BOOTP protocol, or, if these protocols fail, by statically assigning an address.
The following text is a quote from the man (manual) page of dhclient-script:
After all processing has completed, /usr/sbin/dhclient-script checks for the presence of an executable /etc/dhcp/dhclient-exit-hooks script, which if present is invoked using the '.' command. The exit status of dhclient-script will be passed to dhclient-exit-hooks in the exit_status shell variable, and will always be zero if the script succeeded at the task for which it was invoked. The rest of the environment as described previously for dhclient-enter-hooks is also present. The /etc/dhcp/dhclient-exit-hooks script can modify the value of exit_status to change the exit status of dhclient-script.
That being said, by taking a look at the code snippet of dhclient-script, we can see the script checks for the existence of an executable /etc/dhcp/dhclient-exit-hooks script and of all scripts in the /etc/dhcp/dhclient-exit-hooks.d/ directory.
ETCDIR="/etc/dhcp"

exit_with_hooks() {
    exit_status="${1}"

    if [ -x ${ETCDIR}/dhclient-exit-hooks ]; then
        . ${ETCDIR}/dhclient-exit-hooks
    fi

    if [ -d ${ETCDIR}/dhclient-exit-hooks.d ]; then
        for f in ${ETCDIR}/dhclient-exit-hooks.d/*.sh ; do
            if [ -x ${f} ]; then
                . ${f}
            fi
        done
    fi

    exit ${exit_status}
}
Therefore, in order to modify the hostname of your Linux VM you can
create a custom script with .sh extension and place it in
/etc/dhcp/dhclient-exit-hooks.d/ directory. If this directory does not
exist, you can create it. The content of the custom script will be:
hostname YourFQDN
be sure to make this new .sh file executable:
chmod +x YourFQDN.sh
Source: (https://groups.google.com/d/msg/gce-discussion/olG_nXZ-Jaw/Y9HMl4mlBwAJ)
I'm not sure I understand Adrián's answer. It seems overly complex: since you have to run a script on each boot anyway, why not just use hostname?
vi /etc/rc.local
add:
hostname your_hostname
That's it. Tested and working; no need to fiddle with metadata and such.
Non-cron/metadata/script solution.
Edit /etc/dhclient-(network-interface).conf or create one if it doesn't exist.
Example:
sudo nano /etc/dhclient-eth0.conf
Then add the following line, replacing the desired FQDN between the double quotes:
supersede host-name "hostname.domain-name";
This persists between reboots, and both hostname and hostname -f work as intended.
Tested on Debian.
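To apply the change without a full reboot, you can release and renew the DHCP lease (eth0 here is an example interface name):
sudo dhclient -r eth0
sudo dhclient eth0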
The dhclient sets the hostname using DHCP.
You can override this by creating a custom hook script in /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname that would read the hostname from /etc/hostname:
if [ -f "/etc/hostname" ]; then
    new_host_name=$(cat /etc/hostname)
fi
The script must have the execute permission.
It's important to set the new_host_name variable rather than calling the hostname command directly, as any call to the hostname command will be overridden by another hook or by dhclient-script, which uses this variable.
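Putting that together, a minimal sketch of creating the hook described above (path and file name follow this answer; adjust to your distribution):
# write the hook that hands the contents of /etc/hostname to dhclient-script
sudo tee /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname >/dev/null <<'EOF'
if [ -f "/etc/hostname" ]; then
    new_host_name=$(cat /etc/hostname)
fi
EOF
# the hook must have the execute permission, as noted above
sudo chmod +x /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname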
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and prevent the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
In my CentOS VMs I found that the script /etc/dhcp/dhclient.d/google_hostname.sh, installed by the google-compute-engine RPM, actually changed the hostname. This happens when the instance gets its IP address during boot.
While it's not the long-term solution I really want, for now I simply deleted this script. The hostname I set with hostnamectl now persists after a reboot.
The script is likely to be in exactly the same place in Debian/Ubuntu VMs, but of course I don't run any of those.
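For reference, the hostnamectl invocation mentioned above (the name is an example):
sudo hostnamectl set-hostname myhost.example.com
# verify the static hostname
hostnamectl status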
There is a hack you can use to achieve this, as I did. Just do:
sudo chattr +i /etc/hosts
This command actually makes the file "(i)mmutable", which means even root can't change it (unless root does chattr -i /etc/hosts first, of course).
As above, you can undo this with sudo chattr -i /etc/hosts
Cheers!
An easy way to fix this is to set up a startup script with custom metadata.
Key: startup-script
Value:
#! /bin/bash
hostname <desired hostname>
I am trying to create a custom CD/DVD to deploy RHEL 7 with kickstart file. Here is what I did:
Edited isolinux.cfg (in the ISOLinux folder) and grub.cfg file (in the EFI\BOOT folder).
Created ISO using mkisofs.
But it is not working. Am I using the correct files/method?
Edit the ISO image and add the ks.cfg file that you have created.
Preferably, put the ks.cfg file inside a ks directory. More information can be found here.
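For illustration, a boot entry in isolinux.cfg pointing at a kickstart in a ks directory on the disc might look like this (the menu label is made up, and the volume label must match the -V value used when rebuilding the ISO):
label kickstart
  menu label Install RHEL 7 with kickstart
  kernel vmlinuz
  append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-7.1\x20Server.x86_64 inst.ks=cdrom:/ks/ks.cfg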
You need to rebuild the ISO with genisoimage. Here is an example of what will work:
Add the kickstart file to your downloaded and exploded ISO.
Run this command from the directory containing the exploded ISO and the kickstart file, and point the output to another location to build the ISO:
genisoimage -r -v -V "OEL6 with KS for OVM Manager" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o OEL6U6_OVM_Manager.iso /var/www/html/Template/ISO/
I found the way to create a custom DVD on the RHEL7 documentation page.
Mount the downloaded image
mount -t iso9660 -o loop path/to/image.iso /mnt/iso
Create a working directory - a directory where you want to place the contents of the ISO image.
mkdir /tmp/ISO
Copy all contents of the mounted image to your new working directory. Make sure to use the -p option to preserve file and directory permissions and ownership.
cp -pRf /mnt/iso /tmp/ISO
Unmount the image.
umount /mnt/iso
Make sure your current working directory is the top-level directory of the extracted ISO image - e.g. /tmp/ISO/iso. Create the new ISO image using genisoimage:
genisoimage -U -r -v -T -J -joliet-long -V "RHEL-7.1 Server.x86_64" -Volset "RHEL-7.1 Server.x86_64" -A "RHEL-7.1 Server.x86_64" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .
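To sanity-check the result before burning it, you can loop-mount the freshly built image (paths follow the example above):
mount -t iso9660 -o loop ../NEWISO.iso /mnt/iso
# confirm ks.cfg and the boot files are where you expect
ls /mnt/iso
umount /mnt/iso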
Hope the answer will be helpful.
I am editing my answer due to the comments posted. Here is a more comprehensive solution:
(A) You need to create the ISO properly. I found helpful information in this URL.
Here is the line that I actually ended up with, for my MBR/UEFI ISO creation:
mkisofs -U -A "<Volume Header>" -V "RHEL-7.1 x86_64" -volset "RHEL-7.1 x86_64" -J -joliet-long -r -v -T -x ./lost+found -o ${OUTPUT}/${HOST}.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -boot-load-size 18755 /dir/where/sources/for/ISO/are/located
Be careful with the -V parameter, as it has to match what the kernel has defined for inst.stage2. In the default grub.conf included on the boot disk, it is configured to be "hd:LABEL=RHEL-7.1\x20x86_64", which matches the settings above.
(B) You need the correct setup for EFI for RHEL7. For some reason, this has changed from RHEL6, where you could just use /EFI/BOOT/BOOTX64.conf. Now it uses /EFI/BOOT/grub.cfg. Common wisdom from the Red Hat manuals states to add the inst.ks= parameter to the kernel line. The grub.cfg that comes in the /EFI/BOOT directory of the RHEL7 boot ISO actually has the linuxefi parameter instead of the kernel one; I would guess they work the same. If you are including the KS file on the CD, this should get you there.
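A sketch of what the modified kickstart entry in /EFI/BOOT/grub.cfg might look like (paths and label are assumptions; the label must match your -V value):
menuentry 'Install RHEL-7.1 with kickstart' {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.1\x20x86_64 inst.ks=cdrom:/ks/ks.cfg
    initrdefi /images/pxeboot/initrd.img
}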
Good Luck!