os-prober works but grub2-mkconfig won't add entry - Fedora

I have two Fedora installations on my HDD. When I execute grub2-mkconfig from one, the other is detected, but no entry for it is created.
$ grub2-mkconfig 1>/dev/null
Generating grub configuration file...
Found Fedora 33 (Xfce) on /dev/sda5
Adding boot menu entry for EFI firmware configuration
done
$ grub2-mkconfig 2>/dev/null
...
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
...
$ os-prober
/dev/sda5:Fedora 33 (Xfce):Fedora:linux:btrfs:UUID=xxxxx...xxxxx:subvol=root00
Why isn't an entry added for this system?

Related

Ubuntu Linux 18.04 WSL in Windows: MariaDB service start fails

After installing the MariaDB repository configuration tool for the first time in my WSL Linux on Windows (as described on the MariaDB download page), I executed mysql but got a socket error. netstat -apn | grep mysql shows nothing, indicating the mysql service is stopped; sudo apt list | grep *mysql-server* shows I had successfully installed mysql-server.
However, when I try sudo service mysql start, the command line gives:
* Starting MariaDB database server mysqld [fail]
I tried the following methods, but all failed with the same result:
Using /etc/init.d/mysql start
Removing /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1
Loosening permissions on /var/lib/mysql using chmod -R 777 /var/lib/mysql
Removing everything from /var/lib/mysql/
Changing port setting using port=1112 in /etc/my.cnf (since I have another mysql on the Windows side)
Filling in additional information in /etc/my.cnf (my configuration file was initially empty after installation, and I filled in the basedir, datadir, socket, log_error, and pid-file properties)
Trying systemctl instead of service (this failed because WSL uses sysvinit instead of systemd)
How could I start my MariaDB service? Thanks
I'm able to reproduce your problem (or one that looks an awful lot like it) on WSL1. Can you confirm that you are using WSL1?
I spun up two cloned instances (wsl --import of a clean backup) of Ubuntu 20.04 -- one on WSL1 and the other on WSL2. Unfortunately, I don't have a handy 18.04 to work with, but I'm hoping the problem is the same.
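For reference, cloning like that can be done roughly as follows (the distro names and paths here are placeholders, not the exact commands used):
wsl --export Ubuntu-20.04 C:\wsl\ubuntu-clean.tar
wsl --import Ubuntu-WSL1-test C:\wsl\wsl1-test C:\wsl\ubuntu-clean.tar --version 1
wsl --import Ubuntu-WSL2-test C:\wsl\wsl2-test C:\wsl\ubuntu-clean.tar --version 2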
On WSL2, everything worked properly. After the installation steps (pretty much the ones you put in your comment, but for 20.04), I was able to:
sudo service mariadb start
and then sudo mysql -u root successfully.
On WSL1, however, the MariaDB installation seems to fail in a strange way. It does not create /etc/mysql/mariadb.cnf, which leads to what you saw with an empty /etc/mysql/my.cnf, since it's a symlink to mariadb.cnf.
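A quick way to confirm this (assuming the stock layout) is to check where the symlink points:
ls -l /etc/mysql/my.cnf   # should point, possibly via /etc/alternatives, at mariadb.cnf
On the broken WSL1 install the link target simply doesn't exist.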
So I created mariadb.cnf manually:
sudo vi /etc/mysql/mariadb.cnf
with the contents:
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 0. "/etc/mysql/my.cnf" symlinks to this file, reason why all the rest is read.
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# If you are new to MariaDB, check out https://mariadb.com/kb/en/basic-mariadb-articles/
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Port or socket location where to connect
# port = 3306
socket = /run/mysqld/mysqld.sock
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
This is simply the default mariadb.cnf that was created correctly by the installation on WSL2.
Attempting to start the service then gave an error about a missing /etc/mysql/debian-start, so I repeated the same steps of copying it over:
sudo vi /etc/mysql/debian-start
With the contents:
#!/bin/bash
#
# This script is executed by "/etc/init.d/mariadb" on every (re)start.
#
# Changes to this file will be preserved when updating the Debian package.
#
# NOTE: This file is read only by the traditional SysV init script, not systemd.
#
source /usr/share/mysql/debian-start.inc.sh
# Read default/mysql first and then default/mariadb just like the init.d file does
if [ -f /etc/default/mysql ]; then
    . /etc/default/mysql
fi
if [ -f /etc/default/mariadb ]; then
    . /etc/default/mariadb
fi
MYSQL="/usr/bin/mysql --defaults-file=/etc/mysql/debian.cnf"
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
# Don't run full mysql_upgrade on every server restart, use --version-check to do it only once
MYUPGRADE="/usr/bin/mysql_upgrade --defaults-extra-file=/etc/mysql/debian.cnf --version-check"
MYCHECK="/usr/bin/mysqlcheck --defaults-file=/etc/mysql/debian.cnf"
MYCHECK_SUBJECT="WARNING: mysqlcheck has found corrupt tables"
MYCHECK_PARAMS="--all-databases --fast --silent"
MYCHECK_RCPT="${MYCHECK_RCPT:-root}"
## Checking for corrupt, not cleanly closed (only for MyISAM and Aria engines) and upgrade needing tables.
# The following commands should be run when the server is up but in background
# where they do not block the server start and in one shell instance so that
# they run sequentially. They are supposed not to echo anything to stdout.
# If you want to disable the check for crashed tables comment
# "check_for_crashed_tables" out.
# (There may be no output to stdout inside the background process!)
# Need to ignore SIGHUP, as otherwise a SIGHUP can sometimes abort the upgrade
# process in the middle.
trap "" SIGHUP
(
    upgrade_system_tables_if_necessary;
    check_root_accounts;
    check_for_crashed_tables;
) >&2 &
exit 0
And then: sudo chmod 755 /etc/mysql/debian-start
After that, voila:
sudo service mariadb restart
sudo mysql -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 10.5.8-MariaDB-1:10.5.8+maria~focal mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
Given the steps you've tried so far, I'd recommend blowing away pretty much all of it to try to start over "clean":
sudo apt remove mariadb-server
sudo apt autoremove
sudo rm -rf /etc/mysql
sudo rm -rf /var/lib/mysql
sudo rm -rf /usr/lib/mysql
Then reinstall mariadb-server and follow the steps above to create the correct files.
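For example:
sudo apt update
sudo apt install mariadb-server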

Startup script doesn't seem to work

I've recently started using Google Compute Engine for some of my projects, but my startup script doesn't seem to work. The VM has the startup-script metadata set, and the script works fine when I run it manually with:
sudo google_metadata_script_runner --script-type startup
Here is what I am trying to run on startup:
#!/bin/bash
sudo apt-get update
sudo rm -f Eve.jar
sudo rm -f GameServerStatus.jar
wget <URL>/Eve.jar
wget <URL>/GameServerStatus.jar
sudo chmod 7777 Eve.jar
sudo chmod 7777 GameServerStatus.jar
screen -dmS Eve sh Eve.sh
screen -dmS PWISS sh GameServerStatus.sh
There are no errors in the log either; it just seems to stop at the chmod or screen commands. Any ideas?
Thanks!
To add to kangbu's answer:
Checking the logs in Container-Optimized OS with
sudo journalctl -u google-startup-scripts.service
showed that the script could not find the user. After a long time of debugging, I finally added a delay before the sudo, and now it works. It seems the user is not yet registered when the script runs.
#! /bin/bash
sleep 10 # wait...
cut -d: -f1 /etc/passwd > /home/user/users.txt # make sure the user exists
cd /home/user/project # cd does not work after sudo, do it before
sudo -u user bash -c '\
source /home/user/.bashrc && \
<your-task> && \
date > /home/user/startup.log'
I have the same problem @Brina mentioned. I set up the metadata key startup-script with a value like:
touch a
ls -al > test.txt
When I ran the script above with sudo google_metadata_script_runner --script-type startup, it worked perfectly. However, if I reset my VM instance, the startup script didn't work. So I checked the startup script logs:
...
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listen normally on 5 eth0 fe80::4001:aff:fe8c:7 UDP 123
Jul 3 04:30:37 kbot-6 ntpd[1514]: peers refreshed
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listening on routing socket on fd #22 for interface updates
Jul 3 04:30:38 kbot-6 startup-script: INFO Starting startup scripts.
Jul 3 04:30:38 kbot-6 startup-script: INFO Found startup-script in metadata.
Jul 3 04:30:38 kbot-6 startup-script: INFO startup-script: Return code 0.
Jul 3 04:30:38 kbot-6 startup-script: INFO Finished running startup scripts.
Yes, it found the startup-script and ran it. I guessed it had executed as another user. I changed my script like this:
pwd > /tmp/pwd.txt
whoami > /tmp/whoami.txt
The result is:
myuserid@kbot-6:/tmp$ cat pwd.txt whoami.txt
/
root
Yes, it was executed in the / directory as the root user. Finally, I changed my script to sudo -u myuserid bash -c ..., which runs it as the specified user.
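With the test script above, the change ends up looking something like this (a sketch; the username is a placeholder):
#!/bin/bash
# run the actual startup work as a normal user instead of root
sudo -u myuserid bash -c 'cd /home/myuserid && touch a && ls -al > test.txt'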
Go to the VM instances page.
Click on the instance for which you want to add a startup script.
Click the Edit button at the top of the page.
Under Custom metadata, click Add item.
Add your startup script using one of the following keys:
startup-script: Supply the startup script contents directly with this key.
startup-script-url: Supply a Google Cloud Storage URL to the startup script file with this key.
This works; the documentation covers both new and existing instances, as shown in the GCE startup script documentation.
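The same metadata can also be set from the command line (a sketch, assuming the gcloud CLI; the instance and zone names are placeholders):
gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata-from-file startup-script=startup.sh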
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
On Ubuntu 12.04, SLES 11 and 12, and all images older than v20160606, run the startup scripts manually with:
sudo /usr/share/google/run-startup-scripts
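For example, on Ubuntu 16.04 you can check what the startup script printed with:
sudo grep startup-script /var/log/syslog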
I think that you do not need sudo, and the chmod 7777 should be 777.
Also, a cd (or at least a pwd) at the beginning might be useful.
Finally, log to a text file so you know where the script may fail.
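Applied to the script in the question, that might look roughly like this (a sketch; the log path and working directory are just example choices, and the URLs stay as placeholders):
#!/bin/bash
exec > /var/log/startup-script.log 2>&1   # log everything so you can see where it stops
cd /root                                  # startup scripts run as root in /, so pick a working directory
apt-get update
rm -f Eve.jar GameServerStatus.jar
wget <URL>/Eve.jar
wget <URL>/GameServerStatus.jar
chmod 777 Eve.jar GameServerStatus.jar
screen -dmS Eve sh Eve.sh
screen -dmS PWISS sh GameServerStatus.sh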

net-snmp ubuntu - snmptrapd doesn't log in mysql

ISSUE: net-snmp does not log traps into the MySQL database (installed on Ubuntu).
Net-SNMP was configured as per the tutorial: http://www.net-snmp.org/wiki/index.php/Net-Snmp_on_Ubuntu
I configured snmptrapd as described on the following page:
http://www.net-snmp.org/wiki/index.php/Snmptrapd
My MySQL installation was running with no issues; however, it did not contain the mysql_config file, so I ran the following install to get it:
sudo apt-get install libmysqlclient-dev
MySQL continues to run with no issues.
The net-snmp configure script was run successfully with:
./configure --with-defaults --with-mysql
The configure output showed that MySQL logging was enabled.
Contents of snmptrapd.conf:
authCommunity log public
# maximum number of traps to queue before forced flush
# set to 1 to immediately write to the database
sqlMaxQueue 1
# seconds between periodic queue flushes
sqlSaveInterval 1
snmpd.conf contains as its first lines:
rwcommunity public localhost
linux@lin-850:~$ cat my.cnf
[snmptrapd]
user=root
password=qbcdfee
host=localhost
The following command runs fine with appropriate output:
snmpwalk -v 1 -c public localhost
The DB schema was created as per /net-snmp-5.7.3/dist/schema-snmptrapd.sql.
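For reference, a test trap can be generated locally to exercise snmptrapd (a sketch using the standard net-snmp example notification OID):
snmptrap -v 2c -c public localhost '' 1.3.6.1.4.1.8072.2.3.0.1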
Where did I go wrong? Please help. Thanks in advance.
Regards,
George

Openshift: Why can't I add a node to a district?

I'm trying to add a node to a district:
[root@broker ~]# oo-admin-ctl-district -c add-node -n small_district -i node1.example.com
ERROR OUTPUT:
Node with server identity: node1.example.com is of node profile '' and needs to be 'small' to add to district 'small_district'
But, when I go to the node, it seems to know that it should be a small:
[root@node1 ~]# grep -i profile /etc/mcollective/facts.yaml
node_profile: small
I ran oo-diagnostics on the broker and got:
[root@broker ~]# oo-diagnostics
FAIL: test_node_profiles_districts_from_broker
No node hosts found. Please install some,
or ensure the existing ones respond to 'mco ping'.
OpenShift cannot host gears without at least one node host responding.
FAIL: run_script
oo-accept-systems -w 2 had errors:
--BEGIN OUTPUT--
FAIL: No node hosts responded. Run 'mco ping' and troubleshoot if this is unexpected.
1 ERRORS
But mco ping shows no problems:
[root@broker ~]# mco ping
node1.example.com time=106.82 ms
---- ping statistics ----
1 replies max: 106.82 min: 106.82 avg: 106.82
I also found https://lists.openshift.redhat.com/openshift-archives/users/2013-November/msg00006.html, which lists the same error message. However, I already have everything in /etc/mcollective/facts.yaml that the thread suggests:
[root@node1 ~]# grep 'node_profile' /etc/mcollective/facts.yaml
node_profile: small
What could be preventing the node from being added to the district?
The problem was a misconfiguration of the node. https://bugzilla.redhat.com/show_bug.cgi?id=1064977 should resolve the documentation issue that led to this.
Resolution was to update /etc/mcollective/server.cfg:
[root@node1 ~]# git diff --color /etc/mcollective/server.cfg.old /etc/mcollective/server.cfg
diff --git a/etc/mcollective/server.cfg.old b/etc/mcollective/server.cfg
index c614ed9..fff36c5 100644
--- a/etc/mcollective/server.cfg.old
+++ b/etc/mcollective/server.cfg
@@ -22,4 +22,5 @@ plugin.activemq.pool.1.password = marionette
# Facts
factsource = yaml
-plugin.yaml = /opt/rh/ruby193/root/etc/mcollective/facts.yaml
+plugin.yaml = /etc/mcollective/facts.yaml
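After a change like this, restart mcollective on the node so the corrected facts path is picked up, then retry adding the node from the broker (the service name below is an assumption for this ruby193/SCL-based setup):
# on the node
service ruby193-mcollective restart
# on the broker
oo-admin-ctl-district -c add-node -n small_district -i node1.example.com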

Root access required for CUDA?

I am using a GeForce 8400M GS on Ubuntu 10.04 and I am learning CUDA programming. I am writing and running a few basic programs. I was using cudaMalloc, and it kept giving me an error until I ran the code as root. However, I had to run the code as root only once. After that, even if I run the code as a normal user, I do not get an error on malloc. What's going on?
This is probably due to your GPU not being properly initialized at boot. I've come across this problem when using Ubuntu Server and other installations where an X server isn't being started automatically. Try the following to fix it:
Create a directory for a script to initialize your GPUs. I usually use /root/bin. In this directory, create a file called cudainit.sh with the following code in it (this script came from the Nvidia forums).
#!/bin/bash
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
    # Count the number of NVIDIA controllers found.
    N3D=`/usr/bin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
    NVGA=`/usr/bin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
    N=`expr $N3D + $NVGA - 1`
    for i in `seq 0 $N`; do
        mknod -m 666 /dev/nvidia$i c 195 $i;
    done
    mknod -m 666 /dev/nvidiactl c 195 255
else
    exit 1
fi
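Make the script executable so it can be run from rc.local:
sudo chmod 755 /root/bin/cudainit.sh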
Now we need to make this script run automatically at boot. Edit /etc/rc.local to look like the following.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
# Init CUDA for all users
#
/root/bin/cudainit.sh
exit 0
Reboot your computer and try to run your CUDA program as a regular user. If I'm right about what the problem is, then it should be fixed.
To get this working on Ubuntu 14.04, I followed https://devtalk.nvidia.com/default/topic/699610/linux/334-21-driver-returns-999-on-cuinit-cuda-/ to add nvidia-uvm to /etc/modules and to add a custom udev rule. Create /etc/udev/rules.d/70-nvidia-uvm.rules with this line:
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/bin/mknod -m 666 /dev/nvidia-uvm c $(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0;'"
I don't understand why sudo modprobe nvidia-uvm works to create a proper /dev/nvidia-uvm (as does running the CUDA program with sudo), but the /etc/modules listing requires the udev rule.
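Either way, after a reboot the device nodes can be sanity-checked with:
ls -l /dev/nvidia*
which should list /dev/nvidia0, /dev/nvidiactl and, with the udev rule in place, /dev/nvidia-uvm, all with mode 666.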