I've removed a disk (mounted at /win-vm) from the host; it held a KVM storage pool. Now I can't start libvirtd:
root@ws:/# systemctl start libvirtd
Failed to start libvirtd.service: Unit -.mount is masked.
Logs
root@ws:/# journalctl -u libvirtd.service
-- Logs begin at Thu 2020-02-13 17:43:58 CET, end at Thu 2020-02-13 18:25:32 CET. --
Feb 13 17:45:24 ws systemd[1]: Starting Virtualization daemon...
Feb 13 17:45:24 ws systemd[1]: Started Virtualization daemon.
Feb 13 17:45:25 ws libvirtd[1157]: libvirt version: 5.0.0, package: 4 (Guido Günther <agx@sigxcpu.org> Mon, 17 Jun 2019 19:05:40 +0200)
Feb 13 17:45:25 ws libvirtd[1157]: hostname: ws
Feb 13 17:45:25 ws libvirtd[1157]: cannot open directory '/win-vm': No such file or directory
Feb 13 17:45:25 ws libvirtd[1157]: internal error: Failed to autostart storage pool 'win-vm': cannot open directory '/win-vm': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: Failed to open file '/sys/class/net/veth0eac943/operstate': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: unable to read: /sys/class/net/veth0eac943/operstate: No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: Failed to open file '/sys/class/net/veth725116c/operstate': No such file or directory
Feb 13 17:45:31 ws libvirtd[1157]: unable to read: /sys/class/net/veth725116c/operstate: No such file or directory
Feb 13 17:56:42 ws libvirtd[1157]: Cannot recv data: Connection reset by peer
Feb 13 17:56:42 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:57:47 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:57:47 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:58:48 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:58:48 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:12 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:12 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:37 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 17:59:37 ws libvirtd[1157]: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused
Feb 13 18:02:35 ws systemd[1]: Stopping Virtualization daemon...
Feb 13 18:02:35 ws systemd[1]: libvirtd.service: Succeeded.
Feb 13 18:02:35 ws systemd[1]: Stopped Virtualization daemon.
I have an empty /win-vm directory, but the logs say the directory doesn't exist. Is there a workaround, or can I start libvirtd without it checking storage pools?
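One possible workaround sketch (my assumption, not from the post: that the stale pool autostart link, plus any leftover /etc/fstab entry for the removed disk, is what blocks the start; the paths are the Debian defaults):
# drop the autostart link for the now-missing pool so libvirtd can start cleanly
sudo rm /etc/libvirt/storage/autostart/win-vm.xml
# if /etc/fstab still lists the removed /win-vm disk, comment that line out, then
sudo systemctl daemon-reload
sudo systemctl start libvirtd
# once libvirtd is up, drop the pool definition for good
virsh pool-undefine win-vm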
Related
I installed Elasticsearch 7.17.7 on my local machine running Ubuntu 22.04.1 LTS (not a virtual machine).
To start Elasticsearch I run this command in a terminal:
sudo systemctl start elasticsearch.service
After it starts successfully, I notice it uses up to 8.1 GB of RAM, which seems far too much: my machine only has 16 GB of usable RAM.
Attached are the Elasticsearch info and the Stacer output.
From time to time I check whether Elasticsearch is still running, because I've noticed that while I'm working, with tools such as Google Chrome, VS Code and SmartGit open, my machine becomes really slow at some point; when I then run a command needed for work, it returns this error:
Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster
I checked and found that the error happens because Elasticsearch has suddenly stopped working.
I am able to restart Elasticsearch using
sudo systemctl restart elasticsearch.service or sudo service elasticsearch restart
but sometimes an error appears and Elasticsearch fails to restart. When that happens I just reboot my laptop, which I'd like to avoid because it disrupts my work.
I haven't been able to reproduce the error log yet, but I will update this question once I find it.
Please share any tips or experience if anyone has encountered this issue.
I have tried installing a different version of Elasticsearch (7.16), but the same issue happens.
I have also tried reinstalling Elasticsearch, but it's still the same.
Update: Elasticsearch finally crashed and I was unable to restart the service; see the images below for details and the log.
Elasticsearch service failed
Error during restart
Update: below is the log I was able to capture when starting Elasticsearch.
==> /var/log/syslog <==
Dec 8 14:26:37 farhan-Yoga-6-13ALC6 systemd[1]: Starting Elasticsearch...
==> /var/log/kern.log <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/syslog <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/ufw.log <==
Dec 8 14:26:47 farhan-Yoga-6-13ALC6 kernel: [16726.683629] [UFW BLOCK] IN=wlp2s0 OUT= MAC=01:00:5e:00:00:01:64:3a:ea:e9:0a:4e:08:00 SRC=0.0.0.0 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=14037 PROTO=2
==> /var/log/auth.log <==
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session closed for user root
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: farhan : TTY=pts/3 ; PWD=/var/log ; USER=root ; COMMAND=/usr/sbin/service elasticsearch restart
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
==> /var/log/syslog <==
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Started Elasticsearch.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Stopping Elasticsearch...
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: uncaught exception in thread [process reaper (pid 46606)]
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "modifyThread")
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:485)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.security.AccessController.checkPermission(AccessController.java:1068)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:411)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at org.elasticsearch.secure_sm.SecureSM.checkThreadAccess(SecureSM.java:160)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at org.elasticsearch.secure_sm.SecureSM.checkAccess(SecureSM.java:120)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.checkAccess(Thread.java:2360)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.setDaemon(Thread.java:2308)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.ProcessHandleImpl.lambda$static$0(ProcessHandleImpl.java:103)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:637)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:928)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1021)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1158)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/java.lang.Thread.run(Thread.java:1589)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd-entrypoint[46384]: #011at java.base/jdk.internal.misc.InnocuousThread.run(InnocuousThread.java:186)
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: elasticsearch.service: Deactivated successfully.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Stopped Elasticsearch.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: elasticsearch.service: Consumed 42.311s CPU time.
Dec 8 14:26:49 farhan-Yoga-6-13ALC6 systemd[1]: Starting Elasticsearch...
==> /var/log/auth.log <==
Dec 8 14:27:00 farhan-Yoga-6-13ALC6 sudo: pam_unix(sudo:session): session closed for user root
==> /var/log/syslog <==
Dec 8 14:27:00 farhan-Yoga-6-13ALC6 systemd[1]: Started Elasticsearch.
Update: I was able to find the log showing why Elasticsearch stops working; apparently its RAM usage gets too high and Ubuntu (the kernel OOM killer) kills the process:
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.195555] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/elasticsearch.service,task=java,pid=46712,uid=128
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.195728] Out of memory: Killed process 46712 (java) total-vm:16570600kB, anon-rss:8393480kB, file-rss:0kB, shmem-rss:0kB, UID:128 pgtables:17344kB oom_score_adj:0
==> /var/log/syslog <==
Dec 8 14:30:43 farhan-Yoga-6-13ALC6 kernel: [16962.194794] Monitor Deflati invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
I'm still unable to figure out how to make Elasticsearch use less RAM.
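For the package install described above, one common way to cap this is to pin the JVM heap with a jvm.options.d drop-in (a sketch; the file name heap.options and the 2g figure are my assumptions, not from the post - by default Elasticsearch auto-sizes its heap to roughly half of system RAM, which matches the ~8 GB seen here):
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms2g
-Xmx2g
then restart the service:
sudo systemctl restart elasticsearch.service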
You can also run Elasticsearch via Docker with the -e ES_JAVA_OPTS="-Xmx512m" option.
Example:
docker network create elastic
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.6.1
docker run --name elasticsearch --net elastic -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xmx512m" -t docker.elastic.co/elasticsearch/elasticsearch:8.6.1
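To sanity-check that the heap cap took effect (a quick sketch, using the container name elasticsearch from the example above):
# confirm the -Xmx flag made it into the container environment
docker inspect elasticsearch --format '{{.Config.Env}}' | tr ' ' '\n' | grep Xmx
# watch the container's actual memory use
docker stats elasticsearch --no-stream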
I'm trying to set up SonarQube on Ubuntu 18.04 (on EC2) using MySQL, and went through this DigitalOcean tutorial.
But after all that, sudo service sonarqube start errors out:
ubuntu@ip-:/opt/sonarqube$ sudo service sonarqube start
Job for sonarqube.service failed because the control process exited with error code.
See "systemctl status sonarqube.service" and "journalctl -xe" for details.
ubuntu@ip-:/opt/sonarqube$ systemctl status sonarqube.service
sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; disabled; vendor prese
Active: failed (Result: exit-code) since Fri 2019-10-04 16:45:33 UTC; 1min 43
Process: 11036 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=
Oct 04 16:45:32 ip-172-31-41-243 systemd[1]: sonarqube.service: Control process
Oct 04 16:45:32 ip-172-31-41-243 systemd[1]: sonarqube.service: Failed with resu
Oct 04 16:45:32 ip-172-31-41-243 systemd[1]: Failed to start SonarQube service.
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: sonarqube.service: Service hold-off
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: sonarqube.service: Scheduled restar
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: Stopped SonarQube service.
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: sonarqube.service: Start request re
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: sonarqube.service: Failed with resu
Oct 04 16:45:33 ip-172-31-41-243 systemd[1]: Failed to start SonarQube service.
I tried another tutorial, with a different version of SonarQube and a different DB - same result. Is there a resource (memory) requirement for SonarQube? I am deploying on an EC2 t2.micro - 8GB - assuming that should be enough?
I also tried /opt/sonarqube/bin/linux-x86-64/sonar.sh start directly, and it gives the error:
Syntax error: newline unexpected (expecting ")"). I had not edited the sonar.sh file.
SonarQube Docker
I tried going the Docker route - it did not work.
Connection refused on port 9000. I have TCP/9000 opened in the Security Group and NACL is all open.
http://IP:9000 This site can’t be reached x.x.x.x refused to connect
I had a similar issue. Mine was caused by extra whitespace in the systemd service unit file. I suggest you check it carefully, or delete and recreate the service file, more carefully this time.
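For reference, a minimal unit in the shape the DigitalOcean-style setup uses might look like this (a sketch: the sonar user/group and the /opt/sonarqube path are assumptions taken from the question, not a verified config):
# /etc/systemd/system/sonarqube.service
[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonar
Group=sonar
Restart=on-failure

[Install]
WantedBy=multi-user.target
After recreating it, run sudo systemctl daemon-reload and then sudo systemctl restart sonarqube.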
For a particular project I wanted a Mint (Sonya) appliance with MariaDB, and only MariaDB, doing MySQL's work, without a hint of MySQL, so I know I'm working with MariaDB.
The appliance is one where I've installed multiple open source projects that each power their own website (Alfresco, Request Tracker, SuiteCRM, etc.). All of them have worked either with MariaDB or with Postgres without any interesting difficulty, and MariaDB worked predictably well on this system with no headaches, right up until I tried to create a new MariaDB database for a clone of a specific WordPress site. Now I can't seem to find a pulse.
The MariaDB troubleshooting page confirmed what I'd found in my own investigation (in particular, ps wwaux | grep mysql only turned up the grep process). The basic problem surfaced when I tried to log in to create a database:
monk@toolchest ~ $ mysql -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock' (2 "No such file or directory")
I've never gotten involved with MySQL's or MariaDB's /etc configuration files, but I looked briefly. They appeared sane to my uneducated eye.
Neither an aptitude reinstall mariadb-server nor a service mysql start produced any change, as far as I could tell.
For service mysql start, I got:
# service mysql start
Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.
The details for systemctl status mysql.service were:
● mysql.service - LSB: Start and stop the mysql database server daemon
Loaded: loaded (/etc/init.d/mysql; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2017-07-14 18:16:38 EEST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 5011 ExecStart=/etc/init.d/mysql start (code=exited, status=1/FAILURE)
Jul 14 18:16:38 toolchest /etc/init.d/mysql[5479]: 0 processes alive and '/usr/bin/mysql
Jul 14 18:16:38 toolchest /etc/init.d/mysql[5479]: [61B blob data]
Jul 14 18:16:38 toolchest /etc/init.d/mysql[5479]: error: 'Can't connect to local MySQL
Jul 14 18:16:38 toolchest /etc/init.d/mysql[5479]: Check that mysqld is running and that
Jul 14 18:16:38 toolchest /etc/init.d/mysql[5479]:
Jul 14 18:16:38 toolchest mysql[5011]: ...fail!
Jul 14 18:16:38 toolchest systemd[1]: mysql.service: Control process exited, code=exited
Jul 14 18:16:38 toolchest systemd[1]: Failed to start LSB: Start and stop the mysql data
Jul 14 18:16:38 toolchest systemd[1]: mysql.service: Unit entered failed state.
Jul 14 18:16:38 toolchest systemd[1]: mysql.service: Failed with result 'exit-code'.
The recent (i.e. non-cronned) content of journalctl -xe ran:
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has failed.
--
-- The result is failed.
Jul 14 18:16:38 toolchest systemd[1]: mysql.service: Unit entered failed state.
Jul 14 18:16:38 toolchest systemd[1]: mysql.service: Failed with result 'exit-code'.
/etc/mysql/mariadb.cnf and /etc/mysql/my.cnf both read (comments stripped):
[client-server]
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
/etc/mysql/my.cnf.fallback drops the latter !includedir. Commenting out that same !includedir in /etc/mysql/my.cnf to match the fallback file produced identical results.
I can edit my question to include more of /etc/mysql/*, but I wanted to ask first. Error messages similar to ERROR 2002 appear in questions like Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (38), but those are about MySQL rather than MariaDB, and I suspect this may be a MariaDB-specific fluke.
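A few quick checks that usually narrow this kind of failure down (a sketch assuming the default Debian/Ubuntu paths; the error-log location varies between setups):
sudo journalctl -u mysql --no-pager -n 100        # full, untruncated unit log
sudo tail -n 50 /var/log/mysql/error.log          # server error log, if present
ls -ld /var/run/mysqld /var/lib/mysql             # both should be owned by mysql:mysql
sudo -u mysql /usr/sbin/mysqld --verbose --help >/dev/null   # surfaces config parse errors on stderr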
So I've been working on a Rails server with MySQL and suddenly can't access my database. When I try to log in with mysql -u root -p I get ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I've read most of the forums I can find and looked through all the logs while trying to reset mysqld.sock. I've tried reinstalling and repackaging, unsuccessfully. The weird thing is that /var/run/mysqld/ doesn't even exist after reinstalling.
So I decided to just back up my databases and overhaul everything. I cleaned out both Apache and MySQL with apt-get remove --purge and reinstalled. All went fine and Apache launched fine, but then I tried to relaunch MySQL with systemctl start mysql, which told me to check the log, which says:
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fri 2016-12-16 23:56:19 UTC; 17s ago
Process: 15690 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=1/FAILURE)
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: mysql.service: Control process exited, code=exited status=1
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: Failed to start MySQL Community Server.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: mysql.service: Unit entered failed state.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: mysql.service: Failed with result 'exit-code'.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: mysql.service: Service hold-off time over, scheduling restart.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: Stopped MySQL Community Server.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: mysql.service: Start request repeated too quickly.
Dec 16 23:56:19 ip-172-31-0-55 systemd[1]: Failed to start MySQL Community Server.
Update - I created the /var/run/mysqld folder, which seems to allow sudo mysqld --initialize to run. Unfortunately this yielded the error:
2016-12-17T00:16:36.298825Z 0 [ERROR] Can't change data directory owner to mysql
2016-12-17T00:16:36.299212Z 0 [ERROR] Aborting
So no party yet. Any thoughts would be hugely appreciated.
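The "Can't change data directory owner to mysql" error usually means mysqld was started as root without --user, or the data directory isn't owned by the mysql user. A sketch of the usual recovery (assuming the default /var/lib/mysql data directory):
sudo mkdir -p /var/run/mysqld
sudo chown mysql:mysql /var/run/mysqld
sudo chown -R mysql:mysql /var/lib/mysql
sudo mysqld --initialize --user=mysql
sudo systemctl start mysql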
Edit /etc/mysql/conf.d/mysql.cnf:
sudo nano /etc/mysql/conf.d/mysql.cnf
and add this line:
socket=/var/run/mysqld/mysqld.sock
Then restart the MySQL service:
sudo service mysql restart
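Note that in my.cnf-style files the socket setting lives under a section header, and since the client error in the question mentions the same socket path, a slightly fuller sketch (assuming the default Debian/Ubuntu path) would be:
[mysqld]
socket=/var/run/mysqld/mysqld.sock

[client]
socket=/var/run/mysqld/mysqld.sock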
I modified sshd_config, but then the SSH service became unavailable. How do I reset the config?
Here is the log:
Jul 29 14:10:03 bye sshd[578]: /etc/ssh/sshd_config line 6: Badly formatted port number.
Jul 29 14:10:03 bye systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Jul 29 14:10:03 bye systemd[1]: Unit ssh.service entered failed state.
Jul 29 14:10:04 bye systemd[1]: ssh.service holdoff time over, scheduling restart.
Jul 29 14:10:04 bye systemd[1]: Stopping OpenBSD Secure Shell server...
Jul 29 14:10:04 bye systemd[1]: Starting Google Compute Engine VM initialization...
Jul 29 14:10:04 bye systemd[1]: Started Google Compute Engine VM initialization.
Jul 29 14:10:04 bye systemd[1]: Starting OpenBSD Secure Shell server...
Jul 29 14:10:04 bye systemd[1]: Started OpenBSD Secure Shell server.
Jul 29 14:10:04 bye sshd[582]: /etc/ssh/sshd_config line 6: Badly formatted port number.
Jul 29 14:10:04 bye systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Jul 29 14:10:04 bye systemd[1]: Unit ssh.service entered failed state.
Jul 29 14:10:04 bye systemd[1]: ssh.service holdoff time over, scheduling restart.
Since you don't have access to your instance and you need to modify the sshd_config file, you can either:
Delete the instance while keeping the boot disk, attach that disk to another instance as a secondary disk, modify the sshd_config file on it, detach the disk, and then create a new instance using that disk.
Or you can modify the sshd_config file using the following startup-script:
#!/bin/bash
/bin/sed -i.bak 's/^Port .*/Port 22/g' /etc/ssh/sshd_config
This startup script changes the line that starts with "Port" to "Port 22". It also creates a backup at /etc/ssh/sshd_config.bak.
After updating the instance metadata with the startup script, you need to reboot the instance, because startup scripts are executed when the instance boots. Once you regain access to the instance, remove the script so it isn't executed again to no avail.
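For example, the metadata update and reboot can be done with gcloud along these lines (INSTANCE_NAME and fix-sshd.sh are placeholders):
# save the script above as fix-sshd.sh, then:
gcloud compute instances add-metadata INSTANCE_NAME \
    --metadata-from-file startup-script=fix-sshd.sh
gcloud compute instances reset INSTANCE_NAME    # reboot so the startup script runs
# after regaining SSH access, remove the script:
gcloud compute instances remove-metadata INSTANCE_NAME --keys=startup-script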
I hope it helps.