Previously my MySQL pod was stuck in Terminating status, so I tried to force delete it with a command like this:
kubectl delete pods <pod> --grace-period=0 --force
Later, when I tried to run helm upgrade again, the pod was stuck in ContainerCreating status with this event from the pod:
17s Warning FailedMount pod/db-mysql-primary-0 MountVolume.SetUp failed for volume "pvc-f32a6f84-d897-4e35-9595-680302771c54" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
17s Warning FailedMount pod/db-mysql-secondary-0 MountVolume.SetUp failed for volume "pvc-61fc6eda-97fa-455f-ac2c-df8ebcb90f1c" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
Can anyone please help me resolve this issue? Thanks a lot.
When you run the command
kubectl delete pods <pod> --grace-period=0 --force
you are asking Kubernetes to forget the Pod, not to delete it. You have to be careful with this command: make sure that the Pod's containers are no longer running on the host, especially when they are mounted to a PVC. Most likely the containers are still running and still attached to the PVC.
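A few checks you can run to confirm (a sketch; the grep pattern is an assumption about how the DigitalOcean CSI node plugin is named in your cluster):

# Is the volume still attached to the old node?
kubectl get volumeattachments

# Is the CSI driver registered on each node? (drivers should not be null)
kubectl get csinodes -o yaml

# Are the DigitalOcean CSI node plugin pods running on every node?
kubectl -n kube-system get pods -o wide | grep csi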
pool-product-8jd40 0
spec:
drivers: null
And on some of my node pools the CSI driver is not ready (drivers is null), when it is supposed to be 1 (ready).
*Sorry, I can't attach the image yet.
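If drivers stays null on a node, one possible remedy (an assumption, not confirmed in this thread) is to delete the CSI node plugin pod on that node so its DaemonSet recreates it and re-registers the socket:

# Delete the CSI node plugin pod on the affected node; the DaemonSet will recreate it
# (the app=csi-do-node label is an assumption; check the real labels with kubectl get ds -n kube-system)
kubectl -n kube-system delete pod -l app=csi-do-node --field-selector spec.nodeName=pool-product-8jd40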
Related
When I try to deploy I see...
Error: Cannot find module 'express'
I am assuming this is because it isn't running npm install, so I try to connect and do it manually, but I get...
oc get pods
...
personal-ui-17-blah 0/1 CrashLoopBackOff 6 8m
oc rsh personal-ui-17-blah
error: unable to upgrade connection: container not found ("personal-ui")
How do I remote into a pod that is busted to fix it?
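One way to get a shell when the container itself keeps crashing (a sketch, assuming the pod comes from a DeploymentConfig named personal-ui) is oc debug, which starts a copy of the pod with an interactive shell instead of the failing entrypoint:

# Start a debug copy of the deployment with a shell (dc name assumed from the pod name)
oc debug dc/personal-ui

# Inside the debug shell, run the missing step manually, e.g.
npm install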
I am getting the error message
error: operation failed: Failed to connect to remote libvirt URI qemu+ssh://mytargethostname.mydomain.com/system: Cannot recv data: Host key verification failed.: Connection reset by peer
when I try to run the kvm migration command like this
virsh migrate --verbose --live --p2p --tunnelled hosttomigrate qemu+ssh://mytargethostname.mydomain.com/system
I can successfully view the running vms on the target host when I run
virsh -c qemu+ssh://mytargethostname.mydomain.com/system list --all
Is there some special configuration that I may need for KVM?
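One common cause (an assumption here, since virsh list works under your own account) is that the peer-to-peer migration's SSH connection is made by a different user, typically root running libvirtd, whose known_hosts does not yet contain the target host's key. A minimal sketch of seeding that key:

# Connect once as root so the target host key gets accepted into /root/.ssh/known_hosts
sudo ssh root@mytargethostname.mydomain.com 'hostname'

# Or add the key non-interactively
ssh-keyscan mytargethostname.mydomain.com | sudo tee -a /root/.ssh/known_hosts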
I am having some trouble with the default MySQL installation on CircleCI. In the 'post' section of 'machine', I stop MySQL using "- sudo service mysql stop". The reason for doing so is that I want to use a Docker MySQL container on port 3306. My "docker-compose up" takes some time to finish, and sometimes, before the Docker MySQL container starts, the mysql process starts again for no reason obvious to me. I have been tracking this issue using the following command:
while true; do sudo netstat -nlp | grep :3306; sleep 2; done
I have one build that ran fine, with Docker able to bind port 3306, and another build in which mysqld started again even after being stopped, giving me the following error on docker-compose up:
ERROR: for dbm01 Cannot start service dbm01: failed to create endpoint minimum_dbm01_1 on network minimum_default: Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Both builds are of the same commit, so there is no difference in code. What might be the issue?
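As a guess (not confirmed), something like upstart may be respawning mysqld after the stop. A sketch that disables the service and waits for the port to be free before bringing the containers up:

# Stop mysql and keep upstart from respawning it (Ubuntu/upstart-style override)
sudo service mysql stop
echo manual | sudo tee /etc/init/mysql.override

# Do not start the containers until nothing listens on 3306 any more
while sudo netstat -nlp | grep -q :3306; do sleep 2; done
docker-compose up -d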
I am trying to compile eJabberd on CentOS 6. I am following the steps mentioned at https://www.process-one.net/docs/ejabberd/guide_en.html#htoc12
However, this aborts with a connection-timeout error while executing "make".
Following is the error snippet from the command prompt:
[root@CentOS-6-64-EN ejabberd-15.04]# make
rm -rf deps/.got
rm -rf deps/.built
/usr/lib64/erlang/bin/escript rebar get-deps && :> deps/.got
==> rel (get-deps)
==> ejabberd-15.04 (get-deps)
Pulling p1_cache_tab from {git,"git://github.com/processone/cache_tab",
"cca096330ce39e8b56fe0e0c478df1ff452e7751"}
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: git clone -n git://github.com/processone/cache_tab p1_cache_tab failed with error: 128 and output:
github.com[0: 192.30.252.131]: errno=Connection timed out
fatal: unable to connect a socket (Connection timed out)
Initialized empty Git repository in /root/Desktop/eJabberd/ejabberd-15.04/deps/p1_cache_tab/.git/
ERROR: 'get-deps' failed while processing /root/Desktop/eJabberd/ejabberd-15.04: rebar_abort
make: *** [deps/.got] Error 1
On trying the command "./rebar get-deps", I get the same connection timeout error.
My network connectivity is fine, so it seems the GitHub link is broken. Please help!
You should try replacing the dependency links to GitHub with https:// instead of git://.
It should fix your issue.
We will check the project to make sure all our dependencies use the https URL scheme instead of ssh.
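In the meantime, instead of editing every dependency in rebar.config by hand, a global Git URL rewrite (a standard Git feature, nothing specific to ejabberd) achieves the same thing:

# Rewrite every git:// GitHub URL to https:// for this user
git config --global url."https://github.com/".insteadOf git://github.com/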
I am using hadoop-1.0.4 on Amazon EC2 with 3 Ubuntu 12.10 instances, 1 master and 2 slaves, installed just under the ~ directory.
Now start-all.sh and stop-all.sh run fine, but when I run jps on the master or the slaves, it prints nothing. Then I tested the Hadoop examples:
~/hadoop$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
It shows
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
However, I've already run chmod 777 -R on the tmp folders.
~/hadoop$ sudo bin/hadoop jar hadoop-examples-1.0.4.jar pi 10 10000
With sudo, it produces
13/05/12 03:58:11 WARN conf.Configuration: DEPRECATED: hadoop-site.xml
found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to
override properties of core-default.xml, mapred-default.xml
and hdfs-default.xml respectively
Number of Maps = 10
Samples per Map = 10000
13/05/12 03:58:12 WARN fs.FileSystem: "54.235.101.85:50001" is a deprecated
filesystem name. Use "hdfs://54.235.101.85:50001/" instead.
13/05/12 03:58:13 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 0 time(s).
13/05/12 03:58:14 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 1 time(s).
13/05/12 03:58:15 INFO ipc.Client: Retrying connect to server:
hdmaster/54.235.101.85:50001. Already tried 2 time(s).
Then it failed to connect. So what is the problem? Should I use sudo to run the examples? Thanks a lot.
I think the problem is that 54.235.101.85 is a public IP address. Run ifconfig on all the nodes to get the list of IP addresses and check for addresses beginning with 10.x.x.x, 172.x.x.x, or 192.x.x.x. If you find any, modify your configuration files on all the nodes to use those private addresses instead.
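As a sketch, in Hadoop 1.x that means pointing fs.default.name in core-site.xml on every node at the master's private address (10.0.0.1 below is a placeholder for whatever private IP you find; port 50001 is taken from your logs):

<!-- core-site.xml (same on master and slaves) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.0.1:50001</value>
</property>

The same idea applies to mapred.job.tracker in mapred-site.xml if it also points at the public address.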