I'm running geth with the following command:
$ geth --testnet --networkid 3 --verbosity 3 --syncmode light --ipcdisable --ws --wsapi "db,eth,net,web3,personal,txpool,admin,miner" --wsorigins '*'
In a second console I connect to geth's JSON-RPC endpoint with wscat.
Subscribing to the "newHeads" event works fine:
$ wscat -c ws://localhost:8546
> {"id": 2, "method": "eth_subscribe", "params": ["newHeads"]}
< {"jsonrpc":"2.0","id":2,"result":"0x660135584e36a9edb0c55f89c389848"}
< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0x660135584e36a9edb0c55f89c389848","result":{"parentHash":"0xe7d0...","hash":"0x1dcc...
But subscribing to the "newPendingTransactions" event does not work:
$ wscat -c ws://localhost:8546
> {"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
< {"jsonrpc":"2.0","id":1,"result":"0x511b3274aa5dec44bb79d178c238e7fe"}
And that's all: I never receive any new pending transactions.
Subscribing to the "newPendingTransactions" event on ropsten.infura.io works fine:
$ wscat -c wss://ropsten.infura.io/ws
> {"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
< {"jsonrpc":"2.0","id":1,"result":"0x18fda7bf20ee9c5b5f1f08edf5c3e482"}
< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0x18fda7bf20ee9c5b5f1f08edf5c3e482","result":"0xc1e00266ab9f2c512d6c1967c300fc00381586e868611b7dff6fd94f230dd707"}}
Info:
$ geth version
Geth
Version: 1.8.22-stable
Git Commit: 7fa3509e2eaf1a4ebc12344590e5699406690f15
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.10.4
Operating System: linux
GOPATH=
GOROOT=/usr/lib/go-1.10
I have two questions:
1. Why does subscribing to the "newPendingTransactions" event not work?
2. What am I doing wrong?
The problem was that I had not waited for synchronization to finish.
I started from scratch (this time I used the Rinkeby network):
geth --rinkeby --verbosity 3 --syncmode fast --ipcdisable --ws --wsapi "db,eth,net,web3,personal,txpool,admin" --wsorigins '*'
For the Rinkeby network, this took about 20 hours and the size of the blockchain is about 25 gigabytes on my SSD.
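To check whether the node has finished syncing, you can query the standard eth_syncing method over the same websocket; it returns false once synchronization is complete (and an object with currentBlock/highestBlock while still syncing):
$ wscat -c ws://localhost:8546
> {"id": 1, "method": "eth_syncing", "params": []}
< {"jsonrpc":"2.0","id":1,"result":false}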
And now subscribing to the newPendingTransactions event works fine!
Thanks all!
The problem
I want to create a private Ethereum network, however my two peers refuse to connect. They always fail with the following error: Snapshot extension registration failed peer=af5dfeb7 err="peer connected on snap without compatible eth support".
My attempted fixes
Setting --snapshot=false produces the same error
Changing the --syncmode to full, fast or light made no difference
Adding the peers manually with admin.addPeer(${peer1.admin.nodeInfo.enode}) returns true, however net.peerCount and admin.peers.length return 0
Changing the chainId did not produce any different results
Waiting did not help
My setup process
Creating the peers:
geth init --datadir "peer1" genesis.json
geth init --datadir "peer2" genesis.json
Starting the peers and opening the console:
geth --datadir "peer1" --networkid 1111 --port 30401 console 2>peer1.log
geth --datadir "peer2" --networkid 1111 --port 30402 console 2>peer2.log
The genesis file (Created by puppeth)
{
"config": {
"chainId": 1111,
"homesteadBlock": 0,
"eip150Block": 0,
"eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"ethash": {}
},
"nonce": "0x0",
"timestamp": "0x61af5dd9",
"extraData": "0x0000000000000000000000000000000000000000000000000000000000000000",
"gasLimit": "0x47b760",
"difficulty": "0x1",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x0000000000000000000000000000000000000000",
"alloc": {},
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"baseFeePerGas": null
}
Geth version info
Geth
Version: 1.10.13-stable
Git Commit: 7a0c19f813e285516f4b525305fd73b625d2dec8
Architecture: amd64
Go Version: go1.17.2
Operating System: linux
GOPATH=
GOROOT=go
Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contain an expired certificate, and the service tries to create a CSR against an endpoint that is not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation, Redeploying Certificates, but this fails while detecting an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
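Indeed, inspecting the certificate mentioned in the logs directly with openssl confirms the expiry (a generic check, nothing OpenShift-specific):
root@master> openssl x509 -noout -enddate -in /etc/origin/node/certificates/kubelet-client-current.pem
notAfter=Feb 20 13:14:27 2020 GMT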
There is a guide for OpenShift 4.4, Recovering from expired control plane certificates, but that does not apply to 3.11, and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate had expired; I "cheated" by changing the OS date to a point before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
I am trying to execute the steps from https://medium.com/@mvmurthy/full-stack-hello-world-voting-ethereum-dapp-tutorial-part-1-40d2d0d807c2.
However, the difference is that I am running geth on an AWS server instead of truffle or the testnet specified in the link. I need the deployed contract address to try the UI. I am using the following geth command: geth --networkid 12312423 --port 30303 --rpc --rpcaddr "0.0.0.0" --rpcport 8080 --rpcapi "web3,db,net,personal,eth" --rpccorsdomain "*" --preload mining.js --nodiscover --maxpeers 0 --datadir=~/eth/data231 init genesis.json console
I run the Node commands in another screen on the same machine. I have gone through all the steps and I receive Error: exceeds block gas limit while deploying the contract. Is there an error in the genesis.json that I am using?
Any suggestions will help, thanks.
genesis.json
{
"config": {
"chainId": 15,
"homesteadBlock": 0,
"eip155Block": 0,
"eip158Block": 0
},
"difficulty": "200000000",
"gasLimit": "2100000",
"alloc": {
"7df9a875a174b3bc565e6424a0050ebc1b2d1d82": { "balance": "300000" },
"f41c74c9ae680c1aa78f42e5647a62f353b7bdde": { "balance": "400000" }
}
}
The problem is solved. There are two lessons from the solution:
1. The gasLimit in the genesis.json should be much higher, e.g. "gasLimit": "3141592000000".
2. We should first start geth without init, create an account, and unlock it. Then allocate a balance for that account as below and run geth init.
i.e.
Once you've generated your account, quit geth and remove every folder except keystore/ from your datadir:
$ cd
$ rm -rf $(ls | grep -v keystore)
Update your genesis block JSON, adding the following to the alloc key (the empty key is where your account address goes):
"alloc": {
"": {
"balance": "10000000000000000000"
}
}
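Putting both lessons together, a corrected genesis.json would look roughly like this (the alloc key is a placeholder for the account address generated above; the gasLimit value is simply the one that worked for me):
{
  "config": {
    "chainId": 15,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "difficulty": "200000000",
  "gasLimit": "3141592000000",
  "alloc": {
    "<your account address>": { "balance": "10000000000000000000" }
  }
}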
I am creating a CloudFormation template which creates some resources such as an EC2 instance, an auto scaling group, and a launch configuration.
Through the UserData property of the launch configuration resource, I tried to install the CloudWatch Logs agent as follows:
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource LaunchCongig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"yum -y install wget\n",
"# Get the CloudWatch Logs agent\n",
"wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py\n",
"# Install the CloudWatch Logs agent\n",
"python ./awslogs-agent-setup.py -n -r ", { "Ref" : "AWS::Region" }, " -c /etc/cwlogs.cfg || error_exit 'Failed to run CloudWatch Logs agent setup'\n",
"service awslogs start"
]]}
After SSHing into the instance, I checked the file /var/log/cloud-init-output.log to see if everything was fine, but here is what I got:
+ wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
--2017-02-17 14:36:10-- https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.226.59
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.226.59|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 47998 (47K) [text/x-python]
Saving to: ‘awslogs-agent-setup.py’
0K .......... .......... .......... .......... ...... 100% 196K=0.2s
2017-02-17 14:36:10 (196 KB/s) - ‘awslogs-agent-setup.py’ saved [47998/47998]
+ python ./awslogs-agent-setup.py -n -r eu-west-1 -c /etc/cwlogs.cfg
Step 1 of 5: Installing pip ...Traceback (most recent call last):
File "./awslogs-agent-setup.py", line 1144, in <module>
main()
File "./awslogs-agent-setup.py", line 1140, in main
setup.setup_artifacts()
File "./awslogs-agent-setup.py", line 693, in setup_artifacts
self.install_pip()
File "./awslogs-agent-setup.py", line 600, in install_pip
fail("Could not install pip. Please try again or see " + AGENT_SETUP_LOG_FILE + " for more details")
TypeError: fail() takes exactly 2 arguments (1 given)
+ error_exit 'Failed to run CloudWatch Logs agent setup'
/var/lib/cloud/instance/scripts/part-001: line 8: error_exit: command not found
Feb 17 14:36:12 cloud-init[2798]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [127]
Feb 17 14:36:12 cloud-init[2798]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Feb 17 14:36:12 cloud-init[2798]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 14:36:12 +0000. Datasource DataSourceEc2. Up 85.78 seconds
What is wrong with this script? Is there any other way to install the agent?
Thank you.
EDIT:
I figured out that it might be because the python-pip package didn't get installed, so I added this to the userData:
"yum -y install python-pip\n",
After that I deployed the template again and, strangely, I got the same error.
I am using an Amazon ECS-optimized AMI.
I solved the problem by installing the agent directly via yum (package awslogs):
"UserData":{ "Fn::Base64" : {
"Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum -y install aws-cfn-bootstrap\n",
"/opt/aws/bin/cfn-init -v",
" --stack ", { "Ref": "AWS::StackName" },
" --resource launchConfig",
" --region ", { "Ref" : "AWS::Region" },"\n",
"yum -y install awslogs\n",
"service awslogs start"
]]}
Here is the output from the log file:
Installed:
awslogs.noarch 0:1.1.2-1.10.amzn1
Dependency Installed:
aws-cli.noarch 0:1.11.29-1.45.amzn1
aws-cli-plugin-cloudwatch-logs.noarch 0:1.3.3-1.15.amzn1
freetype.x86_64 0:2.3.11-15.14.amzn1
libjpeg-turbo.x86_64 0:1.2.90-5.14.amzn1
mailcap.noarch 0:2.1.31-2.7.amzn1
python27-botocore.noarch 0:1.4.86-1.62.amzn1
python27-colorama.noarch 0:0.2.5-1.7.amzn1
python27-dateutil.noarch 0:2.1-1.3.amzn1
python27-docutils.noarch 0:0.11-1.15.amzn1
python27-futures.noarch 0:3.0.3-1.3.amzn1
python27-imaging.x86_64 0:1.1.6-19.9.amzn1
python27-jmespath.noarch 0:0.9.0-1.11.amzn1
python27-ply.noarch 0:3.4-3.12.amzn1
python27-pyasn1.noarch 0:0.1.7-2.9.amzn1
python27-rsa.noarch 0:3.4.1-1.8.amzn1
Complete!
+ service awslogs start
Starting awslogs: [ OK ]
Cloud-init v. 0.7.6 finished at Fri, 17 Feb 2017 15:33:42 +0000. Datasource DataSourceEc2. Up 83.47 seconds
Everything works fine this way. Hope that will help someone someday!
For ECS specifically, see Using CloudWatch Logs with Container Instances in the EC2 Container Service documentation for details on configuring CloudWatch Logs. The documentation recommends using yum install -y awslogs instead of the Python install script.
The documentation provides a complete sample in the Configuring CloudWatch Logs at Launch with User Data section.
In your case, since you're already managing your config files using cfn-init and CloudFormation::Init metadata in CloudFormation, you don't need any complex parsing of config files in your User-Data script, but you can still use the script as a reference. One thing worth adding to your User-Data script is running chkconfig awslogs on to make sure the service continues running on the instance after a reboot.
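For example, the tail of the UserData from the answer above would become the following (only the chkconfig line is new; everything else is unchanged):
"yum -y install awslogs\n",
"chkconfig awslogs on\n",
"service awslogs start"
]]}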
I have an OpenShift cluster with one master and two nodes. I've installed NFS on the master and the NFS client on the nodes.
I've followed the wordpress example with NFS: https://github.com/openshift/origin/tree/master/examples/wordpress
I did the following on my master, logged in via oc login -u system:admin:
mkdir /home/data/pv0001
mkdir /home/data/pv0002
chown -R nfsnobody:nfsnobody /home/data
chmod -R 777 /home/data/
# Add to /etc/exports
/home/data/pv0001 *(rw,sync,no_root_squash)
/home/data/pv0002 *(rw,sync,no_root_squash)
# Enable the new exports without bouncing the NFS service
exportfs -a
So exportfs shows:
/home/data/pv0001
<world>
/home/data/pv0002
<world>
$ setsebool -P virt_use_nfs 1
# Create the persistent volumes for NFS.
# I did not change anything in the yaml-files
$ oc create -f examples/wordpress/nfs/pv-1.yaml
$ oc create -f examples/wordpress/nfs/pv-2.yaml
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 <none> 1073741824 RWO,RWX Available
pv0002 <none> 5368709120 RWO Available
This is also what I get.
Then I go to my node:
oc login
test-admin
And I create a wordpress project:
oc new-project wordpress
# Create claims for storage in my project (same namespace).
# The claims in this example carefully match the volumes created above.
$ oc create -f examples/wordpress/pvc-wp.yaml
$ oc create -f examples/wordpress/pvc-mysql.yaml
$ oc get pvc
NAME LABELS STATUS VOLUME
claim-mysql map[] Bound pv0002
claim-wp map[] Bound pv0001
This looks exactly the same for me.
Launch the MySQL and WordPress pods and services:
oc create -f examples/wordpress/pod-mysql.yaml
oc create -f examples/wordpress/service-mysql.yaml
oc create -f examples/wordpress/pod-wordpress.yaml
oc create -f examples/wordpress/service-wp.yaml
oc get svc
NAME LABELS SELECTOR IP(S) PORT(S)
mysql name=mysql name=mysql 172.30.115.137 3306/TCP
wpfrontend name=wpfrontend name=wordpress 172.30.170.55 5055/TCP
So actually everything seemed to work! But when I ask for my pod status, I get the following:
[root@ip-10-0-0-104 pv0002]# oc get pod
NAME READY STATUS RESTARTS AGE
mysql 0/1 Image: openshift/mysql-55-centos7 is ready, container is creating 0 6h
wordpress 0/1 Image: wordpress is not ready on the node 0 6h
The pods are stuck in pending state, and the web console shows the following errors:
12:12:51 PM mysql Pod failedMount Unable to mount volumes for pod "mysql_wordpress": exit status 32 (607 times in the last hour, 41 minutes)
12:12:51 PM mysql Pod failedSync Error syncing pod, skipping: exit status 32 (607 times in the last hour, 41 minutes)
12:12:48 PM wordpress Pod failedMount Unable to mount volumes for pod "wordpress_wordpress": exit status 32 (604 times in the last hour, 40 minutes)
12:12:48 PM wordpress Pod failedSync Error syncing pod, skipping: exit status 32 (604 times in the last hour, 40 minutes)
Unable to mount, plus a timeout. But when I go to my node and do the following (/test is a directory I created on the node):
mount -t nfs -v masterhostname:/home/data/pv0002 /test
When I place a file in /test on my node, it appears in /home/data/pv0002 on my master, so the NFS export itself seems to work.
What's the reason that it's unable to mount in OpenShift?
I've been stuck on this for a while.
LOGS:
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.795267904Z" level=info msg="GET /containers/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832179 1148 mount_linux.go:103] Mount failed: exit status 32
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Mounting arguments: localhost:/home/data/pv0002 /var/lib/origin/openshift.local.volumes/pods/2bf19fe9-77ce-11e5-9122-02463424c049/volumes/kubernetes.io~nfs/pv0002 nfs []
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832279 1148 kubelet.go:1206] Unable to mount volumes for pod "mysql_wordpress": exit status 32; skipping pod
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.832794476Z" level=info msg="GET /containers/json?all=1"
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.835916304Z" level=info msg="GET /images/openshift/mysql-55-centos7/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.837085 1148 pod_workers.go:111] Error syncing pod 2bf19fe9-77ce-11e5-9122-02463424c049, skipping: exit status 32
Logs showed Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002
So it failed mounting on localhost.
To create my persistent volume I executed this YAML:
{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "registry-volume"
},
"spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/home/data/pv0002",
"server": "localhost"
}
}
}
So I was mounting /home/data/pv0002, but this path is not on localhost; it is on my master server (ose3-master.example.com). I had created my PV the wrong way. It should be:
{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "registry-volume"
},
"spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/home/data/pv0002",
"server": "ose3-master.example.com"
}
}
}
This was also a training environment. It's recommended to have an NFS server outside of your cluster to mount to.
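As a quick sanity check before creating the PV, you can verify from a node that the export is visible on the intended server (standard NFS tooling, nothing OpenShift-specific):
showmount -e ose3-master.example.com
Export list for ose3-master.example.com:
/home/data/pv0001 *
/home/data/pv0002 *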