System information
Geth Version: 1.7.3-stable
Git Commit: 4bb3c89d44e372e6a9ab85a8be0c9345265c763a
Operating System: linux
Expected behaviour
Mist connects to the local private network.
Actual behaviour
I run the following commands:
geth --datadir ~/private_network init ~/private_network/genesis.json
geth --datadir ~/private_network --networkid 3131 --ipcpath ~/private_network/geth.ipc console 2>~/private_network/console.log
and then I run Mist, but I get an "address already in use" error. Even after killing the processes that use port 30303, I get the same result.
Backtrace
INFO [12-16|12:05:37] Disk storage enabled for ethash caches dir=~/.ethereum/testnet/geth/ethash count=3
INFO [12-16|12:05:37] Disk storage enabled for ethash DAGs dir=~/.ethash count=2
INFO [12-16|12:05:37] Initialising Ethereum protocol versions="[63 62]" network=3
INFO [12-16|12:05:37] Loaded most recent local header number=797369 hash=81c88e…3044c5 td=587702682055345
INFO [12-16|12:05:37] Loaded most recent local full block number=0 hash=419410…ca4a2d td=1048576
INFO [12-16|12:05:37] Loaded most recent local fast block number=761870 hash=08735b…e597b9 td=571350456833753
INFO [12-16|12:05:37] Loaded local transaction journal transactions=0 dropped=0
INFO [12-16|12:05:37] Upgrading chain index type=bloombits percentage=79
INFO [12-16|12:05:37] Regenerated local transaction journal transactions=0 accounts=0
INFO [12-16|12:05:37] Starting P2P networking
Fatal: Error starting protocol stack: listen udp :30303: bind: address already in use
Your log shows you are connecting to the Ropsten network (network=3). You have to pass your network id to Mist using the --network option and provide the path to your .ipc file using --rpc:
$ ./Mist.exe --network 3131 --rpc ~/private_network/geth.ipc
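As for the "address already in use" failure when starting geth: it means another process (often a geth instance that Mist itself spawned) is already bound to UDP port 30303. A generic way to check and clear it (a sketch, not part of the original report):

# show which process holds port 30303
sudo lsof -i :30303
# stop any stray geth instances
pkill -f geth

Alternatively, start your own node on a different port (e.g. geth --port 30304) before launching Mist.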
Full command line options:
$ ./Mist.exe --help
Usage: Mist.exe --help [Mist options] [Node options]
Mist options:
--mode, -m App UI mode: wallet, mist. [string] [default: "mist"]
--node Node to use: geth, eth [string] [default: null]
--network Network to connect to: main, test
[string] [default: null]
--rpc Path to node IPC socket file OR HTTP RPC hostport (if
IPC socket file then --node-ipcpath will be set with
this value). [string]
--swarmurl URL serving the Swarm HTTP API. If null, Mist will
open a local node.
[string] [default: "http://localhost:8500"]
--gethpath Path to Geth executable to use instead of default.
[string]
--ethpath Path to Eth executable to use instead of default.
[string]
--ignore-gpu-blacklist Ignores GPU blacklist (needed for some Linux
installations). [boolean]
--reset-tabs Reset Mist tabs to their default settings. [boolean]
--logfile Logs will be written to this file in addition to the
console. [string]
--loglevel Minimum logging threshold: info, debug, error, trace
(shows all logs, including possible passwords over
IPC!). [string] [default: "info"]
--syncmode Geth synchronization mode: [fast|light|full] [string]
--version, -v Display Mist version. [boolean]
--skiptimesynccheck Disable checks for the presence of automatic time sync
on your OS. [boolean]
Node options:
- To pass options to the underlying node (e.g. Geth) use the --node- prefix,
e.g. --node-datadir
Options:
-h, --help Show help [boolean]
I am trying to run OpenShift on Fedora 36 using the Origin Client (oc).
I have updated Fedora to the latest version.
I have installed oc.
Whenever I try to run oc cluster up, it shows the error below:
[root@fedora ridhoswasta]# oc cluster up
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I0825 12:11:14.411027 50887 flags.go:30] Running "create-kubelet-flags"
I0825 12:11:16.391985 50887 run_kubelet.go:49] Running "start-kubelet"
I0825 12:11:17.200056 50887 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
E0825 12:16:17.201364 50887 run_self_hosted.go:571] API server error: Get "https://127.0.0.1:8443/healthz?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused ()
Error: timed out waiting for the condition
Then I checked the logs of the kubelet container, which show:
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-min-version has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0825 05:13:19.249680 51788 server.go:417] Version: v1.11.0+d4cacc0
I0825 05:13:19.249928 51788 plugins.go:97] No cloud provider specified.
F0825 05:13:19.253892 51788 server.go:261] failed to run Kubelet: mountpoint for cpu not found
I have tried reinstalling Docker with the latest version, but I still face this issue.
Could someone suggest something else to try?
Thanks!
oc cluster up uses a deprecated version of OpenShift; it has been superseded by OpenShift Local: https://developers.redhat.com/products/openshift-local/overview. OpenShift Local does, however, use a good deal more resources than oc cluster up ever did. There is also a spiritual successor that might be worth checking out: MicroShift: https://microshift.io/
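As for the underlying kubelet failure, "failed to run Kubelet: mountpoint for cpu not found" is typically caused by Fedora 31+ defaulting to cgroup v2, which the old kubelet bundled with origin v3.11 cannot use. If you still want to try oc cluster up on Fedora 36, booting the host back into cgroup v1 may help (an untested sketch; a reboot is required, and v3.11 remains unsupported on current Fedora):

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot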
I am trying to set up a K3S cluster for learning purposes, but I am having trouble connecting the master node with agents. I have looked at several tutorials and discussions on this, but I can't find a solution. I know I am probably missing something obvious (due to my lack of knowledge), but help would still be much appreciated.
I am using two AWS t2.micro instances with default configuration.
I SSHed into the master and installed K3S using
curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644 --node-name k3s-master-01
With kubectl get nodes, I am able to see the master:
NAME STATUS ROLES AGE VERSION
k3s-master-01 Ready control-plane,master 13s v1.23.6+k3s1
So far it seems I am doing things right. From what I understand, I am supposed to configure the kubeconfig file, so I accessed it using
cat /etc/rancher/k3s/k3s.yaml
I copied the configuration file and changed the server entry to match the private IP I took from the AWS console, resulting in something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <lots_of_info>
    server: https://<master_private_IP>:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: <my_certificate_data>
    client-key-data: <my_key_data>
Then, I ran vi ~/.kube/config and pasted the kubeconfig file there.
Finally, I grabbed the token with cat /var/lib/rancher/k3s/server/node-token, SSHed into the other machine, and ran the following:
curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3s-worker-01 K3S_URL=https://<master_private_IP>:6443 K3S_TOKEN=<master_token> sh -
The output is
[INFO] Finding release for channel stable
[INFO] Using v1.23.6+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.23.6+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.23.6+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
From this output, it looks like the agent was created. However, when I run kubectl get nodes on the master, I still get
NAME STATUS ROLES AGE VERSION
k3s-master-01 Ready control-plane,master 12m v1.23.6+k3s1
What was I supposed to do to get the agent connected to the master? I guess I am probably missing something simple, but I just can't seem to find the solution. I've read all the documentation, but it is still not clear to me where I am making the mistake. I've also tried saving the master's private IP and token on the agent as environment variables with export K3S_TOKEN=master_token and K3S_URL=master_private_IP and then simply running curl -sfL https://get.k3s.io | sh -, but I still can't see the worker node when running kubectl get nodes.
Any help would be appreciated.
It might be your VM instance firewall that prevents a proper connection from your master to the worker node (and vice versa). The official Rancher documentation advises disabling the firewall on (Red Hat/CentOS) Enterprise Linux:
It is recommended to turn off firewalld:
systemctl disable firewalld --now
If enabled, it is required to disable nm-cloud-setup and reboot the node:
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
If you are using Ubuntu on your VMs, there is a different firewall tool (ufw).
In my case, allowing TCP connections on ports 6443 and 443 (not sure if the latter is required) worked fine.
Allow TCP connections on port 6443 on all of your cluster machines:
sudo ufw allow 6443/tcp
Then run the k3s installation script on your worker node(s):
curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3s-worker-1 K3S_URL=https://<k3s-master-1 IP>:6443 K3S_TOKEN=<k3s-master-1 TOKEN> sh -
This should work. If not, you can try adding an allow rule for TCP port 443 as well.
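Since the question uses AWS t2.micro instances, note that ufw is usually inactive on stock Ubuntu AMIs; the equivalent gate is the EC2 security group attached to the instances. The same inbound rule can be added from the AWS CLI (a sketch with placeholder IDs, assuming both instances share one security group):

# allow TCP 6443 between instances in the same security group
aws ec2 authorize-security-group-ingress \
    --group-id <sg-id> \
    --protocol tcp --port 6443 \
    --source-group <sg-id>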
A few options to check.
Check journalctl for errors:
journalctl -u k3s-agent.service -n 300 -xn
If using a Raspberry Pi for a worker node, make sure you have
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
at the very end of your /boot/cmdline.txt file. DO NOT PUT THIS VALUE ON A NEW LINE! It should just be appended to the end of the existing line, as in the example below.
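For illustration, a hypothetical /boot/cmdline.txt after the change (everything before the cgroup parameters is a placeholder; your existing boot options will differ):

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1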
If your master node(s) have self-signed certs, make sure you copy the master node's self-signed cert to your worker node(s). On Linux or Raspberry Pi, copy the cert to /usr/local/share/ca-certificates, then run
sudo update-ca-certificates
on the worker node.
Don't forget to reboot the worker node after you make these changes!
Hope this helps someone!
ubuntu@ip-172-31-39-89:~$ export LIBP2P_FORCE_PNET=1 && IPFS_PATH=~/.ipfs ipfs daemon
I get the following error:
export LIBP2P_FORCE_PNET=1 && IPFS_PATH=~/.ipfs ipfs daemon
go1.11.1
Successfully raised file descriptor limit to 2048.
13:37:13.509 ERROR p2p-config: tried to create a libp2p node with no Private Network Protector but usage of Private Networks is forced by the enviroment config.go:69
13:37:13.512 ERROR cmd/ipfs: error from node construction: privnet: private network was not configured but is enforced by the environment daemon.go:332
Error: privnet: private network was not configured but is enforced by the environment
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
Any ideas about the cause?
I carefully repeated the process and regenerated the swarm keys and it worked this time.
I was having the same problem when I tried to run it from a systemd service. Using the LIBP2P_FORCE_PNET environment variable raises the error. I deleted the IPFS data directory (~/.ipfs) and ran the init command again.
I think the reason could be that I had not added the IPFS swarm key. Here are the commands, in case someone has a similar problem.
export LIBP2P_FORCE_PNET=1
IPFS_PATH=~/.ipfs ipfs init
#then,
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
IPFS_PATH=~/.ipfs ipfs daemon
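For reference, the generated swarm.key is a small three-line text file: a codec header, an encoding marker, and the 32-byte pre-shared key in hex. It should look roughly like this (the key line below is a placeholder, not a real key):

/key/swarm/psk/1.0.0/
/base16/
<64 hexadecimal characters>

All nodes in the private network must share the same swarm.key in their IPFS_PATH.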
I am creating a private Ethereum network. I wrote the genesis.json file (code below) and initialized it without error, but when I try to connect to it, a new line is created (implying an additional command should be specified). When I press enter, geth simply connects to the main network. How do I get geth to connect to the private network?
Note: you can immediately tell geth connects to the main network because my chain's ID is 15 and it shows a connection to network 1.
genesis.json:
{
  "difficulty": "0x20000",
  "extraData": "",
  "gasLimit": "0x8000000",
  "alloc": {},
  "config": {
    "chainId": 15,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  }
}
Command Line:
Ryan-Cocuzzos-Laptop:BlockDev Ryan$ geth init ./genesis.json --datadir mychaindata
WARN [01-05|12:12:00] No etherbase set and no accounts found as default
INFO [01-05|12:12:00] Allocated cache and file handles database=/Users/Ryan/Desktop/BlockDev/mychaindata/geth/chaindata cache=16 handles=16
INFO [01-05|12:12:00] Writing custom genesis block
INFO [01-05|12:12:00] Successfully wrote genesis state database=chaindata hash=0613eb…9a64e7
INFO [01-05|12:12:00] Allocated cache and file handles database=/Users/Ryan/Desktop/BlockDev/mychaindata/geth/lightchaindata cache=16 handles=16
INFO [01-05|12:12:00] Writing custom genesis block
INFO [01-05|12:12:00] Successfully wrote genesis state database=lightchaindata hash=0613eb…9a64e7
Ryan-Cocuzzos-Laptop:BlockDev Ryan$ geth --datadir .\mychaindata\
>
WARN [01-05|12:13:37] No etherbase set and no accounts found as default
INFO [01-05|12:13:37] Starting peer-to-peer node instance=Geth/v1.7.3-stable/darwin-amd64/go1.9.2
INFO [01-05|12:13:37] Allocated cache and file handles database=/Users/Ryan/Desktop/BlockDev/.mychaindata/geth/chaindata cache=128 handles=1024
INFO [01-05|12:13:37] Writing default main-net genesis block
INFO [01-05|12:13:37] Initialised chain configuration config="{ChainID: 1 Homestead: 1150000 DAO: 1920000 DAOSupport: true EIP150: 2463000 EIP155: 2675000 EIP158: 2675000 Byzantium: 4370000 Engine: ethash}"
INFO [01-05|12:13:37] Disk storage enabled for ethash caches dir=/Users/Ryan/Desktop/BlockDev/.mychaindata/geth/ethash count=3
INFO [01-05|12:13:37] Disk storage enabled for ethash DAGs dir=/Users/Ryan/.ethash count=2
INFO [01-05|12:13:37] Initialising Ethereum protocol versions="[63 62]" network=1
INFO [01-05|12:13:37] Loaded most recent local header number=0 hash=d4e567…cb8fa3 td=17179869184
INFO [01-05|12:13:37] Loaded most recent local full block number=0 hash=d4e567…cb8fa3 td=17179869184
INFO [01-05|12:13:37] Loaded most recent local fast block number=0 hash=d4e567…cb8fa3 td=17179869184
INFO [01-05|12:13:37] Regenerated local transaction journal transactions=0 accounts=0
INFO [01-05|12:13:37] Starting P2P networking
ADDITIONAL INFO:
I am running MacOS Sierra (10.12.3)
I am using Terminal (2.7.1)
You're missing --networkid when starting geth.
geth --networkid 15 --datadir mychaindata
You are missing --networkid and other important parameters like port, etc.
Use the command below to connect to your network:
geth --networkid 15 --datadir ./mychaindata
Additional geth parameters are shown below:
geth --networkid <chainId>
--mine
--datadir <datadir>
--nodiscover
--rpc --rpcport "8545"
--port "30303"
--rpccorsdomain "*"
--nat "any"
--rpcapi eth,web3,personal,net
--unlock 0
--password <password file>
--ipcpath <path to .ipc file>
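Once the node is running with the correct flags, one way to confirm you are on the private chain is from the geth JavaScript console (a quick sanity check using standard console APIs; the expected hashes come from the logs in this question):

> net.version
"15"
> eth.getBlock(0).hash
"0x0613eb..."

net.version should return the id passed via --networkid, and the genesis hash should match the one printed by geth init (0613eb…9a64e7) rather than the main-net genesis (d4e567…cb8fa3).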
I have set up my Kubernetes 1.3.4 cluster on GCE with
export KUBE_ENABLE_CLUSTER_MONITORING=google
This works quite nicely: I get application logs (for some reason in the Container Engine section, but fine) and also pod and node metrics.
The only thing missing is the node memory metrics; only CPU is shown (see screenshot).
[Screenshot: node charts with no memory metrics]
In the heapster logs I see tons of lines like this:
{
  metadata: {
    severity: "ERROR"
    projectId: "<project-id>"
    serviceName: "container.googleapis.com"
    zone: "europe-west1-d"
    labels: {
      container.googleapis.com/cluster_name: "production"
      compute.googleapis.com/resource_type: "instance"
      compute.googleapis.com/resource_name: "fluentd-cloud-logging-production-minion-group-p0w8"
      container.googleapis.com/instance_id: "6772154497331326454"
      container.googleapis.com/pod_name: "heapster-v1.1.0-2102007506-23b3e"
      compute.googleapis.com/resource_id: "6772154497331326454"
      container.googleapis.com/stream: "stderr"
      container.googleapis.com/namespace_name: "kube-system"
      container.googleapis.com/container_name: "heapster"
    }
    timestamp: "2016-09-13T14:40:08.000Z"
    projectNumber: "930564692351"
  }
  textPayload: "E0913 14:40:08.665035 1 gcm.go:179] Error while sending request to GCM googleapi: Error 400: Timeseries 76, point: start is not older than end, for a cumulative metric, invalidParameter"
  insertId: "pt5bo7g132r266"
  log: "heapster"
}
Not sure if this is related.
Any ideas?
If you are running your cluster on GCE instead of GKE, you should install the Stackdriver agent and verify the credentials the agent is using to communicate with Stackdriver.
If you are using Linux, you can install the agent by executing:
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh
and you can check your credentials by running the following commands:
sudo cat $GOOGLE_APPLICATION_CREDENTIALS
sudo cat /etc/google/auth/application_default_credentials.json
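After installing, it may also be worth confirming the agent is actually running before checking the charts again (assuming the standard service name the install script sets up):

sudo service stackdriver-agent status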