I have a simple question but could not find an answer to it. I know that on Linux, nohup can be used to run a geth node in the background, but how can I do it on Windows 10?
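For reference, on Linux I would do something like this (the redirection is just an example, not my exact command):
nohup geth > geth.log 2>&1 &   # keeps geth running after the terminal closes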
I am running an Ethereum node on Windows 11. I am using Geth as my execution client along with Prysm as my consensus client. They have been syncing for the past two days, but no data is being written to my hard drive by Geth and no progress is being made towards a working Ethereum node.
I installed Geth through the download page.
I installed Prysm with this command in an administrative Git Bash in the Prysm directory:
curl https://raw.githubusercontent.com/prysmaticlabs/prysm/master/prysm.bat --output prysm.bat
I run both Geth and Prysm in separate administrative command prompts.
This is the command I use to start Geth:
geth --datadir D:\ethereum --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret jwt.hex
This is the command I use to start Prysm:
prysm.bat beacon-chain --execution-endpoint=http://localhost:8551 --jwt-secret=jwt.hex --suggested-fee-recipient=0x01234567722E6b0000012BFEBf6177F1D2e9758D9
I always start Prysm after starting Geth.
My Geth terminal repeats the message "Beacon client online, but never received consensus updates. Please ensure your client is operational to follow the chain"
My Prysm terminal commonly displays messages such as
[2023-01-17 20:30:44] INFO initial-sync: Waiting for enough suitable peers before syncing required=3 suitable=0
Below are screenshots of my Geth and Prysm terminals.
Geth terminal
Prysm terminal
Why is Geth not writing any data? My friend, who has a working node, says it should write about 800 GB.
If your node does not find any peers in the peer-to-peer network, it cannot download any data and sync.
This is usually a sign of a local network issue. Make sure your node has a public IP address or has its ports properly exposed to the Internet. Ideally, any computer on the Internet should be able to connect to the computer running Prysm.
See the documentation here.
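As a minimal sketch for a Windows machine, assuming the default P2P ports (30303 TCP/UDP for Geth, 13000 TCP and 12000 UDP for Prysm) and that Windows Defender Firewall is what is blocking inbound connections, you could allow them from an administrative command prompt:
netsh advfirewall firewall add rule name="Geth P2P TCP" dir=in action=allow protocol=TCP localport=30303
netsh advfirewall firewall add rule name="Geth P2P UDP" dir=in action=allow protocol=UDP localport=30303
netsh advfirewall firewall add rule name="Prysm P2P TCP" dir=in action=allow protocol=TCP localport=13000
netsh advfirewall firewall add rule name="Prysm P2P UDP" dir=in action=allow protocol=UDP localport=12000
If the machine sits behind a home router, the same ports also need to be forwarded to it; the exact steps depend on the router.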
I am getting a connection timeout when running the command during bootstrap.
Do you have any configuration suggestions for the networking part, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
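A minimal sketch of that workflow, assuming the default core user on the deployed nodes (replace the placeholder IP and container ID with your own values):
ssh core@<bootstrap-node-ip>
sudo crictl ps | grep -E 'kube-apiserver|etcd|machine-controller'
sudo crictl logs <container-id>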
I recently installed minishift, the OpenShift Origin environment built on Docker, on my laptop. The instance works fine the first time after installation. However, when I power off my machine and then try to start the system again, it fails.
The issue is that the IP address assigned while provisioning the VM for the first time changes when the system is restarted.
The issue doesn't persist if I delete the VM and then start it again. What's the solution for this? I have tried several possible solutions suggested on the internet.
I have also tried passing --host-only-cidr "192.168.99.1/24" to minishift when starting it for the first time, but that didn't help either.
I have found the solution. It requires a third-party script, because currently there is no built-in way to assign a static IP to VirtualBox VMs. I used the library https://github.com/ahilbig/docker-machine-ipconfig and performed the steps described there, which give minishift a static IP address. The command is
minishift-ipconfig static <your_ip_address>
Please note the IP address should be the same one that was assigned when the VM was created.
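For example (assuming the VM is already running; 192.168.99.100 below is only a placeholder for whatever address your VM actually received):
minishift ip                               # shows the address currently assigned to the VM
minishift-ipconfig static 192.168.99.100   # pin that same address for future starts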
So I have CoreOS running inside a VirtualBox VM.
In the past, I have run Docker images that share the host's X11 socket with the container to, for example, run a Firefox GUI app.
Is this possible to do within my CoreOS VM?
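For reference, the kind of invocation I have used on a regular Linux host looks roughly like this (the image name is just a placeholder):
docker run --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some-firefox-image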
We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, QEMU, KVM, and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's PC (which is both a node and the front-end) by connecting the two machines directly with an Ethernet cable (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can SSH to each other without a password as the "oneadmin" user.
We've configured all the relevant files, such as:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the OpenNebula GUI on the front-end, we can see both hosts being monitored.
Anyway, there's a problem when we try to start a virtual machine on the node that is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end (which hosts virtual machine no. 1), we run
cd /var/lib/one/datastores/1
we can see that file; we've also given it full permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (i.e. NFS) between the front-end and the virtualization nodes.
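As a minimal sketch, assuming the default datastore path under /var/lib/one/datastores and an NFS export over your 10.0.0.0/24 link (the options below are only an example, not a tested configuration):
# on the front-end, add this line to /etc/exports and reload with exportfs -ra
/var/lib/one/datastores 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)
# on the other node, mount the export at the same path
sudo mount -t nfs <front-end-ip>:/var/lib/one/datastores /var/lib/one/datastores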
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help in analysing this problem.