I am running an Ethereum node on Windows 11, using Geth as my execution client and Prysm as my consensus client. They have been syncing for the past two days, but Geth is writing no data to my hard drive and no progress is being made towards a working Ethereum node.
I installed Geth through the download page.
I installed Prysm with this command, run in an administrative Git Bash session in the Prysm directory:
curl https://raw.githubusercontent.com/prysmaticlabs/prysm/master/prysm.bat --output prysm.bat
I run both Geth and Prysm in separate administrative command prompts.
This is the command I use to start Geth:
geth --datadir D:\ethereum --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret jwt.hex
This is the command I use to start Prysm:
prysm.bat beacon-chain --execution-endpoint=http://localhost:8551 --jwt-secret=jwt.hex --suggested-fee-recipient=0x01234567722E6b0000012BFEBf6177F1D2e9758D9
I always start Prysm after starting Geth.
My Geth terminal repeats the message "Beacon client online, but never received consensus updates. Please ensure your client is operational to follow the chain"
My Prysm terminal commonly displays messages such as:
[2023-01-17 20:30:44] INFO initial-sync: Waiting for enough suitable peers before syncing required=3 suitable=0
Below are screenshots of my Geth and Prysm terminals.
[Geth terminal screenshot]
[Prysm terminal screenshot]
Why is Geth not writing any data? My friend, who has a working node, says it should write about 800 GB.
If your node does not find any peers in the peer-to-peer network, it cannot download any data and sync.
This is usually a sign of a local network issue. Make sure your node has a public IP address or has properly exposed its ports to the Internet. Ideally, any computer on the Internet should be able to connect to your computer running Prysm.
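As a first check, here is a minimal sketch for opening the peer-to-peer ports in the Windows firewall from an administrative command prompt, assuming the default ports (30303 TCP/UDP for Geth, 13000 TCP and 12000 UDP for Prysm):

netsh advfirewall firewall add rule name="Geth P2P TCP" dir=in action=allow protocol=TCP localport=30303
netsh advfirewall firewall add rule name="Geth P2P UDP" dir=in action=allow protocol=UDP localport=30303
netsh advfirewall firewall add rule name="Prysm P2P TCP" dir=in action=allow protocol=TCP localport=13000
netsh advfirewall firewall add rule name="Prysm discovery UDP" dir=in action=allow protocol=UDP localport=12000

If you are behind a home router, the same ports will likely also need to be forwarded to your machine; the rules above only open the OS-level firewall.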
See the documentation here.
Related
I host my ejabberd on an AWS cloud server and access it using PuTTY. I start my ejabberd node using the ./ejabberdctl live command, which works perfectly fine. When I close my PuTTY session and start it again the next day, I can't attach live logs again until I stop the running node and start it again. How can I attach live logging to the previously running node?
There are typically two ways to run ejabberd:
A)
ejabberdctl live starts a new node and immediately attaches an interactive shell to it, so you see the logs right away in the shell. This is useful for debugging, testing, and developing.
B)
ejabberdctl start starts a new node and keeps it running in the background. You can see the log messages in the log files (/var/log/ejabberd/ejabberd.log or something like that). This is useful for production servers.
Later, you can run ejabberdctl debug to attach an interactive shell to that node. This is useful when you run a production server, and want to perform some administrative task.
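As an illustration, a typical production session might look like this (log path assumed; to leave the debug shell without stopping the node, use Ctrl+G followed by q rather than q()., which would halt the node):

./ejabberdctl start                      # start the node in the background
./ejabberdctl status                     # confirm the node is up
tail -f /var/log/ejabberd/ejabberd.log   # follow the log file
./ejabberdctl debug                      # attach an interactive shell to the running node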
I am getting a connection timeout when running the command on the bootstrap node.
Are there any configuration suggestions on the networking part, in case I am missing something?
It says the Kubernetes API call is timing out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs of the running containers, focusing on these components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
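As a hedged sketch (BOOTSTRAP_IP is a placeholder; core is the default SSH user on the CoreOS-based machines the installer deploys):

ssh core@BOOTSTRAP_IP
journalctl -b -f -u bootkube.service     # overall bootstrap progress (on the bootstrap node)
sudo crictl ps                           # list the running containers
sudo crictl logs <container-id>          # inspect a specific container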
My GCP server is down. It was working yesterday. I can see the server in VM Instances but cannot connect using SSH. All the client websites are down.
Can anyone help?
There are several reasons this could happen:
Your disk is full
The sshd daemon isn't configured properly
OS Login is enabled on your instance (see the sketch after this list)
A firewall rule blocks port 22
Sometimes you can also see connection errors in the console; those are worth a look.
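If OS Login is the suspect, one hedged way to rule it out (instance and zone names are placeholders) is to disable it via instance metadata and retry SSH:

gcloud compute instances add-metadata YOUR_INSTANCE_NAME --zone YOUR_ZONE --metadata enable-oslogin=FALSE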
EDIT:
I will need additional information if that's still not working:
Take a look at your serial console logs and tell me if you have any relevant entries that can help, like a kernel panic, networking issues, permission denied, etc. (see the command sketch after this list)
Use Cloud Shell and try to connect to your VM instance with these commands:
gcloud compute firewall-rules create default-allow-ssh --network=default --allow=tcp:22
gcloud compute ssh YOUR_INSTANCE_NAME --zone YOUR_ZONE -- -vvv
If you can't connect from Cloud Shell, try to ping your VM instance (internal IP & external IP)
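To pull the serial console logs mentioned above from Cloud Shell (same instance/zone placeholders as before):

gcloud compute instances get-serial-port-output YOUR_INSTANCE_NAME --zone YOUR_ZONE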
I highly recommend deleting your screenshots showing information about your VM instance (firewall rules, project name, nmap scans, etc.).
We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, plus QEMU, KVM, and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's PC (which is both a node and the front-end) by connecting the two machines with an Ethernet cable (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can SSH to each other without a password as the "oneadmin" user.
We've configured all files such as below:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the gui of opennebula in the front end, we can see both the hosts monitored.
Anyway, there's a problem when we try to start the virtual machine on the node which is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end which has virtual machine n°1, we perform
cd /var/lib/one/datastores/1
then we can see that file; we've also given it all the permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (i.e. NFS) between the front-end and the virtualization nodes.
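As a rough sketch (assuming the front-end is 10.0.0.2 and the default datastore path), you could export the datastores directory from the front-end over NFS and mount it on the other node:

# On the front-end, add this line to /etc/exports and re-export:
/var/lib/one/datastores 10.0.0.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra
# On the other node:
sudo mount -t nfs 10.0.0.2:/var/lib/one/datastores /var/lib/one/datastores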
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help analysing this problem.
I am trying out a "Hello World" exercise for GCE. First, I went with the CentOS image, added the instance, installed Apache, and added the firewall rule. All looks good as far as configuration is concerned. But when I try to access the web page from outside, it cannot be reached.
The Local Apache Server is running, from the local instance I can do a curl and all is well.
On the other hand, if I try out the same exact steps with the Debian distribution, everything works smoothly.
I saw another post that mentioned additional firewall settings, but I have not tried that out and I am not sure why it should be needed either.
Can anyone explain if the CentOS setup does need additional Firewall settings and what those are?
CentOS defaults to a restrictive operating-system-level firewall (using iptables), while Debian defaults to a permissive one. You can relax the firewall rules on CentOS as well. When running on Compute Engine, the service-level firewall will only allow connections from the internet via configured ports.
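For the service-level firewall, a minimal sketch (assuming the default network and HTTP on port 80) to allow web traffic through Compute Engine's firewall:

gcloud compute firewall-rules create allow-http --network=default --allow=tcp:80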
To relax the CentOS firewall:
$ sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
Then test that your connections work as expected. To save this configuration across system reboots:
$ sudo /sbin/service iptables save
See the IPTables HowTo on the CentOS wiki for more information about working with iptables on CentOS.
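Note that if your CentOS image is a newer release that manages the firewall with firewalld rather than raw iptables (an assumption about your image version), the equivalent steps would be:

$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload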
You need to free the ports in the Cloud Console.
Watch this video, which explains the process:
Google Compute Engine Test Drive