I am trying to run a beacon chain for Ethereum 2.0 on the Pyrmont testnet with Prysm and Besu.
I run the ETH1 node with the command:
besu --network=goerli --data-path=/root/goerliData --rpc-http-enabled
This command works: it downloads the entire blockchain and then runs properly.
But when I launch:
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont
I get:
Verified /root/prysm/dist/beacon-chain-v1.0.0-beta.3-linux-amd64 has been signed by Prysmatic Labs.
Starting Prysm beacon-chain --http-web3provider=localhost:8545 --pyrmont
[2020-11-18 14:03:06] WARN flags: Running on Pyrmont Testnet
[2020-11-18 14:03:06] INFO flags: Using "max_cover" strategy on attestation aggregation
[2020-11-18 14:03:06] INFO node: Checking DB database-path=/root/.eth2/beaconchaindata
[2020-11-18 14:03:08] ERROR main: database contract is xxxxxxxxxxxx3fdc but tried to run with xxxxxxxxxxxx6a8c
I tried deleting the previous data folder /root/goerliData and re-downloading the blockchain, but nothing changed...
Why didn't the database contract change, and what should I do?
Thanks :)
The error means that you have an existing database for another network, probably Medalla.
Try starting your beacon node with the flag --clear-db next time, and you'll see the error disappear and the node start syncing Pyrmont.
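A minimal sketch of that restart, using the same flags as in the question with `--clear-db` added (it wipes the existing beacon-chain database before syncing, and Prysm asks for confirmation first):

```shell
# Drop the stale (Medalla) database under /root/.eth2/beaconchaindata
# and start a fresh Pyrmont sync against the local Besu node.
./prysm.sh beacon-chain --http-web3provider=localhost:8545 --pyrmont --clear-db
```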
So I have a registry on my LAN. From other machines and from the host, curl, nslookup, docker pull/run, and podman pull/run all work, as does simply curling the v2 manifests address. From within a container, curling the address https://docker.infrastructure.lan.mydomain/v2/my-image/manifests/latest also works. So how does k3s/containerd do DNS lookups? My guess is that k3s is using an internet DNS server like 8.8.8.8 instead of CoreDNS for the equivalent of docker pulls. I want it to use mine (or even CoreDNS).
Anyway, here's the error I see (the domain suffix was changed):
Pulling image "docker.infrastructure.lan.mydomain/my-image:latest"
Warning Failed 27m (x4 over 29m) kubelet, infrastructure.lan.mydomain Failed to pull image "docker.infrastructure.lan.mydomain/my-image:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.infrastructure.lan.mydomain/my-image:latest": failed to resolve reference "docker.infrastructure.lan.mydomain/my-image:latest": failed to do request: Head https://docker.infrastructure.lan.mydomain/v2/my-image/manifests/latest: dial tcp: lookup docker.infrastructure.lan.mydomain: no such host
Again, inside a container this is fine (I can curl the URL), and it's fine on the host. It's also fine from other non-k3s machines on my network. But things like kubectl run testing --image=docker.infrastructure.lan.mydomain/my-image:latest give the above error.
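One hedged place to look, assuming a standard k3s install: image pulls are performed by containerd running on the node itself, so the lookup uses the node's own /etc/resolv.conf rather than CoreDNS (CoreDNS only serves pods). Comparing the two resolvers often explains the mismatch; busybox here is just a throwaway test image:

```shell
# What the node-level resolver (used by containerd for pulls) is:
cat /etc/resolv.conf

# What a pod's resolver sees (this path goes through CoreDNS):
kubectl run dnstest --image=busybox --restart=Never --rm -it -- \
  nslookup docker.infrastructure.lan.mydomain
```

If /etc/resolv.conf on the node points at an external server, pointing it at the LAN DNS (or adding the record there) should fix pulls.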
I'm having trouble deploying a MySQL image on AWS ECS Fargate.
The CloudFormation script I have is this (don't mind the syntax; I'm using the Python library Troposphere to manage CloudFormation templates):
TaskDefinition(
'WordpressDatabaseTaskDefinition',
RequiresCompatibilities=['FARGATE'],
Cpu='512',
Memory='2048',
NetworkMode='awsvpc',
ContainerDefinitions=[
ContainerDefinition(
Name='WordpressDatabaseContainer',
Image='mysql:5.7',
Environment=[
Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
Environment(Name='MYSQL_DATABASE', Value='wpdb'),
Environment(Name='MYSQL_USER', Value='root'),
Environment(Name='MYSQL_PASSWORD', Value='root'),
],
PortMappings=[
PortMapping(
ContainerPort=3306
)
]
)
]
)
The deployment succeeds. I can even see the task running for a few seconds until its state changes to STOPPED.
The only thing I can see is:
Stopped reason Essential container in task exited
Exit Code 1
On localhost it works like a charm. What am I doing wrong here? At the very least, are there ways to debug this?
With AWS ECS, if the task is stopping, it may be failing a health check, which causes the container to restart. What port is the DB container mapped to, and can you check the container logs to see what happens when it starts and then stops? Also check the logs in ECS under the service or task. Post them here so I can take a look.
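If the console doesn't show enough, the stopped reason and per-container exit codes can also be pulled with the AWS CLI; the cluster name and task ARN below are placeholders:

```shell
# Find recently stopped tasks (cluster name is an example)
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED

# Show why a task stopped, plus each container's exit code
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[].{reason:stoppedReason,containers:containers[].[name,exitCode,reason]}'
```

Note that the container's own stdout/stderr only appears if the task definition configures a log driver (e.g. awslogs); on Fargate there is no host to run docker logs on.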
So, I found the mistake.
The very first thing to do is test the Docker container on localhost and see whether you can reproduce the issue. In my case, the MySQL container on a local machine with the exact same environment crashed too. I was able to inspect the logs and found that it fails to create the "root" user, since MYSQL_USER was set to root, which already exists in the image. Simply changing the user and password made everything work, even on ECS.
This is the complete stack to have a MySQL Docker image running on AWS ECS Fargate:
from troposphere import Ref
from troposphere.ecs import (
    TaskDefinition, ContainerDefinition, Environment, PortMapping,
    Volume, DockerVolumeConfiguration, Service, NetworkConfiguration,
    AwsvpcConfiguration,
)

self.wordpress_database_task = TaskDefinition(
'WordpressDatabaseTaskDefinition',
RequiresCompatibilities=['FARGATE'],
Cpu='512',
Memory='2048',
NetworkMode='awsvpc',
# If your tasks are using the Fargate launch type, the host and sourcePath parameters are not supported.
Volumes=[
Volume(
Name='MySqlVolume',
DockerVolumeConfiguration=DockerVolumeConfiguration(
Scope='shared',
Autoprovision=True
)
)
],
ContainerDefinitions=[
ContainerDefinition(
Name='WordpressDatabaseContainer',
Image='mysql:5.7',
Environment=[
Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
Environment(Name='MYSQL_DATABASE', Value='wpdb'),
Environment(Name='MYSQL_USER', Value='wordpressuser'),
Environment(Name='MYSQL_PASSWORD', Value='wordpressuserpassword'),
],
PortMappings=[
PortMapping(
ContainerPort=3306
)
]
)
]
)
self.wordpress_database_service = Service(
'WordpressDatabaseService',
Cluster=Ref(self.ecs_cluster),
DesiredCount=1,
TaskDefinition=Ref(self.wordpress_database_task),
LaunchType='FARGATE',
NetworkConfiguration=NetworkConfiguration(
AwsvpcConfiguration=AwsvpcConfiguration(
Subnets=[Ref(sub) for sub in VpcFormation().public_subnets],
AssignPublicIp='ENABLED',
SecurityGroups=[Ref(self.security_group)]
)
),
)
Note the AssignPublicIp='ENABLED' option, so you are able to connect to the database remotely.
After the stack completed, I was able to connect successfully with the command:
mysql -uwordpressuser -pwordpressuserpassword -h18.202.31.123
That's it :)
To minimize the setup time for attaching a debug session to a remote pod (a microservice deployed on OpenShift) using IntelliJ,
I am trying to get the most out of the 'Before launch' setting of the Remote Debug configuration.
I use two steps before attaching the debugger to the JVM socket with the following command-line arguments (this setup works but needs editing after every new deploy):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
step 1:
external tools: oc with arguments:
login
https://url.of.openshift.environment
--username=<login>
--password=<password>
step 2:
external tools: oc with arguments:
port-forward
microservice-name-65-6bhz8    (this needs to be changed after every deploy)
8000
3000
3001
background info:
this is the info in the service his YAML under spec>containers>env:
- name: JAVA_TOOL_OPTIONS
value: >-
-agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=3000
-Dcom.sun.management.jmxremote.rmi.port=3001
-Djava.rmi.server.hostname=127.0.0.1
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
As the name of the pod changes on every (re-)deploy, I am trying to find an oc command which can be used to port-forward without having to provide the pod name (e.g., based on the service name).
Or an entirely different solution that lets me hit one button to set up a debug session (preferably in IntelliJ).
> Screenshot IntelliJ settings
----------------------------- edit after tips -------------------------------
For now I made a small batch script which does the trick:
Feel free to help with an even faster solution
(I'm checking https://openshiftdo.org/)
or other intelliJent solutions
set /p _username=Type your username:
set /p _password=Type your password:
oc login replace-with-openshift-console-url --username=%_username% --password=%_password%
oc project replace-with-project-name
oc get pods --selector app=replace-with-app-name -o jsonpath="{.items[?(@.status.phase=='Running')].metadata.name}" > temp.txt
set /p PODNAME= <temp.txt
del temp.txt
oc port-forward %PODNAME% 8000 3000 3001
You're going to need the pod name in order to port-forward, but of course you can fetch it programmatically and consistently so you don't need to update it in place every time.
There are a number of ways you can do this: via jsonpath, a Go template, bash, etc. An example would be the following, replacing the app name as required:
oc get pod -l app=replace-me -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
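As a hedged alternative, recent versions of oc/kubectl can port-forward to a resource by type/name and resolve a running pod for you, which removes the pod-name lookup entirely (the name below is a placeholder):

```shell
# oc picks a running pod behind the deployment (or use svc/<service-name>)
oc port-forward deploy/replace-with-app-name 8000 3000 3001
```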
I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component, I get a "docker configured incorrectly: bash: docker: command not found" error. Docker is running, and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly? Or on Puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you SSH into a machine to run a specific command (rather than a shell), you get that default PATH. It does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need sudo to edit this, and you must restart sshd afterwards)
create a file ~/.ssh/environment with the PATH that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
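A quick way to verify the fix, assuming sshd is running on localhost: compare what an interactive shell gets with what a one-off SSH command gets (the latter is what Puppeth sees):

```shell
# The stripped-down PATH sshd hands to non-interactive commands
ssh localhost 'echo $PATH'

# After the PermitUserEnvironment change, docker should now resolve
ssh localhost 'which docker'
```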