No geth.ipc in the folder
Terminal display
PS C:\Users\88693\Desktop\rian\dada1> ls
Directory: C:\Users\88693\Desktop\rian\dada1
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2020/9/8 03:08 PM geth
d----- 2020/9/8 03:08 PM keystore
PS C:\Users\88693\Desktop\rian\dada1> geth attach ipc:/geth.ipc
Fatal: Unable to attach to remote geth: Invalid pipe address '/geth.ipc'.

The geth.ipc file will not be created if there is any error starting the Ethereum node (after executing ./startnode1.sh). Please check your log file to find the error.
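As a side note: on Windows, geth exposes IPC as a named pipe rather than a file inside the data directory, so once the node starts cleanly, the attach command needs the pipe path. A minimal sketch, assuming the default pipe name (geth only uses a different one if started with --ipcpath):
```
geth attach \\.\pipe\geth.ipc
```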

Related

GitHub Actions workflow deploy complaining about env variables

I am running a deploy workflow for Azure and getting the following error. Any idea what it is complaining about?
error: error validating "STDIN": error validating data: [ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "args" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "command" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "env" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "ports" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference]; if you choose to ignore these errors, turn validation off with --validate=false
Error: Process completed with exit code 1.
It deployed the pod and the pod is stuck at this on AKS:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
view-app-dev-895f4c475-mrmtj 0/1 ImagePullBackOff 0 4h14m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 32m (x45 over 3h57m) kubelet Pulling image " view:latest"
Normal BackOff 2m23s (x1031 over 3h57m) kubelet Back-off pulling image " view:latest"
The issue was that I had put imagePullSecrets in the wrong place in the manifest file: it should be below volumeMounts, not above, so that it sits at the pod spec level rather than inside the container definition. Moving it fixed the issue (see the sketch below).
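For reference, a minimal sketch of a Deployment with imagePullSecrets in the right place (all names are placeholders, not the poster's actual manifest). Nested under a container, the validator parses the container's fields as members of LocalObjectReference, which is exactly the list of "unknown field" errors above:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: view-app-dev                 # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: view-app
  template:
    metadata:
      labels:
        app: view-app
    spec:
      containers:
        - name: view
          image: myregistry.azurecr.io/view:latest   # placeholder; note: no leading space
          volumeMounts: []           # container-level fields stay under the container
      # imagePullSecrets is a pod-level field, a sibling of containers,
      # not a child of a container entry:
      imagePullSecrets:
        - name: acr-secret           # placeholder secret name
```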

How to check whether a given <ipfs_hash> is already fully downloaded or not?

First, when I run ipfs --offline block stat <ipfs_hash> and the hash does not exist locally, I get the following message: Error: blockservice: key not found.
Afterwards I run ipfs object stat <ipfs_hash>, and after getting valid output,
I run ipfs --offline block stat <ipfs_hash> again; now it always returns valid information (hence no error) even though the hash is not fully downloaded, presumably because ipfs object stat fetched the root block into the local repo while the linked blocks are still missing. So an error from ipfs --offline block stat <ipfs_hash> does not tell me whether the given hash is fully downloaded locally.
How can I detect whether the requested hash is fully downloaded or not?
I could do something like ipfs refs local | grep <hash>, but I don't want to keep fetching all the local hashes, and it would be slow once hundreds of hashes exist.
Related: https://discuss.ipfs.io/t/how-to-check-is-the-given-ipfs-hash-and-its-linked-hashes-already-downloaded-or-not/7588
ipfs files stat --with-local --size <path> returns the downloaded percentage of the requested IPFS hash. If it is 100.00%, then we can verify that it is fully downloaded into the local IPFS repo.
ipfs files stat <path> - Display file status.
--with-local bool - Compute the amount of the dag that is local, and if possible the total size.
--size bool - Print only size. Implies '--format=<cumulsize>'. Conflicts with other format options.
$ hash="QmPHTrNR9yQYRU4Me4nSG5giZVxb4zTtEe1aZshgByFCFS"
$ ipfs files stat --with-local --size /ipfs/$hash
407624015
Local: 408 MB of 408 MB (100.00%)
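Building on that output, a minimal shell sketch that turns the percentage into a yes/no check (the percentage parsing with sed is my addition, not part of the ipfs CLI; assumes a POSIX shell):
```
#!/bin/sh
# Report whether an IPFS hash is fully present in the local repo.
hash="$1"
pct=$(ipfs files stat --with-local --size "/ipfs/$hash" \
      | sed -n 's/.*(\(.*\)%).*/\1/p')
if [ "$pct" = "100.00" ]; then
  echo "$hash is fully downloaded"
else
  echo "$hash is only ${pct:-0}% local"
fi
```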

Connect Ethereum Mist to a local private network

System information
Geth Version: 1.7.3-stable
Git Commit: 4bb3c89d44e372e6a9ab85a8be0c9345265c763a
Operating System: linux
Expected behaviour
Connect Mist with local private network
Actual behaviour
I type the commands:
geth --datadir ~/private_network init ~/private_network/genesis.json
geth --datadir ~/private_network --networkid 3131 --ipcpath ~/private_network/geth.ipc console 2>~/private_network/console.log
and when I run Mist I get an "address already in use" error; even if I kill the processes using port 30303, I get the same result.
Backtrace
~/.ethereum/testnet/geth/ethash count=3
INFO [12-16|12:05:37] Disk storage enabled for ethash DAGs dir=~/.ethash count=2
INFO [12-16|12:05:37] Initialising Ethereum protocol versions="[63 62]" network=3
INFO [12-16|12:05:37] Loaded most recent local header number=797369 hash=81c88e…3044c5 td=587702682055345
INFO [12-16|12:05:37] Loaded most recent local full block number=0 hash=419410…ca4a2d td=1048576
INFO [12-16|12:05:37] Loaded most recent local fast block number=761870 hash=08735b…e597b9 td=571350456833753
INFO [12-16|12:05:37] Loaded local transaction journal transactions=0 dropped=0
INFO [12-16|12:05:37] Upgrading chain index type=bloombits percentage=79
INFO [12-16|12:05:37] Regenerated local transaction journal transactions=0 accounts=0
INFO [12-16|12:05:37] Starting P2P networking
Fatal: Error starting protocol stack: listen udp :30303: bind: address already in use
You're connecting to the Ropsten network (network=3). You have to pass your network id to Mist using the --network option and provide the path to your .ipc file using --rpc.
$ ./Mist.exe --network 3131 --rpc ~/private_network/geth.ipc
Full command line options:
$ ./Mist.exe --help
Usage: Mist.exe --help [Mist options] [Node options]
Mist options:
--mode, -m App UI mode: wallet, mist. [string] [default: "mist"]
--node Node to use: geth, eth [string] [default: null]
--network Network to connect to: main, test
[string] [default: null]
--rpc Path to node IPC socket file OR HTTP RPC hostport (if
IPC socket file then --node-ipcpath will be set with
this value). [string]
--swarmurl URL serving the Swarm HTTP API. If null, Mist will
open a local node.
[string] [default: "http://localhost:8500"]
--gethpath Path to Geth executable to use instead of default.
[string]
--ethpath Path to Eth executable to use instead of default.
[string]
--ignore-gpu-blacklist Ignores GPU blacklist (needed for some Linux
installations). [boolean]
--reset-tabs Reset Mist tabs to their default settings. [boolean]
--logfile Logs will be written to this file in addition to the
console. [string]
--loglevel Minimum logging threshold: info, debug, error, trace
(shows all logs, including possible passwords over
IPC!). [string] [default: "info"]
--syncmode Geth synchronization mode: [fast|light|full] [string]
--version, -v Display Mist version. [boolean]
--skiptimesynccheck Disable checks for the presence of automatic time sync
on your OS. [boolean]
Node options:
- To pass options to the underlying node (e.g. Geth) use the --node- prefix,
e.g. --node-datadir
Options:
-h, --help Show help [boolean]
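If port 30303 stays occupied (the backtrace suggests another geth instance is still syncing a public testnet from ~/.ethereum/testnet), the private node can also be moved off the default p2p port with geth's --port flag. A sketch combining that with the Mist options above (port 30304 is an arbitrary free port):
```
# Start the private node on an alternate p2p port and explicit IPC path:
geth --datadir ~/private_network --networkid 3131 --port 30304 \
     --ipcpath ~/private_network/geth.ipc console 2>~/private_network/console.log

# Then point Mist at the same network id and IPC socket:
./Mist.exe --network 3131 --rpc ~/private_network/geth.ipc
```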

Hadoop: Data node not started, Logs show "Java bind exception (port in use)"

The DataNode service is not starting on one of my Hadoop cluster nodes.
The DataNode log has the following information.
Exception details on the PC where the DataNode service does not start:
2015-08-12 15:51:09,331 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpSe
...........................
On PCs where the DataNode starts successfully, the log looks like this:
2015-08-12 15:43:57,520 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 34958
2015-08-12 15:43:57,520 INFO org.mortbay.log: jetty-6.1.26
2015-08-12 15:43:57,619 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34958
I have tried fixing the ports in hdfs-site.xml as explained in the link, but this did not work. Please shed some light on fixing this issue.
Thanks
"localhost:0 "
please check your /etc/hosts ,most likely this file not set well
I have uncommented the following line in /etc/hosts and everything worked fine.
127.0.0.1 localhost
This problem occurs because the port is already in use, hence the BindException. To fix this issue, follow the steps below (see the sketch after the list):
1. Run the netstat -np command to find which process id is using the port.
2. Kill the process id for the port that is already bound.
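A minimal sketch of those two steps on Linux (the port number and PID are placeholders; -tlnp narrows netstat to listening TCP sockets with their PIDs):
```
# 1. Find the process holding the port (50075 is just an example):
sudo netstat -tlnp | grep :50075

# 2. Kill the process id reported in the last column:
sudo kill <pid>
```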

mercurial .hgrc notify hook

Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit.
.hgrc
[paths]
default = ssh://www.domain.com/repo/hg
[ui]
username = intern <user@domain.com>
ssh="C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"
[extensions]
hgext.notify =
[hooks]
changegroup.notify = python:hgext.notify.hook
incoming.notify = python:hgext.notify.hook
[email]
from = user@domain.com
[smtp]
host = smtp.gmail.com
username = user@gmail.com
password = sure
port = 587
tls = true
[web]
baseurl = http://dev/...
[notify]
sources = serve push pull bundle
test = False
config = /path/to/subscription/file
template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
maxdiff = 300
Error
Incoming command failed for P/project. running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" user@domain.com "hg -R repo/hg serve --stdio""
sending hello command
sending between command
remote: FATAL ERROR: Server unexpectedly closed network connection
abort: no suitable response from remote hg!
, error code: -1
running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" user#domain.com "hg -R repo/hg serve --stdio""
sending hello command
sending between command
remote: FATAL ERROR: Server unexpectedly closed network connection
abort: no suitable response from remote hg!
Did you follow the steps detailed in "AccessingSshRepositoriesFromWindows"?
If yes, you can still try:
Plink.exe also has a -batch argument which tells plink to run non-interactively.
Any activity that would normally require user interaction (a new host key, for instance) will cause plink to exit immediately rather than stall.
When an ssh operation fails, you can use the --debug argument to figure out what went wrong.
I believe you have to have the private key locally, and the public key goes on the target machine. It does seem strange that it would connect at all though.
The problem may be with the push, not with sending email via the notify extension.
If you followed the instructions correctly, you may have a problem with the public/private key pair.
You need to edit authorized_keys on your server, inside the .ssh folder of your user, and put your public key in this file.
The private key is used on the client with Pageant (Add Key button).
I also recommend using another email service instead of Gmail if you send a lot of automatic email; Gmail can put your IP on a blacklist and block the emails.
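Putting the key advice together: the ui.ssh line in the question passes the public key to -i, but plink's -i expects the private key (a .ppk file). A sketch of the corrected line, adding the -batch flag suggested above (the .ppk path is assumed):
```
[ui]
ssh = "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -batch -i "C:\Program Files (x86)\Mercurial\key.ppk"
```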