Get CALL_EXCEPTION details - ethereum

I am running a private geth node and I am wondering if there is any way to find the root cause of a transaction exception. When I send the transaction, all I can see is:
transaction failed [ See:
https://links.ethers.org/v5-errors-CALL_EXCEPTION ]
And when I run the same transaction on the Hardhat network, I get more details:
VM Exception while processing transaction: reverted with panic code
0x11 (Arithmetic operation underflowed or overflowed outside of an
unchecked block)
Is it possible to get the same info from my geth node?

The revert reason is extracted by replaying the transaction; a sketch is given below. This sets requirements for what data your node must store in order to be able to replay the transaction, so check your node configuration and your detailed use case for further diagnosis.
Your node must support EIP-140 (the REVERT opcode) and EIP-838 (ABI-encoded revert reasons). This has been the case for many years now, so it is unlikely your node does not support them.
Unless a smart contract explicitly reverts with a reason string, the JSON-RPC error messages for default reverts (a non-payable function called with value, math errors) depend on the node type and may vary across different nodes.
Hardhat internally uses its own simulated node (Hardhat Network), not Go Ethereum.
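As a minimal sketch of the replay approach, assuming an ethers v5 JsonRpcProvider pointed at your node (the RPC URL and transaction hash below are placeholders), you can re-run the failed transaction as a read-only call at the block where it was mined and inspect the returned revert data:

import { ethers } from "ethers";

// Placeholder RPC URL; point this at your own geth node.
const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");

async function getRevertReason(txHash: string): Promise<string> {
  const tx = await provider.getTransaction(txHash);
  if (!tx) throw new Error("transaction not found");

  try {
    // Replay the transaction as a read-only call at its own block.
    // This requires the node to still hold the state for that block
    // (a full node keeps only recent states; an archive node keeps all).
    await provider.call(
      { to: tx.to, from: tx.from, data: tx.data, value: tx.value, gasLimit: tx.gasLimit },
      tx.blockNumber
    );
    return "transaction did not revert on replay";
  } catch (err: any) {
    // Per EIP-838, Error(string) data starts with selector 0x08c379a0;
    // panics such as 0x11 come back as Panic(uint256), selector 0x4e487b71.
    // How the revert data surfaces on the error object varies by node.
    return err.data ?? err.message;
  }
}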

Related

UEFI ARM64 Synchronous Exception

I am developing a UEFI application for ARM64 (ARMv8-A) and I have come across this issue: "Synchronous Exceptions at 0xFF1BB0B8."
This value (0xFF1BB0B8) is the exception link register (ELR). The ELR holds the exception return address.
There are a number of sources of synchronous exceptions (https://developer.arm.com/documentation/den0024/a/AArch64-Exception-Handling/Synchronous-and-asynchronous-exceptions):
Instruction aborts from the MMU. For example, by reading an instruction from a memory location marked as Execute Never.
Data aborts from the MMU. For example, a permission failure or alignment checking.
SP and PC alignment checking.
Synchronous external aborts. For example, an abort when reading a translation table.
Unallocated instructions.
Debug exceptions.
I can't update the BIOS firmware to output more debug information. Is there any way to detect more precisely what causes the Synchronous Exception?
Can I use the value of the ELR (0xFF1BB0B8) to locate the issue? I compile with the -fno-pic -fno-pie options.
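One approach, offered as a sketch rather than a definitive recipe: if you can learn the load address of your image (for example, EDK2 debug builds typically print "Loading driver at 0x..." lines), the ELR can be translated back to a source line with addr2line against the binary that carries debug info. The file name and the load address below are hypothetical:

# Fault offset within the image = ELR - image load address.
# Suppose the image was reported loaded at 0xFF1B0000:
#   0xFF1BB0B8 - 0xFF1B0000 = 0xB0B8
# Feed the offset (plus the link-time base, if nonzero) to addr2line:
addr2line -e MyApp.debug -f -C 0xB0B8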

Rejected tx with old nonce on Parity

I observed that a transaction sent to the Parity node was not processed,
and the error message "Rejected tx with old nonce" was shown.
The nonce value of the sendTransaction call was calculated so that it would be
the next nonce value, so the message did not seem applicable to the situation.
There are three validator nodes in our Parity environment.
The version of Parity is 2.5.13, and it runs on Ubuntu Server 18.04.
The reproducibility of the phenomenon is not good, and it tends to be resolved with the passage of time.
Is there something that could be considered the cause of the phenomenon?
When it occurs again, how can I investigate the cause?
Only the holder of the private key can generate valid transactions, so you need to figure out why your code posts transactions with the same nonce to the Parity mempool.
Check for duplicate transactions.
Check for nonce reuse; a sketch of one such check follows below.
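For example, assuming an ethers v5 provider and a placeholder sender address, comparing the "latest" and "pending" transaction counts shows whether transactions are already queued under nonces your code might be about to reuse:

import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");

// Placeholder sender address, for illustration only.
const sender = "0x0000000000000000000000000000000000000001";

async function checkNonces(): Promise<void> {
  // Next nonce counting only mined transactions.
  const latest = await provider.getTransactionCount(sender, "latest");
  // Next nonce counting mempool transactions as well.
  const pending = await provider.getTransactionCount(sender, "pending");

  if (pending > latest) {
    console.log(`${pending - latest} transaction(s) still queued in the mempool;`);
    console.log(`sending a new transaction with a nonce below ${pending} will be rejected as old.`);
  }
}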

How can I limit the calling of the apply method to 1 instead of 2 in a Transaction Processor?

Is there any way that I can limit the calling of the apply method to just 1 in the Transaction Processor? By default it is called twice.
I guess your question is based on the log traces you see.
Short answer: the apply method, i.e. the core business logic in your transaction family, is executed once per input transaction within a given execution context. There is a different reason why you see the logs appear twice: in reality, transaction execution happens and state transitions are defined with respect to a context. Read the long answer for a detailed understanding.
Long answer: If you're observing logs, then you may need to go a little deeper into the way Hyperledger Sawtooth works. Let's get started.
Flow of events (at a very high level):
The client sends the transaction embedded in a batch.
The validator adds all the transactions to the pending queue.
Based on the consensus engine's request, the validator will start creating a block.
For block creation, the current state context information is passed along with the transaction request to the Transaction Processor, and eventually to the right transaction family's apply method.
The apply method's result, either success or failure, is recorded. The transaction is removed from the pending queue if it is invalid, or it is added to the block.
If the response of the apply method is an internal error, then the transaction is resubmitted.
If a transaction is added to the block then, depending on the consensus algorithm, the created block is broadcast to all the nodes.
Every node executes the transactions in the arriving block; the node that created the block will also execute them again. This is probably what you're seeing, as illustrated in the sketch below.
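As an illustration, here is a minimal transaction handler sketch based on the JavaScript sawtooth-sdk package (the family name, version, and namespace below are hypothetical, and the module paths are assumptions from that SDK's processor API); any log line inside apply appears once when the creating node executes the transaction for the candidate block and again when the broadcast block is validated:

const { TransactionHandler } = require('sawtooth-sdk/processor/handler');
const { TransactionProcessor } = require('sawtooth-sdk/processor');

class DemoHandler extends TransactionHandler {
  constructor() {
    // Hypothetical family name, version, and namespace prefix.
    super('demo-family', ['1.0'], ['1cf126']);
  }

  apply(txnRequest, context) {
    // This line is logged once per execution context, so the same
    // transaction can be logged during block creation and again
    // during block validation -- the "called twice" observation.
    console.log(`applying txn ${txnRequest.signature}`);

    // ...decode and validate the payload, then write state:
    return context.setState({ /* address: encodedValue pairs */ });
  }
}

const tp = new TransactionProcessor('tcp://localhost:4004');
tp.addHandler(new DemoHandler());
tp.start();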

Public transactions in quorum stuck in pending in transaction pool

I followed Quorum's docs and have created a 2-node network using the Raft consensus. In the genesis block I had pre-allocated funds to one of the accounts. Now I am trying to do a public transaction of some ether to the other node.
However, the transaction is getting stuck in the transaction pool and the balances of both nodes remain unchanged.
I have used the same genesis.json file that was provided in the documentation. Is there something I am missing?
Once the two nodes were brought up, I tried running:
eth.sendTransaction({from:current-node-address, to: second-node's-address, value:0x200,gas:21000})
On checking the transactionReceipt using the transaction hash that was generated, I get null.
It sounds like your network is not minting blocks, so you may have some Raft misconfiguration.
Check the log files for any error messages.
You can also check that both nodes are in the network and that one of them is minting (is the leader) by using the raft API in the geth console, as shown below.
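For example, after attaching a geth console to either node (the raft module is specific to Quorum's Raft consensus):

// This node's role; exactly one node should report "minter".
raft.role
// Cluster membership; both nodes should be listed here.
raft.cluster
// The raft ID of the current leader.
raft.leader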

hgwatchman throws warning when trying to clone

I installed "watchman" and "hgwatchman" in my linux box. Configured them following the https://bitbucket.org/facebook/hgwatchman link.
When I tried to clone a hg repo, I get the below warning:
warning: watchman unavailable: watchman socket discovery error: "A non-recoverable condition has triggered. Watchman needs your help!
The triggering condition was at timestamp=1408431707: inotify-add-watch(/home/prabhugs/work/sw/.hg/store/data/export/types) -> No space left on device
All requests will continue to fail with this message until you resolve
the underlying problem. You will find more information on fixing this at
https://facebook.github.io/watchman/troubleshooting.html#poison-inotify-add-watch
"
My hgrc file looks like this:
[extensions]
hgwatchman = /path/to/hgwatchman
[watchman]
mode = {off, on, paranoid}
There is enough space on the disk.
Please help me overcome this warning.
Please follow the instructions in the documentation.
For reference:
If you've encountered this state it means that your kernel was unable
to watch a dir in one or more of the roots you've asked it to watch.
This particular condition is considered non-recoverable by Watchman on
the basis that nothing that the Watchman service can do can guarantee
that the root cause is resolved, and while the system is in this
state, Watchman cannot guarantee that it can respond with the correct
results that its clients depend upon. We consider ourselves poisoned
and will fail all requests for all watches (not just the watch that it
triggered on) until the process is restarted.
There are two primary reasons that this can trigger:
The user limit on the total number of inotify watches was reached or the kernel failed to allocate a needed resource
Insufficient kernel memory was available
The resolution for the former is to revisit System Specific
Preparation Documentation and raise your limits accordingly.
The latter condition implies that your workload is exceeding the
available RAM on the machine. It is difficult to give specific advice
to resolve this condition here; you may be able to tune down other
system limits to free up some resources, or you may just need to
install more RAM in the system.
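In practice, the first case is the common one on Linux, and it is resolved by raising the inotify limit via sysctl. As a sketch (524288 is a commonly suggested value, not a requirement):

# Check the current limit.
cat /proc/sys/fs/inotify/max_user_watches
# Raise it for the running system.
sudo sysctl fs.inotify.max_user_watches=524288
# Persist the new limit across reboots.
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf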