Rejected tx with old nonce on Parity - ethereum

I observed that a transaction sent to the Parity node was not processed, and the error message "Rejected tx with old nonce" was shown. The nonce value of the sendTransaction call was calculated so that it would be the next nonce value, so the message did not match the situation.
There are three validator nodes in our Parity environment.
The version of Parity is 2.5.13, and it runs on Ubuntu Server 18.04.
The phenomenon is hard to reproduce, and it tends to resolve itself with the passage of time.
What could be causing this phenomenon?
And when it occurs again, how should I investigate the cause?

Only the holder of the private key can generate valid transactions, so you need to figure out why your code posts transactions with the same nonce to the Parity mempool.
Check for duplicate transactions
Check for nonce reuse
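One way to check for nonce reuse is to compare the node's "latest" and "pending" views of the sender's transaction count: if they diverge for a long time, something upstream is stalling or re-posting nonces. A minimal sketch, assuming a connected web3.js instance (`web3`, `address`, and `diagnoseNonce` are illustrative placeholders, not Parity APIs):

```javascript
// Compare the node's "latest" and "pending" nonce views for an address
// to spot stuck or duplicate transactions. `web3` is assumed to be an
// initialized web3.js instance.
async function diagnoseNonce(web3, address) {
  const latest = await web3.eth.getTransactionCount(address, 'latest');
  const pending = await web3.eth.getTransactionCount(address, 'pending');
  if (pending > latest) {
    // Transactions are waiting in the pool; sending another tx with a
    // nonce <= latest would be rejected as "old nonce".
    return { status: 'queued', stuck: pending - latest };
  }
  return { status: 'clear', stuck: 0 }; // next tx should use nonce === latest
}
```

Running this periodically on each of the three validator nodes when the problem recurs would show whether one node's pool view lags behind the others.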

Related

Get CALL_EXCEPTION details

I am running a private geth node and I am wondering if there is any way to find the root cause of a transaction exception. When I send the transaction, all I can see is:
transaction failed [ See:
https://links.ethers.org/v5-errors-CALL_EXCEPTION ]
And when I run the same transaction in hardhat network, I get more details:
VM Exception while processing transaction: reverted with panic code
0x11 (Arithmetic operation underflowed or overflowed outside of an
unchecked block)
Is it possible to get the same info from my geth node?
The revert reason is extracted using transaction replay, see the example implementation. This sets requirements for what data your node must store in order to be able to replay the transaction. See your node configuration and detailed use case for further diagnosis.
Your node must support EIP-140 and EIP-838. This has been the case for many years now, so it is unlikely your node does not support them.
Unless a smart contract explicitly reverts, the default reverts (a payable function called with value, math errors) produce JSON-RPC error messages that depend on the node type and may vary across different nodes.
Hardhat internally uses its own simulated node (Hardhat Network), not Go Ethereum (geth).
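As a sketch of the replay approach: re-run the failed transaction with `eth_call` at the block where it was mined, and decode the returned `Error(string)` payload (the EIP-838 convention). The decoding step itself needs no network; the helper below is a minimal illustration of that decoding, not a library API:

```javascript
// Decode a standard Error(string) revert payload, as returned by eth_call
// when replaying a reverted transaction on a node that supports EIP-838.
function decodeRevertReason(data) {
  // 0x08c379a0 is the 4-byte selector of Error(string)
  if (!data || !data.startsWith('0x08c379a0')) return null;
  const body = data.slice(10);                   // strip "0x" + selector
  const len = parseInt(body.slice(64, 128), 16); // 2nd word: string length
  const hex = body.slice(128, 128 + len * 2);    // UTF-8 bytes of the reason
  return Buffer.from(hex, 'hex').toString('utf8');
}
```

Panic codes like the 0x11 overflow shown by Hardhat use a different selector (`Panic(uint256)`, 0x4e487b71) and would need a separate branch, and archive/trace data must be available on the node for the replay itself.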

Why can't transactions be sent directly to validators/mining pools?

Today, 99% of all interactions with the blockchain occur through Infura or Alchemy (MetaMask uses Infura's API).
Almost nobody runs their own geth node.
Because of this, most people are skeptical about the word "decentralization", since the application still has a single point of failure.
Why is it impossible to send transactions directly from the browser to the validator? What prevents this?
After all, this is the last obstacle before decentralization. If browsers/extensions stored hundreds of addresses of mining pools to which you could send a transaction, such an application would be almost fail-safe.
Generally, a signed transaction is sent from a wallet software to a node (in the peer-to-peer Ethereum network) that broadcasts it to the rest of the network as a "transaction waiting to be mined" (i.e. placed in a mempool).
Miners usually take transactions from the mempool and place them in blocks.
It is technically possible for a miner to accept a transaction from another source (or create and sign it themselves), and place it in a block.
But it comes with an inconvenience for the transaction sender: they need to wait until this specific miner mines a block containing their transaction. If they had sent the transaction to the mempool instead, any miner could have picked it up and included it in their block. And there is currently no standardized way of sending a transaction to a miner directly, so each might have a different channel and different rules.
So to answer your question:
Why can't transactions be sent directly to validators/mining pools?
They can. But it's just faster (for the transaction sender) to use the mempool and let the transaction be mined by anyone, instead of waiting for one specific mining pool to mine a block.
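For illustration, the "normal" path described above is a single standard JSON-RPC call that any node could accept, whether your own or (hypothetically) an endpoint exposed by a pool. A sketch, where `rawTx` is an already-signed, RLP-encoded transaction and `NODE_URL` is a placeholder endpoint:

```javascript
// Build the standard JSON-RPC request that broadcasts a signed transaction
// to a node's mempool. eth_sendRawTransaction is part of the standard
// Ethereum JSON-RPC interface exposed by geth, Parity, etc.
function buildBroadcastRequest(rawTx, id = 1) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'eth_sendRawTransaction',
    params: [rawTx], // 0x-prefixed hex of the signed transaction
  };
}

// Usage (hypothetical endpoint):
// fetch(NODE_URL, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildBroadcastRequest(rawTx)),
// })
```

The point is that nothing technical stops a pool from exposing such an endpoint; the missing piece is a standardized, widely adopted direct channel.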

Public transactions in quorum stuck in pending in transaction pool

I followed Quorum's docs and created a 2-node network using Raft consensus. In the genesis block I pre-allocated funds to one of the accounts. Now I am trying to do a public transaction of some ether to the other node.
However the transaction is getting stuck in the transaction pool and the balances of both nodes remain unchanged.
I have used the same genesis.json file that was provided in the documentation. Is there something I am missing?
Once the two nodes were brought up, I tried running:
eth.sendTransaction({from: current-node-address, to: second-node's-address, value: 0x200, gas: 21000})
On checking the transactionReceipt with the transaction hash that was generated, it displays null.
It sounds like your network is not minting blocks, so you may have some Raft misconfiguration.
Check the log files for any error messages.
You can also check that both nodes are in the network and that one of them is minting (is the leader) by using the command raft in the geth console.
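When the receipt comes back null, a bounded poll makes the failure visible instead of silently returning null forever. A sketch assuming a connected web3.js instance (`waitForReceipt` is a hypothetical helper, not a Quorum API):

```javascript
// Poll for a transaction receipt with a timeout, so a transaction stuck in
// the pool (e.g. because no blocks are being minted) is reported instead of
// the caller seeing null indefinitely.
async function waitForReceipt(web3, txHash, { retries = 30, delayMs = 1000 } = {}) {
  for (let i = 0; i < retries; i++) {
    const receipt = await web3.eth.getTransactionReceipt(txHash);
    if (receipt) return receipt; // transaction was mined
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`no receipt after ${retries} polls - is the network minting blocks?`);
}
```

If this times out consistently, that supports the diagnosis above: the Raft cluster is not minting, and the logs and the raft console commands are the place to look.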

Sending offline transactions to mainnet is "flaky."

This is the code that I'm using to send signed transactions to mainnet programmatically:
import Web3 from 'web3'
import EthereumTx from 'ethereumjs-tx'

const web3 = new Web3(new Web3.providers.HttpProvider(INFURA_URL))
const Contract = new web3.eth.Contract(ABI, CONTRACT_ADDRESS)

const createItem = (name, price, nonce, callback) => {
  console.log(`Nonce: ${nonce}. Create an Item with Name: ${name}, Price: ${price}`)
  const data = Contract.methods.createItem(name, price).encodeABI()
  const tx = new EthereumTx({
    nonce: nonce,
    gasPrice: web3.utils.toHex(web3.utils.toWei('4', 'gwei')),
    gasLimit: 400000,
    to: CONTRACT_ADDRESS,
    value: 0,
    data: data,
  })
  tx.sign(Buffer.from(MAINNET_PRIVATE_KEY, 'hex')) // Buffer.from: `new Buffer` is deprecated
  const raw = '0x' + tx.serialize().toString('hex')
  web3.eth.sendSignedTransaction(raw, callback)
}

export default createItem
I have to mass-create (i.e. populate) items in my contract, and I want to do it programmatically. However, while the code works well on Ropsten, it fails to send all the transactions on mainnet; it only sends the first few transactions and doesn't send the rest. The errors are not helpful, because this error is usually guaranteed to occur:
Unhandled rejection Error: Transaction was not mined within 50 blocks, please make sure your transaction was properly sent. Be aware that it might still be mined!
I wonder how other people do when they have to send a lot of transactions to Ethereum mainnet today. Is there anything I'm doing wrong?
You basically cannot do what you're trying to do reliably. Here's the set of problems that arises:
Transactions only succeed in nonce order. If an early nonce is pending (or worse, went missing), the other transaction will pend until the early nonce has been consumed (or it gets removed from the mempool).
There's no hard rule for when a transaction will drop from the mempool. This is scary when you've made a nonce error or an intermediary nonce has not reached the network for some reason because you don't know what is going to happen when you finally post that nonce.
Many transactions with the same nonce can be sent; the one that gets mined is very likely to be selected by gas price (because miners are incentivized to do exactly that). One useful trick when something strange has happened is to clear out your nonces by sending a bunch of high gas price, zero value transactions. You might call this an increment method. Remember this comes with a cost.
A lot of tools do one of two things to handle nonces: a live read from getTransactionCount() or a read from getTransactionCount() followed by incrementing for each additional transaction you send. Neither of these work reliably: for the first, transactions are sometimes pending but not visible in the pool yet. This seems to especially happen if gas price is below safemin, but I'm not entirely sure what is happening here. For the second, if any other system sends a transaction with the same address, it will not work.
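The "clear out your nonces" trick mentioned above amounts to a zero-value self-send with a high gas price at the stuck nonce. A minimal sketch (field names follow web3.js transaction objects; `buildNonceClearingTx` and all values are illustrative):

```javascript
// Build a transaction whose only purpose is to consume a stuck nonce:
// a zero-value transfer to yourself with a gas price high enough to
// outbid whatever transaction is stuck at that nonce.
function buildNonceClearingTx(from, nonce, gasPriceWei) {
  return {
    from,
    to: from,              // send to yourself
    value: 0,              // zero value: only the nonce is consumed
    gas: 21000,            // a plain transfer needs exactly 21000 gas
    gasPrice: gasPriceWei, // must exceed the stuck transaction's gas price
    nonce,                 // the nonce you want to burn
  };
}
```

Once signed and mined, any later transaction can use the next nonce without waiting on the stuck one.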
So, how do we work around this?
A smart contract is a straightforward way to reduce a transaction from many sender nonces to few sender nonces. Write a contract that sends all your different transactions, then send the budget to that contract. This is a relatively high cost (in terms of time/effort/expertise) way to solve the problem.
Just do it anyway, batch style. When I've had to send many transactions manually, I've batched them into sets of 10 or so and gone for it, incrementing the nonce manually each time (because the transaction is usually not on the network yet) and then waiting sufficiently long for all the transactions to confirm. Don't rely on pending transactions on Etherscan or similar to determine whether this is working, as things often vanish unpredictably at this level. Never reuse a nonce for a different transaction that isn't a zeroing, high-gas transaction; otherwise you will screw it up and end up sending the same transaction twice by mistake.
Serialize. One by one, you post a transaction, wait for it to confirm, and increment your nonce. This is probably the best of the easy-to-implement automated solutions. It will not fail, but it might buffer forever if you have a constant stream of transactions. It also means you can never have more than one transaction per block, limiting your throughput to about 4 per minute.
Fire and retry. This is a little sketchy because it involves reusing nonces for different transactions. Send all (or some large batch) of your transactions to the network. If any fail, restart from the failure nonce and send again. There's possibly a more intelligent solution where you just try to swap out the missing nonce. You'll need to be very careful you never send a transaction that is secretly in the pending-but-not-visible pool.
New address for every transaction. A buffering step of distributing funds to your own addresses ensures you never screw it up for other people. It does double your transaction time and cost though.
I think some variant of fire and retry is what most of the big services (like pools and exchanges) do. Some of these can be improved by splitting the budget over a few addresses as available (reducing the collision frequency).
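The "serialize" option above can be sketched as follows, assuming a web3.js instance and a `signTx` callback (a hypothetical placeholder) that signs a transaction object and returns the raw bytes:

```javascript
// Serialized sender: one transaction at a time, waiting for each to be
// mined before sending the next. Slow, but it cannot produce nonce gaps.
async function sendSerialized(web3, signTx, txs, from) {
  // Start from the confirmed nonce; nothing else may send from this address.
  let nonce = await web3.eth.getTransactionCount(from, 'latest');
  const receipts = [];
  for (const tx of txs) {
    const raw = signTx({ ...tx, nonce });  // sign with the explicit nonce
    // sendSignedTransaction's promise resolves once the tx is mined,
    // so the loop naturally blocks until confirmation.
    receipts.push(await web3.eth.sendSignedTransaction(raw));
    nonce += 1;                            // safe: previous tx is confirmed
  }
  return receipts;
}
```

The trade-off is exactly the one described above: at most one transaction per block, so throughput is limited, but nonces can never collide or go missing.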

Different Retry policies for different messages in MSMQ

We have a requirement for an API that allows asynchronous updates via an MSMQ message queue, and I want to let the developer consuming the API specify a different retry policy per message. For example, a high-priority client system (e.g. sales) would submit all messages with 5 delivery attempts (retries) and 15 minutes between each attempt, whereas a low-priority client system (e.g. a back-end mail-shot system that lets users update their marketing preferences) would submit messages with 3 retries and an hour between each attempt.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify number of retries, retry delay and things like whether messages are sent to a dead letter queue or just deleted? (and if so, how?)
I would be open to using other messaging frameworks if they fulfilled this requirement.
Is there a way in the System.Messaging MSMQ (version 3 or 4) implementation to specify number of retries
Depending on which operating system/MSMQ version you're using, WCF offers quite sophisticated retry semantics. The following applies to Windows 2008 and MSMQ 4 using a transactional queue.
The main setting on the binding is called MaxRetryCycles. One retry cycle is an attempt to successfully read a message from a queue and process it inside the handling method. This "attempt" can actually be made up of multiple attempts, as defined by the msmq binding property ReceiveRetryCount. ReceiveRetryCount is the number of times an application will try to read the message and process it before rolling back the de-queue transaction. This marks the end of one retry cycle.
You can also introduce a delay in between cycles with the RetryCycleDelay property.
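As an illustrative sketch, the three binding properties described above map to attributes on WCF's netMsmqBinding configuration (the values here are examples matching the question's high-priority policy, not recommendations):

```xml
<!-- Sketch of a netMsmqBinding with the retry settings described above.
     Attribute names are from WCF's MSMQ binding; values are illustrative. -->
<bindings>
  <netMsmqBinding>
    <binding name="retryingMsmq"
             maxRetryCycles="5"
             receiveRetryCount="3"
             retryCycleDelay="00:15:00"
             receiveErrorHandling="Move" />
    <!-- receiveErrorHandling="Move" sends messages that exhaust all retry
         cycles to the poison subqueue instead of faulting the channel. -->
  </netMsmqBinding>
</bindings>
```

Note that this configuration lives on the endpoint, which is exactly why per-message policies (as asked for in the question) don't fit this model.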
A more complicated consideration is what to do with the messages which fail even after multiple retry cycles.
allow the developer consuming the API to specify different retry policies per message
I am not sure how you could do this with MSMQ - as far as I'm aware it's only possible to set retry semantics on a per-endpoint basis. If you're using transactions then you can't even allow API users to set the priority of the messages being sent (transactional queues guarantee delivery in order).
The only thing you could do is host another instance of your API as high priority and one as low priority. These could be hosted on different environments, which has the added benefit that low-priority messages won't compete with high-priority messages for system resources.
Hope this helps.