I'm creating a transaction with web3.eth.accounts.signTransaction, storing the transaction hash, and then sending it over the Ethereum network with web3.eth.sendSignedTransaction.
How do I programmatically check whether a signed transaction has been sent over the network yet? I'm not interested in whether it was mined, confirmed, or rejected - I just want to check if it was ever sent.
How do I programmatically check if a signed transaction was or wasn't sent yet over the network?
When you create a transaction, you get a transaction hash. With this transaction hash, you can query its status using the eth_getTransactionByHash JSON-RPC API.
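A minimal sketch of that check (the web3 wiring in the comment is a hypothetical usage; the classification logic itself only depends on what eth_getTransactionByHash returns):

```javascript
// eth_getTransactionByHash returns null if the node has never seen the
// transaction, an object with blockNumber === null while it sits in the
// mempool, and a real block number once it has been mined.
function classifyTransaction(tx) {
  if (tx === null) return "unknown";            // node has never seen it
  if (tx.blockNumber === null) return "pending"; // in mempool, not mined
  return "mined";
}

// Hypothetical wiring with web3.js:
// const tx = await web3.eth.getTransaction(txHash);
// console.log(classifyTransaction(tx));
```

Note that "unknown" only means *this* node hasn't seen the transaction; another node may still have it.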
After you see your transaction mined in a block, you still need to wait a few blocks: because of the probabilistic nature of a proof-of-work network, the Ethereum chain tip may reorganise. Then you can use the status flag of the receipt to determine whether the transaction was reverted or not.
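A sketch of that settlement check (the confirmation depth of 12 is an assumed safety margin, not a protocol constant; the web3 calls in the comment are hypothetical wiring):

```javascript
// A receipt is only trusted once the chain has grown enough blocks past
// it; receipt.status then distinguishes success from revert.
const CONFIRMATIONS = 12; // assumed safety margin for a PoW chain

function isSettled(receiptBlock, currentBlock, confirmations = CONFIRMATIONS) {
  // count the block containing the tx as the first confirmation
  return currentBlock - receiptBlock + 1 >= confirmations;
}

function didSucceed(receipt) {
  // web3.js exposes the receipt status flag as a boolean
  return receipt.status === true;
}

// Hypothetical wiring:
// const receipt = await web3.eth.getTransactionReceipt(txHash);
// const head = await web3.eth.getBlockNumber();
// if (receipt && isSettled(receipt.blockNumber, head)) {
//   console.log(didSucceed(receipt) ? "succeeded" : "reverted");
// }
```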
In a private Ethereum network, transactions get submitted to the network via a single RPC node. If the RPC node crashes at some point, is there a possibility of losing transactions that are in its mempool and haven't been propagated yet to the other nodes? Since the mempool lives in node memory, are those transactions permanently lost from the network after the RPC node recovers?
If the RPC node crashes at some point, is there a possibility of loss of transactions that are in its mempool and haven't been propagated yet to the other nodes?
Yes.
Until you 1) receive a block containing your transaction and 2) are sure the network has accepted that block, you cannot know whether your transaction has been propagated or included in the chain.
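One common mitigation is for the sender to keep the signed raw transaction itself and rebroadcast after a node restart. A sketch of the decision step (the web3 calls in the comment are hypothetical wiring; note this only consults one node, so "null" does not prove no other node has the transaction):

```javascript
// If the node no longer knows the hash, its mempool copy is gone and no
// block included the transaction: resend the stored raw transaction.
// Resending is safe because the nonce/signature make it the same tx.
function needsRebroadcast(txFromNode) {
  return txFromNode === null;
}

// Hypothetical wiring:
// const tx = await web3.eth.getTransaction(txHash);
// if (needsRebroadcast(tx)) {
//   await web3.eth.sendSignedTransaction(storedRawTx);
// }
```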
Can an Ethereum contract execute transactions directly on the Bitcoin network? Ethereum contracts must be pure functions without any external side effects, and committing a transaction on another blockchain would be an external side effect, so I would assume this is not possible. What options would be possible for this sort of scenario?
Can an Ethereum contract execute transactions directly on the Bitcoin network?
Not directly. Ethereum and Bitcoin are two separate networks with different architectures and without any "official" bridge.
However, I could imagine a wild scenario that involves creating a BTC transaction based on an Ethereum transaction:
User makes a transaction to an Ethereum address
An external app is listening for incoming transactions to this Ethereum address. When it learns about the new (Ethereum) transaction, it creates a BTC transaction object, signs it, and broadcasts it to the Bitcoin network.
This is based on the way some oracles work: they listen for incoming transactions containing instructions, fetch some off-chain data (based on the instructions), and send a new Ethereum tx that passes the off-chain data to a smart contract.
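The relay step of that wild scenario could look like this. Everything here is a hypothetical shape, not a real bridge: the field names, the 1:1 satoshi mapping, and the watcher/wallet objects in the comment are all assumptions.

```javascript
// Turn an observed Ethereum deposit into a BTC payment intent that an
// external signer/broadcaster would then act on.
function toBtcPaymentIntent(ethDeposit) {
  return {
    // the BTC recipient is assumed to be encoded in the deposit payload
    btcAddress: ethDeposit.encodedBtcAddress,
    // amount policy is app-specific; a direct satoshi mapping is assumed
    satoshis: ethDeposit.amountSats,
    sourceEthTx: ethDeposit.txHash,
  };
}

// Hypothetical wiring: an app subscribes to transfers to the watched
// address, then hands each intent to a Bitcoin wallet/broadcaster:
// watcher.on("deposit", (d) => btcWallet.sendPayment(toBtcPaymentIntent(d)));
```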
In order to test communication performance in the event of a failure, I numbered each message and sent them continuously, at about 30 messages per second. I found that even with the HA policy set, consumers repeatedly receive a small number of already-received messages after failover/failback. Is this normal?
I know that Artemis provides automatic duplicate message detection by assigning a unique value to each message, which can avoid duplicate sends, but the duplicate received messages have different "client ack messageID" values. Does this mean it cannot prevent receiving duplicate messages?
Depending on how you've written your client, you can get duplicates on failover because some message acknowledgements may get lost when the failure happens. For example, if you receive a message from the broker and process it, but the broker fails before you send the acknowledgement (or while the acknowledgement is in transit), then the backup will still have the message you already received and will dispatch it again.
If you don't want duplicates to be a problem for your client then you have a couple of options:
Use a transaction on your client and don't commit until the acknowledgement has been confirmed successfully. If the acknowledgement fails, roll back the transaction.
Make sure your consumer is idempotent so duplicates don't really matter.
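The second option can be sketched like this. The in-memory Set is an assumption for illustration; a real system would persist processed IDs (e.g. in a database) so the dedup state survives restarts.

```javascript
// Wrap a message handler so redeliveries after failover are detected by
// message id and skipped instead of being processed twice.
function makeIdempotentHandler(processFn, seen = new Set()) {
  return function handle(message) {
    if (seen.has(message.id)) return false; // duplicate: ignore
    processFn(message);
    seen.add(message.id);
    return true; // processed for the first time
  };
}
```

With this in place, a duplicate delivered after failback is simply a no-op for the consumer.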
I have read a few questions on StackOverflow. They said we can enable Session Support on the queue to keep messages FIFO. Some mention that ordering cannot be guaranteed, and that to make sure messages are processed in order we have to handle it manually during processing, using the timestamp.
Is that true?
Azure Service Bus Queue itself follows FIFO. In some cases, the processing of the messages may not be sequential. If you are sure that the size of the payload will be consistent, then you can go with normal Queues, which will process the messages in order (this works for me).
If there will be change in payload size between the messages, it is preferred to go with Session enabled Queues as Sean Feldman mentioned in his answer.
To send/receive messages in FIFO mode, you need to enable "Require Sessions" on the queue and use Message Sessions to send/receive messages. The timestamp doesn't matter. What matters is the session.
Upon sending, setting message's SessionId
Upon receiving, either receive any session using MessageReceiver, or a specific session using the lower-level API (SessionClient), specifying the session ID.
A good start would be to read the documentation and have a look at this sample.
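The two steps above can be sketched with the @azure/service-bus JavaScript SDK; the queue name, connection string, and session id below are placeholders, and the actual send/receive calls are shown as comments since they need a live namespace:

```javascript
// FIFO per session is achieved by stamping every related message with
// the same sessionId; the broker then delivers them in send order to a
// session-aware receiver.
function withSession(sessionId, bodies) {
  return bodies.map((body) => ({ body, sessionId }));
}

// Hypothetical wiring:
// const { ServiceBusClient } = require("@azure/service-bus");
// const client = new ServiceBusClient(connectionString);
// const sender = client.createSender("my-session-queue");
// await sender.sendMessages(withSession("order-42", ["created", "paid", "shipped"]));
//
// A session-aware receiver then gets these three messages in send order:
// const receiver = await client.acceptSession("my-session-queue", "order-42");
```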
I'm implementing PayPal Payments Standard in the website I'm working on. The question is not related to PayPal, I just want to present this question through my real problem.
PayPal can notify your server about a payment in two ways:
PayPal IPN - after each payment, PayPal sends a (server-to-server) notification to a URL (chosen by you) with the transaction details.
PayPal PDT - after a payment (if you set this up in your PP account) PayPal will redirect the user back to your site, passing the transaction id in the url, so you can query PayPal about that transaction, to get details.
The problem is, that you can't be sure which one happens first:
Will your server be notified by IPN
Will the user be redirected back to your site
Whichever is happening first, I want to be sure I'm not processing a transaction twice.
So, in both cases, I query my DB against the transaction id coming from paypal (and the payment status actually..but it doesn't matter now) to see if I already saved and processed that transaction. If not, I process it, and save the transaction id with other transaction details into my database.
QUESTION
What happens if I start processing the first request (say, the PDT: the user was redirected back to my site, but my server hasn't been notified by IPN yet), but before I actually save the transaction to the database, the second request (the IPN) arrives and tries to process the transaction too, because it doesn't find it in the DB?
I would love to make sure that while I'm writing a transaction into database, no other queries can read the table, looking for that given transaction id.
I'm using InnoDB, and don't want to lock the whole table, for the time of the write.
Can this be solved simply with transactions, or do I have to lock that row "manually"? I'm really confused, and I hope some more experienced MySQL developers can help me clear this up and solve the problem.
Native database locks are almost useless in a Web context, particularly in situations like this. MySQL connections are generally NOT done in a persistent way - when a script shuts down, so does the MySQL connection and all locks are released and any in-flight transactions are rolled back.
e.g.
situation 1: You direct a user to paypal's site to complete the purchase
When they head off to PayPal, the script which sent the HTTP redirect terminates. Locks/transactions are released/rolled back, and the connection comes back to a "virgin" state as far as the DB is concerned. Their record is no longer locked.
situation 2: Paypal does a server-to-server response. This will be done via a completely separate HTTP connection, utterly distinct from the connection established by the user to your server. That means any locks you establish in the yourserver<->user connection will be distinct from the paypal<->yourserver session, and the paypal response will encounter locked tables. And of course, there's no way of predicting when the paypal response comes in. If the network gods smile upon you and paypal's not swamped, you get a response very quickly and possibly while the user<->you connection is still open. If things are slow and the response is delayed, that response MAY encounter unlocked tables/rows because the user<->server session has completed.
You COULD use persistent MySQL connections, but they open up a whole other world of pain. e.g. consider the case where your script has a bug which gets triggered halfway through processing. You connect, do some transaction work, set up some locks... and then the script dies. Because the MySQL connection is persistent, MySQL will NOT see that the client script has died, and it will keep the transactions/locks in-flight. But the connection is still sitting there in the shared pool, waiting for another session to pick it up. When it inevitably is picked up, the new script has no idea that it's gotten this old "stale" connection. It'll step into the middle of a mess of locks and transactions it has no idea exist. You can VERY easily get yourself into a deadlock situation like this, because your buggy scripts have dumped garbage all over the system and other scripts cannot cope with that garbage.
Basically, unless you implement your own locking mechanism on top of the system, e.g. UPDATE users SET locked=1 WHERE id=XXX, you cannot use native DB locking mechanisms in a Web context except in 1-shot-per-script contexts. Locks should never be attempted over multiple independent requests.
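That application-level claim can be sketched like this. The in-memory Set simulates what a UNIQUE index on transaction_id gives you in the database: only the first writer wins; the table and column names in the SQL comment are hypothetical.

```javascript
// Whichever request (IPN or PDT) claims the transaction id first gets to
// process the payment; the other sees the existing claim and skips.
function claimTransaction(claims, txId) {
  if (claims.has(txId)) return false; // someone already claimed it
  claims.add(txId);
  return true; // we won the claim: safe to process
}

// The SQL equivalent, relying on the database to enforce atomicity via a
// UNIQUE index instead of any explicit lock (hypothetical schema):
//   INSERT IGNORE INTO processed_payments (transaction_id) VALUES (?);
//   -- affected rows = 1 -> we won the claim, process the payment
//   -- affected rows = 0 -> already claimed, skip
```

Because the uniqueness check and the insert happen atomically inside the database, this works correctly even when the IPN and PDT requests race on separate connections.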