Difference between Receipt & Confirmation callbacks - ethereum

I'd like to verify programmatically that a transaction was successful - that is, that the Ethereum network itself recognizes the transaction for a given hash as valid.
web3.eth.sendSignedTransaction returns a promise that emits two events - one 'confirmation', the other 'receipt'.
Can I rely on the 'receipt' callback to ascertain that a transaction truly occurred? Or do I have to rely on the 'confirmation' callback as well? If so - how?
Similarly, the documentation for getTransactionReceipt mentions that:
The receipt is not available for pending transactions and returns null.
So, if I do get a receipt, does that mean the transaction is no longer 'pending'? That is, that it was successful?

So, if I do get a receipt, does that mean the transaction is no longer 'pending'? That is, that it was successful?
When the receipt becomes available (or, in your case, the 'receipt' event is handled), it means that the transaction was mined into a block.
But the tx could still have been reverted (if it was a tx to a smart contract that reverted for some reason). Check the receipt field status:
true means that the transaction was successful
false means it was reverted
Mind that the status field is not included in pre-Byzantium (October 2017) transactions and can also be missing in some future transaction types. See more about the transaction types in this answer.
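For illustration, here is a minimal sketch of the same check done from Python with web3.py (method names follow recent web3.py releases; web3.js exposes the same status field on the receipt object, as a boolean). The RPC endpoint and transaction hash are placeholders:
from web3 import Web3
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder RPC endpoint
tx_hash = "0x..."  # hash of the transaction you want to verify
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)  # blocks until the tx is mined, then returns its receipt
if receipt.status == 1:
    print("mined and successful")
else:
    print("mined but reverted")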

Related

Does transaction with next nonce get mined only after transaction with current nonce confirmed, not just mined?

When we send transactions with incremental nonces from a single account to the Ethereum blockchain (public or private) - for example, we submit the very first two transactions with nonces 0 and 1:
Would the transaction with nonce 1 be executed only after the transaction with nonce 0 was "confirmed" (meaning its block won't be orphaned), or merely after the transaction with nonce 0 was mined (so its block could still become orphaned)? In the latter case, there is a possibility that the transactions with nonces 0 and 1 end up in reverse order on the blockchain, or that only the transaction with nonce 1 is executed while the transaction with nonce 0 is not, which could undermine the original business intent of having both transactions in the intended sequence.
Generally, a transaction is considered "confirmed" after six block confirmations in public Ethereum. So the question boils down to: does Ethereum wait for such confirmation of the transaction with the previous nonce before starting to process the transaction with the next nonce (for example, if these two transactions end up in separate blocks)? If it does, what confirmation threshold does Ethereum use?
Yes. A transaction will be ignored until the transaction with the lower nonce is processed, no matter which one was broadcast to the network first.
The quote below is from the book "Mastering Ethereum" by Dr. Gavin Wood, page 101:
Without the nonce, it would be random as to which one gets accepted and which rejected. However, with the nonce included, the first transaction you sent will have a nonce of, let's say, 3, while the 8-ether transaction has the next nonce value (i.e., 4). So, that transaction will be ignored until the transactions with nonces from 0 to 3 have been processed, even if it is received first. Phew!
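For illustration, a minimal web3.py sketch (not from the question) that sends two transactions with explicit consecutive nonces; the node keeps the nonce+1 transaction queued until the lower-nonce one is included, regardless of broadcast order. The endpoint, key and recipient are placeholders, and attribute names vary slightly across web3.py/eth-account versions (e.g. raw_transaction in newer releases):
from web3 import Web3
from eth_account import Account
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint
acct = Account.from_key("0x<private-key>")  # placeholder key
base_nonce = w3.eth.get_transaction_count(acct.address)  # next usable nonce
for i in range(2):  # nonces base_nonce and base_nonce + 1
    tx = {
        "to": "0x0000000000000000000000000000000000000001",  # placeholder recipient
        "value": w3.to_wei(0.01, "ether"),
        "gas": 21000,
        "gasPrice": w3.eth.gas_price,
        "nonce": base_nonce + i,
        "chainId": w3.eth.chain_id,
    }
    signed = acct.sign_transaction(tx)
    w3.eth.send_raw_transaction(signed.rawTransaction)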

TransactionError when using Brownie on Optimism - Tx dropped without known replacement

I have a Python script using Brownie that occasionally triggers a swap on Uniswap by sending a transaction to the Optimism network.
It worked well for a few days (it made multiple transactions successfully), but now each time it triggers a transaction I get an error message:
TransactionError: Tx dropped without known replacement
However, the transaction goes through and gets validated, but the script stops.
swap_router = interface.ISwapRouter(router_address)
params = (
    weth_address,          # tokenIn
    dai_address,           # tokenOut
    3000,                  # pool fee tier (0.3%)
    account.address,       # recipient
    time.time() + 86400,   # deadline
    amount * 10 ** 18,     # amountIn
    0,                     # amountOutMinimum
    0,                     # sqrtPriceLimitX96
)
amountOut = swap_router.exactInputSingle(params, {"from": account})
There is a possibility that one of your methods fetches data off-chain and is being called prematurely, before the confirmation is received.
I had the same problem, and I managed to sort it out by adding
time.sleep(60)
at the end of the function that fetches the off-chain data.
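For illustration, here is roughly where such a delay would sit relative to the snippet in the question; 60 seconds is an arbitrary choice, and if your Brownie version supports it, passing required_confs in the transaction parameters may be a cleaner way to wait for confirmations:
import time
# swap_router and params set up exactly as in the question
amountOut = swap_router.exactInputSingle(params, {"from": account})
time.sleep(60)  # crude pause before any follow-up call that fetches off-chain data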
"Dropped and replaced" means the transaction is being replaced by a new one, Eth is being overloaded with a new gas fee. My guess is that you need to increase your gas costs in order to average the price.

Is there any way to understand if a card (EMV or magnetic) is used for the first time at an ATM or POS? For an EMV card, is the ATC reliable?

Is there any way to understand if a card (EMV or magnetic) is being used for the first time at an ATM or POS?
For an EMV card, is the ATC reliable?
The "first time" could be different.
You can ask for ATC after selection ( command 80CA9F5200 ) and if it equals 0000, Get Processing Options wasn't performed, what means there wasn't any transaction.
But if it is > 0000, that does not mean a "full" transaction was performed on the card; the ATC only counts how many times the Get Processing Options command was launched.
For a Visa card you can check a specific bit in the CVR (CVR3, bit 5), "New card". It shows whether a successful online transaction has been performed with the card.
You can trust the ATC for EMV transactions, but there is no such counter for magnetic-stripe transactions.
There is one bit (the "New card" bit) that is set during the first EMV transaction. If the Last Online ATC Register is 0, then the "New card" bit in the TVR will be set to 1. You can check that bit to see whether this transaction is the first for this card.
I found that the ATC is incremented just after GPO is performed. It is possible that a transaction failed right after GPO;
the next time we fire GPO we get a value > 0 (the ATC was already incremented), yet we can't say the card is not new, because it has not yet processed its first transaction successfully.
So I think the ATC value alone is not a reliable parameter for finding out whether a card is new or already used. [Sometimes, depending on configuration, we need to check whether the card is new in order to perform a certain activity.]
There are two ATC-related values that can be read using GET DATA: the current ATC and the Last Online ATC Register. For a new card that has never gone online, the Last Online ATC would be zero. This should hold for a 'classical' deployment of EMV technology by a traditional payment system.
Hope this helps
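For illustration, a minimal sketch of reading both counters with the pyscard library (this assumes a PC/SC reader is attached and the EMV application has already been selected; tags 9F36 for the ATC and 9F13 for the Last Online ATC Register are the standard EMV data objects):
from smartcard.System import readers
reader = readers()[0]  # first attached PC/SC reader
conn = reader.createConnection()
conn.connect()
def get_data(tag_hi, tag_lo):
    # GET DATA: CLA=80, INS=CA, P1/P2 = tag of the data object
    data, sw1, sw2 = conn.transmit([0x80, 0xCA, tag_hi, tag_lo, 0x00])
    return data, (sw1, sw2)
atc, sw_atc = get_data(0x9F, 0x36)  # current ATC
last_online_atc, sw_last = get_data(0x9F, 0x13)  # Last Online ATC Register
print("ATC:", atc, "Last Online ATC:", last_online_atc)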

Proton CEP Fiware: delete old received events

I've got this kind of problem with Proton CEP: I currently have a "Sequence" EPA whose inputs are 2 events. But these events have different granularity: let's say I have A and B events; I receive N "A" events and M "B" events, where M << N.
So I'd like to have a rule like "if an event of type A is not consumed within X seconds, remove it"; otherwise I end up with a long queue of A events, and I only need the rule to be evaluated for the temporally closest events.
In practice, I've got a fake room-temperature sensor that sends its temperature updates every 5 seconds, and another program that checks the external weather and sends it every minute.
Any idea how to solve this situation?
Thank you very much!
I guess that by "consume" you mean arrival. So do you want to evaluate the time the A event took to get to the Proton processor, or the time between A events? Do you want to ensure that the A events are indeed continuous at a fixed rate? "Removing" an event means ignoring it, since events are not kept anywhere, just processed. In the end, what is it that you want to detect here? For example, the trend of the room temperature compared to the outside temperature, and then emit output events accordingly?
Thanks.
All the relevant event instances are kept within the local state of the corresponding EPA.
For each EPA operand you have policies which dictate how that state is gathered and how the matching set for event derivation is built.
For example, the instance selection policy, which is defined per operand and has the values "Each", "First" and "Last", determines whether all A instances are examined for a match with a B instance, or only the first (in order of arrival), or only the last.
The consumption policy says what to do with the operand state once a sequence is detected - should the instances of, say, A that participated in the sequence be removed from the EPA's state (the "Consume" value of the policy) or should they remain.
Playing with a combination of those policies should give you the behaviour you require; see the excerpt below.
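For illustration, an excerpt showing how those policies appear on the operands of a Sequence EPA, using the field names from the DoSAttack definition quoted in the next question (the values here are just one possible combination for the A/B events described above, not a recommendation):
"inputEvents": [
  {"name": "A", "alias": "A1", "consumptionPolicy": "Consume", "instanceSelectionPolicy": "Last"},
  {"name": "B", "alias": "B1", "consumptionPolicy": "Consume", "instanceSelectionPolicy": "Each"}
]
Here "Last" means only the most recently arrived A instance is examined for a match, and "Consume" removes the participating instances from the EPA's state once the sequence is derived.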

CEP's sequence detection‏

In developing for Fiware's Proton CEP, I came across an issue with Sequence event detection. I'll take advantage of the DoSAttack example project that comes with the software to explain the issue.
I made two main changes to an original copy of DoSAttack:
-One is to make the ExpectedCrash event have 3 more variables. This way I can log to the DoSAttackTRConsumer file the 3 values that triggered it.
-The other is to change the Cardinality Policy of the Agent from Single to Unrestricted. This way the event can be triggered several times in a row as TrafficReports come in (this may be a source of the issue).
I tested this and found that it works OK. I can see in the log that the values that trigger detection are the sequence of 3 values that arrived just before the event, once the first three events have arrived.
This is taking into account that the test being made on those 3 values is still the original example test: (TR3.volume>1.50* TR2.volume AND TR2.volume>1.50 * TR1.volume).
The issue arises if I make the test just (TR3.volume>1.50* TR2.volume), for example; then CEP doesn't hold TR1 correctly. Now TR1 is the same as TR2, so CEP loses "memory" of this value.
Going a step further, I make the test just the condition (3>2), which is always true and should trigger a detection on any event that arrives. In this case, as events arrive, TR1, TR2 and TR3 are all the same and CEP has no memory of past values, even though the agent is of type Sequence.
The desired application is for the CEP to receive 22 readings as a sequence of input events and analyse only the 1st, 8th, 15th and 22nd values of this sequence, each time a new value enters. But I find I can't make CEP remember the values correctly unless I'm testing all of them explicitly in the Condition view-box.
What would be the correct way to analyse the 1st, 8th, 15th and 22nd values that arrived, evaluating each time a new one arrives?
Here is the specification of DoSAttack, after altering it:
{"epn":{"events":[{"name":"TrafficReport","attributes":[{"name":"volume","type":"Integer","dimension":0}]},{"name":"ExpectedCrash","attributes":[{"name":"Cost","type":"Double","dimension":0},{"name":"TR1","type":"Integer","dimension":"0"},{"name":"TR2","type":"Integer","dimension":"0"},{"name":"TR3","type":"Integer","dimension":"0"}]}],"epas":[{"name":"IncreasingTraffic","epaType":"Sequence","context":"3MinAfterStartUp","inputEvents":[{"name":"TrafficReport","alias":"TR1","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"},{"name":"TrafficReport","alias":"TR2","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"},{"name":"TrafficReport","alias":"TR3","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"}],"computedVariables":[],"assertion":"3>2","evaluationPolicy":"Immediate","cardinalityPolicy":"Unrestricted","internalSegmentation":[],"derivedEvents":[{"name":"ExpectedCrash","reportParticipants":false,"expressions":{"Cost":"10","TR1":"TR1.volume","TR2":"TR2.volume","TR3":"TR3.volume"}}],"derivedActions":[]}],"contexts":{"temporal":[{"name":"3MinAfterStartUp","type":"TemporalInterval","atStartup":true,"neverEnding":false,"initiators":[],"terminators":[{"terminatorType":"RelativeTime","terminationType":"Terminate","relativeTime":"180000"}]}],"segmentation":[],"composite":[]},"consumers":[{"name":"SysTemCrashConsumer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttack_PredictedCrash.txt"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="},{"name":"SendingDelay","value":"1000"}],"events":[{"name":"ExpectedCrash"}],"actions":[]},{"name":"DoSAttackTRConsumer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttack_TrafficReport.txt"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="},{"name":"SendingDelay","value":"1000"}],"events":[{"name":"TrafficReport"}],"actions":[]}],"producers":[{"name":"TrafficReportFileProducer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttackScenarioJSON.txt"},{"name":"pollingInterval","value":"1000"},{"name":"sendingDelay","value":"1500"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="}],"events":[]}],"actions":[],"name":"DoSAttack"}}
The producer file, DoSAttackScenarioJSON.txt, is still the original one, unaltered:
{"Name":"TrafficReport", "volume":"1000"}
{"Name":"TrafficReport", "volume":"1600"}
{"Name":"TrafficReport", "volume":"2500"}
If you include more than 3 values, you can see that the issue propagates.
If you need more information let me know.
Thank you
In the Sequence pattern, the engine looks for event instances that occurred in a particular order.
In Sequence (A, B, C), the engine looks for three event instances, the first one of type A, the second of type B and the third of type C, where:
(A's detection time) <= (B's detection time) AND (B's detection time) <= (C's detection time)
Usually in a Sequence pattern, either the event types are different, or there is some other condition over the participant events (as in the DoSAttack example).
When you use the same event type in a sequence (e.g., Sequence(A, A, A)), the same event instance can be used in all three places, since it satisfies the detection-time order listed above.
In addition, if you use a "consumptionPolicy": "Consume" for a participant event, then after the event was used to detect the pattern, it will not be used for future detections of this pattern.
This is why when you have a Sequence(A, A, A) with no condition, and event instance A1 of type A arrives, it causes a pattern detection, and since it has Consume policy, it will not be kept for future detections. Later when event A2 of type A arrives, it causes another detection based on A2 alone.
Also, according to the Sequence built-in condition over detection time, a sequence of events can be detected even if other events arrived in between.
Please describe the pattern you would like to detect. Maybe you can use a Trend or Aggregate EPA instead.