What happens to an Ethereum transaction mined into an orphaned block? [closed] - ethereum

If an Ethereum transaction was mined into an orphaned block:
Would it still make it into a different "confirmed" block, or would it be reverted?
Can this disrupt the nonce sequence of transactions in the confirmed blocks? For example, can we get transactions with nonces in reverse order across blocks (a transaction with nonce 1 in block 10 and one with nonce 0 in block 11)?

The longest chain is the source of truth.
Orphaned blocks (aka uncle blocks) are valid blocks that are not part of the longest chain.
Even though the transaction has been included in an orphaned block, that block is not part of the longest chain. This means the transaction is effectively not mined yet: the state changes resulting from it have not been accepted by the network, and it can still be included in a "confirmed" block.
Applied to your example from point 2: the "nonce 0" transaction has not happened yet, even though it's part of an orphaned block. So it can be mined in a later block. And because it has the lower nonce, it has to be executed before the "nonce 1" transaction.
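To make the ordering rule concrete, here is a toy sketch (plain Python, not real client code) of per-sender nonce ordering: a transaction seen only in an orphaned block is simply back in the pending pool, and a higher nonce can never be mined ahead of a lower one from the same sender.

```python
def include_in_order(pending_nonces):
    """Greedily include one sender's pending transactions in strict nonce order.

    Returns the nonces in the order they get mined. A gap (missing nonce)
    blocks every later transaction from that sender."""
    mined = []
    next_nonce = 0
    for nonce in sorted(pending_nonces):
        if nonce == next_nonce:
            mined.append(nonce)
            next_nonce += 1
        else:
            break  # gap: later nonces must wait
    return mined

# Even if the nonce-0 tx was only seen in an orphaned block, it is still
# pending, and nonce 1 cannot be mined ahead of it:
print(include_in_order([1, 0]))  # [0, 1]
print(include_in_order([1, 2]))  # [] -- nonce 0 missing, everything waits
```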

Related

Checking the number of confirmed blocks for a transaction?

How does one check the number of "block confirmations" for a given transaction?
I tried checking the transaction hash in block heights of +1, +2, etc. but they don't contain the transaction ID.
Would I instead need to wait for future blocks to be mined while the transaction's status (Receipt.Status) is still considered valid?
After lots of research, I can say that it is the number of blocks that have been mined from the block your transaction was included in onward, while your transaction is still considered valid. So to check block confirmations, you would check that the transaction is still valid and count how many blocks at or above the transaction's block height have been mined.
Therefore, if your transaction has 13 block confirmations, then 12 blocks have been mined since the block that included your transaction.
https://jaredstauffer.medium.com/what-is-a-block-confirmation-on-ethereum-e27d29ca8c01#:~:text=A%20block%20confirmation%20is%20simply,mined%20that%20included%20your%20transaction.
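The counting rule above reduces to simple arithmetic. A minimal sketch, assuming you already have the current chain height (e.g. from eth_blockNumber) and the transaction's block height (from its receipt); the including block counts as the first confirmation:

```python
def confirmations(current_height, tx_block_height):
    """Block confirmations for a transaction.

    The including block is the first confirmation, so 13 confirmations
    means 12 blocks were mined after it."""
    if tx_block_height is None or tx_block_height > current_height:
        return 0  # not yet mined (or mined on a chain ahead of our view)
    return current_height - tx_block_height + 1

print(confirmations(112, 100))  # 13
print(confirmations(100, None))  # 0 -- still pending
```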

What is the time complexity of iterator increments and decrements for std::map? [duplicate]

What's the complexity of the iterator++ operation for an STL RB-tree (set or map)?
I always thought they would use indices, so the answer should be O(1), but recently I read the VC10 implementation and was shocked to find that they do not.
To find the next element in an ordered RB-tree, it takes time to find the smallest element in the right subtree, or, if the node has no right subtree, to climb up until reaching an ancestor whose left subtree contains the node. This is an iterative walk up the tree, and I believe the ++ operator takes O(log n) time.
Am I right? And is this the case for all stl implementations or just visual C++?
Is it really that difficult to maintain indices for an RB-tree? As far as I can see, by holding two extra pointers in the node structure we could maintain a doubly linked list alongside the RB-tree. Why don't they do that?
The amortized complexity when incrementing the iterator over the whole container is O(1) per increment, which is all that's required by the standard. You're right that a single increment can take O(log n), since the depth of the tree is in that complexity class.
It seems likely to me that other RB-tree implementations of map will be similar. As you've said, the worst-case complexity for operator++ could be improved, but the cost isn't trivial.
It's quite possible that the total time to iterate the whole container would be improved by the linked list, but it's not certain, since bigger node structures tend to result in more cache misses.
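The successor walk the question describes can be sketched with a plain binary search tree carrying parent pointers (the red-black coloring is irrelevant to iteration, so it is omitted here). A single step is O(height), but a full traversal crosses each edge at most twice, which is where the amortized O(1) comes from:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def insert(root, key):
    """Plain (unbalanced) BST insert, keeping parent pointers."""
    if root is None:
        return Node(key)
    cur = root
    while True:
        side = 'left' if key < cur.key else 'right'
        nxt = getattr(cur, side)
        if nxt is None:
            child = Node(key)
            child.parent = cur
            setattr(cur, side, child)
            return root
        cur = nxt

def successor(n):
    """In-order successor: O(height) worst case, amortized O(1) over a
    full traversal since each edge is crossed at most twice."""
    if n.right is not None:           # smallest key in the right subtree
        n = n.right
        while n.left is not None:
            n = n.left
        return n
    while n.parent is not None and n is n.parent.right:
        n = n.parent                  # climb until we come up from a left child
    return n.parent

root = None
for k in [5, 2, 8, 1, 3, 7, 9]:
    root = insert(root, k)

node = root
while node.left:                      # find the minimum, then walk forward
    node = node.left
walked = []
while node:
    walked.append(node.key)
    node = successor(node)
print(walked)  # [1, 2, 3, 5, 7, 8, 9]
```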

Gas limit for a block, and other questions related to the gas model

I know what gas, gasLimit, and gasPrice are, but I still have some confusion even after searching and reading through the Internet.
There is a gasLimit per block, but why do many blocks not reach it? In other words, can a miner send a block to the network without reaching the block gasLimit?
Assume the block gasLimit is 4 million and I sent a transaction with a 4 million gasLimit, but when the miner executed it, the gas used was only 1 million. Can the miner add extra transactions to the block to fill the remaining 3 million or not? Put another way, does a transaction with a big gasLimit (that uses only a fraction of that gas) prevent the miner from adding more transactions to the block?
Each opcode costs some amount of gas. How does Ethereum measure the cost of each EVM opcode? (Any reference for an explanation?)
Thanks
Q1 The block gas limit is an upper bound on the total cost of transactions that can be included in a block. Yes, the miner can and should send a solved block to the network, even if the gas cost is 0. Blocks are meant to arrive at a steady pace in any case. So "nothing happened during this period" is a valid solution.
Q2a The gas cost of a transaction is the total cost of executing the transaction. Not subject to guesswork. If the actual cost exceeds the supplied gas then the transaction fails with an out-of-gas exception. If there is surplus gas, it's returned to the sender.
Q2b Yes, a miner can and should include multiple transactions in a block. A block is a well-ordered set of transactions that were accepted by the network. It's a unit of disambiguation that clearly defines the accepted order of events. Have a look here for exact meaning of this: https://ethereum.stackexchange.com/questions/13887/is-consensus-necessary-for-ethereum
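The interaction between Q2a and Q2b can be sketched as a greedy block packer. The admission rule here follows the Yellow Paper: a transaction is admitted only if its gas *limit* still fits under the block limit, but after execution only its *actual* gas used accumulates, so surplus gas frees room for later transactions. The figures are illustrative, not real mempool data:

```python
BLOCK_GAS_LIMIT = 4_000_000

def pack_block(txs):
    """txs: list of (gas_limit, gas_used) pairs in arrival order.

    Admit a tx only if gas_used_so_far + its gas limit fits the block
    limit; afterwards only its actual usage counts toward the block."""
    included, gas_used = [], 0
    for limit, used in txs:
        if gas_used + limit <= BLOCK_GAS_LIMIT:
            included.append((limit, used))
            gas_used += used  # refunded surplus does not occupy the block
    return included, gas_used

# A tx with a 4M limit that only uses 1M still leaves room for more:
txs = [(4_000_000, 1_000_000), (2_000_000, 1_500_000), (1_500_000, 900_000)]
included, used = pack_block(txs)
print(len(included), used)  # 3 3400000
```

Real miners also order candidates by gas price and other criteria; this sketch only shows the capacity accounting.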
Q3 I can't say for sure (possibly someone can confirm) that this is an up-to-date list: https://docs.google.com/spreadsheets/d/1m89CVujrQe5LAFJ8-YAUCcNK950dUzMQPMJBxRtGCqs/edit#gid=0
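For Q3, a toy gas meter shows the shape of the fee schedule: each opcode is charged a fixed (or formula-based) amount, and execution aborts with an out-of-gas exception once the supplied gas is exhausted. The costs below are illustrative values in the spirit of the Yellow Paper's fee schedule (cheap arithmetic, expensive storage writes); exact figures vary across hard forks:

```python
# Illustrative opcode costs, roughly matching the Yellow Paper era:
GAS_COST = {"ADD": 3, "MUL": 5, "SLOAD": 200, "SSTORE": 20_000}

def execute(opcodes, gas_limit):
    """Charge each opcode in turn; raise on out-of-gas, else return gas used."""
    gas_used = 0
    for op in opcodes:
        gas_used += GAS_COST[op]
        if gas_used > gas_limit:
            raise RuntimeError("out of gas")
    return gas_used

print(execute(["ADD", "MUL", "SSTORE"], gas_limit=21_000))  # 20008
```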

Is throwing and catching exceptions only an improvement in readability over using if statements on the return value? [closed]

Is there any reason why it's better to use a try/catch statement when one can check to see if a value that is designated to indicate a problem is returned?
For example, take a function that prime-factors a number. If a negative integer is passed to it, would it be "better" for an exception to be thrown or for a particular value to be returned (one that would never be a legitimate result, say -1)?
If a function doesn't need to return something would it be less efficient to return true on success and false on failure, as opposed to throwing something?
Exceptions can propagate up across multiple call frames at a time without any extra code in the intervening call frames to check for error conditions/returns. This means they have the potential to yield better performance (at least in the non-error case) than code that's based on return value checks at every call level. That's probably the main concrete benefit.
The use of try/catch versus returning a boolean value has been discussed for a while. I found this Stack Overflow post that might add some insight. For myself, I tend to use both try/catch and the boolean return; it depends on where a method is being called. Also, sometimes it is best to let an error bubble up the stack so it can be caught and dealt with efficiently.
Finally, if you find you are not getting any answers here, you can also try https://softwareengineering.stackexchange.com/

What do you call those little annoying cases you have to check all the time? [closed]

What do you call those little annoying cases that have to be checked, like "it's the first time someone entered a record" or "delete the last record in a linked list (in a C implementation)"?
The only term I know translates not-very-nicely to "end cases". Does it have a better name?
Edge cases.
Corner cases
Every prof I have ever had has referred to them as boundary cases or special cases.
I use the term special cases
I call it work ;-).
Because they pay me for it.
But edge cases (as mentioned before) is probably a more correct name.
I call them "nigglies". But, to be honest, I don't care about the linked list one any more.
Because memory is cheap, I always implement lists so that an empty list contains two special nodes, first and last.
When searching, I iterate from first->next to last->prev inclusively (so I'm not looking at the sentinel first/last nodes).
When I insert, I use this same limit to find the insertion point - that guarantees that I'm never inserting before first or after last, so I only ever have to use the "insert-in-the-middle" case.
When I delete, it's similar. Because you can't delete the first or last (sentinel) node, the deletion code only ever has to handle the "delete-from-the-middle" case as well.
Granted, that's just me being lazy. In addition, I don't do a lot of C work any more anyway, and I have a huge code library to draw on, so my days of implementing new linked lists are long gone.
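The sentinel scheme described above can be sketched briefly (in Python rather than C, purely for illustration): an "empty" list already holds two dummy nodes, so every real insert and delete is the middle case:

```python
class Node:
    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = None

class SentinelList:
    def __init__(self):
        # Two dummy nodes that are never removed; an "empty" list is
        # just first <-> last.
        self.first, self.last = Node(), Node()
        self.first.next, self.last.prev = self.last, self.first

    def insert_before(self, node, value):
        new = Node(value)            # always the insert-in-the-middle case
        new.prev, new.next = node.prev, node
        node.prev.next = new
        node.prev = new
        return new

    def append(self, value):
        return self.insert_before(self.last, value)

    def delete(self, node):
        node.prev.next = node.next   # sentinels guarantee both neighbors exist
        node.next.prev = node.prev

    def values(self):
        out, n = [], self.first.next
        while n is not self.last:    # iterate between the sentinels
            out.append(n.value)
            n = n.next
        return out

lst = SentinelList()
a = lst.append(1)
lst.append(2)
lst.delete(a)        # deleting the "first" real node: no special case
print(lst.values())  # [2]
```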