I wonder why all the transactions I send with web3.eth.sendTransaction({}) to a node in my private network end up in the pending section of the txpool. Why aren't they counted in the queued section? What are the differences between pending and queued, and how do I get a transaction queued instead of pending?
You probably don't want your transactions to go into the queued section. Pending is where they wait until a miner includes them in the next block. It sounds like your private network's miner is not including your transactions.
For more about pending vs queued, see: What is the difference between a pending transaction and a queued transaction in the geth mempool?
Pending transactions are transactions that are ready to be processed and included in the block.
Queued transactions are transactions whose nonce is not in sequence. The transaction nonce is a counter that increases by one for each transaction sent from the same From address.
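To make the nonce rule concrete, here is a toy model (a sketch for illustration, not geth's actual implementation) of how a pool splits one sender's transactions into pending and queued:

```python
def classify(nonces, next_nonce):
    # Toy model of the pending/queued split for a single sender:
    # nonces forming an unbroken sequence starting at the account's
    # next expected nonce are "pending"; anything after a gap is "queued".
    pending, queued = [], []
    expected = next_nonce
    for nonce in sorted(nonces):
        if nonce == expected:
            pending.append(nonce)
            expected += 1
        else:
            queued.append(nonce)
    return pending, queued

# Nonces 5 and 6 are in sequence; 8 has a gap before it:
print(classify([5, 6, 8], next_nonce=5))  # ([5, 6], [8])
```

So to see a transaction land in the queued section, send it with a nonce higher than the account's next expected nonce.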
I am learning ACID, but I don't quite understand the difference between atomicity and isolation.
From my understanding, if a transaction is atomic, the transaction-related data is written to the DB only when the transaction succeeds, and not when it fails.
So why do we need isolation?
If atomicity is guaranteed, surely a failed transaction's changes can never be seen by others?
Atomicity means that a single transaction is either completely executed or not executed at all. For example, if you have two account rows and you want to transfer money from one account to the other, you would see an increase in the balance of one account and a decrease in the balance of the other, or no change at all.
Isolation means that independent transactions do not interfere with each other. If you have multiple concurrent transactions on the same account rows, you can be sure that each transaction is executed in isolation from the others. So if you run multiple transfers on the account table concurrently, the results will be the same as if they had been processed serially.
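The transfer example can be sketched with Python's built-in sqlite3 (the table and account ids are made up for illustration); atomicity is what lets the rollback undo the half-finished transfer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

def transfer(conn, amount, fail=False):
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        if fail:
            raise RuntimeError("crash between withdrawal and deposit")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
        conn.commit()  # both updates become visible together
    except RuntimeError:
        conn.rollback()  # atomicity: the withdrawal is undone as well

transfer(conn, 50, fail=True)  # simulated crash: nothing changes
balances = [row[0] for row in conn.execute("SELECT balance FROM accounts ORDER BY id")]
print(balances)  # [100, 0]
```

Isolation is a separate guarantee: it governs what *concurrent* transactions can see of each other while both are still running, which atomicity alone says nothing about.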
I heard that if an Ethereum transaction fails for some reason, the remaining gas is refunded. But what if the transaction's nonce is wrong? If you deliberately propagate a transaction with an already-used nonce from an account, will it be judged invalid, so that the receiving node simply ignores it, or will it be treated as a failed transaction with only part of the gas refunded?
https://ethereum.stackexchange.com/questions/46827/is-the-gas-fee-refunded-if-the-transaction-fails
If the latter is right, then how much does such an invalid transaction cost?
It won't cost you any ether (gas), because when nodes receive a transaction with, for example, a wrong nonce, they simply discard it from their transaction pool after a period of time.
Since the transaction is neither executed nor written into a block, no gas is consumed.
I am using aiomysql (https://github.com/aio-libs/aiomysql) and have some problems with unclosed transactions and locked rows. I use an AIO connection pool in my application.
I am NOT using SA context managers for transactions.
My questions:
If I do only SELECTs, then as I understand it there are no locks on rows. So do I still need to call await conn.commit(), or can I skip it? And if I can skip it, how does MySQL know that the transaction has ended?
In the code below, when does aiomysql start a new transaction? When acquire() is called, when the cursor is created, or should I explicitly call "START TRANSACTION"?
The commit needs to be inside the try block, so that you are sure to roll back whenever the commit did not happen. SELECTs, however, do not require commits.
If autocommit is True on your connection then each insert or update is considered a single transaction and implicitly committed. If autocommit is False then you automatically get transactions and must commit after your inserts. You do not need to call START TRANSACTION if autocommit is false.
If you need to call START TRANSACTION explicitly, use conn.begin(), documented here:
https://aiomysql.readthedocs.io/en/latest/connection.html#connection
A MySQL transaction is used if you have multiple contingent updates that must all be successful together or rolled back. For example a bank transfer that fails on the second update needs to be rolled back:
Withdraw money from account A
Deposit money in account B
You can find a transaction example in the aiomysql github.
https://github.com/aio-libs/aiomysql/tree/master/examples
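Putting the pieces together, here is a hedged sketch of such a transfer (assuming an aiomysql pool created elsewhere with aiomysql.create_pool(); the accounts table and ids are made up for illustration):

```python
async def transfer(pool, amount):
    # `pool` is assumed to be an aiomysql connection pool.
    async with pool.acquire() as conn:
        await conn.begin()  # explicit START TRANSACTION
        try:
            async with conn.cursor() as cur:
                await cur.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = 1",
                    (amount,),
                )
                await cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = 2",
                    (amount,),
                )
            await conn.commit()    # both updates succeed together...
        except Exception:
            await conn.rollback()  # ...or neither is applied
            raise
```

With autocommit off, the conn.begin() call is optional (the first statement opens a transaction implicitly), but making it explicit documents where the transaction starts.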
According to this wikipedia entry, the repeatable read isolation level holds read and write locks when selecting data.
My understanding is that this can prevent the age old banking example:
Start a transaction
Get (SELECT) account balance ($100)
Withdraw $10 and UPDATE new value ($90)
Commit transaction
If in between 2 & 3 the customer receives a deposit of $1000, that transaction should be blocked because of the read/write lock acquired in step 2. Otherwise, step 3 would write $90 instead of $1090.
However, according to the MySQL docs, repeatable read (the default) works differently. All it ensures is that no matter how many SELECTs we do, we get the same value, regardless of whether the value has been changed by another transaction. Other transactions are also allowed to modify the values we read.
This sounds broken; I'm not sure why I would ever want to read an old balance. The docs say that an explicit FOR UPDATE needs to be added to the SELECT to acquire the appropriate locks.
I'm confused about the definition and implementation of repeatable read. Could somebody clarify how the banking problem is solved?
I'll talk about how it works in MySQL and PostgreSQL, since I'm not as familiar with other SQL implementations. In the banking example you gave, the following should be noted:
the deposit would not be blocked. Repeatable read isolation does not prevent concurrent changes in another transaction. It only dictates how read operations will behave within a transaction. Namely, as you said, read operations get a frozen version of the database, from the time the transaction began.
the withdraw operation in step 3, however, would be blocked by the deposit operation, and would have to wait until the deposit transaction has been committed. This kind of lock happens on all transaction isolation levels, even the least conservative (read uncommitted), because changes to data are involved.
Now, the resulting balance after both transactions are finished will depend on how you write your statements. With a SQL statement for the withdrawal step done as follows,
UPDATE accounts SET balance = balance - 10 WHERE id = 10;
the final balance will be $1090. MySQL/PostgreSQL will realize the data has changed in another transaction, and the withdrawal will thus use the latest value, even though the UPDATE statement was called within a "repeatable read" transaction.
If you, however, subtract $10 from the $100 you got from step 2 in code, and then run the withdrawal like this
UPDATE accounts SET balance = 90 WHERE id = 10;
the transaction isolation won't help, and the balance will end up being $90.
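The difference between the two UPDATE styles shows up even without concurrency; here is a sketch using sqlite3 (whose locking differs from MySQL's, but the arithmetic point is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (10, 100)")
conn.commit()

# Another transaction deposits $1000 and commits first...
conn.execute("UPDATE accounts SET balance = balance + 1000 WHERE id = 10")
conn.commit()

# Relative update: the withdrawal is computed from the latest value,
# not from a stale SELECT taken before the deposit.
conn.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 10")
conn.commit()

balance = conn.execute("SELECT balance FROM accounts WHERE id = 10").fetchone()[0]
print(balance)  # 1090 -- an absolute "SET balance = 90" would have lost the deposit
```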
I have a few instances of my application; in each instance a single thread picks an unprocessed item from a MySQL table and starts processing it. The table structure is as follows:
id | status | other_params
The 'status' field denotes whether the entry has been processed or not.
I am facing the issue of how to ensure that when one instance/thread picks up an entry from the table, no other thread picks that entry for processing.
I have thought of a solution: change the status to 'PROCESSING', and to 'PROCESSED' when done. But for the change to be visible to other threads, I need to commit, and if the node processing the request fails, the row would stay 'PROCESSING' forever.
Also, the operation is heavy, so I don't want more than one thread doing the same task.
Any ideas someone can provide will be helpful.
how, I will ensure that when one instance/thread picks up an entry
from the table, no other thread picks that entry for processing.
You can do that with a row-level lock; also set a lock wait timeout.
Suppose you have two threads, T1 and T2, trying to pick the same unprocessed item from the table. If T1 fails for any reason, its transaction will time out and the lock will be released. In that case, T2 can go and process the item.
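The claim itself can also be made race-free with a guarded UPDATE: the WHERE clause re-checks the status, and the affected-row count tells each worker whether it won the race. A sketch using sqlite3 so it runs standalone (the table name and columns are made up; the same SQL pattern works in MySQL, ideally combined with the lock wait timeout above):

```python
import sqlite3

def claim_next(conn, worker_id):
    # Find a candidate row that is still unprocessed.
    row = conn.execute("SELECT id FROM items WHERE status = 'NEW' LIMIT 1").fetchone()
    if row is None:
        return None
    # Guarded UPDATE: it only succeeds if the row is *still* 'NEW'.  If another
    # worker claimed it first, rowcount is 0 and we lose the race cleanly.
    cur = conn.execute(
        "UPDATE items SET status = 'PROCESSING', owner = ? "
        "WHERE id = ? AND status = 'NEW'",
        (worker_id, row[0]),
    )
    conn.commit()
    return row[0] if cur.rowcount == 1 else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, status TEXT, owner TEXT)")
conn.execute("INSERT INTO items (status) VALUES ('NEW')")
first = claim_next(conn, "worker-1")   # claims row 1
second = claim_next(conn, "worker-2")  # nothing left to claim
print(first, second)  # 1 None
```

To handle the "node dies mid-processing" case, you could additionally store a claimed-at timestamp and let a reaper reset rows stuck in 'PROCESSING' past some deadline.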
Have you considered acquiring a pessimistic lock on the row? You will also have to do the work inside a transaction for this to work.
select * from your_table where id=1 for update;
Here are some links on pessimistic locking
manual
stackoverflow
Please consider a worker-thread model.
A master thread would run at a specific interval to fetch unprocessed records and hand them over to the worker threads.
It would be the worker thread's responsibility to mark the status as processed once processing succeeds.
The master thread should also cache the ids of the records it sent to workers (this is required to exclude them from subsequent fetches).