In POA Clique Ethereum (Private Network)
Does it create a block even when there are no transactions?
If yes, what is the benefit of always creating blocks even when there are no transactions?
Yes, it does create a block even when there are no transactions.
The idea behind creating blocks even when there are no transactions has to do with confirmations. By generating new blocks, older blocks (and the transactions inside them) accumulate more confirmations and thus become even more reliable.
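As a rough illustration, here is a minimal web3.py sketch (the node URL is a placeholder, and the PoA middleware import name varies between web3.py versions, so treat those details as assumptions about your setup) that scans recent blocks of a private Clique chain and counts how many were sealed without any transactions:

    from web3 import Web3
    from web3.middleware import geth_poa_middleware  # name used in web3.py v5/v6

    # Connect to a node of the private Clique network (URL is an example).
    w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
    # Clique puts signer data in the extraData header field, so inject the PoA middleware.
    w3.middleware_onion.inject(geth_poa_middleware, layer=0)

    latest = w3.eth.block_number
    start = max(0, latest - 99)
    empty = 0
    for n in range(start, latest + 1):
        block = w3.eth.get_block(n)
        if len(block.transactions) == 0:
            empty += 1  # sealed on schedule even though nothing was pending

    print(f"{empty} of the last {latest - start + 1} blocks contain no transactions")

On a quiet network you will typically see most blocks come back empty, roughly one per configured Clique period.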
I hope that makes sense.
Nowadays, 99% of all interactions with the blockchain occur through Infura or Alchemy (MetaMask uses Infura's API).
Almost nobody runs their own geth node.
Because of this, most people are skeptical about the word "decentralization", since the application still has a single point of failure.
Why is it impossible to send transactions directly from the browser to the validator? What prevents this?
After all, this is the last obstacle before decentralization. If browsers/extensions stored hundreds of addresses of mining pools to which a transaction could be sent, such an application would be almost fail-safe.
Generally, a signed transaction is sent from wallet software to a node (in the peer-to-peer Ethereum network) that broadcasts it to the rest of the network as a "transaction waiting to be mined" (i.e. it is placed in a mempool).
Miners usually take transactions from the mempool and place them in blocks.
It is technically possible for a miner to accept a transaction from another source (or create and sign it themselves), and place it in a block.
But it comes with an inconvenience for the transaction sender - they need to wait until this specific miner mines a block containing their transaction. If they had sent the transaction to the mempool instead, any miner could have picked it up and included it in their block. And there is currently no standardized way of sending a transaction to a miner directly - so each miner might have a different channel and different rules.
So to answer your question:
Why can't transactions be sent directly to validators/mining pools?
They can. But it's just faster (for the transaction sender) to use the mempool and let the transaction be mined by anyone, instead of waiting for one specific mining pool to mine a block.
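To make the usual path concrete, here is a hedged web3.py sketch of what wallet software does: sign a transaction locally and hand the raw bytes to any reachable node, which broadcasts it to the mempool. The RPC URL, private key and recipient below are placeholders, and details such as the rawTransaction attribute name differ slightly between web3.py versions:

    from web3 import Web3

    # Any reachable node works here - your own geth node or a hosted provider
    # (the URL and key below are placeholders, not real endpoints).
    w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))

    account = w3.eth.account.from_key("<private-key>")
    tx = {
        "nonce": w3.eth.get_transaction_count(account.address),
        "to": "0x0000000000000000000000000000000000000000",
        "value": w3.to_wei(0.01, "ether"),
        "gas": 21000,
        "gasPrice": w3.eth.gas_price,
        "chainId": w3.eth.chain_id,
    }

    # Sign locally, then hand the raw bytes to the node. The node broadcasts the
    # transaction to the mempool, where any miner can include it in a block.
    signed = w3.eth.account.sign_transaction(tx, account.key)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    print(tx_hash.hex())

Sending the same signed bytes to one miner's private endpoint would work just as well technically; using the mempool simply means you do not have to care which miner picks it up.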
Is there any way that I can limit the calling of the apply method to just once in the Transaction Processor? By default it is called twice.
I guess your question is based on the log traces you see.
Short answer: the apply method (the core business logic in your transaction family) is executed once for an input transaction. There is a different reason why you see the logs appear twice: in reality, transaction execution and the resulting state transitions are defined with respect to a context. Read the long answer for a detailed understanding.
Long answer: If you're observing logs, then you may need to go a little deeper into the way Hyperledger Sawtooth works. Let's get started.
Flow of events (at very high level):
The client sends the transactions embedded in batches.
The validator adds all the transactions to the pending queue.
Based on the consensus engine's request, the validator starts creating a block.
For block creation, the current state context information is passed along with the transaction request to the Transaction Processor, which eventually dispatches it to the right Transaction Family's apply method.
The apply method's result, either success or failure, is recorded. The transaction is removed from the pending queue if it is invalid; otherwise it is added to the block.
If the response of the apply method is an internal error, the transaction is resubmitted.
Once transactions are added to the block, depending on the consensus algorithm, the created block is broadcast to all the nodes.
Every node executes the transactions in the arriving block; the node that created the block executes them again as well. This is probably what you're seeing in the logs.
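For reference, here is a minimal Python transaction processor sketch using the Sawtooth Python SDK; the family name, namespace and payload handling are made-up placeholders, and the point is only to show where apply sits in the flow above:

    from sawtooth_sdk.processor.core import TransactionProcessor
    from sawtooth_sdk.processor.handler import TransactionHandler
    from sawtooth_sdk.processor.exceptions import InvalidTransaction

    FAMILY = "example"        # hypothetical family name
    NAMESPACE = "ab1234"      # hypothetical 6-hex-character namespace prefix

    class ExampleHandler(TransactionHandler):
        @property
        def family_name(self):
            return FAMILY

        @property
        def family_versions(self):
            return ["1.0"]

        @property
        def namespaces(self):
            return [NAMESPACE]

        def apply(self, transaction, context):
            # Invoked once per transaction per context: the publishing node runs
            # it while building the block, and each node runs it again when the
            # block arrives - which is why the same log line can show up twice.
            payload = transaction.payload.decode()
            if not payload:
                raise InvalidTransaction("empty payload")
            address = NAMESPACE + "0" * 64   # hypothetical 70-character state address
            context.set_state({address: payload.encode()})

    if __name__ == "__main__":
        processor = TransactionProcessor(url="tcp://localhost:4004")
        processor.add_handler(ExampleHandler())
        processor.start()

The second execution is not something to suppress; it is how every node independently arrives at the same state.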
Can anyone clarify whether (or not) an increase in mining pools in the Ethereum network will decrease the average block generation time? (For example, if another pool like Ethermine joined the network today and started mining.) Since all the pools are competing with each other, I am getting confused.
No, block generation times are driven by the current difficulty of the algorithm used in the Proof of Work model. Only when a solution is found is the block accepted onto the chain, and the difficulty determines how long it takes, on average, to find that solution. The difficulty automatically adjusts to speed up or slow down block generation times.
From the mining section of the Ethereum wiki:
The proof of work algorithm used is called Ethash (a modified version of Dagger-Hashimoto) involves finding a nonce input to the algorithm so that the result is below a certain threshold depending on the difficulty. The point in PoW algorithms is that there is no better strategy to find such a nonce than enumerating the possibilities while verification of a solution is trivial and cheap. If outputs have a uniform distribution, then we can guarantee that on average the time needed to find a nonce depends on the difficulty threshold, making it possible to control the time of finding a new block just by manipulating difficulty.
The difficulty dynamically adjusts so that on average one block is produced by the entire network every 12 seconds.
(Note that the current block generation time is closer to 15 seconds. You can find the block generation times on Etherscan)
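As a back-of-the-envelope illustration (all numbers below are made up), the expected block time is roughly the difficulty divided by the total network hashrate, so extra hashrate from a new pool only lowers block times until the difficulty catches up:

    TARGET_BLOCK_TIME = 13.0      # seconds the protocol aims for (illustrative)

    difficulty = 2.6e15           # hashes needed on average to find a block
    hashrate = 2.0e14             # total network hashrate, hashes per second

    print(difficulty / hashrate)  # ~13 s average block time

    # A new pool joins and adds 10% more hashrate.
    hashrate *= 1.10
    print(difficulty / hashrate)  # ~11.8 s - blocks briefly arrive faster

    # The difficulty adjustment then raises the difficulty until the average
    # block time is back at the target, so the long-run rate is unchanged.
    difficulty = hashrate * TARGET_BLOCK_TIME
    print(difficulty / hashrate)  # back to ~13 s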
I was wondering how ethereum blockchain works compared with the bitcoin blockchain.
I know that, in Bitcoin, all nodes compete to mine blocks (putting pending public transactions into them and thus earning bitcoin as transaction processing fees), and that all nodes compete for the next block at any one time with an equal chance of mining it.
But in Ethereum, where you want a network of distributed apps that get executed according to the gas price they are willing to pay (and starting gas), are all nodes competing for the next block at one given time? Wouldn't this be a waste of computation?
Yes, all the nodes do compete for (pretty much) the same blocks, and yes - they do execute all the code in a block, even if this block is not going to be successfully mined.
Don't think of it as "waste," but rather as a mechanism to ensure proof of work.
In short, yes, there is a lot of wasted computation.
Ethereum's mining process is almost the same as bitcoin’s.
For each block of transactions, miners will run the block’s unique header metadata (including timestamp and software version) through a hash function. If the miner finds a hash that matches the current target, the miner will be awarded ether and broadcast the block across the network for each node to validate and add to their own copy of the ledger. If miner B finds the hash, miner A will stop work on the current block and repeat the process for the next block.
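For intuition, here is a heavily simplified proof-of-work loop in Python. Real Ethereum mining uses Ethash over the block header and a large dataset, not plain SHA-256, so this is only a sketch of the "enumerate nonces until the hash is below a target" idea:

    import hashlib

    def mine(header: bytes, target: int) -> int:
        """Enumerate nonces until the block hash falls below the target."""
        nonce = 0
        while True:
            digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce          # found a valid proof of work
            nonce += 1

    # A lower target (higher difficulty) means more nonces must be tried on average.
    target = 2 ** 240
    print(mine(b"block header metadata", target))

Verification is the cheap side of the asymmetry: one hash of the winning nonce, compared against the target.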
I'm evaluating possible solutions for handling a large quantity of queued messages, which must be delivered to workers at a certain date and time. The result of executing them is mostly updates to stored data, and they may or may not be originally triggered by user action.
For example, think of what you'd implement in a hypothetical large-scale StarCraft game server for storing and executing users' actions, like upgrading a building, hatching a soldier, all of which requires to be applied to the game state after several seconds or minutes after the player initiates them.
The problem is I can't seem to find the right term to name this problem area. There are several that look similar, but are different:
cron/task/job scheduler
The content of the queue is not dynamic, it's predefined.
Each task is scheduled.
message queue
The content of the queue is dynamic.
Each task is intended to be delivered immediately.
???
The content of the queue is dynamic.
Each task is scheduled.
If there are message queues that allow conditional delivery of messages, that might be it.
Summary:
What are these kind of technology called?
What are some of the solutions out there?
This just sounds like a trivial priority queue on the surface. The priority in this case is the time of completion, and you check the front of the queue to see when the next event is due. Pretty much every language comes with a priority queue or something that can easily be used as one, so I'm not sure what the actual problem is here.
Is it that you're worried about scalability, when it comes to millions of messages? Obviously 'millions' is a meaningless term - if that's millions per day, it's a trivial problem. If it's millions per second, then you can just scale horizontally, splitting the queue across multiple processes. (And the benefit of such a queue system is that this parallelization is really simple.)
I would bet that when implementing a large scale real-time strategy game server you would hit networking problems long before you start hitting problems with the message queue.
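If it helps to see how small the core really is, here is a minimal Python sketch of such a priority queue keyed on due time (the class and method names are made up; a production system would add persistence and worker processes):

    import heapq
    import time

    class ScheduledQueue:
        """Minimal delayed-delivery queue: a heap ordered by due time."""

        def __init__(self):
            self._heap = []
            self._counter = 0           # tie-breaker for messages due at the same time

        def push(self, message, delay_seconds):
            due = time.time() + delay_seconds
            heapq.heappush(self._heap, (due, self._counter, message))
            self._counter += 1

        def pop_due(self):
            """Return every message whose due time has passed."""
            now = time.time()
            ready = []
            while self._heap and self._heap[0][0] <= now:
                ready.append(heapq.heappop(self._heap)[2])
            return ready

    q = ScheduledQueue()
    q.push("hatch soldier", delay_seconds=2)
    q.push("upgrade building", delay_seconds=300)
    time.sleep(2.1)
    print(q.pop_due())                  # ['hatch soldier']

Sharding such a heap by, say, game region or user id is what makes the horizontal scaling mentioned above straightforward.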
Have you tried looking at push queues by Iron.io? The content of the queue can be anything you like, and you specify a webhook to where the messages will be pushed to. You can also set a delay for each of the messages.
The webhook is static for each queue, though, and the delay isn't always exactly on time (it could be up to a minute off). If timing is more important, or the ability to provide a different webhook per message matters, try looking at boomerang.io.
They say they are pretty accurate on the timing; you can provide a delay or a Unix timestamp per message for when the webhook should fire. Sounds like either of those might work for you.
For StarCraft, I would use the Red Dwarf server.
For a Java EE app, I would use Quartz Scheduler.
It seems to me that a queue-based solution would be best in this case for a number of reasons:
Management. Most queuing solutions provide support for inspecting the content of queues, which makes it easier to debug and easier to take action when certain thresholds are exceeded, ...
Performance. You can divide workload by having multiple enqueue/dequeue processes (gives you the ability to scale out).
Prioritizing. Most queues support prioritizing of messages (probably not all messages are equally important).
...
The remaining problem is that queues deliver messages immediately. You have two ways to solve this: either delay the enqueuing of messages or delay the execution of dequeued messages. I would go with the first approach, delayed enqueuing.
A message then has two properties: (content, delay). You provide the message to a component in your system that queues the message at the appropriate time.
I'm not sure what programming language you're using, but the MS .NET 4 framework has support for such a scenario (delayed execution of tasks).
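As a rough single-process illustration of delayed enqueuing in Python (a real system would put a durable scheduler in front of the queue; the messages and delays here are invented):

    import queue
    import threading
    import time

    work_queue = queue.Queue()          # ordinary queue; workers dequeue immediately

    def enqueue_later(content, delay_seconds):
        """Hold the message back and enqueue it only once it becomes due."""
        threading.Timer(delay_seconds, work_queue.put, args=(content,)).start()

    def worker():
        while True:
            message = work_queue.get()  # blocks until a message is actually due
            print("processing:", message)
            work_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()
    enqueue_later("upgrade building", delay_seconds=3)
    enqueue_later("hatch soldier", delay_seconds=1)
    time.sleep(4)                       # keep the example alive long enough to see both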