From Avalanche's whitepaper in the context of Nakamoto consensus protocols:
By construction, they cannot quiesce: their security relies on constant participation by miners, even when there are no decisions to be made.
I understand why blockchains require active participation from nodes when transactions are being registered (to ensure that the transactions are not malicious), but what is the concern when the system is dormant and no transactions are being created?
If all transactions stopped and all nodes went inactive for a day, what would be the risk during that time / when the system becomes active again? Why can't they quiesce when no decisions are to be made?
There is no risk within the system itself. However, if a blockchain halts for a day, a lot of the interconnected world goes boom. You can study this by looking at the several times Solana halted over the last few years.
No payments work
People cannot move funds in and out through centralised ramps
On-chain decentralised finance markets go out of sync with other markets
Price feeds go out of sync
Collateral liquidations for lending markets may not happen in time; lending protocols can go net negative
Generally, one of the value promises of a decentralised network is 100% uptime, thanks to the resiliency that decentralisation provides. If the network cannot be guaranteed to be 100% functional, this value promise disappears and users start to seriously doubt the benefits of decentralisation.
Can I avoid gas fees by running my own Ethereum node?
Gas costs for even a simple contract deployment are astronomical. If I spin up my own node and connect directly to it, can I deploy on that node and avoid the gas fees?
Simple answer: No.
A node accepts the transaction, stores it in its mempool, and relays it to other nodes so that they can also store it in their mempool. There's nothing related to gas costs at this point.
Theoretically you could deploy a contract for "free" by running your own miner. But that is not realistic without professional mining hardware costing very large amounts - many times more than what you'd save on the gas fees.
However, in a block that you mined, you could include a transaction deploying a contract with 0 gas price. This sometimes happens - miners do occasionally put their own 0-priced transactions in their own blocks. But apart from that, no miner would accept your transaction with 0 gas price (or any transaction priced lower than the current market rate), as that wouldn't be profitable for them.
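To see this for yourself, here is a minimal sketch using recent versions of web3.py against a hypothetical local node; the endpoint URL and the assumption of an unlocked local account are mine, not part of the answer. Running your own node only changes where you submit the transaction, not what it costs.

    # Minimal sketch: assumes web3.py and a local node exposing JSON-RPC at the
    # default port with at least one unlocked account (both are assumptions).
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
    sender = w3.eth.accounts[0]

    # Running your own node changes where you submit the transaction,
    # not what it costs: the fee is still gas_used * gas_price.
    tx = {"from": sender, "to": sender, "value": w3.to_wei(0.1, "ether")}
    gas_estimate = w3.eth.estimate_gas(tx)   # 21000 for a plain transfer
    gas_price = w3.eth.gas_price             # current network gas price, in wei

    fee_eth = w3.from_wei(gas_estimate * gas_price, "ether")
    print(f"Fee via my own node: {fee_eth} ETH (not zero)")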
I'm interested in the conceptual topic of creating rights management systems on the Ethereum blockchain, with digital assets represented by NFTs.
I am just reading up on how to write programs that run on Ethereum, but I have some very basic questions just to get started.
I read that NFTs are created on the Ethereum blockchain. I don't really understand if that is the same blockchain on which the currency Ether is maintained? It seems like the ledger will become impossibly huge if every currency transaction and every digital asset (and copy thereof) that migrates to Ethereum is stored in one single giant ledger, and each miner on the chain has to download the entire ledger to a single machine in order to validate transactions. Have I got a big misunderstanding there? I know there is talk about "sharding" in the future, but it seems like that isn't coming very soon.
Cost of running a smart contract on the blockchain? Assuming we are talking about the same blockchain, from what I can see the price of "gas" is quite high. I'm reading that an ETH transfer from one party to another costs 21,000 gas, about $0.03 today. Just trying to understand the basics: how much does it cost to create an NFT? And roughly how much does it cost to execute a simple function on the blockchain (without loops)? Let's say the equivalent of a 5-statement function which takes a few simple params, reads a few blocks, doesn't write to the blockchain but just performs some simple math and a few if statements and returns a string. Does that also cost, like, more than a penny? Is the ETH2 switch from proof of work to proof of stake going to bring those costs down by orders of magnitude?
Any good resources or references on how to write programs which create and manipulate NFTs on Ethereum? Most of what I have seen in the bookstores seems to cover financial transactions with Ether.
Yes, it's the same blockchain.
You can see in the stats that a full node (which stores the current state) currently takes about 400 GB and an archive node (which also stores historical states) takes about 6.6 TB.
My observation is that most web apps using blockchain data don't verify and instead trust a third-party service running a node (such as Infura). And I believe that most end users or businesses who want/need to verify usually have the capacity to store 400+ GB and are able to scale.
But whether this amount of data is okay or "impossibly huge", I'll leave to your judgment. :)
Deployment of a token smart contract usually costs between 500k and 3M gas. My estimate is that most token contracts with basic features, compiled with an optimizer, cost around 1M gas to deploy. With current prices of ~200 Gwei/gas and $1800/ETH, that's about $360. But I remember that just a few months ago the average gas price was ~20 Gwei and ETH cost $500, so the same deployment would have been around $10. So yeah, the cost of deploying a contract is very volatile.
A simple function that performs validations and transformations in memory is going to cost the base 21k plus a few hundred gas. (Working with memory data is cheap gas-wise; accessing storage is much more expensive.) So at current prices that's around $7; a few months ago it could have been $0.25.
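To make the arithmetic explicit, here is a tiny Python sketch; the gas price and ETH price are the example values from this answer, not live data.

    GWEI_IN_ETH = 1e-9  # 1 gwei = 1e-9 ETH

    def tx_cost_usd(gas_used, gas_price_gwei, eth_price_usd):
        """Fee in USD = gas used * gas price (converted to ETH) * ETH price."""
        return gas_used * gas_price_gwei * GWEI_IN_ETH * eth_price_usd

    print(tx_cost_usd(1_000_000, 200, 1800))     # token deployment today      -> ~$360
    print(tx_cost_usd(1_000_000, 20, 500))       # same deployment months ago  -> ~$10
    print(tx_cost_usd(21_000 + 500, 200, 1800))  # simple in-memory call       -> ~$7.7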
As for the question of whether ETH2.0 is going to bring lower gas prices: my opinion is that L2 (which should be released earlier than PoS) is going to have some effect on the price, since it allows for sidechain transactions (similar to the Lightning Network on Bitcoin). But this is a development forum, so I'm not going to dive deeper into price speculation.
I recommend the OpenZeppelin docs, where they cover their open-source implementations of the ERC standards (including ERC-721 NFTs), or googling the topic you're interested in and reading articles that catch your eye (at least that's my current approach).
And if you're new to Solidity in general, I recommend at least a few chapters of the CryptoZombies tutorial. In my opinion the first few chapters are great and you'll learn a lot, but then the quality slowly fades.
I have N agents/users accessing a single wireless channel, and at each time step only one agent can access the channel and receive a reward.
Each user has a buffer that can store B packets, and I assume the buffer is infinite.
Each user n gets an observation from the environment indicating whether the packet in time slot t was a success or a failure (collision). If more than one user accesses the channel, they get a penalty.
This feedback from the channel is the same for all users since we only have one channel. The reward is $-B_n$ (the negative of the number of packets in the buffer). Each user wants to maximize its own reward and tries to empty its buffer.
Packets arrive at each user following a Poisson process with an average of $\lambda$ packets per time slot.
Each user has a history of the previous 10 time slots that it uses as input to the DQN, which outputs the probability of taking action $A_n$: stay silent or transmit. The history consists of tuples $(A_n, F, B_n)$.
Each user is unaware of the action and buffer status of other users.
I am trying to model my problem with multi-agent reinforcement learning, and so far I have tried DQN, but the results are more or less the same as a random scheme. Could it be that the users don't have enough contextual information to learn the behaviour of the other users? Or could there be another reason?
I would like to know how I can model my environment, since the state (in the RL sense) is static and the environment doesn't change; the only thing that changes is each user's history at each time slot. So I am not sure whether it is a partially observable MDP, or whether it should be modelled as a multi-agent single-arm bandit problem, which I don't know is correct or not.
The second concern is that DQN has not worked for me, and I would like to know whether such a problem can be approached with tabular Q-learning. I have not seen multi-agent work in which anyone has used tabular QL. Any insights would be helpful.
Your problem can be modeled as a Decentralized POMDP (see an overview here).
Summarizing this approach: you consider a multi-agent system in which each agent models its own policy, and then you try to build a joint policy out of these individual ones. Of course, the complexity grows as the number of agents, states and actions increases, so there are several approaches, mainly based on heuristics, to prune branches of the joint policy tree that are not "good" in comparison with others. A well-known example of this approach is precisely packet routing, where it is possible to define a discrete action space.
But be aware that even for tiny systems, the complexity often becomes infeasible!
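On the tabular Q-learning question: the per-user observation you describe (buffer occupancy plus the shared channel feedback) is small enough to tabulate, so independent Q-learners are at least a reasonable baseline to compare DQN against. Below is a minimal, self-contained sketch; all hyperparameters (arrival rate, learning rate, epsilon, the buffer cap used for the state) are illustrative assumptions, not values from your setup.

    import numpy as np

    N_AGENTS, N_SLOTS = 3, 50_000
    LAM = 0.3                        # Poisson arrival rate per user per slot (assumed)
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    B_CAP = 10                       # buffer sizes above this share one table row

    rng = np.random.default_rng(0)
    # State per agent: (capped buffer size, last feedback: 0 idle, 1 success, 2 collision)
    # Actions: 0 = stay silent, 1 = transmit
    Q = np.zeros((N_AGENTS, B_CAP + 1, 3, 2))
    buffers = np.zeros(N_AGENTS, dtype=np.int64)
    feedback = np.zeros(N_AGENTS, dtype=np.int64)

    def choose_action(n):
        """Epsilon-greedy action from agent n's own Q-table."""
        if rng.random() < EPS:
            return int(rng.integers(2))
        return int(np.argmax(Q[n, min(buffers[n], B_CAP), feedback[n]]))

    for t in range(N_SLOTS):
        prev_state = [(min(buffers[n], B_CAP), feedback[n]) for n in range(N_AGENTS)]
        actions = [choose_action(n) for n in range(N_AGENTS)]
        # Only agents with a packet actually put one on the channel (simplification).
        transmitters = [n for n in range(N_AGENTS) if actions[n] == 1 and buffers[n] > 0]
        success = transmitters[0] if len(transmitters) == 1 else None

        if success is not None:
            buffers[success] -= 1                    # one packet leaves the queue
        buffers += rng.poisson(LAM, N_AGENTS)        # new arrivals
        # Broadcast feedback is identical for everyone (single shared channel).
        fb = 0 if not transmitters else (1 if success is not None else 2)

        for n in range(N_AGENTS):
            feedback[n] = fb
            reward = -int(buffers[n])                # reward = -B_n
            b_old, f_old = prev_state[n]
            b_new, f_new = min(buffers[n], B_CAP), fb
            td_target = reward + GAMMA * Q[n, b_new, f_new].max()
            Q[n, b_old, f_old, actions[n]] += ALPHA * (td_target - Q[n, b_old, f_old, actions[n]])

    print("Final buffer occupancies:", buffers)

If this baseline also behaves like a random scheme, that points at the observation being too poor (a partial-observability issue) rather than at the function approximator.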
I see that Google Cloud may terminate preemptible instances at any time, but have any unofficial, independent studies been reported, showing "preempt rates" (number of VMs preempted per hour), perhaps sampled in several different regions?
Given how little information I'm finding (as with similar questions), even anecdotes such as "looking back over the past 6 months, I generally see 3%-5% of instances preempted per hour in us-west1" would be useful (I presume this can be monitored similarly to instance count metrics in AWS).
Clients occasionally want to shove their existing, non-fault-tolerant code into the cloud for "cheap" (despite best practices), and without an expected rate of failure they're often blindsided by the cheapness of preemptible instances, so I'd like to share some typical experiences of the GCP community, even if people's experiences may vary, to help convey safe expectations.
Thinking about "unofficial, independent studies", "even anecdotes such as:", and "Clients occasionally want to shove their existing, non-fault-tolerant code in the cloud for 'cheap'", it ought to be said that no architect or sysadmin in their right mind would place production workloads with a defined SLA into an execution environment without an SLA. Hence the topic is rather speculative.
For those who are keen, Google provides a preemption rate expectation:
For reference, we've observed from historical data that the average preemption rate varies between 5% and 15% per day per project, on a seven-day average, occasionally spiking higher depending on time and zone. Keep in mind that this is an observation only: Preemptible instances have no guarantees or SLAs for preemption rates or preemption distributions.
Besides that, there is an interesting edutainment approach to the task of "how to make the inapplicable applicable".
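If you still want to turn Google's quoted range into rough operational expectations for a client, here is a back-of-the-envelope Python sketch. Treating the per-project daily rate as a per-instance daily preemption probability, and the fleet size, are both assumptions of mine, not anything Google guarantees.

    # Back-of-the-envelope only: the 5%-15% figure is quoted per project per day
    # with no SLA; using it as a per-instance daily probability is an assumption.
    fleet_size = 100                              # hypothetical preemptible VM fleet

    for daily_rate in (0.05, 0.15):
        per_day = fleet_size * daily_rate         # expected preemptions per day
        per_hour = per_day / 24                   # rough hourly rate, as asked above
        print(f"at {daily_rate:.0%}/day: ~{per_day:.0f} preemptions/day "
              f"(~{per_hour:.2f}/hour) across {fleet_size} VMs")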
I have recently downloaded the GUI wallet, and it gives the option to create contract-based wallets and connect them to a main account. What is the difference between using a contract-based wallet and an account? And which should be used to store my ether?
Contract-based wallets are more robust and can be more secure. For example, a contract can be set up to require transfers over a certain threshold to be approved by multiple people/keys. Even if these keys all reside on your local computer, having to compromise 2 of 3 keys, even in this only slightly-improved setup, provides far greater security than a single key alone.
Additionally, contracts benefit from transaction receipts, which contain a permanent log of all events. This makes it much easier to inspect the state and verify the history of a contract. For example, when a new transaction request is initiated against a wallet contract requiring multiple signatures, an event log for "ConfirmationNeeded" with the operation ID will be added. After the operation has received the appropriate number of signatures, a "MultiTransact" event might be emitted containing the recipient, value, and data associated with the transaction.
Standard accounts benefit from none of this and can only send transactions, not respond autonomously.
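To make the multisig flow concrete, here is a toy, off-chain Python model of a 2-of-3 wallet. It is purely illustrative (a real wallet is a deployed smart contract); the class and method names are made up, loosely echoing the ConfirmationNeeded / MultiTransact events mentioned above.

    class ToyMultisigWallet:
        """Off-chain toy model of an m-of-n wallet; not a real contract."""

        def __init__(self, owners, required=2):
            self.owners = set(owners)
            self.required = required
            self.confirmations = {}      # operation id -> set of confirming owners
            self.log = []                # stands in for on-chain event logs

        def submit(self, op_id, owner):
            """First signature; records the equivalent of ConfirmationNeeded."""
            self._check_owner(owner)
            self.confirmations[op_id] = {owner}
            self.log.append(("ConfirmationNeeded", op_id))

        def confirm(self, op_id, owner):
            """Additional signature; 'executes' once the threshold is reached."""
            self._check_owner(owner)
            self.confirmations[op_id].add(owner)
            if len(self.confirmations[op_id]) >= self.required:
                self.log.append(("MultiTransact", op_id))  # transfer would happen here

        def _check_owner(self, owner):
            if owner not in self.owners:
                raise ValueError(f"{owner} is not an owner")

    wallet = ToyMultisigWallet({"alice", "bob", "carol"}, required=2)
    wallet.submit("op-1", "alice")       # 1 of 2 required signatures
    wallet.confirm("op-1", "bob")        # 2 of 2 -> executes
    print(wallet.log)  # [('ConfirmationNeeded', 'op-1'), ('MultiTransact', 'op-1')]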
Here are advantages and disadvantages for comparison.
Advantages of contract-based account wallets:
Funds are not stored on a single key.
You can cycle through management keys.
Multisig functionality; only execute a transaction on majority rule (e.g. Gnosis Multisig).
Allows for account recovery in case your management keys are lost (e.g. Argent).
Set transfer and withdrawal limits enforced by the contract.
You can have access controls for keys, meaning you can restrict what methods a key can invoke. Useful when you want to delegate control to someone else but restrict what they can do.
Batched transactions; execute multiple transactions as one atomic transaction.
DeFi protocol composition; easily integrate with other smart contracts (e.g. one-click DAI Savings Rate accounts).
Meta transactions; pay your transaction costs using a different asset, like a token (e.g. Gas Station Network). Relayers may also offer free transactions (e.g. Authereum).
Disadvantages of contract-based account wallets:
Contracts are susceptible to attacks; people write buggy code all the time (e.g. the Parity hack). An externally owned account (EOA) can't be hacked this way because there is no code to hack.
Backward-incompatible protocol changes may leave funds locked if the contract is not written carefully (e.g. the Istanbul hard fork gas cost changes).
Deployment costs; unlike key pairs, which cost nothing to generate, there is a fee for deploying contract-based accounts.