I'm taking a look at the UniswapV2 tutorial walkthrough.
The following is in reference to this function in the GitHub repo, about which the tutorial states:
uint _kLast = kLast; // gas savings
The kLast state variable is located in storage, so it keeps its
value between different calls to the contract. Access to storage is a
lot more expensive than access to the volatile memory that is released
when the function call to the contract ends, so we use an internal
variable to save on gas.
So in traditional programming, _kLast would be a reference to kLast. _kLast is referenced 3 more times after its initialization.
Had they used kLast directly, instead of assigning it to a local uint, would it cost a storage read each time kLast is used?
If this is NOT the case, then I really don't understand how they're saving on gas. Can someone explain?
Each storage read (opcode sload) of the same slot costs 2,100 gas the first time during a transaction, and then 100 gas for each further read during the same transaction. (This is after EIP-2929, implemented in the Berlin hardfork. Before that, it was 800 gas for each read, no matter how many reads you performed.)
Each memory write (opcode mstore) and each memory read (opcode mload) cost 3 gas.
So in traditional programming, _kLast would be a reference to kLast
In this particular Solidity snippet, _kLast is not a reference pointing to storage. It's a memory variable whose value is copied from the storage variable.
So 3 storage reads, without creating the memory variable, would cost 2,300 gas (2,100 + 100 + 100).
But because the code creates the memory variable, it performs one storage read (2,100 gas), one memory write (3 gas), and three memory reads (3 x 3 gas), totaling 2,112 gas, which is the cheaper option.
Some other EVM-compatible networks, such as BSC, might still use the original gas calculation of 800 per sload, which makes the difference even larger: 2,400 gas non-optimized (3 x 800) versus 812 gas optimized (800 + 3 + 3 x 3).
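To make the pattern concrete, here is a minimal sketch (an illustrative contract, not the actual UniswapV2 code; the gas numbers in the comments assume a cold slot on post-Berlin Ethereum):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract CachedRead {
    uint256 public kLast; // storage variable, persists between calls

    function cached() external view returns (uint256) {
        uint256 _kLast = kLast;           // one sload (2,100 gas) + one cheap local write
        return _kLast * _kLast + _kLast;  // further uses cost a few gas each
    }

    function uncached() external view returns (uint256) {
        return kLast * kLast + kLast;     // three sloads: 2,100 + 100 + 100 gas
    }
}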
I was testing this code using Remix. I was wondering why the execution cost of the function (in gas) depends on the input x. The cost seems to increase in multiples of 12 as the value of x increases, but I haven't found the exact pattern.
// SPDX-License-Identifier: MIT
pragma solidity 0.8.4;

contract Example {
    function test(uint x) external pure returns (bool z) {
        if (x > 0)
            z = true;
        else
            z = false;
    }
}
https://github.com/wolflo/evm-opcodes/blob/main/gas.md#a0-0-intrinsic-gas
The bytes sent to the contract indeed decide the gas cost, as can be seen from the link.
gas_cost += 4 * bytes_zero: gas added to base cost for every zero byte of memory data
gas_cost += 16 * bytes_nonzero: gas added to base cost for every nonzero byte of memory data
So if you send 0x0001 or 0x0010, it will cost the same amount of gas; so will 0x0011, since 0x11 is still a single non-zero byte (the cost is charged per byte, not per hex digit). But if you send 0x0101, where two bytes are non-zero, it will cost 12 (16 - 4) gas more than the previous cases, because one byte that was zero (4 gas) is now non-zero (16 gas).
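A worked example, assuming an ABI-encoded call to test(x); the 4-byte function selector is shown as ssssssss (a placeholder), followed by the 32-byte argument:

test(0x01):   ssssssss 00...00 01    31 zero bytes + 1 non-zero byte in the argument
test(0x10):   ssssssss 00...00 10    same split, same calldata cost
test(0x11):   ssssssss 00...00 11    still 1 non-zero byte, same cost
test(0x0101): ssssssss 00...01 01    2 non-zero bytes, 12 gas more than the above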
Gas is charged in three scenarios:
1. The computation of an operation
2. Contract creation or message calls
3. An increase in the use of memory. Function args and local variables in functions are memory data (see the question "In Ethereum Solidity, what is the purpose of the 'memory' keyword?").
An address consumes 20,000 gas via SSTORE.
Assume a gas price of 35 gwei.
If I store 10,000 addresses in a map, it will cost me:
20,000 gas * 10,000 = 200,000,000 gas
200,000,000 gas * 35 gwei = 7 ether.
Is the calculation correct?
If I do the same on a layer-2 chain, does the whole thing cost me, for example, 7 MATIC, or is there something else I need to consider?
Your calculation is correct.
I'm assuming you want to store the values in an array instead of 10k separate storage variables. If it's a dynamic-length array, you should also consider the cost of the sstore that updates the slot holding the array length on each .push() resizing the array (a non-zero to non-zero write, currently 2,900 gas).
You should also consider the block gas limit. A transaction costing 200M gas is not going to fit into a block on probably any network (for reference, Ethereum mainnet blocks are currently capped at around 30 million gas), so no miner will mine it.
So based on your use case, you might want to change the approach. For example, if the addresses are used for validation, you might be able to store just the Merkle tree root (1 value instead of the 10k) and then validate against it using the address and its Merkle proof.
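A minimal sketch of that approach, assuming OpenZeppelin's MerkleProof library (the contract and leaf format are illustrative):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

import "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol";

contract AddressSetCheck {
    bytes32 public immutable merkleRoot; // 1 storage slot instead of 10k

    constructor(bytes32 _merkleRoot) {
        merkleRoot = _merkleRoot;
    }

    function isEligible(address account, bytes32[] calldata proof) external view returns (bool) {
        // the leaf commits to the address; the proof is supplied by the caller off-chain
        bytes32 leaf = keccak256(abi.encodePacked(account));
        return MerkleProof.verify(proof, merkleRoot, leaf);
    }
}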
I am wondering, besides the mathematical expressions below, are there any other functions available to call inside a smart contract? Math functions like pi, sin, cosine, random(), etc.?
I am wondering if one can write smart contracts that require a little more than just basic arithmetic.
The expressions I'm referring to are listed on this page (screenshot omitted):
https://docs.soliditylang.org/en/develop/cheatsheet.html#function-visibility-specifiers
Solidity doesn't natively support floating point numbers, in either storage or memory, probably because the EVM (Ethereum Virtual Machine; the underlying layer) doesn't support them.
It allows working with them to some extent in compile-time constant expressions, such as uint two = 3 / 1.5;.
So most floating point operations are usually done by defining a uint256 (256-bit unsigned integer) number together with another number defining how many decimal places it has.
For example token contracts generally use 18 decimal places:
uint8 decimals = 18;
uint256 one = 1000000000000000000;   // represents 1.0
uint256 half = 500000000000000000;   // represents 0.5
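A minimal sketch of multiplication under this convention (the mulFixed name is illustrative):

uint256 constant ONE = 1e18;   // 1.0 with 18 decimal places

// multiply two 18-decimals fixed-point numbers,
// e.g. mulFixed(1.5e18, 2e18) == 3e18
function mulFixed(uint256 a, uint256 b) pure returns (uint256) {
    return (a * b) / ONE;      // the raw product has 36 decimals, so rescale
}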
There are some third-party libraries for calculating trigonometric functions (link), working with date time (link) and other use cases, but the native language currently doesn't support many of these features.
As for generating random numbers: there is no native function, but you can calculate a modulo of some pseudo-random values such as blockhash() and block.timestamp. Mind that these values can be (to some extent) manipulated by the miner publishing the currently mined block.
It's not recommended to use them in apps that work with money (which is most smart contracts), because if the incentive is big enough, a dishonest miner can use the advantage of knowing the values before the rest of the network, and of being able to modify them to some extent, for their own benefit.
Example:
// a dishonest miner can publish a block with such params
// that the condition ends up true, and place their own tx
// first in the block to execute this function
function win10ETH() external {
    // note: blockhash(block.number) always returns 0 (only the previous
    // 256 blocks are available), so the previous block's hash is used
    if (uint256(blockhash(block.number - 1)) % 12345 == 0) {
        payable(msg.sender).transfer(10 ether);
    }
}
If you need a random number that is not determinable by a miner, you can use the oracle approach. An external app (called an oracle) listens for transactions in a predefined format (generally also from/to specific addresses), performs an off-chain action (such as generating a random number, retrieving a Google search result, or basically anything), and afterwards sends another transaction to your contract containing the result of the off-chain action.
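A minimal sketch of that pattern (the event name, the oracle address, and fulfillRandom are all illustrative, not a specific oracle's API):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.4;

contract RandomConsumer {
    address public immutable oracle;   // trusted off-chain service
    uint256 public lastRandom;
    uint256 private nextRequestId;

    event RandomRequested(uint256 indexed requestId);

    constructor(address _oracle) {
        oracle = _oracle;
    }

    // step 1: emit an event the off-chain oracle listens for
    function requestRandom() external returns (uint256 requestId) {
        requestId = nextRequestId++;
        emit RandomRequested(requestId);
    }

    // step 2: the oracle sends a transaction back with the off-chain result
    function fulfillRandom(uint256 requestId, uint256 randomness) external {
        require(msg.sender == oracle, "only oracle");
        lastRandom = randomness;
    }
}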
I have a function that needs to send tokens to a lot of accounts.
I know that a write operation to storage is very costly.
I have read that when doing computation on storage values, it's better to work on a memory variable and then assign that memory variable back to storage, saving the intermediate storage writes.
So what I was thinking of doing is something like this:
mapping (address => uint256) private _balances;

function addToBalance(address[] memory accounts, uint[] memory amounts) public {
    // note: this line does not compile - mappings cannot live in memory
    mapping (address => uint256) memory balances = _balances;
    for (uint i = 0; i < accounts.length; i++) {
        balances[accounts[i]] += amounts[i];
    }
    _balances = balances;
}
The _balances mapping can become pretty big, so is this really a good way to reduce the costs?
Not with a mapping.
Mapping (and dynamic-size array) values are all "scattered" in different storage slots. The slot positions are determinable, but because they are calculated based on a hash, the value for key 1 can be in storage slot 123456789 and the value for key 2 can be in storage slot 5. (The numbers are made up just to show an easy example.)
Which means, when you're saving N values into a mapping (under N keys), you always need to write into N separate storage slots. Writing into one (256-bit) slot costs 5,000 gas or more (20,000 gas when a zero slot is set to a non-zero value, which is what happens when a fresh balance is stored), and it adds up quickly.
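For illustration, this is how the storage slot of a mapping value is derived (a minimal sketch, assuming _balances is declared at slot 0):

// storage slot of _balances[key], per Solidity's storage layout:
// keccak256(32-byte key . 32-byte declaration slot)
function slotOf(address key) pure returns (bytes32) {
    return keccak256(abi.encode(key, uint256(0)));
}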
So apart from moving the values into a fixed-size array (which doesn't make much sense in this use case, where you're storing balances of an unknown number of addresses), I'd consider shifting the transaction costs to your users: keep a list of eligible users and another list of already used airdrops, and let them claim the tokens themselves.
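A minimal sketch of the claim pattern (the claimed mapping and claim function are illustrative; the eligibility check is omitted and could be, e.g., a Merkle proof as in the airdrop answer above):

mapping (address => bool) public claimed;   // already used airdrops

function claim(uint256 amount) external {
    // eligibility check omitted - e.g. verify a Merkle proof
    // that (msg.sender, amount) is in the airdrop list
    require(!claimed[msg.sender], "already claimed");
    claimed[msg.sender] = true;             // the claiming user pays for this sstore
    _balances[msg.sender] += amount;        // and for this one
}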
I am dealing with a CUDA shared memory access pattern which I am not sure is good, or whether it carries some sort of performance penalty.
Suppose I have 512 integer numbers in shared memory:
__shared__ int snums[516];
and half that many threads, that is, 256 threads.
The kernel works as follows:
(1) The block of 256 threads first applies a function f(x) to the even locations of snums[], then (2) it applies f(x) to the odd locations of snums[]. Function f(x) acts on the local neighborhood of the given number x, then changes x to a new value. There is a __syncthreads() in between (1) and (2).
Clearly, while I am doing (1), there are 32-bit gaps in shared memory because the odd locations are not being accessed. The same occurs in (2): there will be gaps at the even locations of snums[].
From what I read in the CUDA documentation, memory bank conflicts should occur when threads access the same locations. But they do not talk about gaps.
Will there be any problem with the banks that could incur a performance penalty?
I guess you meant:
__shared__ int snums[512];
Will there be any bank conflict and performance penalty?
Assuming at some point your code does something like:
int a = snums[2*threadIdx.x]; // this would access every even location
the above line of code would generate an access pattern with 2-way bank conflicts. A 2-way bank conflict means the above line of code takes approximately twice as long to execute as the optimal no-bank-conflict line of code (depicted below).
If we were to focus only on the above line of code, the obvious approach to eliminating the bank conflict would be to re-order the storage pattern in shared memory so that all of the data items previously stored at snums[0], snums[2], snums[4] ... are now stored at snums[0], snums[1], snums[2] ... thus effectively moving the "even" items to the beginning of the array and the "odd" items to the end of the array. That would allow an access like so:
int a = snums[threadIdx.x]; // no bank conflicts
However you have stated that a calculation neighborhood is important:
Function f(x) acts on the local neighborhood of the given number x,...
So this sort of reorganization might require some special indexing arithmetic.
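A minimal sketch of that indexing arithmetic, assuming a +/-1 neighborhood and a placeholder f (both are assumptions, not taken from the question):

// packed layout: former even elements live in snums[0..255],
// former odd elements in snums[256..511]
__device__ int f(int left, int x, int right) { return left + x + right; } // placeholder

__global__ void stepKernel(const int *in, int *out) {
    __shared__ int snums[512];
    int i = threadIdx.x;                           // 0..255, one thread per even/odd pair

    // load in packed order: original index 2*i -> i, 2*i+1 -> 256 + i
    snums[i]       = in[2 * i];
    snums[256 + i] = in[2 * i + 1];
    __syncthreads();

    // phase (1): update the former even locations; consecutive threads read
    // consecutive words (snums[i], snums[256+i-1], snums[256+i]) - no bank conflicts
    int left  = (i > 0) ? snums[256 + i - 1] : 0;  // original 2*i - 1
    int right = snums[256 + i];                    // original 2*i + 1
    snums[i] = f(left, snums[i], right);
    __syncthreads();

    // phase (2): analogous for the former odd locations
    left  = snums[i];                              // original 2*i (already updated)
    right = (i < 255) ? snums[i + 1] : 0;          // original 2*i + 2
    snums[256 + i] = f(left, snums[256 + i], right);
    __syncthreads();

    out[2 * i]     = snums[i];
    out[2 * i + 1] = snums[256 + i];
}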
On newer architectures, shared memory bank conflicts don't occur when threads access the same location, but they do occur if threads access different locations in the same bank. The bank is simply given by the lowest-order bits of the 32-bit word index:
snums[0] : bank 0
snums[1] : bank 1
snums[2] : bank 2
...
snums[32] : bank 0
snums[33] : bank 1
...
(the above assumes 32-bit bank mode)
This answer may also be of interest