I'm almost done with my contract. I've done (almost) everything I can to reduce its bytecode size, but I still need room for about two more features. That's why I need to split the contract to reduce the size even more. I've already extracted part of my code into two other contracts: Bar is abstract and will be extended by my main contract, and Baz will be instantiated and called from the main contract. Example:
contract Foo is Bar {
    Baz baz;
    constructor() {
        baz = new Baz();
    }
}
That worked out, but my contract size didn't really decrease. Should I inject Baz instead of instantiating it in the main contract? How would I do that, since something like this
contract Foo is Bar {
    Baz baz;
    constructor(address bazAddress) {
        baz = bazAddress;
    }
}
will throw "Type address is not implicitly convertible to expected type contract Baz.", obviously.
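I guess the explicit conversion would look something like this (just a sketch, assuming Baz is deployed separately and its address is passed into the constructor):

contract Foo is Bar {
    Baz baz;
    constructor(address bazAddress) {
        // explicit conversion from address to a contract type
        baz = Baz(bazAddress);
    }
}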
Does an extended abstract contract even reduce the byte code or will it be combined in compilation and result in the same size?
"Does an extended abstract contract even reduce the byte code or will it be combined in compilation and result in the same size?"
The compiler still needs to include the abstract contract, so it doesn't reduce the final bytecode size.
An effective way to reduce the bytecode size is to use the compiler optimization option. The value specifies the number of times the deployed contract is expected to run, which the optimizer should optimize for (not the number of optimization iterations). So lower values result in smaller bytecode, but also in higher gas fees each time you execute a function.
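For example (a sketch, assuming a reasonably recent solc), a low runs value tells the optimizer to favor small deployment size over cheap execution:

solc --optimize --optimize-runs 1 Foo.sol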
Recently, I've been learning about the development of Ethereum smart contracts. I have a question about the following code: the result obtained by the a() method is 4. This matches my common-sense understanding, because integer division truncates the fractional part. However, when I call the b() method, the result is 5. I don't understand this result very well. Is it because the compiler optimised during the compilation process, calculating the outcome directly and storing it? Thanks a lot, guys.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
contract Test {
    uint constant c = 5;

    function a() public pure returns (uint) {
        return c/2 + c/2;
    }

    function b() public pure returns (uint) {
        return 5/2 + 5/2;
    }
}
It seems that the compiler optimizes the return 5/2 + 5/2; line. In the assembly output, after compiling with solc v0.8.13 --asm --optimize-runs=100000, there are the following lines:
/* "Test.sol":215:224 5/2 + 5/2 */
0x05
/* "Test.sol":208:224 return 5/2 + 5/2 */
swap1
pop
which indicates that the 0x05 value is returned directly from the function call.
In function a the integer division is rounded down, so the returned value is 0x04. The compiler doesn't optimize this line, even though it operates on a constant and literals, and it lets the integer division run on the EVM. (Solidity evaluates expressions built only from number literals with arbitrary precision, so 5/2 + 5/2 is exactly 2.5 + 2.5 = 5 by the language rules, whereas c has type uint, so c/2 truncates to 2.)
Worth mentioning is the fact that Solidity is a language under continuous development, so it's possible that a future compiler version will optimize the code in such a way that both functions in this example give the same result.
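One way to see the difference (a sketch; the contract and function names are mine) is to route the same values through a typed variable, which brings back the EVM integer division:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract LiteralDemo {
    // literals only: evaluated at compile time with arbitrary precision -> 5
    function literals() public pure returns (uint) {
        return 5/2 + 5/2;
    }

    // same values through a typed variable: EVM integer division -> 4
    function typed() public pure returns (uint) {
        uint d = 5;
        return d/2 + d/2;
    }
}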
I am wondering: besides the mathematical expressions below, are there any other functions available to call inside a smart contract? Math functions like pi, sin, cosine, random(), etc.?
I am wondering if one can write smart contracts that require a little more than just basic arithmetic.
The list of operations I'm referring to is taken from this page:
https://docs.soliditylang.org/en/develop/cheatsheet.html#function-visibility-specifiers
Solidity doesn't natively support storing floating point numbers, either in storage or in memory, probably because the EVM (Ethereum Virtual Machine; the underlying layer) doesn't support them.
It allows working with them to some extent in constant expressions, such as uint two = 3 / 1.5;.
So most floating point operations are emulated by defining a uint256 (256-bit unsigned integer) number together with a convention for how many of its trailing digits count as decimal places.
For example token contracts generally use 18 decimal places:
uint8 decimals = 18;
uint256 one = 1000000000000000000;
uint256 half = 500000000000000000;
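With this representation, multiplication and division have to rescale by the decimals factor. A minimal sketch (the function names are mine):

pragma solidity ^0.8.0;

// fixed-point with 18 decimals: a stored value x represents x / 1e18
uint256 constant ONE = 1e18;

// (a/1e18) * (b/1e18) = (a*b)/1e36, so divide once by 1e18 to rescale
function mulFixed(uint256 a, uint256 b) pure returns (uint256) {
    return (a * b) / ONE;
}

// (a/1e18) / (b/1e18) = a/b, so pre-scale the numerator to keep 18 decimals
function divFixed(uint256 a, uint256 b) pure returns (uint256) {
    return (a * ONE) / b;
}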
There are some third-party libraries for calculating trigonometric functions (link), working with date time (link) and other use cases, but the native language currently doesn't support many of these features.
As for generating random numbers: there is no native function, but you can take a modulo of some pseudo-random values such as blockhash() and block.timestamp. Mind that these values can be (to some extent) manipulated by the miner publishing the currently mined block.
It's not recommended to use them in apps that work with money (which is most smart contracts): if the incentive is big enough, a dishonest miner can use the advantage of knowing the values before the rest of the network, and of being able to modify them to some extent, to their own benefit.
Example:
// a dishonest miner can publish a block with such params
// that the condition becomes true, and place their own tx
// first in the block to execute this function
function win10ETH() external {
    // note: blockhash() only works for the 256 most recent blocks,
    // and blockhash(block.number) of the current block is always 0
    if (uint256(blockhash(block.number - 1)) % 12345 == 0) {
        payable(msg.sender).transfer(10 ether);
    }
}
If you need a random number that cannot be determined by a miner, you can use the oracle approach: an external app (called an oracle) listens for transactions in a predefined format (generally also from/to specific addresses), performs an off-chain action (such as generating a random number, retrieving a Google search result, or basically anything), and afterwards sends another transaction to your contract containing the result of the off-chain action.
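The consumer side of that pattern could look roughly like this (a sketch; all names are hypothetical, and real oracle services define their own interfaces):

pragma solidity ^0.8.0;

contract RandomConsumer {
    address public oracle;      // the trusted off-chain reporter
    uint256 public lastRandom;

    event RandomRequested(uint256 requestId);

    constructor(address oracleAddress) {
        oracle = oracleAddress;
    }

    // emit an event that the off-chain oracle listens for
    function requestRandom(uint256 requestId) external {
        emit RandomRequested(requestId);
    }

    // the oracle responds with a second transaction carrying the result
    function fulfillRandom(uint256 requestId, uint256 value) external {
        require(msg.sender == oracle, "only oracle");
        lastRandom = value;
        // ... consume (requestId, value) here ...
    }
}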
Sequence containers need to have fill constructors and range constructors, i.e. these must both work, assuming MyContainer models a sequence container whose value_type is int and size_type is std::size_t:
// (1) Constructs a MyContainer containing the number '42' 4 times.
MyContainer<int> c = MyContainer<int>(4, 42);
// (2) Constructs a MyContainer containing the elements in the range [array.begin(), array.end())
std::array<int, 4> array = {1, 2, 3, 4};
MyContainer<int> c2 = MyContainer<int>(array.begin(), array.end());
Trouble is, I'm not sure how to implement these two constructors. These signatures don't work:
template<typename T>
MyContainer<T>::MyContainer(const MyContainer::size_type n, const MyContainer::value_type& val);
template<typename T>
template<typename OtherIterator>
MyContainer<T>::MyContainer(OtherIterator i, OtherIterator j);
In this case, an instantiation like in example 1 above selects the range constructor instead of fill constructor, since 4 is an int, not a size_type. It works if I pass in 4u, but if I understand the requirements correctly, any positive integer should work.
If I template the fill constructor on the size type to allow other integers, the call is ambiguous when value_type is the same as the integer type used.
I had a look at the Visual C++ implementation for std::vector and they use some special magic to only enable the range constructor when the template argument is an iterator (_Is_iterator<_Iter>). I can't find any way of implementing this with standard C++.
So my question is... how do I make it work?
Note: I am not using a C++11 compiler, and boost is not an option.
I think you've got the solution space right: Either disambiguate the call by passing in only explicitly size_t-typed ns, or use SFINAE to only apply the range constructor to actual iterators. I'll note, however, that there's nothing "magic" (that is, nothing based on implementation-specific extensions) about MSVC's _Is_iterator. The source is available, and it's basically just a static test that the type isn't an integral type. There's a whole lot of boilerplate code backing it up, but it's all standard C++.
A third option, of course, would be to add another fill constructor overload which takes a signed size.
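For illustration, here is a minimal C++03 sketch of that kind of integral dispatch (the trait and helper names are mine, not the standard library's; libstdc++ uses the same idea internally):

#include <cstddef>

// hand-rolled integral test: true for the integer types we care about
// (specializations for the remaining integer types omitted for brevity)
template <typename T> struct is_integer      { enum { value = 0 }; };
template <> struct is_integer<int>           { enum { value = 1 }; };
template <> struct is_integer<unsigned int>  { enum { value = 1 }; };
template <> struct is_integer<long>          { enum { value = 1 }; };
template <> struct is_integer<unsigned long> { enum { value = 1 }; };

template <bool B> struct int_tag {};  // tag type for compile-time dispatch

template <typename T>
class MyContainer {
public:
    typedef T value_type;
    typedef std::size_t size_type;

    // exact-type fill constructor, still used for e.g. MyContainer(4u, 42)
    MyContainer(size_type n, const value_type& val) { fill_init(n, val); }

    // catch-all: matches both MyContainer(4, 42) and MyContainer(it, it),
    // then dispatches on whether Arg is an integer type
    template <typename Arg>
    MyContainer(Arg a, Arg b) {
        init_dispatch(a, b, int_tag<is_integer<Arg>::value != 0>());
    }

private:
    template <typename Arg>
    void init_dispatch(Arg n, Arg val, int_tag<true>) {
        fill_init(static_cast<size_type>(n), static_cast<value_type>(val));
    }
    template <typename Iter>
    void init_dispatch(Iter first, Iter last, int_tag<false>) {
        range_init(first, last);
    }
    void fill_init(size_type n, const value_type& val) { /* allocate, fill */ }
    template <typename Iter>
    void range_init(Iter first, Iter last) { /* copy [first, last) */ }
};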
I am implementing a compiler for a proprietary language.
The language has one built-in integer type, with unlimited range. Sometimes variables are represented using smaller types, for example if a and b are integer variables but b is only ever assigned the value of the expression a % 100000 or a & 0xFFFFFF, then b can be represented as an Int32 instead.
I am considering implementing the following optimization. Suppose it sees the equivalent of this C# method:
public static void Main(string[] args)
{
    BigInt i = 0;
    while (true)
    {
        DoStuff(i++);
    }
}
Mathematically speaking, transforming it into the following is not valid:
public static void Main(string[] args)
{
    Int64 i = 0;
    while (true)
    {
        DoStuff(i++);
    }
}
Because I have replaced a BigInt with an Int64, which will eventually overflow if the loop runs forever. However, I suspect I can ignore this possibility because:
i is initialized to 0 and is modified only by repeatedly adding 1 to it, which means it will take 2^63 iterations of the loop to make it overflow
If DoStuff does any useful work, it will take centuries (extrapolated from my very crude tests) for i to overflow. The machine the program runs on will not last that long. Not only that but its architecture probably won't last that long either, so I also don't need to worry about it running on a VM that is migrated to new hardware.
If DoStuff does not do any useful work, an operator will eventually notice that it is wasting CPU cycles and kill the process
So what scenarios do I need to worry about?
Do any compilers already use this hack?
Well... it seems to me you already answered your own question.
But I doubt the question really has a useful outcome.
If the only built-in integer type has unlimited range by default, it should not be inefficient for typical usage such as a loop counter.
I think expanding the value range (and allocating more memory for the variable) only after an actual overflow occurs wouldn't be that hard to implement for such a language.
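For illustration, a minimal C# sketch of that "promote on overflow" idea (the class and method names are mine; a real implementation would promote the internal representation rather than wrap each operation):

using System;
using System.Numerics;

static class PromoteOnOverflow
{
    // fast path: plain Int64 arithmetic in a checked context;
    // slow path: fall back to arbitrary precision only on actual overflow
    static BigInteger Add(long a, long b)
    {
        try
        {
            checked { return a + b; }
        }
        catch (OverflowException)
        {
            return (BigInteger)a + b;
        }
    }

    static void Main()
    {
        Console.WriteLine(Add(1, 2));               // 3
        Console.WriteLine(Add(long.MaxValue, 1));   // 9223372036854775808
    }
}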
Cyclomatic Complexity provides a rough metric for how hard a given function is to understand, or how much potential it has for containing bugs. In the implementations I've read about, usually each of the basic control flow constructs (if, case, while, for, etc.) increases the complexity of a function by 1. Given that cyclomatic complexity is intended to measure "the number of linearly independent paths through a program's source code", it appears to me that virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity over which implementation will be called at runtime (the call creates another branch in the path of execution).
However, penalizing the function the same amount that one would if it contained an equivalent switch statement (one point for every 'case' keyword, with one case keyword for every class in the hierarchy implementing the virtual function in question) feels overly harsh, because a virtual function call is generally regarded as much better programming practice.
What should the cost in cyclomatic complexity of a virtual function call be? I'm not sure if my reasoning is an argument against the utility of cyclomatic complexity as a metric or one against the use of virtual functions or something different.
Edit: After people's responses I realized that it shouldn't add to cyclomatic complexity, because we could consider the virtual function call equivalent to a call to a global function that contains the massive switch statement. Even though that function would get a bad score, it exists only once in the program, whereas replacing each virtual function call directly with the switch statement would incur that cost many times over (see the sketch below).
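To make that equivalence concrete, here is a hypothetical Java sketch (all class and method names are made up): the dispatch switch exists exactly once, while every call site stays a single straight-line statement:

interface Animal { void speak(); }
class Dog implements Animal { public void speak() { System.out.println("woof"); } }
class Cat implements Animal { public void speak() { System.out.println("meow"); } }

class Dispatch {
    // what the runtime effectively does for a virtual call, written out
    // as the "one global switch": one branch per implementing class
    static void dispatchSpeak(Animal a) {
        if (a instanceof Dog) { ((Dog) a).speak(); }
        else if (a instanceof Cat) { ((Cat) a).speak(); }
    }

    public static void main(String[] args) {
        Animal a = new Cat();
        a.speak();          // call site: one path, no extra complexity
        dispatchSpeak(a);   // the equivalent switch lives here, once
    }
}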
Cyclomatic complexity usually does not apply across function call boundaries, but is an intra-function metric. Hence, virtual calls do not count any more than non-virtual, static function calls.
A virtual function call does not increase the cyclomatic complexity, because the "ambiguity over which implementation will be called" is external to the function making the call. Once the object's value is set, there is no ambiguity. We know exactly which methods will be called.
BaseClass baseObj = null;

// this part has multiple paths & adds to CC
if (x == y)
    baseObj = new Derived1();
else
    baseObj = new Derived2();

// this part has one path and does not add to the CC
baseObj.virtualMethod1();
baseObj.virtualMethod2();
baseObj.virtualMethod3();
"virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity of which implementation will be called at runtime"
Ah, but it isn't ambiguous at runtime (unless you're doing metaprogramming / monkey patching); it's completely determined by the type/class of the receiver.
I'm not a big fan of cyclomatic complexity, but in this case you're calling a function. It will do approximately the same thing (unless the class hierarchy design is really screwed up), with some variations depending on what it's called on. Thing is, if you call any function, you can get some varied behavior depending on the arguments you pass in, and this isn't counted in CC.
Therefore, I'd completely ignore that cost.