Are there more complex functions to call from within Solidity (Smart Contracts/Ethereum)?

I am wondering whether, besides the basic mathematical expressions, there are any other functions available to call inside a smart contract, such as math functions like pi, sin, cosine, random(), etc.
I am wondering if one can write smart contracts that require a little more than just basic arithmetic.
The expressions I'm referring to are taken from this page:
https://docs.soliditylang.org/en/develop/cheatsheet.html#function-visibility-specifiers

Solidity doesn't natively support storing floating point numbers, either in storage or in memory, probably because the EVM (Ethereum Virtual Machine, the underlying layer) doesn't support them.
It allows working with them to some extent at compile time, such as uint two = 3 / 1.5;.
So most floating point operations are usually done by defining a uint256 (256-bit unsigned integer) number and another number defining the number of decimal places.
For example, token contracts generally use 18 decimal places:
uint8 decimals = 18;
uint256 one = 1000000000000000000;
uint256 half = 500000000000000000;
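Since there are no fractional types, multiplying or dividing two such scaled numbers means dividing or multiplying the scaling factor back out afterwards. A minimal sketch, assuming 18 decimals (the library and function names are illustrative, not part of Solidity or any standard library):
pragma solidity ^0.8.0;

// Minimal fixed-point helpers for numbers scaled by 1e18.
library FixedPoint18 {
    uint256 internal constant ONE = 1e18;

    // (a / 1e18) * (b / 1e18), rescaled back to 18 decimals
    function mul(uint256 a, uint256 b) internal pure returns (uint256) {
        return (a * b) / ONE;
    }

    // (a / 1e18) / (b / 1e18), rescaled back to 18 decimals
    function div(uint256 a, uint256 b) internal pure returns (uint256) {
        return (a * ONE) / b;
    }
}
With the values above, FixedPoint18.mul(one, half) returns 500000000000000000, i.e. 0.5.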
There are some third-party libraries for calculating trigonometric functions (link), working with date/time (link), and other use cases, but the native language currently doesn't support many of these features.
As for generating random numbers: there is no native function, but you can calculate a modulo of some pseudo-random variables such as blockhash and block.timestamp. Mind that these values can be manipulated (to some extent) by the miner publishing the currently mined block.
It's not recommended to use them in apps that work with money (which is most smart contracts), because if the incentive is big enough, a dishonest miner can take advantage of knowing the values before the rest of the network, and of being able to modify them to some extent, for their own benefit.
Example:
// a dishonest miner can publish a block with parameters that make the
// condition below true, and place their own transaction first in that block
function win10ETH() external {
    // blockhash() of the current block is unavailable (it returns 0),
    // so use the previous block's hash as the pseudo-random source
    if (uint256(blockhash(block.number - 1)) % 12345 == 0) {
        payable(msg.sender).transfer(10 ether);
    }
}
If you need a random number that cannot be determined by a miner, you can use the oracle approach: an external app (called an oracle) listens for transactions in a predefined format (generally also from/to specific addresses), performs an off-chain action (such as generating a random number, retrieving a Google search result, or basically anything), and afterwards sends another transaction to your contract containing the result of the off-chain action.
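A minimal sketch of that request/callback flow (the contract and the oracle address are illustrative; real oracle networks such as Chainlink add payment, request tracking, and cryptographic guarantees on top of this):
pragma solidity ^0.8.0;

// The contract emits a request event, the off-chain oracle service listens
// for it, and later calls back with the result in a second transaction.
contract RandomnessConsumer {
    address public oracle; // address the off-chain service sends its callback from
    mapping(uint256 => uint256) public results;

    event RandomnessRequested(uint256 indexed requestId);

    constructor(address _oracle) {
        oracle = _oracle;
    }

    // 1. Ask for a random number; the oracle watches for this event.
    function requestRandomness(uint256 requestId) external {
        emit RandomnessRequested(requestId);
    }

    // 2. The oracle generates the number off-chain and reports it back;
    //    only the known oracle address may respond.
    function fulfillRandomness(uint256 requestId, uint256 value) external {
        require(msg.sender == oracle, "only the oracle may respond");
        results[requestId] = value;
    }
}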

Related

Do Chainlink fees increase with the number of random words requested?

I am writing a smart contract that requires a number of random words to be generated via the Chainlink VRF.
Now, according to the Chainlink VRFV2Consumer contract, you can request the number of random words like so:
uint32 numWords = 2;
My question is, will it cost more if you increase the number of random words requested, or will the fees remain the same? I haven't seen any explanation of this yet, so I think it would be helpful for developers to know how this impacts the fees.
As mentioned in their documentation: yes, it depends on the number of random values you request.
Read the document linked below carefully.
https://docs.chain.link/docs/chainlink-vrf/
The cost of each request is made up of several components:
Gas price
Callback gas
Verification gas
These depend on current network conditions, your specified limit on callback gas, and the number of random values in your request. The cost of each request is final only after the transaction is complete, but you define the limits you are willing to spend for the request (for example, the callback gas limit).
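For reference, here is a stripped-down sketch of where numWords enters a VRF v2 request (simplified from the general consumer pattern; import paths, parameter values, and names may differ between Chainlink versions):
pragma solidity ^0.8.7;

import "@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol";
import "@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol";

contract RandomWordsConsumer is VRFConsumerBaseV2 {
    VRFCoordinatorV2Interface immutable coordinator;
    bytes32 immutable keyHash;
    uint64 immutable subscriptionId;

    uint32 callbackGasLimit = 100000; // upper bound you are willing to pay for the callback
    uint16 requestConfirmations = 3;
    uint32 numWords = 2;              // more words -> more work (and gas) in the callback

    uint256[] public lastWords;

    constructor(address _coordinator, bytes32 _keyHash, uint64 _subId)
        VRFConsumerBaseV2(_coordinator)
    {
        coordinator = VRFCoordinatorV2Interface(_coordinator);
        keyHash = _keyHash;
        subscriptionId = _subId;
    }

    function requestWords() external returns (uint256 requestId) {
        requestId = coordinator.requestRandomWords(
            keyHash, subscriptionId, requestConfirmations, callbackGasLimit, numWords
        );
    }

    // Each extra random word is one more 32-byte value delivered and stored here,
    // so the callback gas (and therefore the fee) grows with numWords.
    function fulfillRandomWords(uint256, uint256[] memory randomWords) internal override {
        lastWords = randomWords;
    }
}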

How to receive an int array using a Chainlink oracle?

I'm trying to get data from an off-chain API to use in my smart contract. For this, I'm using a Chainlink oracle.
I've seen jobs that get a single uint256, bool, or bytes32 variable. But what if you want to receive an array?
I want to receive something like [1, 2, 3, 4] as a uint[] to be able to loop over and use the individual values.
What's the best way to do this?
I already used the Get->Bytes32 and Get->Bytes methods, but then I need to parse those bytes inside the EVM, and I don't think that's a good idea.
At the moment, arrays are not supported as a response type; here is the list of currently supported response types.
You can:
Read uint256 value by uint256 value, assuming it's a fixed-size array, like this
Make a Multi-Variable Responses request
Make a Large Responses request, which you mentioned, and handle those bytes32 values yourself (see the sketch below). Keep in mind that, currently, any return value must fit within 32 bytes. If the value is bigger than that, make multiple requests.
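If you do end up with a bytes payload that is just 32-byte words packed back to back, splitting it on-chain is short. A sketch of a helper you could drop into your consumer contract (this assumes that packing; your oracle job may encode the data differently):
// Splits a bytes payload made of packed 32-byte words into a uint256 array.
function toUintArray(bytes memory data) internal pure returns (uint256[] memory values) {
    require(data.length % 32 == 0, "length must be a multiple of 32");
    values = new uint256[](data.length / 32);
    for (uint256 i = 0; i < values.length; i++) {
        uint256 word;
        assembly {
            // skip the 32-byte length prefix, then read word i
            word := mload(add(add(data, 32), mul(i, 32)))
        }
        values[i] = word;
    }
}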

Why is Gas Used By Txn different when invoking the same function in the same smart contract?

I have seen some transactions on etherscan.io, but I have found that even when invoking the same function in the same smart contract, the gas used by the transactions differs. I suspect the input data might cause it. Is that right?
The input data might be different, but the state stored in the smart contract might also be different (and change, e.g., the number of times a loop iterates). Also, storing nonzero data in a state variable that previously held zero data, or vice versa, will change the gas usage. For example, a simple function that toggles a boolean variable will not use the same amount of gas on any two consecutive calls.
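A tiny sketch of that last point, if you want to see it for yourself (deploy it and compare the gas of two consecutive calls):
pragma solidity ^0.8.0;

contract Toggle {
    bool public flag;

    // Writing a nonzero value into a storage slot that held zero costs more gas
    // than zeroing a slot (which even earns a partial refund), so two consecutive
    // calls to this function report different "Gas Used By Txn".
    function toggle() external {
        flag = !flag;
    }
}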
Check out https://ethereum.stackexchange.com/ for future questions like this!
Each time you invoke a function in a contract that requires a state change in the block, it costs some amount of gas, so every time you call a different (or the same) function that requires a state change, you will see that amount of gas being deducted along with its transaction ID. This is the reason you see different gas used by txn for the same function.
More about gas and transactions in the link below: http://solidity.readthedocs.io/en/develop/introduction-to-smart-contracts.html

How do interpreters load their values?

I mean, interpreters work on a list of instructions, which seem to be composed more or less of sequences of bytes, usually stored as integers. Opcodes are retrieved from these integers by doing bit-wise operations, for use in a big switch statement where all the operations are located.
My specific question is: How do the object values get stored/retrieved?
For example, let's (non-realistically) assume:
Our instructions are unsigned 32 bit integers.
We've reserved the first 4 bits of the integer for opcodes.
If I wanted to store data in the same integer as my opcode, I'm limited to a 24 bit integer. If I wanted to store it in the next instruction, I'm limited to a 32 bit value.
Values like strings require much more storage than this. How do most interpreters get away with this in an efficient manner?
I'm going to start by assuming that you're interested primarily (if not exclusively) in a byte-code interpreter or something similar (since your question seems to assume that). An interpreter that works directly from source code (in raw or tokenized form) is a fair amount different.
For a typical byte-code interpreter, you basically design some idealized machine. Stack-based (or at least stack-oriented) designs are pretty common for this purpose, so let's assume that.
So, first let's consider the choice of 4 bits for op-codes. A lot here will depend on how many data formats we want to support, and whether we're including that in the 4 bits for the op code. Just for the sake of argument, let's assume that the basic data types supported by the virtual machine proper are 8-bit and 64-bit integers (which can also be used for addressing), and 32-bit and 64-bit floating point.
For integers we pretty much need to support at least: add, subtract, multiply, divide, and, or, xor, not, negate, compare, test, left/right shift/rotate (right shifts in both logical and arithmetic varieties), load, and store. Floating point will support the same arithmetic operations, but remove the logical/bitwise operations. We'll also need some branch/jump operations (unconditional jump, jump if zero, jump if not zero, etc.) For a stack machine, we probably also want at least a few stack oriented instructions (push, pop, dupe, possibly rotate, etc.)
That gives us a two-bit field for the data type, and at least 5 (quite possibly 6) bits for the op-code field. Instead of conditional jumps being special instructions, we might want to have just one jump instruction, and a few bits to specify conditional execution that can be applied to any instruction. We also pretty much need to specify at least a few addressing modes:
small immediate (optional; N bits of data in the instruction itself)
large immediate (data in the 64-bit word following the instruction)
implied (operand(s) on top of stack)
absolute (address specified in the 64 bits following the instruction)
relative (offset specified in or following the instruction)
I've done my best to keep everything about as minimal as is at all reasonable here -- you might well want more to improve efficiency.
Anyway, in a model like this, an object's value is just some locations in memory. Likewise, a string is just some sequence of 8-bit integers in memory. Nearly all manipulation of objects/strings is done via the stack. For example, let's assume you had some classes A and B defined like:
class A {
    int x;
    int y;
};

class B {
    int a;
    int b;
};
...and some code like:
A a {1, 2};
B b {3, 4};
a.x += b.a;
The initialization would mean values in the executable file loaded into the memory locations assigned to a and b. The addition could then produce code something like this:
push immediate a.x // put &a.x on top of stack
dupe // copy address to next lower stack position
load // load value from a.x
push immediate b.a // put &b.a on top of stack
load // load value from b.a
add // add two values
store // store back to a.x using address placed on stack with `dupe`
Assuming one byte for each instruction proper, we end up around 23 bytes for the sequence as a whole, 16 bytes of which are addresses. If we use 32-bit addressing instead of 64-bit, we can reduce that by 8 bytes (i.e., a total of 15 bytes).
The most obvious thing to keep in mind is that the virtual machine implemented by a typical byte-code interpreter (or similar) isn't all that different from a "real" machine implemented in hardware. You might add some instructions that are important to the model you're trying to implement (e.g., the JVM includes instructions to directly support its security model), or you might leave out a few if you only want to support languages that don't include them (e.g., I suppose you could leave out a few like xor if you really wanted to). You also need to decide what sort of virtual machine you're going to support. What I've portrayed above is stack-oriented, but you can certainly do a register-oriented machine if you prefer.
Either way, most of object access, string storage, etc., comes down to them being locations in memory. The machine will retrieve data from those locations into the stack/registers, manipulate as appropriate, and store back to the locations of the destination object(s).
Bytecode interpreters that I'm familiar with do this using constant tables. When the compiler is generating bytecode for a chunk of source, it is also generating a little constant table that rides along with that bytecode. (For example, if the bytecode gets stuffed into some kind of "function" object, the constant table will go in there too.)
Any time the compiler encounters a literal like a string or a number, it creates an actual runtime object for the value that the interpreter can work with. It adds that to the constant table and gets the index where the value was added. Then it emits something like a LOAD_CONSTANT instruction that has an argument whose value is the index in the constant table.
Here's an example:
static void string(Compiler* compiler, int allowAssignment)
{
    // Define a constant for the literal.
    int constant = addConstant(compiler, wrenNewString(compiler->parser->vm,
        compiler->parser->currentString, compiler->parser->currentStringLength));

    // Compile the code to load the constant.
    emit(compiler, CODE_CONSTANT);
    emit(compiler, constant);
}
At runtime, to implement a LOAD_CONSTANT instruction, you just decode the argument, and pull the object out of the constant table.
Here's an example:
CASE_CODE(CONSTANT):
    PUSH(frame->fn->constants[READ_ARG()]);
    DISPATCH();
For things like small numbers and frequently used values like true and null, you may devote dedicated instructions to them, but that's just an optimization.

What is (or should be) the cyclomatic complexity of a virtual function call?

Cyclomatic Complexity provides a rough metric for how hard to understand a given function is, or how much potential for containing bugs it has. In the implementations I've read about, usually all of the basic control flow constructs (if, case, while, for, etc.) increase the complexity of a function by 1. It appears to me given that cyclomatic complexity is intended to determine "the number of linearly independent paths through a program's source code" that virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity of which implementation will be called at runtime (the call creates another branch in the path of execution).
However, penalizing the function the same amount that one would if it contained an equivalent switch statement (one point for every 'case' keyword, with one case keyword for every class in the hierarchy implementing the virtual function in question) feels overly harsh, because a virtual function call is generally regarded as much better programming practice.
What should the cost in cyclomatic complexity of a virtual function call be? I'm not sure if my reasoning is an argument against the utility of cyclomatic complexity as a metric or one against the use of virtual functions or something different.
Edit: After people's responses I realized that it shouldn't add to cyclomatic complexity, because we could consider the virtual function call equivalent to a call to a global function that contains the massive switch statement. Even though that function will get a bad score, it only exists once in the program, whereas replacing each virtual function call directly with a switch statement would incur that cost many times.
Cyclomatic complexity usually does not apply across function call boundaries, but is an intra-function metric. Hence, virtual calls do not count any more than non-virtual, static function calls.
A virtual function call does not increase the cyclomatic complexity, because the "ambiguity of which implementation will be called" is external to the function call. Once the object's value is set, there is no ambiguity. We know exactly which methods will be called.
BaseClass baseObj = null;

// this part has multiple paths & adds to CC
if (x == y)
    baseObj = new Derived1();
else
    baseObj = new Derived2();

// this part has one path and does not add to the CC
baseObj.virtualMethod1();
baseObj.virtualMethod2();
baseObj.virtualMethod3();
"virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity of which implementation will be called at runtime"
Ah, but it isn't ambiguous at runtime (unless you're doing metaprogramming / monkey patching); it's completely determined by the type/class of the receiver.
I'm not a big fan of cyclomatic complexity, but in this case you're calling a function. It will do approximately the same thing (unless the class hierarchy design is really screwed up), with some variations depending on what calls it. Thing is, if you call any function, you can get some varied behavior depending on the arguments you pass in, and this isn't counted in CC.
Therefore, I'd completely ignore that cost.