How is the Ethereum contract function call data below constructed?

0x5537f99e000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000072268656c6c6f2200000000000000000000000000000000000000000000000000
0x5537f99e is the function selector, which corresponds to 'setstring'.
2268656c6c6f22 is the argument to the function, which is the string "hello" with the quote characters included.
Please explain how this raw data sent to an Ethereum contract is constructed. I'm confused by those offsets.

You can find the reference here https://solidity.readthedocs.io/en/develop/abi-spec.html
If your function is
function setstring(string string_value) {
}
First 4 bytes: 0x5537f99e
The first 4 bytes of the data are the first 4 bytes of the Keccak-256 hash of the ASCII form of the signature setstring(string).
Next 32 bytes: 0x0000000000000000000000000000000000000000000000000000000000000020
This is the offset of the data part of your string_value, measured in bytes from the start of the arguments block: 0x20 = 32, so the data begins in the very next 32-byte slot.
Next 32 bytes:
0000000000000000000000000000000000000000000000000000000000000007
This is the length of your string: 7 bytes, because the value is "hello" with the surrounding quote characters included (0x22 is the double quote).
Next 32 bytes:
2268656c6c6f2200000000000000000000000000000000000000000000000000
The contents of the string encoded in UTF-8, right-padded with zeros to a full 32-byte slot.
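To tie it together, here is a minimal Python sketch that rebuilds this exact calldata by hand (it assumes the pycryptodome package for the Keccak-256 hash; a library such as web3.py would normally do this encoding for you):

from Crypto.Hash import keccak

def encode_setstring(value: bytes) -> str:
    # 1. Selector: first 4 bytes of keccak256 of the signature "setstring(string)"
    selector = keccak.new(data=b"setstring(string)", digest_bits=256).digest()[:4]
    # 2. Head: byte offset from the start of the argument block to the string data (0x20 = 32)
    offset = (32).to_bytes(32, "big")
    # 3. Tail: string length, then the bytes right-padded to a multiple of 32
    length = len(value).to_bytes(32, "big")
    padded = value + b"\x00" * (-len(value) % 32)
    return "0x" + (selector + offset + length + padded).hex()

# The argument in the question is the 7-byte value '"hello"' (quotes included)
print(encode_setstring(b'"hello"'))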

Related

How to decode an ETH contract output hex as string?

When I make an eth_call to the Usdt smart contract on Eth MainNet, I get a 96-byte hex output.
0000000000000000000000000000000000000000000000000000000000000020 // What is this?
000000000000000000000000000000000000000000000000000000000000000a // Size of the output
5465746865722055534400000000000000000000000000000000000000000000 // Output ("Tether USD")
I understand that the 3rd 32 bytes contain the actual string output with right padding, and the 2nd 32 bytes contain the output size in bytes with left padding. What do the 1st 32 bytes contain?
RPC call:
{"jsonrpc":"2.0","method":"eth_call","params":[{"To":"0xdAC17F958D2ee523a2206206994597C13D831ec7","Data":"0x06fdde03"},"latest"],"id":1}
The first 32-byte slot is an offset that points to the length slot, which is immediately followed by the slot(s) containing the actual param value.
The offset is useful in cases when a function returns multiple dynamic-length arrays (a string is represented as a dynamic-length byte array), like in this example:
pragma solidity ^0.8;

contract MyContract {
    function foo() external pure returns (string memory, string memory) {
        return ("Tether USD", "Ethereum");
    }
}
Returned data:
# offset pointing to the length of the 1st param
0x0000000000000000000000000000000000000000000000000000000000000040
# offset pointing to the length of the 2nd param
0x0000000000000000000000000000000000000000000000000000000000000080
# 1st param length
0x000000000000000000000000000000000000000000000000000000000000000a
# followed by 1st param value
0x5465746865722055534400000000000000000000000000000000000000000000
# 2nd param length
0x0000000000000000000000000000000000000000000000000000000000000008
# followed by 2nd param value
0x457468657265756d000000000000000000000000000000000000000000000000
If you had a fixed-length param between those two, the returned data structure would look like this:
offset to the length of the 1st param
the (fixed-length) 2nd param's actual value
offset to the length of the 3rd param
the rest is the same as above
Docs: https://docs.soliditylang.org/en/latest/abi-spec.html#use-of-dynamic-types
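As a concrete illustration, here is a minimal Python sketch that decodes the 96-byte name() return data from the question by hand (a library such as web3.py would normally do this for you):

ret = bytes.fromhex(
    "0000000000000000000000000000000000000000000000000000000000000020"
    "000000000000000000000000000000000000000000000000000000000000000a"
    "5465746865722055534400000000000000000000000000000000000000000000"
)

offset = int.from_bytes(ret[0:32], "big")                # 0x20: where the string data starts
length = int.from_bytes(ret[offset:offset + 32], "big")  # 0x0a: 10 bytes
value = ret[offset + 32:offset + 32 + length].decode("utf-8")
print(value)  # Tether USD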

Solidity and Web3 sha3() methods return something else

In my contract, I have a function that returns the sha3 hash of a certain set of values. While running some tests I found that the value returned from this function differs from the hash value generated by web3.utils.sha3() (with identical arguments).
Here is the code:
Solidity
function hashInfo() public onlyOwner view returns (bytes32) {
    bytes32 hash = sha3(
        '0x969A70A4fa9F69D2D655E4B743abb9cA297E5328',
        '0x496AAFA2960f3Ff530716B5334c9aFf4612e3c27',
        'jdiojd',
        'oidjoidj',
        'idjodj',
        12345
    );
    return hash;
}
JS (web3)
async function testHash(instance) {
  const contractHash = await instance.methods.hashInfo().call({ from: '0x969A70A4fa9F69D2D655E4B743abb9cA297E5328' });
  const localHash = web3.utils.sha3(
    '0x969A70A4fa9F69D2D655E4B743abb9cA297E5328',
    '0x496AAFA2960f3Ff530716B5334c9aFf4612e3c27',
    'jdiojd',
    'oidjoidj',
    'idjodj',
    12345
  );
  console.log(contractHash);
  console.log(localHash);
  console.log('local == contract: ' + (contractHash == localHash));
}
The resulting console output is:
0xe65757c5a99964b72d217493c192c073b9a580ec4b477f40a6c1f4bc537be076
0x3c23cebfe35b4da6f6592d38876bdb93f548085baf9000d538a1beb31558fc6d
local == contract: false
Any ideas? Does this have something to do with passing multiple arguments to the functions? I have also tried to convert everything to a string and concatenate them into one single string, but also without success.
Thanks in advance!
UPDATE
I found out there is also a web3 method called web3.utils.soliditySha3(). This too did not work and gave the following result:
0xe65757c5a99964b72d217493c192c073b9a580ec4b477f40a6c1f4bc537be076
0x0cf65f7c81dab0a5d414539b0e2f3807526fd9c15e197eaa6c7706d27aa7a0f8
local == contract: false
I'm happy I came after your update, as I was just going to suggest soliditySha3. Now that you've got the right function, your problem is most likely with how Solidity packs its parameters.
As you can see here, sha3 is an alias for keccak256, which tightly packs its arguments. Following the link on that page takes you here, which fully explains how it's handled. Basically, take the inputs to soliditySha3 and pack them as if they were the sizes of the variables you used. So if you hashed two uint32s (32 bits each, 64 total), you need to take the two JavaScript numbers and pack them into a single 64-bit value.
For cases where more than 64 bits are needed, I believe you can pass sequential ints (sets of 64 bits) to soliditySha3, or you could use a BigInt. Personally, I usually try to only hash 256-bit variables together to avoid having to manually pack my bits on the JS end, but we all know that space constraints are huge in Solidity. I hope this helps; let me know if you have further questions.
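To make the tight packing concrete, here is a minimal Python sketch using the two-uint32 example above (pycryptodome provides the Keccak-256 hash; this is not meant to reproduce the exact hash from the question, since that depends on the types Solidity infers for each literal):

from Crypto.Hash import keccak

def keccak256(data: bytes) -> bytes:
    return keccak.new(data=data, digest_bits=256).digest()

a, b = 1, 2  # two uint32 values

# keccak256(abi.encodePacked(uint32(a), uint32(b))): each value is packed
# into exactly 4 bytes and the pieces are concatenated before hashing.
packed = a.to_bytes(4, "big") + b.to_bytes(4, "big")
print(keccak256(packed).hex())

# Hashing the same numbers as full 32-byte words (the non-packed encoding)
# hashes different bytes and therefore gives a different digest, which is
# the kind of mismatch the question is seeing.
print(keccak256(a.to_bytes(32, "big") + b.to_bytes(32, "big")).hex())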

Denary to binary conversion program

How does this denary to binary program work? I am finding it hard to comprehend what is happening behind the code.
Can someone explain what happens from line 6 onwards?
Number = int(input("Hello. \n\nPlease enter a number to convert: "))
if Number < 0:
    print("Can't be less than 0")
else:
    Remainder = 0
    String = ""
    while Number > 0:
        Remainder = Number % 2
        Number = Number // 2
        String = str(Remainder) + String
    print(String)
The idea is to separate out the last part of the binary number, stick it in a buffer, and then remove it from "Number". The method is general and can be used for other bases as well.
Start by looking at it as a dec -> dec "conversion" to understand the principle.
Let's say you have the number 174 (base 10). If you want to parse out each individual piece (read: "digit") of it, you can calculate the number modulo the base (10), then do an integer division to "remove" that digit from the number. I.e. 174 % 10 and 174 // 10 => (Number) 17 | 4 (Remainder). On the next iteration you have 17 from the division, and the same procedure splits it into 1 | 7. On the iteration after that you get 0 | 1, and then "Number" is 0, which is the exit condition for the loop (while Number > 0).
In each iteration of the loop you take the remainder (which will be a single digit for the specific base you use (it's a basic property of how bases work)), convert it to a string and concatenate it with the string you had from previous iterations (note the order in the code!), and you'll get the converted number once you've divided your way down to zero.
As mentioned before, this works for any base; you can use base 16 to convert to hex (though you'll need to do some translations for digits above 9), octal (base 8), etc.
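To show the "any base" point concretely, here is a minimal Python sketch generalizing the same loop; the digit lookup string is only needed for bases above 10, as mentioned for hex (the function name is illustrative):

DIGITS = "0123456789abcdef"

def to_base(number: int, base: int) -> str:
    if number == 0:
        return "0"
    result = ""
    while number > 0:
        remainder = number % base      # last digit in the target base
        number = number // base        # drop that digit from the number
        result = DIGITS[remainder] + result
    return result

print(to_base(174, 2))   # 10101110
print(to_base(174, 16))  # ae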
Python code for converting denary into binary
denary = int(input('Denary: '))
binary = [0, 0, 0, 0]
while denary > 0:
    for n, i in enumerate(binary):
        if denary // (2**(3-n)) >= 1:
            binary[n] = 1
            denary -= 2**(3-n)
    print(denary)
print(binary)

What do the functions lowByte() and highByte() do?

I've made this small experimental program in Arduino to see how the functions lowByte() and highByte() work. What exactly are they supposed to return when passed a value?
On entering the character '9' in the serial monitor it prints the following:
9
0
218
255
How does that come? Also, the last 2 lines are being printed for all values inputted. Why is this happening?
int i = 12;

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  if (Serial.available())
  {
    i = Serial.read() - '0';   // conversion of character to number, e.g. '9' becomes 9
    Serial.print(lowByte(i));  // send the low byte
    Serial.print(highByte(i)); // send the high byte
  }
}
If you have this data:
10101011 11001101 // original
// highByte() gets:
10101011
// lowByte() gets:
11001101
An int is a 16-bit integer on Arduino. So you are reading the high and low part as a byte.
As the actual serial buffer is "9\n", the newline is read on the next pass through loop(); subtracting '0' from it gives a negative number, which is why the last two 'funny' values (218 and 255) are printed for every input.
Serial.print needs to be formatted to a byte output if that's what you want to see.
Try:
Serial.print(lowByte(i), BYTE)
(On Arduino 1.0 and later the BYTE formatter was removed; Serial.write(lowByte(i)) produces the same raw byte output.)
In addition to Rafalenfs' answer: should you provide a larger data type:
00000100 10101011 11001101 // original
// HighByte() will NOT return: 00000100, but will return:
10101011
// LowByte() will still return:
11001101
highByte() returns the second lowest byte (as specified by the documentation: https://www.arduino.cc/reference/en/language/functions/bits-and-bytes/highbyte/)
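For completeness, a minimal Python sketch of the same extraction, showing where the question's 9, 0, 218, 255 output comes from (plain Python bit operations; the function names are illustrative):

def low_byte(x):
    return x & 0xFF

def high_byte(x):
    return (x >> 8) & 0xFF

i = ord('9') - ord('0')                 # 9
print(low_byte(i), high_byte(i))        # 9 0

i = (ord('\n') - ord('0')) & 0xFFFF     # newline: 10 - 48 = -38, i.e. 0xFFDA as a 16-bit int
print(low_byte(i), high_byte(i))        # 218 255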

IndexDWord Always Returning False for a DWORD pattern I know is in the buffer (FreePascal)

Further to Marco's suggestion in this thread, I have a situation where I have a buffer of raw byte data, 4096 bytes, read in a loop from a file.
I then want to search that buffer for a known 4-byte hex pattern. I have the following bits of code that compile OK, and I know there are bytes of data in the buffer, including the pattern I seek, but IndexDWord always returns -1 ("not found") and I can't see why. The odd "read then read again" behaviour is because I need two separate buffers: one containing raw byte data and one containing char data.
var
  Buffer  : array [1..4096] of char;
  Buffer2 : array [1..4096] of byte;
const
  BinaryMarker : DWord = ($54435054); // (instead of Hex, Hex, Hex, etc. It now compiles)
begin
  // Code relating to opening the file etc....then
  BytesRead := SourceFile.Read(Buffer, SizeOf(Buffer));         // Read a buffer as an array of chars
  SourceFile.Position := SourceFile.Position - BytesRead;       // Go back to where you just were
  BinaryBytesRead := SourceFile.Read(Buffer2, SizeOf(Buffer2)); // And read it again as a byte buffer
  J := IndexDWord(Buffer2, SizeOf(Buffer2), BinaryMarker);
  if J > 0 then // this is never true, even though I know the byte hex pattern exists in the buffer
    ShowMessage('Found at offset ' + IntToStr(J));
end;
Help! I am using FPC 2.7.1 and Lazarus 1.1. The page for IndexDWord is here