Can't swap USDC for USDT with UniswapV2Router02 - ethereum

I am trying to swap some USDC (0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48) tokens for USDT (0xdAC17F958D2ee523a2206206994597C13D831ec7) tokens using UniswapV2Router02 at address 0xd9e1cE17f2641f24aE83637ab66a2cca9C378B9F (this is Sushiswap's router on Ethereum mainnet).
Since both USDC and USDT have 6 decimals of precision, as amountIn to the swapExactTokensForTokens method I am passing 3000 * 10 ** 6 (which should equal $3000). As amountOutMin I am passing 2850 * 10 ** 6 (which should equal $2850, i.e. -5% from amountIn; in my opinion that is plenty of slippage tolerance).
Everything looks right? Yes, but no! Every time I try to run this code:
UniswapV2Router(0xd9e1cE17f2641f24aE83637ab66a2cca9C378B9F) // Sushiswap router on ETH mainnet
    .swapExactTokensForTokens(
        3000 * 10 ** 6, // 3000 USDC
        2850 * 10 ** 6, // 2850 USDT minimum out
        [
            0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48, // USDC address on ETH mainnet
            0xdAC17F958D2ee523a2206206994597C13D831ec7  // USDT address on ETH mainnet
        ],
        msg.sender,
        <some deadline>
    );
I am getting the following error:
Revert message: UniswapV2Router: INSUFFICIENT_OUTPUT_AMOUNT
Please help, what am I missing?
EDIT: I am using Truffle and I have forked ETH mainnet.
EDIT 2: The amount UniswapV2Router is trying to return is 1327704386, i.e. ~1327 USDT, which is ~56% slippage, even though the liquidity pair supposedly holds assets worth $2,036,392,078,752.44.

It turned out that Sushiswap's USDC/USDT pool has only about $5k in liquidity, and everything in my code is right.
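That explains the revert: the router computes the output from the pair's actual reserves with the constant-product formula (including the 0.3% fee) and reverts with INSUFFICIENT_OUTPUT_AMOUNT when the result is below amountOutMin. Here is a minimal sketch of Uniswap V2's getAmountOut arithmetic in C++; the 2,500/2,500 reserve split is an assumption (only the ~$5k total is known), and unsigned __int128 is a GCC/Clang extension, used because the intermediate products overflow 64 bits:

#include <cstdint>
#include <cstdio>

// Uniswap V2 getAmountOut:
//   amountOut = (amountIn * 997 * reserveOut) / (reserveIn * 1000 + amountIn * 997)
static uint64_t get_amount_out(uint64_t amount_in, uint64_t reserve_in, uint64_t reserve_out) {
    unsigned __int128 in_with_fee = (unsigned __int128)amount_in * 997;
    unsigned __int128 numerator   = in_with_fee * reserve_out;
    unsigned __int128 denominator = (unsigned __int128)reserve_in * 1000 + in_with_fee;
    return (uint64_t)(numerator / denominator);
}

int main() {
    // Assumed reserves: ~2500 USDC and ~2500 USDT (6 decimals each).
    uint64_t reserve_usdc = 2500ULL * 1000000;
    uint64_t reserve_usdt = 2500ULL * 1000000;
    uint64_t amount_in    = 3000ULL * 1000000; // 3000 USDC in

    uint64_t out = get_amount_out(amount_in, reserve_usdc, reserve_usdt);
    printf("amountOut = %llu (~%llu USDT)\n",
           (unsigned long long)out, (unsigned long long)(out / 1000000));
    // Prints ~1361 USDT -- the same order of magnitude as the observed
    // 1327704386, and far below the 2850 USDT minimum, hence the revert.
    return 0;
}

Swapping $3000 into a pool with ~$2500 per side necessarily moves the price by more than half, no matter how tight the slippage setting is.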

Related

Storing large arrays on the Ethereum blockchain

In the process of studying Ethereum smart contracts, the question arose of how to store large structures inside a contract (an array or a map). There is this contract:
pragma solidity ^0.8.6;

contract UselessWorker {
    uint32 public successfullyExecutedIterations = 0;

    mapping(uint32 => uint32[6]) items;

    event ItemAdded(uint32 result);

    function doWork(int _iterations) public {
        for (int i = 0; i < _iterations; i++) {
            items[successfullyExecutedIterations] = [uint32(block.timestamp), successfullyExecutedIterations, 10, 10, 10, 10];
            successfullyExecutedIterations++;
        }
        emit ItemAdded(successfullyExecutedIterations);
    }
}
The doWork method fills the items map with arrays of numbers; it is called by an external script. The more records appear in items, the faster disk space is consumed (for 1,000,000 records the blockchain size is about 350 MB, for 2,000,000 about 1.1 GB, for 19,000,000 about 22 GB. This is the size of the .ethereum/net/geth/chaindata/ folder).
I am testing it on a private network, so the price of gas does not bother me. I run it with the command
geth --mine --networkid 999 --datadir ~/.ethereum/net \
     --rpccorsdomain "*" --allow-insecure-unlock --miner.gastarget 900000000 \
     --rpc --ws --ws.port 8546 --ws.addr "127.0.0.1" --ws.origins "*" \
     --ws.api "web3,eth" --rpc.gascap 800000000
According to my estimates, one record in the map should take about 224 bytes (7 * 32 bytes), so 19M records should take about 4.2 GB.
It feels like a memory leak is taking place, or I don't understand how storage for the map is allocated.
Can anyone suggest why the blockchain is consuming so much disk space?
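Before suspecting a leak, it may help to divide the reported chaindata sizes by the record counts. A back-of-the-envelope sketch, using only the figures quoted above:

#include <cstdio>

// Compare the reported on-disk chaindata sizes against the raw
// 224-byte-per-record (7 * 32 bytes) estimate from the question.
int main() {
    const double records[]    = {1e6,   2e6,   19e6};
    const double disk_bytes[] = {350e6, 1.1e9, 22e9};
    const double raw_estimate = 224.0; // bytes per record, raw slots

    for (int i = 0; i < 3; i++) {
        double per_record = disk_bytes[i] / records[i];
        printf("%2.0fM records: %6.0f bytes/record on disk, %4.1fx the raw estimate\n",
               records[i] / 1e6, per_record, per_record / raw_estimate);
    }
    // Output: ~350, ~550 and ~1160 bytes/record -- the per-record cost
    // grows with the dataset, it is not a constant factor.
    return 0;
}

The cost per record grows with the dataset instead of staying near the raw 224-byte estimate, which is consistent with state-trie bookkeeping in geth (each storage write creates new trie nodes, and earlier versions of those nodes remain in chaindata) rather than with the 32-byte slots themselves.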

How to set a range for DHCP IP address leasing in WiFi softAP()?

I am trying to make a mesh network using an ESP32 module. WiFi.h's softAPConfig() can be used to set the starting address for leasing, but leasing progresses upwards without reusing already-leased addresses that are no longer in use, so I want to limit the leasing range between two addresses.
I found this piece of code in dhcpserver.h:
/* Defined in esp_misc.h */
typedef struct {
    bool enable;
    ip4_addr_t start_ip;
    ip4_addr_t end_ip;
} dhcps_lease_t;
This is the code I compiled and uploaded to the ESP32 module:
#include "WiFi.h"
char *ssid = "AirMesh";
IPAddress local_IP(192,168,1,0);
IPAddress gateway(192,168,1,1);
IPAddress subnet(255,255,255,0);
void setup()
{
Serial.begin(9600);
Serial.println();
Serial.print("Setting soft-AP configuration ... ");
Serial.println(WiFi.softAPConfig(local_IP, gateway, subnet) ? "Ready" : "Failed!");
Serial.print("Setting soft-AP ... ");
Serial.println(WiFi.softAP("ESPsoftAP_01") ? "Ready" : "Failed!");
Serial.print("Soft-AP IP address = ");
Serial.println(WiFi.softAPIP());
WiFi.softAP(ssid);
}
void loop() {}
The first device that connects gets IP 192.168.1.1 and the second gets 192.168.1.2; when I disconnect the first device and reconnect it, it gets 192.168.1.3 (every connection uses a different physical address). This progression keeps going.
EDIT:
After digging, I think I found the code responsible for the IP leasing range, but I couldn't figure out what it means.
lease.start_ip.addr = static_cast<uint32_t>(local_ip) + (1 << 24);
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (11 << 24);
After trial and error, I managed to find the answer.
Change the code in the WiFiAP.cpp file (I have forked the repo and opened a pull request replacing 11 with 10, since the maximum number of connections possible for the ESP32 is 10):
lease.start_ip.addr = static_cast<uint32_t>(local_ip) + (1 << 24);
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (n << 24);
Where n is the number of IP addresses that must be allocated to external devices.
For example:
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (20 << 24);
means that if the starting IP is 192.168.1.0, the DHCP server will assign addresses from 192.168.1.1 to 192.168.1.20, while 192.168.1.0 (the starting IP) will be the address of the ESP32 module itself.
Doesn't the access point use the first IP, 192.168.1.1, not 192.168.1.0?
So 2-11 is 10 connections.
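To see why shifting by 24 selects the last octet: lwIP's ip4_addr_t holds the address as a 32-bit integer in network byte order, and the ESP32 is little-endian, so the first octet sits in the lowest byte of the integer and the fourth octet in the highest. A minimal sketch of the arithmetic (pack and print_ip are illustration-only helpers, not part of the SDK):

#include <cstdint>
#include <cstdio>

// Pack an IPv4 address the way ip4_addr_t's uint32 reads on the
// little-endian ESP32: first octet in the lowest byte.
static uint32_t pack(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    return (uint32_t)a | ((uint32_t)b << 8) | ((uint32_t)c << 16) | ((uint32_t)d << 24);
}

static void print_ip(const char *label, uint32_t ip) {
    printf("%s %u.%u.%u.%u\n", label,
           (unsigned)(ip & 0xFF), (unsigned)((ip >> 8) & 0xFF),
           (unsigned)((ip >> 16) & 0xFF), (unsigned)((ip >> 24) & 0xFF));
}

int main() {
    uint32_t local_ip = pack(192, 168, 1, 0);
    // The fourth octet lives in the top byte, so adding (n << 24)
    // moves only the last octet -- exactly what the lease code does.
    print_ip("start:", local_ip + (1 << 24));  // 192.168.1.1
    print_ip("end:  ", local_ip + (20 << 24)); // 192.168.1.20
    return 0;
}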

Ethereum nonce behaviour on invalid transaction

Given the state:
Account X, nonce: 0
Account X sends a valid TX
Account X new nonce: 1
Account X sends an invalid TX that doesn't pass a validateTx() check in mempool (msg too large, more than 32kb for example)
Account X new nonce in the global state: 2 or 1 ?
If an account sends an invalid TX, should the nonce of the next transaction be the same as that of the invalid one, or does it have to increase after every attempt to submit a TX?

How do I get the result of invoke-direct?

I am trying to understand what is happening in the following smali code. I want to log the result, i.e. the value stored in key:
# creates new instance of SecretKeySpec in register v8
new-instance v8, Ljavax/crypto/spec/SecretKeySpec;
# store constant 0x0 in v0
const/4 v0, 0x0
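# load the byte[] at v9[0] into v0 (the key material)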
aget-object v0, v9, v0
# store string AES in v1
const-string v1, "AES"
# calls new SecretKeySpec(v0,v1);
invoke-direct {v8, v0, v1}, Ljavax/crypto/spec/SecretKeySpec;-><init>([BLjava/lang/String;)V
.line 115
.local v8, "key":Ljavax/crypto/spec/SecretKeySpec;
The invoke-direct call there is calling the constructor. Object creation in Java (and Dalvik) bytecode takes two instructions: the first, new-instance, allocates an uninitialized object, while invoke-direct calls the constructor to initialize it. The object is stored in v8, as you can see from the new-instance instruction. Since a constructor returns void, there is no separate return value to fetch with move-result-object; after the invoke-direct returns, v8 already holds the initialized SecretKeySpec, and that is the value to log.

divide by zero exception handling in Linux

I am curious to understand divide-by-zero exception handling in Linux. When a divide-by-zero operation is performed, a trap is generated, i.e. the processor raises interrupt vector 0 (INT0), and ultimately the SIGFPE signal is sent to the process that performed the operation.
As I can see, the divide-by-zero exception is registered in the trap_init() function as
set_trap_gate(0, &divide_error);
I want to know in detail what happens between INT0 being generated and SIGFPE being sent to the process.
The trap handler is registered in the trap_init function in arch/x86/kernel/traps.c:
void __init trap_init(void)
{
        ...
        set_intr_gate(X86_TRAP_DE, &divide_error);
        ...
}
set_intr_gate writes the address of the handler function into idt_table (see x86/include/asm/desc.h).
How is the divide_error function defined? As a macro in traps.c
DO_ERROR_INFO(X86_TRAP_DE, SIGFPE, "divide error", divide_error, FPE_INTDIV,
              regs->ip)
And the macro DO_ERROR_INFO is defined a bit above in the same traps.c:
#define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr)        \
dotraplinkage void do_##name(struct pt_regs *regs, long error_code)    \
{                                                                       \
        siginfo_t info;                                                 \
        enum ctx_state prev_state;                                      \
                                                                        \
        info.si_signo = signr;                                          \
        info.si_errno = 0;                                              \
        info.si_code = sicode;                                          \
        info.si_addr = (void __user *)siaddr;                           \
        prev_state = exception_enter();                                 \
        if (notify_die(DIE_TRAP, str, regs, error_code,                 \
                       trapnr, signr) == NOTIFY_STOP) {                 \
                exception_exit(prev_state);                             \
                return;                                                 \
        }                                                               \
        conditional_sti(regs);                                          \
        do_trap(trapnr, signr, str, regs, error_code, &info);           \
        exception_exit(prev_state);                                     \
}
(Actually it defines the do_divide_error function, which is called by a small asm-coded stub "entry point" with prepared arguments. The stub is defined in entry_32.S as ENTRY(divide_error), and in entry_64.S via the zeroentry macro: zeroentry divide_error do_divide_error.)
So, when a user divides by zero (and this operation reaches the retirement buffer in an out-of-order CPU), the hardware generates a trap and sets %eip to the divide_error stub, which sets up the frame and calls the C function do_divide_error. The function do_divide_error creates the siginfo_t struct describing the error (signo = SIGFPE, addr = address of the failed instruction, etc.), then tries to inform all notifiers registered with register_die_notifier (this is a hook, sometimes used by the in-kernel debugger "kgdb"; by kprobes' kprobe_exceptions_notify - only for int3 or gpf; by uprobes' arch_uprobe_exception_notify - again only int3; etc.).
Because DIE_TRAP is usually not blocked by a notifier, the do_trap function will be called. The code of do_trap is short:
static void __kprobes
do_trap(int trapnr, int signr, char *str, struct pt_regs *regs,
        long error_code, siginfo_t *info)
{
        struct task_struct *tsk = current;
        ...
        tsk->thread.error_code = error_code;
        tsk->thread.trap_nr = trapnr;

        if (info)
                force_sig_info(signr, info, tsk);
        ...
}
do_trap will send a signal to the current process with force_sig_info, which will "force a signal that the process can't ignore". If there is an active debugger for the process (our current process is ptrace-d by gdb or strace), the signal is first reported to the debugger rather than delivered straight to the process. If there is no debugger, the SIGFPE should kill our process while saving a core file, because that is the default action for SIGFPE (check man 7 signal in the section "Standard signals"; search for SIGFPE in the table).
The process can't set SIGFPE to be ignored (I'm not sure here), but it can define its own signal handler to handle the signal. Such a handler may just print %eip from the siginfo, run backtrace(), and die; or it may even try to recover the situation and return to the failed instruction. This can be useful, for example, in JITs like qemu, java, or valgrind, or in high-level languages like Java or GHC, which can turn SIGFPE into a language exception so that programs in these languages can handle it (for example, the relevant spaghetti in OpenJDK is in hotspot/src/os/linux/vm/os_linux.cpp).
There is a list of SIGFPE handlers in Debian, via codesearch for sigaction SIGFPE or for signal SIGFPE.
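To make the handler idea concrete, here is a minimal sketch using the standard POSIX sigaction API with SA_SIGINFO (nothing here is specific to the kernel code above). It just reports the si_code and faulting address that do_trap packed into the siginfo_t and then exits, since returning normally from a handler for a hardware-generated SIGFPE would restart the faulting instruction and fault again:

#include <csignal>
#include <cstdio>
#include <cstdlib>

// Report the fault details carried in siginfo_t (filled in by do_trap
// above) and terminate instead of returning.
static void fpe_handler(int sig, siginfo_t *info, void *ucontext) {
    // si_code == FPE_INTDIV for an integer divide-by-zero
    fprintf(stderr, "SIGFPE: si_code=%d, faulting address=%p\n",
            info->si_code, info->si_addr);
    _exit(EXIT_FAILURE); // returning would re-execute the faulting instruction
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = fpe_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGFPE, &sa, nullptr);

    volatile int zero = 0;
    volatile int result = 1 / zero; // raises #DE -> trap 0 -> SIGFPE
    (void)result;
    return 0;
}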