I have a local hardhat node spawned simply as:
npx hardhat node
I have smart contract code that I want to test that mints 100 NFTs in a simple loop. The code basically adds the address to a mapping and calls OpenZeppelin's _safeMint function in a pretty standard way.
However, in my tests it takes a few seconds to run this function, which seems too long for a 100-iteration for loop. I've tried enabling/disabling automining in the Hardhat config, but it doesn't seem to change anything.
I will need to run this function for many iterations (10000) in my tests, so the duration of the call is unacceptable. I'm also on an M1 Max, so I doubt my CPU is the bottleneck for a 100-iteration for loop.
How can I make hardhat execute contract code faster?
(Solidity ^0.8.0, hardhat 2.8.2)
The solution below is a hack, but I've used it extensively in my tests with for loops of up to 5000 iterations and it runs just fine (taking only a couple of minutes instead of hours with automining). The gist of it is to disable automining and interval mining, and instead mine blocks manually on demand.
// enable manual mining
await network.provider.send("evm_setAutomine", [false]);
await network.provider.send("evm_setIntervalMining", [0]);
// create the transactions, which will be pending
await Promise.all(promises); // `promises` holds all the pending tx submissions
// mine the needed blocks, below we mine 256 blocks at once (how many blocks to
// mine depends on how many pending transactions you have), instead of having
// to call `evm_mine` for every single block which is time consuming
await network.provider.send("hardhat_mine", ["0x100"]);
// re-enable automining when you are done, so you dont need to manually mine future blocks
await network.provider.send("evm_setAutomine", [true]);
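If you don't know ahead of time how many blocks to pass to `hardhat_mine`, a small helper can derive the hex block count from the number of pending transactions. This is a sketch; `txsPerBlock` is a rough assumption about how many of your transactions fit in one block, not a value Hardhat guarantees, so it deliberately over-provisions:

```javascript
// Hypothetical helper: pick a block count big enough that all pending
// transactions fit. txsPerBlock is an assumption, tune it for your txs.
function blocksToMine(pendingTxCount, txsPerBlock = 20) {
  const blocks = Math.ceil(pendingTxCount / txsPerBlock);
  return "0x" + blocks.toString(16); // hardhat_mine expects a hex quantity
}
```

Usage would then look like `await network.provider.send("hardhat_mine", [blocksToMine(5000)]);`.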
Related
I'm trying to deploy my smart contract to the Ethereum Mainnet using truffle.js. When migrating with a low gas price, the transaction for deploying the contract sometimes takes longer than 750 seconds and causes truffle to timeout.
Is there some way to disable the 750 second timeout when deploying smart contracts (migrating) to the mainnet? I would like to deploy my contract with a low gas price to reduce the cost, and am ok with waiting a long time for the TX to be mined.
Also, if the timeout IS hit and the TX gets mined later, can I still generate the same exact artifact files for the TX? Thanks.
Is there some way to disable the 750 second timeout when deploying smart contracts (migrating) to the mainnet? I would like to deploy my contract with a low gas price to reduce the cost, and am ok with waiting a long time for the TX to be mined.
No. Truffle uses the web3 library with a default wait of 50 blocks (so it will wait 50 blocks for the tx to be mined before timing out). You can likely increase this a lot to achieve what you want; see: https://www.trufflesuite.com/docs/truffle/reference/configuration
However, when the gas price is set very low there is a possibility that it never gets picked up by miners on the network. So without a timeout the process could hang forever.
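As a sketch, the block wait can be raised per network in truffle-config.js via the `timeoutBlocks` option; the provider setup and the concrete values below are assumptions for illustration:

```javascript
// truffle-config.js (sketch): raise the mined-tx wait above the default
// 50 blocks. HDWalletProvider, MNEMONIC and INFURA_URL are assumed to be
// defined elsewhere in your project.
module.exports = {
  networks: {
    mainnet: {
      provider: () => new HDWalletProvider(MNEMONIC, INFURA_URL),
      network_id: 1,
      gasPrice: 2e9,       // 2 gwei, deliberately low
      timeoutBlocks: 500,  // wait up to 500 blocks before timing out
    },
  },
};
```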
Also, if the timeout IS hit and the TX gets mined later, can I still generate the same exact artifact files for the TX? Thanks.
I'm not sure what you mean here. The artifact files are generated after a contract is compiled. Maybe you are referring to getting the transaction hash? It is always best to check a service like Etherscan or TrueBlocks for the state of your transaction.
I have an Azure Function triggered via Service Bus which basically just does a long await (1 to 4.5 minutes, managed with a cancellation token to prevent a function timeout and host restart).
I want to process as many of these long await messages as I can. (Ideally about 1200 at the same time..)
First I ran my function on an App Service plan (with Concurrent Calls = 1200), but I think each trigger creates a thread, and 1200 threads cause issues.
So I decided to run it on Consumption, with Batch Size 32, with the idea that I could avoid creating tons of threads and instead scale out the Consumption function when it sees the queue build up.
Unfortunately, exactly the opposite happens: the Consumption function processes 32 messages but never scales out, even though the queue has 1000+ items in it. Even worse, sometimes the function just goes to sleep although there are still many items in the queue.
I feel my best option would be to group work into a message, so instead of 1 message = 1 long await, 1 message could be 100 awaits, for example. But my architecture doesn't really allow me to group messages easily (if some tasks fail but some succeed, that is easily managed with dead letters, but with grouping I would need to maintain state to track it). Is there an existing way to efficiently have many independent long-running awaits on an Azure Function, on either Consumption or an App Service plan?
This is the code that I'm using to send signed transactions to mainnet programmatically:
import Web3 from 'web3'
import EthereumTx from 'ethereumjs-tx'
const web3 = new Web3(new Web3.providers.HttpProvider(INFURA_URL))
const Contract = new web3.eth.Contract(ABI, CONTRACT_ADDRESS)
const createItem = (name, price, nonce, callback) => {
  console.log(`Nonce: ${nonce}. Create an Item with Name: ${name}, Price: ${price}`)
  const data = Contract.methods.createItem(name, price).encodeABI()
  const tx = new EthereumTx({
    nonce: nonce,
    gasPrice: web3.utils.toHex(web3.utils.toWei('4', 'gwei')),
    gasLimit: 400000,
    to: CONTRACT_ADDRESS,
    value: 0,
    data: data,
  })
  // Buffer.from() replaces the deprecated `new Buffer()` constructor
  tx.sign(Buffer.from(MAINNET_PRIVATE_KEY, 'hex'))
  const raw = '0x' + tx.serialize().toString('hex')
  web3.eth.sendSignedTransaction(raw, callback)
}
export default createItem
I have to mass-create (i.e. populate) items in my contract, and I want to do it programmatically. However, while the code works well on Ropsten, it fails to send all the transactions on mainnet; it only sends the first few and doesn't send the rest. The errors are not helpful because this error is usually guaranteed to occur:
Unhandled rejection Error: Transaction was not mined within 50 blocks, please make sure your transaction was properly sent. Be aware that it might still be mined!
I wonder how other people do when they have to send a lot of transactions to Ethereum mainnet today. Is there anything I'm doing wrong?
You basically cannot reliably do what you guys are trying to do. Here's a set of problems that arises:
Transactions only succeed in nonce order. If an early nonce is pending (or worse, went missing), later transactions will sit pending until the earlier nonce has been consumed (or removed from the mempool).
There's no hard rule for when a transaction will drop from the mempool. This is scary when you've made a nonce error or an intermediary nonce has not reached the network for some reason because you don't know what is going to happen when you finally post that nonce.
Many transactions with the same nonce can be sent. The winner is very likely to be selected by gas price (because miners are incentivized to do exactly that). One useful trick when something strange has happened is to clear out your nonces by sending a batch of high-gas-price, zero-value transactions. You might call this an increment method. Remember this comes with a cost.
A lot of tools do one of two things to handle nonces: a live read from getTransactionCount() per transaction, or a single read from getTransactionCount() followed by incrementing locally for each additional transaction you send. Neither of these works reliably: for the first, transactions are sometimes pending but not yet visible in the pool. This seems to happen especially when the gas price is below the safe minimum, but I'm not entirely sure what is going on there. For the second, if any other system sends a transaction from the same address, it will not work.
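The second of those two approaches can be sketched as a small local allocator; `getCount` here is an assumed async stand-in for a live `web3.eth.getTransactionCount(address)` call:

```javascript
// Sketch of the read-once-then-increment approach: fetch the on-chain
// transaction count once, then hand out consecutive nonces locally.
// getCount is an assumed async stand-in for web3.eth.getTransactionCount.
async function makeNonceAllocator(getCount) {
  let next = await getCount();
  return () => next++; // each call claims the next nonce
}
```

As noted above, this breaks as soon as anything else sends a transaction from the same address.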
So, how do we work around this?
A smart contract is a straightforward way to reduce a transaction from many sender nonces to few sender nonces. Write a contract that sends all your different transactions, then send the budget to that contract. This is a relatively high cost (in terms of time/effort/expertise) way to solve the problem.
Just do it anyway, batch style. When I've had to send many transactions manually, I've batched them into sets of 10 or so and gone for it, incrementing the nonce manually each time (because the transaction is usually not on the network yet) and then waiting long enough for all the transactions to confirm. Don't rely on pending transactions on Etherscan or similar to determine whether this is working, as things often vanish unpredictably at this level. Never reuse a nonce for a different transaction that isn't a zeroing high-gas transaction: you will screw it up and end up sending the same transaction twice by mistake.
Serialize. One by one, you post a transaction, wait for it to confirm, and increment your nonce. This is probably the best of the easy-to-implement automated solutions, and it will not fail. It might buffer forever if you have a constant stream of transactions. It also means you can never have more than one transaction per block, limiting your throughput to about 4 per minute.
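The serialize strategy can be sketched as follows; `sendTx` is an assumed callback that resolves only once the transaction is mined (e.g. a wrapped `web3.eth.sendSignedTransaction`, which resolves on the receipt):

```javascript
// Sketch of the serialize strategy: post one tx, wait for it to be mined,
// then move on to the next nonce. Order is guaranteed because we never
// send nonce n+1 before nonce n has confirmed.
async function sendSerially(startNonce, txs, sendTx) {
  let nonce = startNonce;
  for (const tx of txs) {
    await sendTx(tx, nonce); // assumed to resolve only once mined
    nonce += 1;
  }
  return nonce; // next free nonce
}
```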
Fire and retry. This is a little sketchy because it involves reusing nonces for different transactions. Send all (or some large batch) of your transactions to the network. If any fail, restart from the failure nonce and send again. There's possibly a more intelligent solution where you just try to swap out the missing nonce. You'll need to be very careful you never send a transaction that is secretly in the pending-but-not-visible pool.
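The restart step of fire-and-retry amounts to finding the lowest nonce that never confirmed; a hypothetical helper (not part of web3) might look like:

```javascript
// Given the starting nonce and the nonces whose txs confirmed, return the
// lowest nonce to restart sending from.
function restartNonce(startNonce, confirmedNonces) {
  const confirmed = new Set(confirmedNonces);
  let n = startNonce;
  while (confirmed.has(n)) n += 1; // skip past the confirmed prefix
  return n;
}
```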
New address for every transaction. A buffering step of distributing funds to your own addresses ensures you never screw it up for other people. It does double your transaction time and cost though.
I think some variant of fire and retry is what most of the big services (like pools and exchanges) do. Some of these can be improved by splitting the budget over a few addresses as available (reducing the collision frequency).
I'd like to create a package that does no useful work but uses lots of CPU. This is to test an SSIS load balancing solution.
I've got a pretty good idea how to do nothing for a given amount of time, I'd just to consume lots of CPU doing nothing while not making this overly complex. I'm thinking of doing something like pulling a string apart and some rigorous computations in a loop container. Or maybe hitting some web service? Any suggestions?
Consuming CPU time doesn't necessarily require executing something complex, just that the CPU is kept busy. Multiple web service calls may take time to execute but won't necessarily tie up the CPU, as the processor can do other things while it waits for the I/O involved in the service calls to complete.
What about a Script Task containing this?
// Busy-wait: spin on trivial work until the deadline passes,
// keeping one core fully occupied without doing anything useful
var executeForSeconds = 10;
var limit = DateTime.Now.AddSeconds(executeForSeconds);
while (DateTime.Now <= limit)
{
    var x = 1 * 1;
}
After successfully using JMeter to profile our platform's performance, I got a request to simulate a 24-hour load based on minute-by-minute transaction data extracted from the last year's logs.
Given the static nature of thread creation in JMeter, I wonder whether this is easily achievable. I studied the usual plugins, together with those at jmeter-plugins.org, but still could not find a straightforward way to do this kind of shaping.
I am looking at the alternative of writing a Groovy script that dynamically feeds a Throughput Shaping Timer, but I am not sure this is the proper way to go.
Any suggestions?
UPDATE:
I tried the following combination (as Alon and Dan also suggested):
- One thread group with one looping thread and a 60-second delay timer; every minute this thread reads from a CSV the number of requests for the next minute and passes it to the second thread group (using a Groovy script and global props).
- The second thread group has a fixed number of threads and a Constant Throughput Timer that is updated every minute by the first thread group.
It works partially, but the limitation is that the load per minute is divided among all active threads, so some threads will still be waiting to execute even if the load target has changed in the meanwhile.
I think that for a correct simulation there should be a way to interrupt all threads that did not execute within the minute and start them again.
So for a concrete example:
I have 100 requests in the first minute and 5000 in the second (this is real data with big variations).
In the first minute, 300 threads are started (my maximum number of accepted concurrent connections), but because they execute very fast, they get delayed for more than a minute to meet the calculated throughput, so the 5000 requests of the next minute don't have a chance to execute because lots of threads are still sleeping.
So I am looking for a way to interrupt sleeping threads when more throughput is needed, probably from Groovy or by modifying some JMeter code.
Thanks,
Dikran
You should use JMeter's Constant Throughput Timer for this. In combination with a CSV file that includes all of the values, it should work perfectly.
See these links:
http://jmeter.apache.org/usermanual/component_reference.html#Constant_Throughput_Timer
http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config
Best,
Alon.
Use JSR 223 + Groovy for Scripting
You have a lot of options to do scripting with JMeter:
Beanshell
BSF and all its supported languages: JavaScript, Scala, Groovy, Java, ...
JSR223 and all its supported languages: JavaScript, Scala, Groovy, Java, ...
You could be lazy and just pick the language you already know, but FORGET ABOUT IT.
Use the most efficient option, which is JSR223 + Groovy + caching (supported since JMeter 2.8 for external scripts, and in the upcoming JMeter 2.9 also for embedded scripts).
Using Groovy is as simple as adding groovy-VERSION-all.jar to the <JMETER_HOME>/lib folder.
But of course, ensure your script is actually necessary and efficiently written. DON'T OVERSCRIPT.
See more here: http://blazemeter.com/blog/jmeter-performance-and-tuning-tips