Using Truffle, how to get just the tx hash of a submitted transaction without waiting for the receipt - ethereum

I have a token contract that is already deployed, and I am calling the transfer function on the instance.
XYZContract.at(ADDRESS).then(instance => {
  return instance.transfer(toAddress, amount);
}).then(result => {
  console.log("transferTokens", result);
  resolve(result);
});
Now my understanding is that the 'transfer' promise resolves only after the transaction is recorded into a block.
Is there a way to just get the transaction hash right after submission, but not wait for the receipt?
For example, web3 has a sendTransaction that fires events when the transaction is submitted, when the transaction receipt is obtained, and so on.
The Truffle documentation mentions using instance.sendTransaction only for sending ether
(https://truffleframework.com/docs/truffle/getting-started/interacting-with-your-contracts), so I'm curious what the right approach is here.
My end goal is to run the transfers in a batch and later use the transaction hashes to check how many actually succeeded, as well as the number of confirmations.
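If you are willing to drop down to web3.js (which the question already mentions) for the batch, the PromiEvent returned by send() fires a transactionHash event as soon as the transaction is submitted, without waiting for the receipt. A minimal sketch, assuming a web3 1.x instance, the token's ABI in tokenAbi, and a funded fromAddress (all names here are illustrative):

const Web3 = require("web3");
const web3 = new Web3("http://localhost:8545"); // assumed local node

// tokenAbi, tokenAddress, fromAddress, toAddress and amount are assumed to exist
const token = new web3.eth.Contract(tokenAbi, tokenAddress);

const txHashes = [];
token.methods.transfer(toAddress, amount)
  .send({ from: fromAddress })
  .on("transactionHash", (hash) => {
    // Fired as soon as the transaction is submitted to the node
    txHashes.push(hash);
  })
  .on("receipt", (receipt) => {
    // Fired later, once the transaction has been mined
    console.log("mined in block", receipt.blockNumber);
  })
  .on("error", console.error);

Later you can call web3.eth.getTransactionReceipt(hash) for each stored hash and compare the receipt's blockNumber with the current block number to count confirmations.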

Related

How to detect contract creation timestamp in Ethereum blockchain nodes?

I am trying to detect a contract's creation timestamp from an Ethereum node, but there are no methods/functions for it.
How do we find this?
I am fetching swap logs and parsing them, but couldn't find a direct way to determine the contract creation date.
Contract creation date is not available using a direct method.
Blockchain explorers usually collect it the following way:
They scan every transaction in every block on the chain.
If the transaction receipt contains a non-null contractAddress property, it means that this transaction deployed a contract.
The time of the transaction (and effectively the time of the contract deployment) is available as part of the block in which the tx was included.
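A rough sketch of that scan with web3.js, inside an async function and assuming a connected web3 instance (blockNumber is whichever block is being scanned):

// Fetch the block with full transaction objects included
const block = await web3.eth.getBlock(blockNumber, true);

for (const tx of block.transactions) {
  // Deployment transactions have no `to` address; skip the rest early
  if (tx.to !== null) continue;

  const receipt = await web3.eth.getTransactionReceipt(tx.hash);
  if (receipt && receipt.contractAddress) {
    // This tx deployed a contract; the block timestamp is the deployment time
    console.log(receipt.contractAddress, "deployed at",
      new Date(Number(block.timestamp) * 1000).toISOString());
  }
}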
This is quite complicated. There is no method to get the Transaction that deployed the Contract.
First of all, there are multiple ways to deploy a contract:
Via a deployment transaction; here you will get the contractAddress in the tx receipt.
Via another contract, using the create/create2 opcodes. For example, this is how the Uniswap Factory contract creates Pair contracts. There is no RPC API to detect this type of deployment; you need a node that can return such "internal calls".
There are some workarounds. In your case, do you just need the timestamp? Then you only have to find the block in which the contract was deployed:
Use an archive node.
Use binary search to find the block number at which eth_getCode returns non-empty data while it is still empty at the previous block.
Get the timestamp of that block.
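A minimal sketch of that binary search with web3.js against an archive node (contractAddress is the contract you are interested in; names are illustrative):

// Find the first block at which the contract has code (requires an archive node,
// since eth_getCode is queried at historical block heights)
async function findDeploymentBlock(web3, contractAddress) {
  let low = 0;
  let high = await web3.eth.getBlockNumber();

  while (low < high) {
    const mid = Math.floor((low + high) / 2);
    const code = await web3.eth.getCode(contractAddress, mid);
    if (code === "0x") {
      low = mid + 1;  // no code yet: deployment happened later
    } else {
      high = mid;     // code present: deployment happened at mid or earlier
    }
  }
  return low; // first block where eth_getCode returns non-empty data
}

// Then the deployment timestamp:
// const block = await web3.eth.getBlock(await findDeploymentBlock(web3, contractAddress));
// console.log(new Date(Number(block.timestamp) * 1000));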
If you need the transaction:
Use the previous method to find the block
Load transactions and their receipts.
Search for a Deployment Transaction, which has your contractAddress in the receipt.
If not found, the creating contract almost always emits an event when it creates another contract, so look at the address, data, and topics of every log entry to match your contract address.
Not a universal workaround: if you know the event that gets emitted in the constructor or by the parent contract, you can search for it with getPastLogs.
You can use a third-party API, like https://docs.etherscan.io/api-endpoints/contracts, to get the contract creator and the creation tx hash. After that:
Get the blockNumber from the transaction fetched by txHash.
Get the timestamp from the block fetched by blockNumber.
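A rough sketch of that flow, assuming Etherscan's getcontractcreation endpoint (check the docs linked above for the exact parameters and response fields), a web3 instance, and Node 18+ for the built-in fetch; contractAddress and apiKey are assumed to exist:

// 1. Ask Etherscan for the creation tx hash
const url = "https://api.etherscan.io/api" +
  "?module=contract&action=getcontractcreation" +
  `&contractaddresses=${contractAddress}&apikey=${apiKey}`;
const { result } = await (await fetch(url)).json();
const txHash = result[0].txHash; // field name per Etherscan's docs; adjust if it differs

// 2. Get the block number from the transaction
const tx = await web3.eth.getTransaction(txHash);

// 3. Get the timestamp from the block
const block = await web3.eth.getBlock(tx.blockNumber);
console.log("deployed at", new Date(Number(block.timestamp) * 1000));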

My messages are delivered out of flow sequence order; how do I compensate?

I wish to use Twilio in the context of an adventure game. As the gamer (Geocacher) progresses on an actual treasure (cache) hunt, clues are given by text when certain code words or numbers are entered as part of the thread. I am very comfortable creating the flow in Studio and have a working flow that is debugged and ready to go.
Not so fast, grasshopper! Strange things started to happen when beta testing the flow. Basically, texts that show as sent arrive to the user out of sequence in the thread. The SMS logs show everything is working correctly (message sent), but what I call Zombie messages arrive to the user after a previous message has arrived. The Zombies are legitimate messages from the Flow, but out of the correct sequence, and that makes the thread unusable for my purposes.
I learned too late in my "programming" that Twilio states, "If you send multiple SMS messages to the same user in a short time, Twilio cannot guarantee that those messages will arrive in the order that you sent them." Ugh!
So, I started with the Help Techs at Twilio, and one solution is to create a subflow that is inserted after a Send Message Widget. This subflow basically fetches the message via the SMS SID to check its status. If the status is "delivered", we can safely say the message has been received by the recipient and then permit the next message in the flow.
That sounds great, but I am not a programmer and will never be able to integrate the suggested code, much less debug it when things don't work. There might be many other approaches that you guys can suggest. The task is 1.) send a message, 2.) run a subflow that checks for message delivery, 3.) send the next message in the sequence.
I need to move on to implementation and this type of sub flow is out of my wheelhouse. I am willing to pay for programming help.
I can post the JSON code that was created as a straw man, but I have no idea how to use it or whether it is the optimum solution, if that is of help. It would seem that a lot of folks experience this issue and would like a solution. A nice, tight JSON subflow with directions on how to insert it would seem to be a necessary part of the Widget toolkit provided by Twilio in Studio.
Please Help Me! =)
As you stated, the delivery order of the messages cannot be guaranteed. Checking the status of the sent message is the most reliable approach, using a subflow, a Twilio Function, or a combination. Just keep in mind that Twilio Functions have a 10s execution time limit. I don't expect delivering the SMS will take longer than 10s in most cases. If you're worried about edge cases, you'd have to loop the check for the status multiple times. I'll share a proof of concept for this later.
An easier way, but it still doesn't guarantee delivery order, would be to add some delay between each message. There's no built-in delay widget, but here's code on how to create a Twilio Function to add delays, up to 10s.
A more hacky way to implement delays, without having to use this Twilio Function, is to use the Send & Wait For Reply Widget and configure the "Stop Gathering After" property to the amount of delay you'd like to add. If the user responds, connect to the next widget; if they don't, also connect to the next widget.
As mentioned earlier, here's the Subflow + Function proof of concept I hacked together:
First, create a Twilio Functions Service, in the service create two functions:
/delay:
// Helper function for quickly adding await-able "pauses" to JavaScript
const sleep = (delay) => new Promise((resolve) => setTimeout(resolve, delay));

exports.handler = async (context, event, callback) => {
  // A custom delay value could be passed to the Function, either via
  // request parameters or by the Run Function Widget.
  // Default to a 5 second delay
  const delay = event.delay || 5000;

  // Pause Function for the specified number of ms
  await sleep(delay);

  // Once the delay has passed, return a success message, TwiML, or
  // any other content to whatever invoked this Function.
  return callback(null, `Timer up: ${delay}ms`);
};
/get-message:
exports.handler = function (context, event, callback) {
  const messageSid = event.message_sid,
    client = context.getTwilioClient();

  if (!event.message_sid) throw "message_sid parameter is required.";

  client.messages(messageSid)
    .fetch()
    .then((message) => callback(null, message))
    .catch((error) => {
      console.error(error);
      return callback(error);
    });
};
Then, create a Studio Flow named something like "Send and Wait until Delivered".
In this flow, you send the message, grabbing the message body passed in from the parent flow, {{trigger.parent.parameters.message_body}}.
Then, you run the /get-message Function, and check the message status.
If it is delivered, set the status variable to delivered. This variable will be passed back to the parent flow. If the status is any of accepted, queued, sending, or sent, the message is still en route, so wait a second using the /delay Function, then loop back to the /get-message Function.
If any other status, it is assumed there's something wrong and status is set to error.
Now you can create your parent flow where you call the subflow, specifying the message_body parameter. Then you can check the status variable from the subflow to see whether it is 'delivered' or 'error'.
You can find the export for the subflow and the parent flow in this GitHub Gist. You can import it and it could be useful as a reference.
Personally, I'd add the /delay Function and use it after every message, adding a couple of seconds of delay. I'd assume the delay adds enough buffer for no zombie messages to appear.
Note: The code, proof of concept, and advice is offered as is without liability to me or Twilio. It is not tested against a production workload, so make sure you test this thoroughly for your use case!

hardhat ethers: deploy vs deployed

In the following code:
console.log("Deploying contract....");
const simpleStorage = await simpleStorageFactory.deploy();
await simpleStorage.deployed();
Line 2 deploys the contract and we get hold of it.
Why do we need to call the deployed method on line 3?
Calling deploy() will create the transaction, and you can get the contract address immediately; however, this does not mean the transaction has been processed and included within a block.
deployed() will wait until it has been. Under the hood it will poll the blockchain until the contract has been successfully processed. See: https://github.com/ethers-io/ethers.js/blob/master/packages/contracts/src.ts/index.ts#L824
I don't think you technically have to call deployed(), but if you need to do anything after the contract has been deployed and need to make sure it has been included in a mined block, then waiting for deployed() is advised.
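To make the distinction concrete, here is a small sketch assuming ethers v5 (where deployed() and deployTransaction exist); the retrieve() call at the end is purely an illustrative example method:

const simpleStorage = await simpleStorageFactory.deploy();

// Available immediately after deploy() resolves (tx submitted, not yet mined):
console.log("address:", simpleStorage.address);
console.log("tx hash:", simpleStorage.deployTransaction.hash);

// Resolves only once the deployment tx has been mined:
await simpleStorage.deployed();

// Safe to interact with the contract now, e.g.:
// const value = await simpleStorage.retrieve();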

Scoped promises for cancellation

Imagine the following code:
addEventListener("start", (user) => handler(user));

async function handler(user) {
  await fetch();
  await fetch();
  await fetch()
    .then(x => fetch());
}
For this code I want to have a carried timeout: I have a 15-second timeout budget at the start, each promise reduces this time, and when we reach zero we just throw an exception.
This would be trivial to implement if we were handling one event at a time, but I need to handle multiple events concurrently.
I was wondering if there is a way to create a scope per event that contains a start time, so that whenever a request is polled or finished we check whether the timeout has been reached and, if so, we reject all promises in the scope and throw an exception.
NOTE: I'm using the latest version of the embedded V8 and have access to the native/C++ side.
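At the JavaScript level (ignoring the native/V8 side), one way to express the "scope" is a small deadline object created per event: every awaited call is raced against the remaining budget, and an AbortController cancels the in-flight fetches once the budget is spent. A rough sketch under those assumptions; the 15 s budget is from the question, and the URLs are placeholders:

function createScope(budgetMs) {
  const deadline = Date.now() + budgetMs;
  const controller = new AbortController();

  return {
    signal: controller.signal,
    // Wrap any promise so it rejects (and aborts the scope) once the budget is spent
    async run(promise) {
      const remaining = deadline - Date.now();
      if (remaining <= 0) {
        controller.abort();
        throw new Error("scope timeout exceeded");
      }
      let timerId;
      const timer = new Promise((_, reject) => {
        timerId = setTimeout(() => {
          controller.abort();
          reject(new Error("scope timeout exceeded"));
        }, remaining);
      });
      try {
        return await Promise.race([promise, timer]);
      } finally {
        clearTimeout(timerId);
      }
    },
  };
}

addEventListener("start", (user) => handler(user));

async function handler(user) {
  const scope = createScope(15_000); // 15 s budget shared by all awaits in this event
  await scope.run(fetch("/step1", { signal: scope.signal })); // placeholder URLs
  await scope.run(fetch("/step2", { signal: scope.signal }));
  await scope.run(fetch("/step3", { signal: scope.signal }))
    .then(() => scope.run(fetch("/step4", { signal: scope.signal })));
}

Because each handler invocation creates its own scope, concurrent events each track their own 15-second budget independently.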

When we call a Solidity function via web3.js, how does the code flow, along with data formats, during the whole process?

When we call a Solidity function via web3.js, how does the code flow, along with data formats, during the whole process?
For example, if I call a Solidity function through web3.js, how does it get executed? Can anyone explain the complete flow?
First of all, I recommend taking the time to read How does Ethereum work, anyway?
But for now, a short explanation:
When you call a method on a contract through web3.js, the library will encode your method call as the data attribute on the transaction. Here's a good explanation about Ethereum transactions and the data attribute. (A minimal encoding example is sketched after this list.)
The Ethereum node your web3.js is connected to will receive your transaction and do some basic checks of nonce and balance.
Once the basic checks pass, the node will broadcast the transaction to the rest of the network.
When a network node receives a transaction with a data attribute, it will execute the transaction using the Ethereum EVM. The outcome of the transaction is a modified state of the contract storage. More about contract storage.
The expectation is that the transaction will produce the same state change on every single node in the network. This is how consensus is reached and the transaction (and the contract state change) becomes part of the canonical chain (mined and not belonging to an uncle block).
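For the first step, the ABI encoding that ends up in the data field can be inspected directly. A small sketch with web3.js 1.x; the ERC-20 transfer signature and the addresses are purely illustrative:

// The method call is ABI-encoded: 4-byte function selector + padded arguments
const data = web3.eth.abi.encodeFunctionCall(
  {
    name: "transfer",
    type: "function",
    inputs: [
      { type: "address", name: "to" },
      { type: "uint256", name: "value" },
    ],
  },
  ["0x1111111111111111111111111111111111111111", "1000"]
);

// The same string is what contract.methods.transfer(to, value).encodeABI() returns,
// and it is what gets sent as the `data` attribute of the transaction:
// web3.eth.sendTransaction({ from, to: contractAddress, data });
console.log(data); // 0xa9059cbb000000...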