On Etherscan, transactions are labeled Swap, Remove Liquidity, Transfer, Migrate, etc. How do I decode this information from a TransactionReceipt using the web3 API?
Currently I see patterns on liquidity removal, such as a null address in the logs. Is this how Etherscan does it, or is there a way to extract the method type?
What you are looking for is called the event signature in Solidity terminology. The first topic (topics[0]) of an event log is the Keccak-256 hash of the event's signature.
To decode, you need to have the ABI files of the events you are decoding. With the ABI, you can match each log to its event, decode the rest of the event data, and convert it to human-readable information.
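For example, here is a minimal sketch with web3.py (v6 method names) that decodes ERC-20 Transfer events out of a receipt; the RPC endpoint, token address, and transaction hash are placeholders:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<your-key>"))  # placeholder endpoint

# Minimal ABI containing only the event we want to decode (ERC-20 Transfer)
transfer_abi = [{
    "anonymous": False,
    "inputs": [
        {"indexed": True, "name": "from", "type": "address"},
        {"indexed": True, "name": "to", "type": "address"},
        {"indexed": False, "name": "value", "type": "uint256"},
    ],
    "name": "Transfer",
    "type": "event",
}]

token = w3.eth.contract(address="0x...", abi=transfer_abi)  # placeholder token address
receipt = w3.eth.get_transaction_receipt("0x...")           # placeholder tx hash

# process_receipt() keeps only the logs whose topics[0] equals
# keccak256("Transfer(address,address,uint256)") and decodes their data
for event in token.events.Transfer().process_receipt(receipt):
    print(event["args"]["from"], event["args"]["to"], event["args"]["value"])

Liquidity removals, swaps, and so on are recognized the same way, by matching the relevant pool events (e.g. a Uniswap pair's Burn or Swap) against their ABIs.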
I'm trying to read a sequence of JSON objects from a network stream. This involves finding complete JSON objects and returning them one by one. As soon as a complete JSON object has been received, I need to have it. Anything that follows belongs to the next object and must only be used once the next complete object has been received.
I would have thought that the Utf8JsonReader class could do that, but apparently it cannot accept a Stream of any kind. It even seems that offering this possibility is deliberately avoided.
Now I'm wondering if it's possible at all to use .NET's shiny new JSON parser to read from a stream when I don't know when data arrives and how much of it. Do I need to split the JSON object messages manually or can the already existing reader just stop when it has something and continue when the next thing is available? I mean, if it can do that on a predefined array of bytes, it could surely also do it with some waiting in between until more data is available. That just doesn't seem to be exposed in the public API. Or did I miss something?
The JsonDocument.ParseAsync(Stream) method cannot be used because it would read the stream to the end. That doesn't make sense for a network stream that stays open for a long time and just reads some data from time to time.
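For illustration, the manual splitting mentioned above would look roughly like this sketch (Python used only for brevity; the idea is language-agnostic): buffer the incoming text and repeatedly parse complete objects off the front, keeping the unconsumed tail for when more data arrives.

import json

decoder = json.JSONDecoder()

def drain_objects(buffer: str):
    """Parse complete JSON objects off the front of buffer.
    Returns (objects, leftover); leftover still awaits more network data."""
    objects = []
    index = 0
    while True:
        # Skip whitespace between objects
        while index < len(buffer) and buffer[index].isspace():
            index += 1
        if index >= len(buffer):
            break
        try:
            obj, index = decoder.raw_decode(buffer, index)
        except json.JSONDecodeError:
            break  # incomplete (or malformed) object: wait for more data
        objects.append(obj)
    return objects, buffer[index:]

# As chunks arrive from the socket:
buffer = '{"a": 1}{"b":'
objects, buffer = drain_objects(buffer)
# objects == [{'a': 1}], buffer == '{"b":' until more data arrives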
I'm writing a contract (function code below) that pulls data from an API via a Chainlink GET request. I have read that Provable (formerly Oraclize) has an option to encrypt API request parameters. Does Chainlink offer anything similar? I've googled a lot but haven't found anything helpful so far. I'd really like to avoid sending my API key over a public chain, for obvious reasons.
function requestVolumeData(string memory apiurl, string memory jsonpath) public returns (bytes32 requestId)
{
    Chainlink.Request memory request = buildChainlinkRequest(jobId, address(this), this.fulfill.selector);
    // Set the URL to perform the GET request on
    request.add("get", apiurl);
    // Set the JSON path to walk in the response
    request.add("path", jsonpath);
    // Multiply the result by 1000000000000000000 to remove decimals
    int timesAmount = 10**18;
    request.addInt("times", timesAmount);
    // Sends the request
    return sendChainlinkRequestTo(oracle, request, fee);
}
Ideally, you wouldn't put your API key on-chain at all, but here are your options for working with sensitive data through a Chainlink oracle.
1. Pass your API key to a node operator
This is, of course, a trusted operation, since you'll have to trust the node operator with your key. However, it will prevent the world from seeing your key, and the node operator can just use it on the back end.
2. Encrypt your key before you use it
You will still need to give the Chainlink node operators a way to decrypt the data on the back end. This is considered less safe because you're still giving people a way to access your data, and you're putting it on-chain.
3. Make a protected API that can only be called by node operators.
So you'd run an API of your own that wraps the real API, injects the key server-side, and only accepts calls from your node operators (see the sketch after this list).
4. DECO (not live yet)
DECO is planned for release at some point; it should help keep private data safe even from Chainlink node operators.
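A minimal sketch of option 3, assuming a Flask proxy (the endpoint, header name, and node IPs here are all hypothetical): the key stays server-side and only known node operators get an answer. A real deployment would more likely authenticate nodes with mTLS or a shared secret than with an IP allow-list.

import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["UPSTREAM_API_KEY"]  # the secret never leaves this server
ALLOWED_NODES = {"203.0.113.10"}          # hypothetical node-operator IPs

@app.route("/volume")
def volume():
    # Only answer requests coming from a known Chainlink node
    if request.remote_addr not in ALLOWED_NODES:
        return jsonify(error="forbidden"), 403
    # Call the real API, attaching the key server-side
    upstream = requests.get(
        "https://api.example.com/volume",  # hypothetical upstream API
        headers={"X-API-Key": API_KEY},
        timeout=10,
    )
    return jsonify(upstream.json())

Your contract then points its request.add("get", ...) at this proxy's URL instead of at the real API.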
The only safe way to do this is with confidential computing. That's what we do at Verifiably.
I'm guessing Chainlink will eventually add this capability. I'm not sure why they didn't do it after their Town Crier acquisition; it seemed like the natural thing to do.
The ABI can be obtained through solc compilation or from Etherscan. We have deployed a full Ethereum node; is there a way to obtain the corresponding ABI directly from a contract address?
You cannot get the ABI JSON from just the compiled bytecode. You need the source code for that.
It's because of what the ABI JSON represents: information about the public and external functions, from which you can calculate the hash signatures (selectors) of those functions.
But the compiled bytecode only contains these hash signatures, and you can't "unhash" them back to the original function descriptions.
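You can see the one-way step in a couple of lines of web3.py (a sketch):

from web3 import Web3

# The ABI gives you the function's name and argument types...
signature = "transfer(address,uint256)"

# ...but the deployed bytecode only embeds the first 4 bytes of the
# Keccak-256 hash of that signature (the selector)
selector = Web3.keccak(text=signature)[:4].hex()
print(selector)  # '0xa9059cbb' — the well-known ERC-20 transfer selector

Tools like 4byte.directory invert selectors by dictionary lookup over known signatures, which is the closest you can get without the source code.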
I'm trying to store the SOAP input request (a SoapUI request) in a database for logging, using the ESQL language. I'm a noob in ESQL.
My flow is SOAP Input ==> Compute Node ==> SOAP Reply.
I have no idea how to do this. Please help.
Not sure if you still require this or have already found a solution, but I thought I'd post anyway.
This is something that has been quite common in several places I have worked. The way we tended to achieve this was by casting the incoming message to a bitstream and then casting that to a character string:
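-- Serialize the logical input message tree back to its wire-format bytes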
DECLARE blobInputMsg BLOB ASBITSTREAM(InputBody CCSID 1208 ENCODING 546);
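-- Reinterpret those bytes as a character string that can be written to the database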
DECLARE charInputMsg CHAR CAST(blobInputMsg AS CHARACTER CCSID 1208 ENCODING 546);
The CCSID and ENCODING should be taken from the incoming message e.g. InputProperties.CodedCharSetId and InputProperties.Encoding, or defaulted to values suitable for your interfaces.
Have a go at Monitoring. Do the step by step stuff outlined here.
https://www.ibm.com/developerworks/community/blogs/546b8634-f33d-4ed5-834e-e7411faffc7a/entry/auditing_and_logging_messages_using_events_in_ibm_integration_bus_message_broker?lang=en
Be careful with the subscription in MQ as things get concatenated. Use MQExplorer to check your subscription including topic after you've defined it.
Also make sure you run the IIB queue definition scripts as per the install instructions for your version as one of the MQSC commands defines the topic.
Use a separate flow to write the events to your DB. Note that these days, on Unix systems, I'd probably write them to syslog and use ELK or Splunk instead.
We want to send some events to Application Insights with data showing which features a user owns and which are available for the session. These are variable, and the list of items will probably grow and change as we continue deploying updates. Currently we do this by building a list of properties dynamically at start-up, with values of Available/True.
Since AI formats each event's data as JSON, we thought it would be interesting to send our custom data through as JSON so it can be processed in a similar fashion. Having tried that, though, we bumped into an issue where AI appears to add escape characters to the strings:
E.g. if we send a property through with JSON like:
{"Property":[{"Value1"},..]}
It gets saved in AI as:
{\"Property\":[{\"Value1\"},..]} ).
Has anyone successfully sent custom JSON to AI, or is the platform specifically trying to safeguard against such usage? In our case, where we parse the data out in Power BI, being able to send a JSON array would greatly simplify and speed up some queries.
AI treats custom properties as strings, so you'd have to stringify any JSON you want to send (and keep it under the length limit for custom property values), then re-parse it on the other side.
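For example, with the (now-legacy) applicationinsights Python SDK — the package choice and key here are assumptions, and the same idea applies to the other SDKs — the property goes in as a serialized string:

import json
from applicationinsights import TelemetryClient

tc = TelemetryClient("<instrumentation-key>")  # placeholder key
features = {"FeatureA": True, "FeatureB": True}

# Custom properties only hold strings, so serialize the JSON yourself...
tc.track_event("SessionStarted", {"features": json.dumps(features)})
tc.flush()

On the query side you can then re-parse it, e.g. with parse_json()/todynamic() in the Analytics (Kusto) query language, before handing the result to Power BI.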