I was reading through the Geth docs and noticed that they mention traces. The docs cover when traces occur and mention that log entries are created any time there is a trace.
The simplest type of transaction trace that Geth can generate are raw EVM opcode traces.
For every VM instruction the transaction executes, a structured log entry is emitted,
containing all contextual metadata deemed useful. This includes the program counter,
opcode name, opcode cost, remaining gas, execution depth and any occurred error.
Are these logs different than the event logs emitted from the LOG opcode? Which opcodes result in traces? Can anyone provide some clarity on the logs created from opcodes and the LOG opcode?
The structured logs it refers to are different from the event logs emitted by the LOG opcodes. The text describes profiling information that shows the internals of the EVM during execution. Using the command-line evm tool provided with Geth you can see this quite easily. For example, a command like this:
evm --code="0x60006000f3" --json run
This produces the following trace information:
{"pc":0,"op":96,"gas":"0x2540be400","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH1"}
{"pc":2,"op":96,"gas":"0x2540be3fd","gasCost":"0x3","memSize":0,"stack":["0x0"],"depth":1,"refund":0,"opName":"PUSH1"}
{"pc":4,"op":243,"gas":"0x2540be3fa","gasCost":"0x0","memSize":0,"stack":["0x0","0x0"],"depth":1,"refund":0,"opName":"RETURN"}
{"output":"","gasUsed":"0x6","time":76459}
Here you can see information about the state of the EVM as it executes the bytecode program, such as its current pc value, stack contents, etc. With the evm tool you can also include other information in the trace, such as memory contents via the --nomemory=false flag.
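For instance, memory could be inspected with a command along these lines (the bytecode here is a made-up example that stores 0x42 in memory and then returns it, so each log entry has memory contents to show):
evm --code="0x604260005260206000f3" --json --nomemory=false run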
Related
I am running a private geth node and I am wondering if there is any way to find the root cause of a transaction exception. When I send the transaction, all I can see is:
transaction failed [ See:
https://links.ethers.org/v5-errors-CALL_EXCEPTION ]
And when I run the same transaction in hardhat network, I get more details:
VM Exception while processing transaction: reverted with panic code
0x11 (Arithmetic operation underflowed or overflowed outside of an
unchecked block)
Is it possible to get the same info from my geth node?
The revert reason is extracted by replaying the transaction; see the example implementation. This sets requirements on what data your node must store in order to be able to replay the transaction. Check your node configuration and your detailed use case for further diagnosis.
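As a rough illustration of the replay approach (not the exact implementation linked above; the addresses, calldata and block number below are placeholders), you can re-execute the failing call with eth_call against the block in which it failed. On a Geth node that supports EIP-838, the revert data typically comes back in the JSON-RPC error response:
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{"from":"0xYourSender","to":"0xYourContract","data":"0xYourCalldata"},"0xYourBlockNumber"]}' http://localhost:8545
A Solidity panic such as 0x11 then shows up in that revert data ABI-encoded as Panic(uint256), i.e. the selector 0x4e487b71 followed by the panic code.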
Your node must support EIP-140 and EIP-838. This has been the case for many years now, so it is unlikely your node does not support this.
Unless a smart contract explicitly reverts with a reason, the JSON-RPC error messages for default reverts (e.g. a non-payable function called with value, math errors) depend on the node type and may vary across different nodes.
Hardhat internally uses its own simulated node (Hardhat Network), not Go Ethereum (Geth).
I am developing a UEFI application for ARM64 (ARMv8-A) and I have come across this issue: "Synchronous Exceptions at 0xFF1BB0B8."
This value (0x0FF1BB0B8) is in the exception link register (ELR). The ELR holds the exception return address.
There are a number of sources of Synchronous exceptions (https://developer.arm.com/documentation/den0024/a/AArch64-Exception-Handling/Synchronous-and-asynchronous-exceptions):
Instruction aborts from the MMU. For example, by reading an
instruction from a memory location marked as Execute Never.
Data Aborts from the MMU. For example, Permission failure or alignment checking.
SP and PC alignment checking.
Synchronous external aborts. For example, an abort when reading a translation table.
Unallocated instructions.
Debug exceptions.
I can't update the BIOS firmware to output more debug information. Is there any way to detect more precisely what causes the Synchronous Exception?
Can I use the value of ELR (0x0FF1BB0B8) to locate the issue? I compile with -fno-pic -fno-pie options.
While studying exception handling in AArch64, I found no information about exception prioritization between synchronous and asynchronous exceptions.
So when a synchronous and an asynchronous exception occur at the same time, what will the processor do?
Is the detection of asynchronous exceptions (interrupts) done only after executing an instruction? If so, is it then impossible to receive both kinds of exception at the same time? Is that right?
The specification handles this in a way that doesn't really allow for concurrency.
From section D1.13.4 of the manual, "Prioritization and recognition of interrupts":
Any interrupt that is pending before a Context synchronization event in the following list, is taken before the first instruction after the context synchronizing event, provided that the pending interrupt is not masked:
Execution of an ISB instruction.
Exception entry, if FEAT_ExS is not implemented, or if FEAT_ExS is implemented and the appropriate SCTLR_ELx.EIS bit is set.
[...]
So it essentially asks the question "is there a pending interrupt by the time exception entry happens?", to which the answer is either yes or no, which implicitly gives rise to a sequential order.
There is one exception to that though:
If the first instruction after the context synchronizing event generates a synchronous exception, then the architecture does not define whether the PE takes the interrupt or the synchronous exception first.
And it further has this to say:
In the absence of a specific requirement to take an interrupt, the architecture only requires that unmasked pending interrupts are taken in finite time.
Apart from the above, implementations are free to do whatever.
If I register a callback via cudaStreamAddCallback(), what thread is going to run it?
The CUDA documentation says that cudaStreamAddCallback
adds a callback to be called on the host after all currently enqueued items in the stream have completed. For each cudaStreamAddCallback call, a callback will be executed exactly once. The callback will block later work in the stream until it is finished.
but says nothing about how the callback itself is called.
Just to flesh out comments so that this question has an answer and will fall off the unanswered queue:
The short answer is that this is an internal implementation detail of the CUDA runtime and you don't need to worry about it.
The longer answer is that if you look carefully at the operation of the CUDA runtime, you will notice that context establishment on a device (be it explicit via the driver API, or implicit via the runtime API) spawns a small thread pool. It is these threads which are used to implement features of the runtime like stream command queues and call back operations. Again, an internal implementation detail which the programmer doesn't need to know about.
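If you want to see this for yourself, a minimal sketch like the one below (the kernel, stream and names are made up for illustration) queues a trivial kernel, registers a callback with cudaStreamAddCallback, and prints the thread id from both the main thread and the callback. On typical CUDA runtimes the two ids differ, which is consistent with the callback running on one of those internal runtime threads:

#include <cstdio>
#include <sstream>
#include <thread>
#include <cuda_runtime.h>

// Trivial kernel so the stream has some work queued ahead of the callback.
__global__ void busyKernel() { }

// Signature required by cudaStreamAddCallback.
void CUDART_CB myCallback(cudaStream_t stream, cudaError_t status, void *userData) {
    std::ostringstream id;
    id << std::this_thread::get_id();
    std::printf("callback ran on thread %s (status %d)\n", id.str().c_str(), (int)status);
}

int main() {
    std::ostringstream id;
    id << std::this_thread::get_id();
    std::printf("main thread is %s\n", id.str().c_str());

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    busyKernel<<<1, 1, 0, stream>>>();
    cudaStreamAddCallback(stream, myCallback, nullptr, 0);

    cudaStreamSynchronize(stream);  // waits for the kernel and the callback
    cudaStreamDestroy(stream);
    return 0;
}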
From Computer Organization and Design, by Patterson et al
Why is "I/O device request" external interrupt?
Does "I/O device request" mean that a user program request I/O device services by system calls? If yes, isn't a system call an internal exception?
Thanks.
It's referring to peripheral devices signaling that they require attention, e.g. disk controller hardware that is now ready to satisfy a read request it received earlier (or has finished DMAing in the data for the read request).
The path into the operating system is an array of pointers. This array may have different names depending upon the system; I will call it the "dispatch table." The dispatch table handles everything that needs the attention of the operating system: interrupts, faults, and traps. The last two are collectively "exceptions".
An exception is caused by executing an instruction. Exceptions are synchronous.
An interrupt is caused by something occurring outside the executing process/thread.
A user invokes the operating system synchronously by executing an instruction that causes a trap (on Intel chips such a trap is misleadingly called a "software interrupt"). Such an event is a synchronous, predictable result of the instruction stream.
Such a trap would be used to queue an I/O request to the device. That is the "Invoke the operating system from user program" entry in your table.
The device would cause an interrupt when the request is completed. That is what is meant by an "I/O device request" in your table.
The confusion is that interrupts, faults and traps are all handled the same way by the operating system through the dispatch table. And, as I said, in Intel land they call both traps and interrupts "Interrupts".
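As a rough sketch of that shared dispatch mechanism (the vector numbers and handler names below are invented for illustration; real kernels differ in the details), the dispatch table is conceptually just an array of handler function pointers indexed by a vector number:

#include <cstdio>

// One handler signature for everything that enters the OS this way.
typedef void (*handler_t)(int vector);

static void default_handler(int vector) { std::printf("unhandled vector %d\n", vector); }
static void timer_interrupt(int)        { std::printf("timer interrupt\n"); }
static void syscall_trap(int)           { std::printf("system call trap\n"); }
static void page_fault(int)             { std::printf("page fault\n"); }

// The "dispatch table": interrupts, faults and traps all funnel through it.
static handler_t dispatch_table[256];

static void init_dispatch_table() {
    for (int i = 0; i < 256; ++i) dispatch_table[i] = default_handler;
    dispatch_table[0x20] = timer_interrupt;  // example vector numbers only
    dispatch_table[0x80] = syscall_trap;
    dispatch_table[0x0e] = page_fault;
}

// Hardware (or a small assembly stub) supplies the vector number; the OS
// simply indexes the table and calls whatever handler is registered there.
void dispatch(int vector) { dispatch_table[vector](vector); }

int main() {
    init_dispatch_table();
    dispatch(0x80);  // simulate a user program trapping into the OS
    dispatch(0x21);  // simulate an interrupt with no registered handler
}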
Because the interrupt isn't generated by the processor or the program. It is a physical wire connected to the interrupt controller whose state changes. Driven by the controller for the device, external to the processor. The interrupt handler is usually located in a driver that knows how to handle the device controller's request for service.
"Invoke the operating system" is a software interrupt, usually switches the processor into protected mode to handle the request.
"Arithmetic overflow" is typically a trap that's generated by the floating point unit on the processor.
"Using an undefined instruction" is another trap, generated by the processor itself when it can't execute code anymore because the instruction is invalid.
Processors usually have more traps like that: division by zero, executing a privileged instruction, a page fault when virtual memory isn't yet mapped to physical memory, or a protection fault when the program accesses memory it is not allowed to.