Does flattening smart contracts reduce deployment costs?

I want to reduce deployment cost of my ERC-721A contract.
In general, does flattening a smart contract help reduce its cost?
Currently I am using ethers.js's contractFactory.deploy method with Hardhat.

TLDR: No, it doesn't reduce the cost.
LONG:
Contract deployment is the process of sending a transaction to the zero address (0x0000...00) with the data field filled with the contract's bytecode. The bytecode is generated by the solc compiler and does not depend on how the source code is formatted. Flattening merely puts all dependencies into a single file. It doesn't change the size of the bytecode, and therefore doesn't reduce the size of the transaction, which determines how much gas the deployment needs.
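As a quick sanity check, here is a sketch for a Hardhat project using the hardhat-ethers plugin (the contract names MyToken and MyTokenFlat and the script path are hypothetical). One caveat: solc appends a metadata hash to the bytecode that encodes the source file layout, so the last few bytes can differ after flattening, but the code size, and hence the deployment gas, stays essentially the same.

// scripts/compare-bytecode.js (hypothetical path)
// Run with: npx hardhat run scripts/compare-bytecode.js
const hre = require("hardhat");

async function main() {
  // Both artifacts come out of the same solc compilation pipeline.
  const original = await hre.ethers.getContractFactory("MyToken");
  const flattened = await hre.ethers.getContractFactory("MyTokenFlat");

  // The creation bytecode is what fills the deployment transaction's
  // data field; equal size means essentially equal deployment gas.
  console.log("original bytes: ", (original.bytecode.length - 2) / 2);
  console.log("flattened bytes:", (flattened.bytecode.length - 2) / 2);
}

main().catch(console.error);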

The compiled contract has the same bytecode whether you flatten it or not.
So no, unfortunately, flattening will not bring any cost reduction.
You can use the Remix IDE to inspect the bytecode, assembly, and deployment cost of your contracts.


Some total newbie questions on NFT and Ethereum

I'm interested in the conceptual topic of creating rights management systems on the Ethereum blockchain, with digital assets represented by NFTs.
I am just reading up on how to write programs that run on Ethereum, but I have some very basic questions just to get started.
I read that NFTs are created on the Ethereum blockchain. I don't really understand if that is the same blockchain on which the currency Ether is maintained? It seems like the ledger will become impossibly large huge if every currency transaction and every digital asset (and copy thereof) that migrates to Ethereum is stored in one single giant ledger, and each miner on the chain has to download the entire ledger to a single machine in order to validate transactions? Have I got a big misunderstanding there? I know there is talk about "sharding" in the future, but it seems like that isn't coming very soon.
Cost of running a smart contract on the blockchain? Assuming we are talking about the same blockchain, from what I can see the price of "gas" is quite high. I'm reading that an ETH transfer from one party to another costs 21,000 gas, about $0.03 today. Just trying to understand the basics, how much does it cost to create an NFT? And roughly how much does it cost to execute a simple function on the blockchain (without loops)? Let's say the equivalent of a five-statement function which takes a few simple params, reads a few blocks, doesn't write to the blockchain, but just performs some simple math and a few if statements and returns a string? Does that also cost, like, more than a penny? Is the ETH2 switch from proof of work to proof of stake going to bring those costs down by orders of magnitude?
Any good resources or references on how to write programs which create and manipulate NFTs on Ethereum? Most of what I have seen in the bookstores seems to cover financial transactions with Ether.
Yes, it's the same blockchain.
You can see in the stats that a full node (which stores the current state) currently takes about 400 GB, and an archive node (which stores historical states as well) takes about 6.6 TB.
My observation is that most web apps using blockchain data don't verify it themselves and instead trust a third-party service running a node (such as Infura). And I believe that most end users or businesses who want or need to verify usually have the capacity to store 400+ GB and are able to scale.
But if this amount of data is okay or "impossibly large huge", I'll leave that to your decision. :)
Deployment of a token smart contract usually costs between 500k and 3M gas. My estimate is that most token contracts with basic features, compiled with the optimizer on, cost around 1M gas to deploy. With current prices of ~200 Gwei/gas and $1800/ETH, that's about $350. But I remember that just a few months ago the average gas price was ~20 Gwei and ETH cost $500, which works out to around $10. So yes, the cost of deploying a contract is very volatile.
A simple function that performs validations and transformations in memory is going to cost the base 21k plus a few hundred gas. (Working with memory data is cheap gas-wise; accessing storage is much more expensive.) So at current prices that's around $7; a few months ago it could have been $0.25.
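(Worth adding: if such a read-only function is marked view or pure and called off-chain through a node's eth_call, it costs no gas at all; the 21k base applies only when it runs inside a transaction.) A minimal Solidity sketch of the cost difference, with illustrative rather than exact numbers:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract GasDemo {
    uint256 public stored;

    // Pure math on the arguments: inside a transaction this costs roughly
    // the 21k base plus a few hundred gas; via eth_call it is free.
    function classify(uint256 a, uint256 b) external pure returns (string memory) {
        if (a + b > 100) return "big";
        return "small";
    }

    // A single storage write adds thousands of gas (20k for a
    // zero-to-nonzero SSTORE), dwarfing the arithmetic around it.
    function store(uint256 a) external {
        stored = a;
    }
}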
As for the question whether ETH2.0 is going to bring lower gas prices: my opinion is that L2 (which should be released earlier than PoS) is going to have some effect on the price, since it allows for sidechain transactions (similar to the Lightning Network on Bitcoin). But this is a development forum, so I'm not going to dive deeper into price speculation.
I recommend the OpenZeppelin docs, where they cover their open-source implementations of ERC standards (including ERC-721 NFTs), or googling the topic you're interested in and reading articles that catch your eye (at least that's my current approach).
And if you're new to Solidity in general, I recommend at least a few chapters of the CryptoZombies tutorial. In my opinion the first few chapters are great and you'll learn a lot, but then the quality slowly fades.

What is the use of task graphs in CUDA 10?

CUDA 10 added runtime API calls for putting streams (= queues) into "capture mode", so that instead of executing, the work submitted to them is recorded into a "graph". These graphs can then be instantiated and executed, or they can be cloned.
But what is the rationale behind this feature? Isn't it unlikely that you would execute the same "graph" twice? After all, even if you do run the "same code", at least the data is different, i.e. the parameters the kernels take likely change. Or am I missing something?
PS - I skimmed this slide deck, but still didn't get it.
My experience with graphs is indeed that they are not so mutable. You can change the parameters with 'cudaGraphHostNodeSetParams', but in order for the change of parameters to take effect, I had to rebuild the executable graph with 'cudaGraphInstantiate'. This call takes so long that any gain from using graphs was lost (in my case).

Setting the parameters only worked for me when I built the graph manually. When getting the graph through stream capture, I was not able to set the parameters of the nodes, as you do not have the node pointers. You would think the call 'cudaGraphGetNodes' on a stream-captured graph would return the nodes, but the node pointer returned was NULL for me, even though the 'numNodes' variable had the correct number. The documentation explicitly mentions this as a possibility but fails to explain why.
Task graphs are quite mutable.
There are API calls for changing/setting the parameters of task graph nodes of various kinds, so one can use a task graph as a template: instead of enqueueing the individual nodes before every execution, one changes the parameters of every node before every execution (and perhaps not all nodes actually need their parameters changed).
For example, see the documentation for cudaGraphHostNodeGetParams and cudaGraphHostNodeSetParams.
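A minimal sketch of that template pattern, using runtime API signatures from the CUDA 10/11 era (cudaGraphExecKernelNodeSetParams landed shortly after the initial CUDA 10 release): build the graph manually so you keep the node handle, instantiate once, then patch the kernel argument between launches without re-instantiating.

#include <cuda_runtime.h>

__global__ void scale(float *data, float factor) {
    data[threadIdx.x] *= factor;
}

int main() {
    float *d_data;
    cudaMalloc(&d_data, 32 * sizeof(float));

    float factor = 1.0f;
    void *args[] = { &d_data, &factor };

    cudaKernelNodeParams p = {};
    p.func = (void *)scale;
    p.gridDim = dim3(1);
    p.blockDim = dim3(32);
    p.kernelParams = args;

    // Building manually (rather than via stream capture) keeps the
    // node handle, which is what the Set/GetParams calls need.
    cudaGraph_t graph;
    cudaGraphNode_t node;
    cudaGraphCreate(&graph, 0);
    cudaGraphAddKernelNode(&node, graph, nullptr, 0, &p);

    // Instantiate once; this is the expensive step.
    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    for (int i = 0; i < 10; ++i) {
        factor = 1.0f + i;  // new argument value for this launch
        // Patch the instantiated graph in place: no re-instantiation.
        cudaGraphExecKernelNodeSetParams(exec, node, &p);
        cudaGraphLaunch(exec, stream);
        cudaStreamSynchronize(stream);
    }

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d_data);
}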
Another useful feature is concurrent kernel execution. In manual mode, one can add nodes to the graph with dependencies, and the runtime will exploit the available concurrency automatically using multiple streams. The feature itself is not new, but making it automatic is useful for certain applications.
When training a deep learning model, it often happens that you re-run the same set of kernels in the same order but with updated data. Also, I would expect CUDA to perform optimizations by knowing statically what the next kernels will be. One can imagine that CUDA could fetch more instructions or adapt its scheduling strategy when it knows the whole graph.
CUDA Graphs tries to solve the problem that, in the presence of many small kernel invocations, quite some time is spent on the CPU dispatching work for the GPU (launch overhead).
It allows you to trade resources (time, memory, etc.) to construct a graph of kernels that you can then run with a single invocation from the CPU instead of many. If you don't have enough invocations, or your algorithm is different each time, then building a graph won't be worth it.
This works really well for anything iterative that uses the same computation underneath (e.g., algorithms that need to converge to something) and it's pretty prominent in a lot of applications that are great for GPUs (e.g., think of the Jacobi method).
You are not going to see great results if you have an algorithm that you invoke only once, or if your kernels are big; in that case the CPU invocation overhead is not your bottleneck. A succinct explanation of when you need it is in the Getting Started with CUDA Graphs post.
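A self-contained sketch of that iterative pattern (the kernels step1/step2 are hypothetical stand-ins; the capture API shown is the CUDA 10.1+ form): capture the fixed launch sequence once, then each iteration costs a single cudaGraphLaunch on the CPU side, no matter how many kernels the graph contains.

#include <cuda_runtime.h>

__global__ void step1(float *buf) { buf[threadIdx.x] += 1.0f; }
__global__ void step2(float *buf) { buf[threadIdx.x] *= 0.5f; }

int main() {
    float *d_buf;
    cudaMalloc(&d_buf, 64 * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Record the fixed launch sequence once instead of re-issuing
    // every kernel from the CPU on every iteration.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    step1<<<1, 64, 0, stream>>>(d_buf);
    step2<<<1, 64, 0, stream>>>(d_buf);
    cudaStreamEndCapture(stream, &graph);

    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

    // Replay: one CPU-side call per iteration.
    for (int i = 0; i < 1000; ++i)
        cudaGraphLaunch(exec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d_buf);
}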
Where task-graph-based paradigms shine, though, is when you define your program as tasks with dependencies between them. You give a lot of flexibility to the driver / scheduler / hardware to do the scheduling itself without much fine-tuning on the developer's part. There's a reason we have been spending years exploring the ideas of dataflow programming in HPC.

Chisel Output with SystemVerilog Interfaces/Structs

I'm finding that when generating Verilog output from the Chisel framework, all of the 'structure' defined in Chisel is lost at the interface.
This is problematic for instantiating this work in larger SystemVerilog designs.
Are there any extensions or features in Chisel to support this better? For example, automatically converting Chisel "Bundle" objects into SystemVerilog 'struct' ports.
Or creating SV enums, when the Chisel code is written using the Enum class.
Currently, no. However, both suggestions sound like very good candidates for discussion for future implementation in Chisel/FIRRTL.
SystemVerilog Struct Generation
Most Chisel code instantiated inside Verilog/SystemVerilog will use some interface wrapper that converts the signal names the instantiator wants to use into Chisel-friendly names. As one example, see AcceleratorWrapper. That instantiates a specific accelerator and makes the connections to the Verilog names the instantiator expects. You can't currently do this with SystemVerilog structs, but you could accomplish the same thing with a SystemVerilog wrapper that maps the SystemVerilog structs to deterministic Chisel names. This is the same type of problem/solution that most people encounter/solve when integrating external IP in a project.
Kludges aside, what you're talking about is possible in the future...
Some explanation is necessary as to why this is complex:
Chisel is converted to FIRRTL. FIRRTL is then lowered to a reduced subset of FIRRTL called "low" FIRRTL. Low FIRRTL is then mapped to Verilog. Part of this lowering process flattens all bundles using uniquely determined names (typically a.b.c will lower to a_b_c, but names are uniquified if the lowering would cause a namespace conflict). Verilog has no support for structs, so this has to happen. Additionally, and more critically, some optimizations happen at the low FIRRTL level, like constant propagation and dead code elimination, that are easier to write and handle there.
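For a concrete picture of the flattening, a minimal sketch (the module is hypothetical; the emitted port names follow the a.b.c to a_b_c convention described above):

import chisel3._

// A nested Bundle on the Chisel side...
class Request extends Bundle {
  val addr  = UInt(32.W)
  val valid = Bool()
}

class MyModule extends Module {
  val io = IO(new Bundle {
    val req = Input(new Request)
    val out = Output(UInt(32.W))
  })
  io.out := Mux(io.req.valid, io.req.addr, 0.U)
}

// ...lowers to flat Verilog ports along the lines of:
//   input  [31:0] io_req_addr,
//   input         io_req_valid,
//   output [31:0] io_out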
However, SystemVerilog (or some other language that a FIRRTL backend targets and that supports non-flat types) would benefit from using the features of that language to produce more human-readable output. There are two general approaches to rectifying this:
Lowered types retain information about how they were originally constructed via annotations, and the SystemVerilog emitter reconstructs them. This seems inelegant due to lowering and then un-lowering.
The SystemVerilog emitter uses a different sequence of FIRRTL transforms that does not go all the way to low FIRRTL. This would require some of the optimizing transforms run on low FIRRTL to be rewritten to work on higher forms. This is tractable, but hard.
If you want some more information on what passes are run during each compiler phase, take a look at LoweringCompilers.scala
Enumerated Types
What you mention for Enum is planned for the Verilog backend. The idea here was to have Enums emit annotations describing what they are. The Verilog emitter would then generate localparams. The preliminary work for annotation generation was added as part of StrongEnum (chisel3#885/chisel3#892), but the annotations portion had to be later backed out. A solution to this is actively being worked on. A subsequent PR to FIRRTL will then augment the Verilog emitter to use these. So, look for this going forward.
On Contributions and Outreach
For questions like this with (currently) negative answers, feel free to file an issue on the respective Chisel3 or FIRRTL repository. And even better than that is an RFC followed by an implementation.

Is this a clean way to withdraw from a contract in Solidity?

I have been looking high and low for how to withdraw the funds from an Ethereum contract, to no avail. The Remix editor is giving a warning that this function may cause an infinite loop.
Gas requirement of function KOTH.cleanTheKingsChest() high: infinite. If the gas requirement of a function is higher than the block gas limit, it cannot be executed. Please avoid loops in your functions or actions that modify large areas of storage (this includes clearing or copying arrays in storage)
And...
Should I use Open-Zeppelin's safe math for this function?
function cleanTheKingsChest() public isOwner {
    uint bal = address(this).balance;
    address(owner).transfer(bal);
}
This will transfer all ether held by the contract to the owner's address. There is no issue with the way you're doing it.
The reason for the warning is that you are making a call out to another address. That address could itself be a contract whose fallback function runs arbitrary code. Since Remix doesn't know what that implementation may do, it can't estimate the gas usage. This isn't a concern here, since transfer calls are limited to a 2300 gas stipend.
You don't need SafeMath for this function since you're not doing anything that can cause an overflow. However, in general, it's a good idea to use it.
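To see why the stipend makes the warning harmless, consider a hypothetical recipient contract whose fallback tries to do real work; with only 2300 gas forwarded, the storage write below costs more than the stipend, so the transfer simply reverts instead of running away:

pragma solidity ^0.4.24; // pre-0.5 style, matching the question's code

contract GreedyReceiver {
    uint256 counter;

    // If the owner address pointed at a contract like this, transfer()
    // would invoke this fallback with the 2300-gas stipend, which is
    // not enough for the SSTORE below, so the transfer would revert.
    function () public payable {
        counter += 1;
    }
}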

How come the macro is used as a function, but is not implemented anywhere?

The following code is in MySQL 5.5 storage/example/ha_example.cc:
MYSQL_READ_ROW_START(table_share->db.str, table_share->table_name.str, TRUE);
rc= HA_ERR_END_OF_FILE;
MYSQL_READ_ROW_DONE(rc);
I searched for the MYSQL_READ_ROW_START definition in the whole project, and found it in include/probes_mysql_nodtrace.h:
#define MYSQL_READ_ROW_START(arg0, arg1, arg2)
#define MYSQL_READ_ROW_START_ENABLED() (0)
#define MYSQL_READ_ROW_DONE(arg0)
#define MYSQL_READ_ROW_DONE_ENABLED() (0)
It is just an empty macro definition here.
My question is: how come this macro MYSQL_READ_ROW_START is not associated with any function, but is used as a function in the above code?
Thanks.
These aren't traditional macros: they're probe points for DTrace, an observability framework for Solaris, OS X, FreeBSD and various other operating systems.

DTrace revolves around the notion that different providers offer certain probes at which one can observe running executables or even the operating system itself. Some providers are time-based; by firing at regular intervals the probes can, for example, be used to profile the use of a CPU. Other providers are code-based, and their probes might, for example, fire at the entrance to and exit from functions.

The code you highlight is an example of the USDT (User-land Statically Defined Tracing) provider. The canonical use of the USDT provider is to expose meaningful events within transactions. For example, the beginning and end of a transaction might well occur somewhere deep within different functions; in this case it's best for the developer to identify exactly what he wants to reveal and when.

A USDT probe is more than a switchable printf(), although it can of course be used to reveal information, e.g. some local value such as the intermediate result of a transaction. A USDT probe can also be used to trigger behaviour. For example, one might want to activate some network probes for only the duration of a certain transaction.

Returning to your question, USDT probes are implemented by writing macros in the code that correspond to a description of the provider in a ".d" file elsewhere. This is parsed by the dtrace(1) utility, which generates a header file that is suitable for compilation. On a system that lacks DTrace it would make sense to define a header file in which the USDT macros become no-ops, and judging by the given filename (probes_mysql_nodtrace.h) this is what you are observing.
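A minimal sketch of how such probes are wired up (the provider name myapp and the probe names are hypothetical). The probes are described in a .d file, which the dtrace(1) utility's -h mode turns into the header whose macros appear in the C code; a build without DTrace substitutes a stub header with empty macros, exactly like probes_mysql_nodtrace.h:

/*
 * myapp_probes.d, parsed with: dtrace -h -s myapp_probes.d
 *
 *   provider myapp {
 *       probe transaction__start(int);
 *       probe transaction__done(int, int);
 *   };
 *
 * That generates myapp_probes.h, which defines the macros
 * MYAPP_TRANSACTION_START() and MYAPP_TRANSACTION_DONE().
 */
#include "myapp_probes.h"

int process_transaction(int id) {
    MYAPP_TRANSACTION_START(id);   /* no-op unless a DTrace consumer attaches */
    int result = id * 2;           /* ...the actual transaction work... */
    MYAPP_TRANSACTION_DONE(id, result);
    return result;
}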
See http://dev.mysql.com/tech-resources/articles/getting_started_dtrace_saha.html.
To quote:
DTrace probes are implemented by kernel modules called providers, each of which performs a particular kind of instrumentation to create probes. Providers can thus be described as publishers of probes that can be consumed by DTrace consumers (see below). Providers can be used for instrumenting kernel and user-level code. For user-level code, there are two ways in which probes can be defined: User-Level Statically Defined Tracing (USDT) or the PID provider.
So it appears to be up to DTrace providers to implement such a macro.