I need to set up Ethereum for my thesis work, and I need to use smart contracts too. The Ethereum network should be private so that I can make changes. Please give me a step-by-step solution.
Related
I want to check my assumptions on things that I didn't fully understand during the deployment of the Gnosis Safe contracts on an EVM-based chain.
I would appreciate it if you could help me verify my assumptions about the deployment.
The three steps below are needed to complete the Safe deployment:
Make a request for a new deployment at https://github.com/safe-global/safe-singleton-factory.
Deploy the Safe contracts on a custom network.
Add the newly supported network to the safe-deployments repository located at https://github.com/safe-global/safe-deployments.
The purpose of the first step is to employ a deterministic deployment proxy which allows the contracts' addresses to be predetermined.
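To make sure I understand the "predetermined address" part, here is a minimal sketch of how a CREATE2-style factory makes an address predictable (ethers v5; the factory address, salt, and creation bytecode below are placeholders, not the real Safe singleton factory values):

const { ethers } = require("ethers");

// Placeholders only - substitute the real factory address, salt, and the
// contract's creation bytecode.
const factoryAddress = "0x0000000000000000000000000000000000000001";
const salt = ethers.constants.HashZero;
const initCode = "0x6080604052"; // shortened placeholder creation bytecode

// CREATE2: address = last 20 bytes of keccak256(0xff ++ factory ++ salt ++ keccak256(initCode)),
// so the address depends only on these inputs and comes out the same on every chain.
const predicted = ethers.utils.getCreate2Address(
  factoryAddress,
  salt,
  ethers.utils.keccak256(initCode)
);

console.log("Contract will be deployed at:", predicted);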
The second step requires having coins from the custom network, and this is the only purpose for adding the MNEMONIC to the .env file.
Format of the MNEMONIC variable in the .env file is:
MNEMONIC="antique assume recycle glance agent habit penalty forum behave danger crop weekend"
The only purpose of including ETHERSCAN_API_KEY in .env is to publish (verify) the Safe contracts' source code on Etherscan.
Below is something that I cannot even begin to guess the purpose of:
What is the purpose of the third step? Is the purpose of this to document the deployments of the custom networks?
You got it right. Adding your deployment to that repository will inform everyone that your chain has the Gnosis Safe singleton contract.
This repository is associated with an npm package, which the Gnosis Safe SDK depends on. This means that after adding your network, the SDK will be able to deploy and use the contracts on your chain.
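For completeness: until your network is listed there, the SDK can still be pointed at your own deployment by passing the contract addresses explicitly. A rough sketch with safe-core-sdk, where the option and field names are as I recall them from the SDK docs and may differ between versions, and every address and the chain id are placeholders:

const Safe = require("@gnosis.pm/safe-core-sdk").default;

async function connectToSafe(ethAdapter, safeAddress) {
  // Keyed by the chain id of your custom network (placeholder value).
  const contractNetworks = {
    "12345": {
      safeMasterCopyAddress: "0x0000000000000000000000000000000000000001",
      safeProxyFactoryAddress: "0x0000000000000000000000000000000000000002",
      multiSendAddress: "0x0000000000000000000000000000000000000003",
    },
  };

  // Without this option the SDK looks the addresses up via safe-deployments,
  // which is exactly what the third step enables.
  return Safe.create({ ethAdapter, safeAddress, contractNetworks });
}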
OpenZeppelin's Initializable suggests using an initialize method (marked with the initializer modifier). Hardhat Upgrades suggests using an initialize method (the name can actually be anything) and invoking it at deployment/upgrade time. Does this mean the two are the same thing and just achieve the upgradeable contract pattern in two ways? Can we use both of these together?
This doubt came up while I was referring to both patterns in order to implement an upgradeable contract in my project.
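For context, this is the kind of deployment script I mean (a minimal sketch with @openzeppelin/hardhat-upgrades; "Box" and its initialize(uint256) function are just example names, not from my project):

// scripts/deploy.js
const { ethers, upgrades } = require("hardhat");

async function main() {
  const Box = await ethers.getContractFactory("Box");

  // deployProxy deploys the implementation plus a proxy and then calls
  // initialize(42) through the proxy; the initializer modifier (from
  // OpenZeppelin's Initializable) ensures it can only run once.
  const box = await upgrades.deployProxy(Box, [42], { initializer: "initialize" });
  await box.deployed();

  console.log("Proxy deployed to:", box.address);
}

main();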
I am taking a Udemy course and I encountered code like this:
https://github.com/acloudfan/Blockchain-Course-Basic-Solidity/blob/93ca256bcf8c436c144425291257dcff5c3b269f/test/constants_payable.js#L45
I am confused about why the method is called directly instead of using .call or something. If I google it, the way to call a method of a contract is either using .call or .send, but at this point the author just calls it directly. Is this allowed, and why?
Here is the contract code:
https://github.com/acloudfan/Blockchain-Course-Basic-Solidity/blob/master/contracts/ConstantsPayable.sol
More or less, what is the context of calling a smart contract method from a Truffle test here? Is it like the real environment, where it waits for the transaction to be mined before returning, or do tests just call it directly like an ordinary function?
I am posting it here since the author of the Udemy course is unresponsive; it has been almost a week and more than a dozen Q&A questions have gone unanswered, so the author is probably busy or has forgotten about the course (it is a fairly old course, though well reviewed).
Before Truffle returns the contract instance (line 41), it uses the ABI (produced by the Solidity compiler) to build a map of JS functions for interacting with the contract, including receiveEthers().
what is the context of calling smart contract method from a truffle test here
Even though Truffle JS tests can be connected to a public testnet or mainnet, Truffle is usually used together with another Truffle tool - a local EVM and blockchain emulator called Ganache (see the config file where the author defines the connection to a local blockchain). By default, Ganache mines a block after each transaction, so you (as a developer or a tester) don't need to worry about mining and the other processes involved in setting up a network, and the response from the local blockchain is returned almost instantly.
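For reference, a minimal truffle-config.js pointing tests at a local Ganache node looks roughly like this (127.0.0.1:8545 and network_id "*" are common defaults; the course's actual config may differ):

module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,      // Ganache CLI default; the Ganache GUI uses 7545
      network_id: "*", // match any network id
    },
  },
};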
if I do google, the way to call a method of a contract is either using .call or .send
Answering only about Truffle. Other packages such as Web3.js or Ethers.js might have slightly different rules. And there are also .call() and .send() methods in Solidity (for interacting with other contracts or addresses), which behave differently than explained here:
You can interact with a contract in two different ways:
transactions (can make state changes - change contract storage, emit events)
calls (only read the contract data - no state changes)
By default, if you don't specify whether you want to make a transaction or a call, Truffle makes a transaction. You can override this decision and make a call instead by using the .call() method.
The .send() method is only used for manually built, low-level transactions. A common use case is sending ETH - you need to build the transaction data field, fill in the (ETH) value, and call the .send() method (assuming you have configured Truffle to use your private key to sign the transaction).
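Here is a minimal test sketch showing the difference (it assumes receiveEthers() is payable, as the course contract suggests; the numReceives() getter is hypothetical and only there to illustrate a call):

const ConstantsPayable = artifacts.require("ConstantsPayable");

contract("ConstantsPayable", (accounts) => {
  it("distinguishes transactions from calls", async () => {
    const instance = await ConstantsPayable.deployed();

    // Transaction: state-changing by default; resolves with a receipt once
    // Ganache has mined the (instant) block.
    const receipt = await instance.receiveEthers({
      from: accounts[0],
      value: web3.utils.toWei("1", "ether"),
    });
    assert.ok(receipt.tx);

    // Call: read-only, returns the function's value and changes no state.
    const count = await instance.numReceives.call();
    assert.ok(count.toString().length > 0);
  });
});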
I am experimenting with Ethereum. I have successfully set up a private testnet via the instructions on the site. However, I am having trouble adding peers from different machines. On any node I create, the admin.nodeInfo.NodeUrl parameter is undefined. I have gotten the enode address by calling admin.nodeInfo, and when I try the admin.addPeer("enode://address") command (with the public IP), it returns true, but the peers are not listed when calling admin.peers.
I read on another thread (here) that the private testnet is only local, but I am seeing plenty of documentation that suggests otherwise (here and here). I have tried the second tutorial, adding the command-line flags for a custom network id and genesis block.
Any advice would be much appreciated. Please let me know if I can provide more details.
It is difficult to find in the available documentation but a key function is admin.addPeer().
https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console
There are a few ways you could do it, I suppose, but I have one node running on my local PC and one node running on a remote server. This saves me Ether while testing contracts and keeps me from polluting the Ethereum blockchain with junk. The key when running admin.addPeer() is to find the "enode" for each of the nodes, so that on one of the nodes you run something like: admin.addPeer("enode://<node-id>@<ip-address>:<port>"). If you run admin.peers and see something other than an empty list, you were probably successful. The main thing to check is that the enode ID and IP address from admin.peers match what you were expecting.
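In concrete terms, the flow in the geth JavaScript console looks like this (the node ID and IP address are placeholders):

// On node A: read its enode URL.
admin.nodeInfo.enode
// "enode://a1b2c3...@[::]:30303"

// On node B: add node A as a peer, substituting node A's public IP for [::].
admin.addPeer("enode://a1b2c3...@203.0.113.10:30303")

// Then check on either node that the peer shows up.
admin.peers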
The geth configuration settings are a little tricky as well. You will have to adapt them for your particular use, but here are some of the parameters I use:
geth --port XYZ --networkid XYZ --maxpeers X
Replace XYZ and X with the numbers you want to use and make sure you run the same parameters when starting both nodes. There could be more parameters involved, but that should get you pretty far.
Disclaimer: I'm new to Geth myself, as well as to using computers for anything more than Facebook, so take my answer with a grain of salt. Also, I haven't given you my full command line for starting up Geth because I'm not 100% sure which of the parameters are related to a private testnet and which are not. I've only given you the ones that I'm sure are related to running a private testnet.
Also, you may find that you can't execute any transactions while running a private testnet. That's because you need one of the nodes to start mining. So run miner.start(X) when you are ready to start deploying contracts.
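For example, in the geth console (the thread count is just an example value):

miner.start(1)   // start mining with one thread so transactions get included
eth.blockNumber  // should start increasing once mining is running
miner.stop()     // stop mining when you are done testing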
I apologize for this not being fully complete, but I'm just passing on my experience after spending 1-2 weeks trying to figure it out myself, because the documentation isn't fully clear on how to do this. I think it should be actively discouraged in the spirit of Ethereum, but in my case, I run one primarily so as not to pollute the blockchain.
PS. As I was just getting ready to hit submit, I found this, which also sheds more light:
connecting to the network
I have a question concerning the use of EasyMock in JUnit tests. We have configured a framework for JUnit tests which uses an in-memory Derby database and EasyMock to test our service project. We use in-memory Derby for the DAO layer completely. The question is whether to use EasyMock completely, or EasyMock and Derby together, in the service layer. Below is the scenario:
// class under test; it is in the user-service project
class ServiceClassUnderTest {

    private final IUserService userService;
    private final IAddressService addressService;

    ServiceClassUnderTest(IUserService userService, IAddressService addressService) {
        this.userService = userService;
        this.addressService = addressService;
    }

    public Address getUsersAddress(String id) {
        User user = userService.getUserById(id);
        // some logic goes here
        Address address = addressService.getAddressByUser(user);
        // some validations go here
        return address;
    }
}
The class under test is in the user-service project, and so is the IUserService interface, while the IAddressService interface is in the address-service project, which is used as a dependency in the user-service project.
Now the problem is in the change of approach suggested by some colleagues.
Approach we used to follow
Prepare test data for userService as it's in the same project, and mock addressService as it's part of a dependency project and we might not have much idea about its behaviour and table structure.
Advantage: a cleaner approach, as we have minimal mocking code and the test data is in separate SQL files.
Suggested approach
Mock all services, irrespective of whether they are in the same project or part of a dependency project.
Disadvantage: more mocking-related code than actual test-related code, making it difficult to maintain and compromising readability.
The code example given is only to explain the scenario, whereas in the real project we have a much more complex structure with several service beans in a single class.
Could you please give me your suggestions on which approach is better and why, considering the arguments I provided for both approaches?
A definitive answer is hard without having the complete big picture. Assuming you really want unit tests, I usually do this:
Test only the query done to the DB with an actual DB
Mock everything that is used by my tested class (see the sketch below).
This "everything" should be no more than 3 or 4 dependencies. Otherwise, I will refactor until I get something that is readable.
Having more test code than production code is normal.
If I end up having trivial code in my tested method, I just don't test it. However, a test can also be used to document. So this is a blurry line.