Things that I didn't understand during the deployment of the Gnosis Safe contracts on an EVM-based chain

I want to check my assumptions about things that I didn't fully understand during the deployment of the Gnosis Safe contracts on an EVM-based chain. I would appreciate it if you could help me verify them.
The three steps below are needed to complete the Safe deployment:
1. Make a request for a new deployment at https://github.com/safe-global/safe-singleton-factory.
2. Deploy the Safe contracts on the custom network.
3. Add the newly supported network to the safe-deployments repository at https://github.com/safe-global/safe-deployments.
The purpose of the first step is to employ a deterministic deployment proxy, which allows the contracts' addresses to be known in advance (and to be the same across chains).
The second step requires having the custom network's native coins (to pay for deployment gas), and this is the only purpose of adding the MNEMONIC to the .env file.
The format of the MNEMONIC variable in the .env file is:
MNEMONIC="antique assume recycle glance agent habit penalty forum behave danger crop weekend"
The only purpose of including ETHERSCAN_API_KEY in .env is to verify the Safe contracts' source code on Etherscan.
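Put together, a minimal .env sketch for such a deployment (the API key value is a placeholder):
MNEMONIC="antique assume recycle glance agent habit penalty forum behave danger crop weekend"
ETHERSCAN_API_KEY="<your-etherscan-api-key>"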
Below is something that I cannot even begin to guess the purpose of:
What is the purpose of the third step? Is the purpose of this to document the deployments of the custom networks?

You got it right. Adding your deployment to that repository will inform everyone that your chain has the Gnosis Safe singleton contract.
This repository is published as an npm package, which the Gnosis Safe SDK depends on. This means that after your network is added, the SDK will be able to deploy and interact with the contracts on your chain.
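For illustration, each contract in that repository has a JSON asset file roughly along these lines. The exact schema varies between versions, so copy an existing entry from the repository; the chain ID and address below are placeholders:
{
  "released": true,
  "contractName": "GnosisSafe",
  "version": "1.3.0",
  "networkAddresses": {
    "<your-chain-id>": "0x<deployed-singleton-address>"
  }
}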

Related

Should I use a Marketplace action instead of a plain bash `cp` command to copy files?

I am noticing there are many actions in the GitHub marketplace that do the same. Here is an example:
https://github.com/marketplace/actions/copy-file
Is there any benefit to using a GitHub Marketplace action instead of plain bash commands? Is there a recommended-practices guideline that helps decide between Marketplace actions and plain bash or command-line steps?
These actions don't seem to have any real value in my eyes...
That said, these actions run in Docker, so they don't need cp, wget or curl to be available on the host, and they ensure a consistent version of their tools is used. If you're lucky, these actions also run consistently the same way on Windows, Linux and Mac, whereas your bash scripts may not run on Windows. But the action author has to ensure this; it's not something that comes by default.
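To make the comparison concrete, here is a sketch of both styles in a workflow (the action name and its inputs are illustrative; check the real action's documentation):
steps:
  # Plain shell: relies on cp being available on the runner
  - run: cp ./config/template.yml ./build/config.yml
  # Marketplace action: runs in its own Docker container
  - uses: some-owner/copy-file-action@v1
    with:
      source: ./config/template.yml
      target: ./build/config.yml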
One thing that could be a reason to use these actions from the marketplace is that they can run as a post-step, which plain run: steps (script/bash/pwsh) can't.
They aren't more stable or safer either: unless you pin the action to a commit hash or fork it, the owner of the action can change its behavior at any time. So you are putting trust in the original author.
Many actions provide convenience functions, like better logging, output variables or the ability to safely pass in a credential, but these tasks seem to be more of an exercise in building an action by the author, and they don't really serve a great purpose.
The documentation that comes with each of these actions doesn't provide a clear reason to use them, and the actions don't follow the preferred versioning scheme... I'd not use these.
So, when would you use an action from the marketplace...? In general, actions, like certain CLIs, serve a specific purpose, and an action should contain all the things it needs to run.
An action could contain a complex set of steps, ensure proper handling of arguments, issue special logging commands to make the output more human-readable or update the environment for tasks running further down in the workflow.
An action that adds this extra functionality on top of existing CLIs makes it easier to pass data from one action to another, or even from one job to another.
An action is also easier to re-use across repositories, so if you're using the same scripts in multiple repos, you could wrap them in an action and easily reference them from that one place instead of duplicating the script in each action workflow or adding the script to each repository.
GitHub provides little guidance on when to use an action, or on when an author should publish an action to the marketplace. Basically, anyone can publish anything that fulfills the minimum metadata requirements.
GitHub does provide guidance on versioning for authors: good actions should create tags that a user can pin to. Authors should practice semantic versioning to avoid accidentally breaking their users. Actions that specify a branch like main or master in their docs are suspect in my eyes and I wouldn't use them; their implementation could change out from under you at any time.
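For example, the three common ways to reference an action, from riskiest to safest (the action name and commit SHA are placeholders):
# Risky: a branch can move, so the implementation can change under you
- uses: some-owner/copy-file-action@main
# Better: a semver tag the author maintains
- uses: some-owner/copy-file-action@v1.2.3
# Safest: a full commit SHA you have inspected (or a fork you control)
- uses: some-owner/copy-file-action@7f1b2c3d4e5f60718293a4b5c6d7e8f901234567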
As a consumer of any action, you should be aware of the security implications of using it. Other than requiring that the author has 2FA enabled on their account, GitHub does little to no verification of actions it doesn't own itself. Any author could in theory replace their implementation with ransomware or a bitcoin miner. So, for actions whose author you haven't built a trust relationship with, it's recommended to fork the action into your own account or organization and to inspect the contents prior to running them on your runner, especially if that's a private runner with access to protected environments. My colleague Rob Bos has researched this topic deeply and has spoken about it frequently at conferences, on podcasts and on live streams.

Connecting Ethereum nodes that are on different machines

I am experimenting with Ethereum. I have successfully set up a private testnet via the instructions on the site. However, I am having trouble adding peers from different machines. On any node I create, the admin.nodeInfo.NodeUrl parameter is undefined. I have gotten the enode address by calling admin.nodeInfo, and when I try the admin.addPeer("enode://address") command (with the public IP), it returns true, but the peers are not listed when calling admin.peers.
I read on another thread (here) that the private testnet is only local, but I am seeing plenty of documentation that suggests otherwise (here and here). I have tried the second tutorial, adding the command-line flags for a custom networkid and genesis block.
Any advice would be much appreciated. Please let me know if I can provide more details.
It is difficult to find in the available documentation, but a key function is admin.addPeer().
https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console
There are a few ways you could do it, I suppose, but I have one node running on my local PC and one node running on a remote server. This saves me Ether while testing contracts and keeps me from polluting the Ethereum blockchain with junk. The key when running admin.addPeer() is to find the "enode" for each of the nodes, so that on one of the nodes you run the function looking something like admin.addPeer("enode://<node-id>@<ip-address>:<port>"). If you run admin.peers and see something other than an empty list, you were probably successful. The main thing to check is that the enode ID and IP address from admin.peers match what you were expecting.
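To illustrate, a minimal console session (the node ID, IP address and port are placeholders):
// On node A, read its full enode URL:
> admin.nodeInfo.enode
"enode://<node-id>@127.0.0.1:30303"
// On node B, add node A as a peer, substituting node A's public IP for 127.0.0.1:
> admin.addPeer("enode://<node-id>@<node-a-public-ip>:30303")
true
// Verify on either node that the peer shows up:
> admin.peers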
The geth configuration settings are a little tricky as well. You will have to adapt them for your particular use, but here are some of the parameters I use:
geth --port XYZ --networkid XYZ --maxpeers X
Replace XYZ and X with the numbers you want to use, and make sure you run the same parameters when starting both nodes. There could be more parameters involved, but that should get you pretty far.
Disclaimer: I'm new to Geth myself, as well as to using computers for anything more than Facebook, so take my answer with a grain of salt. Also, I haven't given you my full command line for starting up Geth because I'm not 100% sure which of the parameters are related to a private testnet and which are not. I've only given you the ones that I'm sure are related to running a private testnet.
Also, you may find that you can't execute any transactions while running a private testnet. That's because you need one of the nodes to start mining. So run miner.start(X) when you are ready to start deploying contracts.
I apologize for this not being fully complete; I'm just passing on my experience after spending 1-2 weeks trying to figure it out myself, because the documentation isn't fully clear on how to do this. I suspect private testnets may be discouraged in the spirit of Ethereum, but in my case I run one primarily so as not to pollute the blockchain.
PS. As I was just getting ready to hit submit, I found this page, which also sheds more light:
connecting to the network

How about an Application Centralized Configuration Management System?

We have a build pipeline to manage the artifacts' life cycle. The pipeline consists of four stages (with a fifth planned):
1. commit (running unit/integration tests)
2. AT (deploy the artifact to the AT environment and run automated acceptance tests)
3. UAT (deploy the artifact to the UAT environment and run manual acceptance tests)
4. PT (deploy to the PT environment and run performance tests)
5. // TODO: we're trying to support the production environment.
The pipeline supports environment variables, so we can deploy artifacts with different configurations by triggering it with options. The problem is that sometimes there are so many configuration items that the deploy script contains too many replacement tasks.
I have an idea of building a centralized configuration management system (CCM for short), so we can maintain the configuration items there and leave only a URL replacement task (pointing to the CCM and handling the different stages) in the deploy script. The artifact then doesn't hold the configuration values; it asks the CCM for them.
Is this feasible, or a bad idea in the first place?
My concern is that the potential mismatch between a configuration key (defined in the artifact) and its value (set in the CCM) is not solved by this solution, and may even get worse.
Configuration files should remain with the project, or be set as configuration variables where the project runs. The reasoning behind this is that you would be adding a new point of failure to your architecture: you have to take into account that your configuration server could go down, breaking everything that depends on it.
I would advise against putting yourself in this situation.
There is no problem in having a long list of environment variables defined for a project; in fact, that could even mean you're doing things properly.
If for some reason you find yourself changing configuration values a lot (for example database connection strings, API endpoints, etc.), then the real problem might be this need to change configurations that should almost always stay the same.
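As a sketch of that approach, the stage-specific values can live in the deploy script (or in per-stage variable files) instead of a central server; all names and values here are illustrative:
# deploy.sh: pick stage-specific settings at deploy time
STAGE="$1"   # e.g. at, uat or pt
case "$STAGE" in
  at)  export DB_URL="jdbc:postgresql://at-db.internal/app"  ;;
  uat) export DB_URL="jdbc:postgresql://uat-db.internal/app" ;;
  pt)  export DB_URL="jdbc:postgresql://pt-db.internal/app"  ;;
  *)   echo "Error: unknown stage '$STAGE'" >&2; exit 1 ;;
esac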

How to retrieve the appid when deployed to CloudBees?

In the CloudBees wiki, this page explains how to add a configuration parameter for an app deployment, using cloudbees-web.xml.
But is the content of:
<appid>APP_ID</appid>
injected as well? How can I retrieve this value from my application's code?
My preference is to avoid coding an application to contain explicit references to the container within which it runs, so I would favour techniques that do not tie your code to CloudBees (a.k.a. us).
Thus I would use a container-specific descriptor file that configures a context parameter; your application then just reads the context parameter and uses it directly (see the sketch after the list below).
There are two techniques for doing this:
Application Environments: personally I love this way... though if you want to deploy the application to your own test environment that you have just spun up yourself, your cloudbees-web.xml will likely be missing the required environment definition... so it is better to use the newer
Configuration Parameters: this way, when you need your own test instance, you just define the configuration parameters for that test environment and then deploy the exact same artifact to it... it also prevents the issue of deploying to the test instance with the production environment turned on.
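Either way, reading the resulting context parameter from your code is plain Servlet API. A minimal sketch, where the parameter name is purely illustrative:
// Inside any servlet; GenericServlet provides getServletContext()
String env = getServletContext().getInitParameter("application.environment");
if (env == null) {
    env = "development"; // sensible default when running locally without the parameter
}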
I am sure one of the RUN# team may well have some other trick, such as a system property that tells you the app id... but keep in mind that when running locally, e.g. using a local jetty/tomcat/bees:run container, your code will then blow up!

Reliable way to tell development server apart from production server?

Here are the ways I've come up with:
Have a config file that is kept out of version control
Check the server-name/IP address against a list of known dev servers
Set some environment variable that can be read
I've used (2) on some of my projects, and that has worked well with only one dev machine, but we're up to about 10 now, and it may become difficult to manage an ever-changing list.
(1) I don't like, because that's an important file and it should be version controlled.
(3) I've never tried. It requires more configuration when we set up each server, but it could be an OK solution.
Are there any others I've missed? What are the pros/cons?
(3) doesn't have to require more configuration on the servers. You could instead default to server mode, and require more configuration on the dev machines.
In general I'd always want to make the dev machines the special case, and release behavior the default. The only tricky part is that if the relevant setting is in the config file, then developers will keep accidentally checking in their modified version of the file. You can avoid this either in your version-control system (for example a checkin hook), or:
read two config files, one of which is allowed to not exist (and only exists on dev machines, or perhaps on servers set up by expert users)
read an environment variable that is allowed to not exist.
Personally I prefer to have a config override file: you've already got the code to load the one config file, so it should be pretty straightforward to add another. Reading the environment isn't exactly difficult, of course; it's just a separate mechanism.
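As a sketch of the override-file approach (the file names are illustrative):
import json
import os

def load_config():
    # The base config is version-controlled and always present
    with open("config.json") as f:
        config = json.load(f)
    # The override only exists on dev machines and is never checked in
    if os.path.exists("config.local.json"):
        with open("config.local.json") as f:
            config.update(json.load(f))
    return config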
Some people really like their programs to be controlled by the environment, especially those who want to control them when running from scripts: they don't want to have to write a config file on the fly when it's so easy to set the environment from a script. So it might be worth using the environment from that POV, but not just for this setting.
Another completely different option: make dev/release mode configurable within the app, if you're logged into the app with suitable admin privileges. Whether this is a good idea might depend on whether you have the kind of devs who write debug logging messages along the lines of, "I can't be bothered to fix this, but no customer is ever going to tell the difference, they're all too stupid." If so, (a) don't allow app admins to enable debug mode, and (b) re-educate your devs.
Here are a few other possibilities.
Some organizations keep development machines on one network, and production machines on another network, for example, dev.example.com and prod.example.com. If your organization uses that practice, then an application can determine its environment via the fully-qualified hostname on which it is running, or perhaps by examining some bits in its IP address.
Another possibility is to use an embeddable scripting language (Tcl, Lua and Python come to mind) as the syntax of your configuration file. Doing that means your configuration file can easily query environment variables (or IP addresses) and use that to drive an if-then-else statement. A drawback of this approach is the potential security risk of somebody editing a configuration file to add malicious code (for example, to delete files).
A final possibility is to start each application via a shell/Python/Perl script. The script can query its environment and then use that to drive an if-then-else statement that passes a command-line option to the "real" application.
By the way, I don't like to code an environment-testing if-then-else statement as follows:
if (check-for-running-in-production) {
... // run program in production mode
} else {
... // run program in development mode
}
The above logic silently breaks if the check-for-running-in-production test has not been updated to deal with a newly added production machine. Instead, I prefer to code a bit more defensively:
if (check-for-running-in-production) {
... // run program in production mode
} else if (check-for-running-in-development) {
... // run program in development mode
} else {
print "Error: unknown environment"
exit
}