Connecting Ethereum nodes that are on different machines - ethereum

I am experimenting with Ethereum. I have successfully set up a private testnet via the instructions on the site. However, I am having trouble adding peers from different machines. On any node I create, the admin.nodeInfo.NodeUrl parameter is undefined. I have gotten the enode address by calling admin.nodeInfo, and when I try the admin.addPeer("enode://address") command (with the public IP), it returns true, but the peers are not listed when calling admin.peers.
I read on another thread (here) that the private testnet is only local, but I am seeing plenty of documentation that suggests otherwise (here and here). I have tried the second tutorial, adding the command-line flags for a custom networkid and genesis block.
Any advice would be much appreciated. Please let me know if I can provide more details.

It is difficult to find in the available documentation, but a key function is admin.addPeer().
https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console
There are a few ways you could do it, I suppose, but I have one node running on my local PC and one node running on a remote server. This saves me Ether while testing contracts and keeps me from polluting the Ethereum blockchain with junk. The key when running admin.addPeer() is to find the "enode" for each of the nodes, so that on one of the nodes you can run something like: admin.addPeer("enode://<id>@<ipaddress>:<port>"). If you run admin.peers and see something other than an empty list, you were probably successful. The main thing to check is that the enode ID and IP address from admin.peers match what you were expecting.
The geth configuration settings are a little tricky as well. You will have to adapt them for your particular use, but here are some of the parameters I use:
geth --port XYZ --networkid XYZ --maxpeers X
Replace XYZ and X with the numbers you want to use, and make sure you run the same parameters when starting both nodes. There could be more parameters involved, but that should get you pretty far.
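For example, the whole pairing process might look like the sketch below (the enode ID, IP address, and numbers are all placeholders; adjust to your setup):

```
# On both machines, start geth with the same networkid and genesis block:
geth --datadir ./chaindata --networkid 12345 --port 30303 --maxpeers 5 console

# In the geth JavaScript console of node A, read its enode URL:
> admin.nodeInfo.enode
"enode://a1b2c3...@[::]:30303"

# In the console of node B, add node A as a peer, replacing [::]
# with node A's public IP address:
> admin.addPeer("enode://a1b2c3...@203.0.113.7:30303")
> admin.peers    // should now list node A
```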
Disclaimer: I'm new to Geth myself, as well as to using computers for anything more than Facebook, so take my answer with a grain of salt. Also, I haven't given you my full command line for starting up Geth because I'm not 100% sure which of the parameters are related to running a private testnet and which are not. I've only given you the ones that I'm sure are related.
Also, you may find that you can't execute any transactions while running a private testnet. That's because you need one of the nodes to start mining. So run miner.start(X) when you are ready to start deploying contracts.
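In the geth JavaScript console, that might look like this (thread count is up to you):

```
> miner.start(1)                  // start mining with one thread
> eth.getBalance(eth.coinbase)    // grows once blocks are mined
> miner.stop()                    // stop once you have enough test Ether
```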
I apologize for this not being fully complete; I'm just passing on my experience after spending 1-2 weeks trying to figure it out myself, because the documentation isn't fully clear on how to do this. Some would say private testnets should be discouraged in the spirit of Ethereum, but in my case I run one primarily to avoid polluting the blockchain.
P.S. As I was just getting ready to hit submit, I found this, which also sheds more light:
connecting to the network

Related

Things that I didn't understand during the deployment of the Gnosis Safe contracts on an EVM-based chain

I want to check my assumptions on things that I didn't fully understand during the deployment of the Gnosis Safe contracts on an EVM-based chain.
I would appreciate it if you could help me verify my assumptions about the deployment.
The three steps below are needed to complete the Safe deployment.
Make a request for a new deployment at https://github.com/safe-global/safe-singleton-factory.
Deploy the Safe contracts on a custom network.
Add the newly supported network to the safe-deployments repository located at https://github.com/safe-global/safe-deployments.
The purpose of the first step is to employ a deterministic deployment proxy which allows the contracts' addresses to be predetermined.
The second step requires having coins on the custom network, and this is the only purpose of adding the MNEMONIC to the .env file.
The format of the MNEMONIC variable in the .env file is:
MNEMONIC="antique assume recycle glance agent habit penalty forum behave danger crop weekend"
The only purpose of including ETHERSCAN_API_KEY in .env is to verify the Safe contracts' source code on Etherscan.
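Putting the two variables together, the .env file might look like this (both values below are placeholders, including the key):

```
MNEMONIC="antique assume recycle glance agent habit penalty forum behave danger crop weekend"
ETHERSCAN_API_KEY="ABC123YOURKEYHERE"
```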
Below is something whose purpose I cannot even begin to guess:
What is the purpose of the third step? Is it to document the deployments on the custom networks?
You got it right. Adding your deployment to that repository will inform everyone that your chain has the Gnosis Safe singleton contract.
This repository is associated with an npm package, which the Gnosis Safe SDK depends on. This means that after adding your network, the SDK will be able to deploy and use contracts from your chain.
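As a sketch of why that matters: the @safe-global/safe-deployments package exposes lookup helpers for those recorded deployments. The function and field names below are taken from its README; double-check them against the version you install:

```javascript
// Look up the canonical Safe singleton deployment for a given chain id.
const { getSafeSingletonDeployment } = require("@safe-global/safe-deployments");

const deployment = getSafeSingletonDeployment({ network: "1" }); // "1" = mainnet
if (deployment) {
  // networkAddresses maps chain id -> deployed address on that chain
  console.log(deployment.networkAddresses["1"]);
}
```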

netdata alarm on server login

So, I'm pretty new to netdata. I have an instance running on my server, and I wanted to set up a few alarms to my Telegram.
The examples given by netdata worked pretty well (cpu load, disk usage, ...). I also wanted to add an alarm when someone logs in to the server even though I'm using SSH keys for login. Why? Just because I want to learn how to do it.
Anyway, I don't really understand their documentation, since I'm mostly a developer and not a Linux sysadmin.
So far I have added config files to /etc/netdata/health.d/. But I don't understand what the system_login.conf file should look like.
I tried using this blog post but I don't quite understand how to apply it: How to monitor and troubleshoot systemd-logind
What do I have to put into the alarm property? For the on property, I'd guess it is systemd-logind. On further research I found another blog post (systemd-logind monitoring with Netdata), which says to edit some python.d.conf file that I don't have. I'm pretty confused right now and have no clue how to write the config file.
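For reference, a netdata health entry generally has the shape sketched below. The part that cannot be given authoritatively here is the chart name on the on: line; it depends on which collector exposes a login/session chart on your system (the blog posts suggest a logind collector with a sessions chart), so treat that value as an assumption:

```
# /etc/netdata/health.d/system_login.conf  (sketch; chart name is an assumption)
 alarm: active_sessions
    on: logind.sessions
lookup: max -10s
 every: 10s
  warn: $this > 0
  info: a user session is active on this server
    to: sysadmin
```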

What is "Code over configuration"?

I have seen these terms many times on Google: code over configuration and configuration over code. I tried searching for them on Google, but still got nothing. Recently I started working with gulp, and again the mystery came up: code over configuration.
Can you please tell me what both of them are and what the difference between them is?
Since you tagged this with gulp, I'll give you a popular comparison to another tool (Grunt) to show the difference.
Grunt’s tasks are configured in a configuration object inside the Gruntfile, while Gulp’s are coded using a Node style syntax.
taken from here
So basically, with configuration you give your tool the information it needs to work the way it expects to work. If you focus on code, you tell your tool directly what steps it has to complete.
There's quite a bit of discussion about which one is better. You'll have to have a read and decide what fits your project best.
Code over configuration (followed by gulp) and the opposite, configuration over code (followed by grunt), are approaches/principles in software development. Both gulp and grunt are used for the same thing: automating tasks. The distinction refers to developing programs according to typical programming conventions versus programmer-defined configuration. Each approach has its own context and purpose, and it is not a question of which one is better.
In gulp, each task is a JavaScript function, with essentially no configuration involved up-front (although functions can take configuration values), and you chain multiple functions together to create a build script. Gulp uses Node streams: a stream is a continuous flow of data that can be manipulated asynchronously. In grunt, however, all tasks are configured in a configuration object in a file, and they run in sequence.
Reference: https://deliciousbrains.com/grunt-vs-gulp-battle-build-tools/
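The difference described above can be sketched in a few lines of plain JavaScript. The tiny "task runner" here is invented for illustration; it is not gulp's or grunt's actual API:

```javascript
// Configuration over code: behaviour is described in a data object,
// and a generic engine interprets it (grunt-style).
const config = {
  tasks: {
    build: { steps: ["clean", "compile"] },
  },
};

// The primitive operations both styles share.
const steps = {
  clean: (state) => ({ ...state, cleaned: true }),
  compile: (state) => ({ ...state, compiled: true }),
};

// Generic engine: walks the configured step names and applies them.
function runConfigured(name) {
  return config.tasks[name].steps.reduce((state, s) => steps[s](state), {});
}

// Code over configuration: the task *is* a function; you state the
// steps directly in code (gulp-style).
function buildTask() {
  let state = {};
  state = steps.clean(state);
  state = steps.compile(state);
  return state;
}

console.log(runConfigured("build")); // { cleaned: true, compiled: true }
console.log(buildTask());            // same result, expressed as code
```

Both produce the same result; the difference is whether the build is data interpreted by an engine or code you run directly.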
Because you talked about "code", I'll try to give a different perspective.
While answering a question on figuring out the IP address from inside a Docker container (Docker container IP address from .net project), I found there are two possible approaches.
var ipAddress = HttpContext.Request.HttpContext.Connection.LocalIpAddress;
This will give you the IP address at runtime, but you won't have control over it.
It can also lead to more code in the future to do something with the IP address, like feeding a load balancer or the like.
I'd prefer configuration over it, such as:
An environment variable with pre-configured IP addresses for each container service:
WEB_API_1_IP=192.168.0.10
WEB_API_2_IP=192.168.0.11
.
.
.
NETWORK_SUBNET=192.168.0.0/24
a docker-compose that ties the environment variable to IP address of the container. Such as:
version: '3.3'
services:
  web_api:
    .
    .
    .
    networks:
      public_net:
        ipv4_address: ${WEB_API_1_IP}
  .
and some .NET code that links the two and gives access within the code:
Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddEnvironmentVariables();
    })
The code we wrote just reads the configuration, but it gives way better control. Depending on which environment you are running in, you can have different environment files. The subnet, the number of machines: they are all configured options rather than tricky code, which requires more maintenance and is error-prone.

Preventing reverse engineering with binary code and secret key

I am working on a software program that has to be deployed on private cloud server of a client, who has root access. I can communicate with the software through a secure port.
I want to prevent the client from reverse engineering my program, or at least make it "hard enough". Below is my approach:
Write the code in Go and compile the software into binary code (maybe with obfuscation).
Make sure that the program can only be initiated with a secret key that is sent through the secure port. The secret key can change depending on time.
Every time I need to start/stop the program, I can send commands with the secret keys through the secured port.
I think this approach can prevent a root user from either:
Using a debugger to reverse engineer my code
Running the program repeatedly to check outputs
My question is: What are the weak spots of this design? How can a root user attack it?
I want to prevent client from reverse engineering my program,
You can't prevent this fully when the software runs on hardware you don't own. To run the software, the CPU must see all the instructions of the program, and they will be stored in the computer's memory.
https://softwareengineering.stackexchange.com/questions/46434/how-can-software-be-protected-from-piracy
Code is data. When the code is runnable, a copy of that data is un-protected code. Unprotected code can be copied.
Peppering the code with anti-piracy checks makes it slightly harder, but hackers will just use a debugger and remove them. Inserting no-ops instead of calls to "check_license" is pretty easy.
(answers in https://softwareengineering.stackexchange.com/questions/46434 may be useful for you)
The hardware owner controls the OS and memory; he can dump everything.
or at least make it "hard enough".
You can only make it a bit harder.
Write code in Go and compile the software into binary code (may be with obfuscation)
IDA will decompile any machine code. Using native machine code is a bit stronger than bytecode (Java, .NET, or dex).
Make sure that the program can only be initiated with a secret key that is sent through the secure port. The secret key can change depending on time.
If a copy of the secret key(s) is in the code or memory of the program, the user may dump it and simulate your server. If part of your code, or part of the data needed for the code to run, is stored encrypted and deciphered with such an external key, the user may either eavesdrop on the key (after it is decoded from SSL but before it is used to decrypt the secret part of the code), or dump the decrypted code/data from memory. (It is very easy to see new executable code created in memory, even with default preinstalled tools like strace on Linux: just search for all mmaps with PROT_EXEC flags.)
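The strace trick mentioned above is essentially a one-liner (the program name is a placeholder):

```
# Trace memory-mapping syscalls of the protected binary and flag any
# region mapped executable: freshly decrypted code shows up here.
strace -f -e trace=mmap,mprotect ./protected-program 2>&1 | grep PROT_EXEC
```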
Every time I need to start/stop the program, I can send commands with the secret keys through the secured port.
This is just a variation of online license/antipiracy check ("phone home")
I think this approach can prevent a root user from: using a debugger to reverse engineer my code, or
No, he can start a debugger at any time; but you can make an interactive debugger a bit harder to use if the program communicates with your server often (say, every 5 seconds). But if it communicates that often, it is better to move part of the computation to your server; that part will be protected.
And he can still use non-interactive debuggers, tracing tools, and memory dumping. He can also run the program in a virtual machine, wait until the online check is done (using tcpdump and netstat to monitor network traffic), then take a live snapshot of the VM (there are several ways to enable "live migration" of a VM; only a short pause may be recorded by your program if it has external timing), continue running the first copy online, and use the snapshot for offline debugging (with all keys and decrypted code in it).
run the program repeatedly to check outputs
Until he cracks the communications...

Stopping FlexUnit test run, if a test fails?

I use FlexUnit 4.1 with Adobe's TestRunnerBase to run a suite of integration tests to verify the integrity of a 3-tier BlazeDS/Java EE/MySQL server.
To bypass the security checks enforced by Apache Shiro while running those tests, I have configured two separate test runs: One that logs in as root, one that performs the actual integration tests.
Because of the way that BlazeDS handles duplicate sessions (this is an issue for another question, or rather, it has been already), sometimes the login mechanism fails - in which case I would like the TestRunner to suspend all further activities.
I have looked all over for some way to configure FlexUnitCore to stop on a test failure, but to no avail. Also, there seem to be events only for TEST_START and TEST_COMPLETE, but not for TEST_FAIL.
Is there some other way to find out if a test failed, to stop the runner?
A first for me - I stumbled upon the solution to my problem while I was writing my question: there is an IRunListener interface that can be implemented to react to all sorts of information sent by the TestRunner. We then simply use FlexUnitCore#addListener() to register it, the same way we do with the UIListener, TraceListener, CIListener, etc. that Adobe provides.
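A minimal sketch of such a listener, assuming the FlexUnit 4.x IRunListener method signatures (the class name here is made up, and the other interface methods are omitted; check the interface definition for the full list):

```
// Hypothetical fail-fast listener: reacts to the first test failure.
public class StopOnFailureListener implements IRunListener
{
    public function testFailure(failure:Failure):void
    {
        trace("Test failed: " + failure.description);
        // e.g. set a flag that your TestRunnerBase checks before
        // kicking off the next test run
    }

    // ...the remaining IRunListener methods can be left as no-ops...
}

// Registration, next to the other listeners:
// core.addListener(new StopOnFailureListener());
```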