Deploy Ethereum network across several machines

After my first approach deploying an Ethereum network with testrpc, I'm wondering if I can deploy several nodes using Docker containers, so I can build a real network with several machines/nodes, and I have some questions:
Can I use testrpc for this task, or do I need to use Geth instead? Reading the docs, I think testrpc is too basic for it.
Does Truffle or another framework help with all this?
Any related information is welcome because, as I said, I'm at a very early stage.

TestRPC is just a tool that will simulate an Ethereum network. Nothing else.
If you want to create an Ethereum network you'll have to use an Ethereum client like Geth and deploy it on several machines to make those machines nodes.
If you want it to be a separate, private network, you'll have to change some parameters (most importantly the network ID and the genesis block) before launching your client.
I'll leave some documentation here that explains it: See the doc
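To make the "change some parameters" part concrete, here is a minimal sketch of bootstrapping one node of a private network with Geth; the genesis values, the network ID 12345, and the directory name are illustrative placeholders, not prescribed values:

# 1. Create a genesis file and copy it to every machine (values are illustrative):
cat > genesis.json <<'EOF'
{
  "config": { "chainId": 12345, "homesteadBlock": 0 },
  "difficulty": "0x400",
  "gasLimit": "0x8000000",
  "alloc": {}
}
EOF

# 2. On each machine, initialise a data directory from the same genesis file:
geth --datadir ./node1 init genesis.json

# 3. Start the node with the custom network ID so it never joins the main network:
geth --datadir ./node1 --networkid 12345 --port 30303

# 4. Connect the machines to each other, e.g. from the JavaScript console (geth attach):
#    admin.addPeer("enode://<node-pubkey>@<other-machine-ip>:30303")

Every node must share the same genesis file and network ID, otherwise they will refuse to peer with each other.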

Related

What is the difference between infura and geth?

I understand both methods are used for running dapps. What I don't understand is the clear-cut difference between the two, or how one is more advantageous than the other. I'm new to blockchain, so please explain in simple terminology.
The difference is:
Infura runs a geth installation for you, exposing the most used, least CPU-intensive methods via the web.
You can install geth yourself, but you will need a server with about 500 GB of SSD storage, and you'll wait about a month to download the entire state.
If you are not going to do any serious monetary transfers, I recommend using Etherscan; it is more complete than Infura.
To execute transactions and/or queries against blockchains, you need connections.
Infura is an API gateway to the main network and some test networks. It supports a subset of the web3 interface. When you want to execute a transaction against the Ethereum blockchain, you can use Infura as the connection to the blockchain. In this case, you are not directly connected to Ethereum, but Infura has a connection. The MetaMask browser plugin works with Infura.
The alternative approach is to have an Ethereum client like geth or parity running on your machine. In this case, the Ethereum Client connects to several public nodes of the blockchain and forwards your transactions to the blockchain.
Depending on your architecture and requirements, either approach could be the best solution.
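Since both options expose the same JSON-RPC interface, switching between them is mostly a change of endpoint. A minimal sketch; YOUR_PROJECT_ID is a placeholder for an Infura API key, and 8545 is geth's default HTTP-RPC port:

# Ask Infura for the latest block number (no local node required):
curl -s -X POST https://mainnet.infura.io/v3/YOUR_PROJECT_ID \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

# The same call against a locally running, fully synced geth node:
curl -s -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'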

What is the difference between Serverless containers and App Engine flexible with custom runtimes?

I came across the article Bringing the best of serverless to you, where I learned about an upcoming product called serverless containers on Cloud Functions, which is currently in alpha.
As described in the article:
Today, we’re also introducing serverless containers, which allow you to run container-based workloads in a fully managed environment and still only pay for what you use.
and on the GCP solutions page:
Serverless containers on Cloud Functions enables you to run your own containerized workloads on GCP with all the benefits of serverless. And you will still pay only for what you use. If you are interested in learning more about serverless containers, please sign up for the alpha.
So my question is: how are these serverless containers different from App Engine flexible with a custom runtime, which also uses a Dockerfile?
My suspicion, since the product is named serverless containers on Cloud Functions, is that the differentiation may involve the role of Cloud Functions. If so, what role do Cloud Functions play in serverless containers?
Please clarify.
What are Cloud Functions?
From the official documentation:
Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired. Your code executes in a fully managed environment. There is no need to provision any infrastructure or worry about managing any servers.
In simple words, a Cloud Function is triggered by some event (an HTTP request, a Pub/Sub message, a Cloud Storage file insert...), runs the code of the function, returns a result, and then the function dies.
Currently, four runtime environments are available:
Node.js 6
Node.js 8 (Beta)
Python (Beta)
Go (Beta)
With the serverless containers on Cloud Functions product, the intent is that you can provide your own custom runtime environment with a Docker image. But the life cycle of the Cloud Function will stay the same:
It is triggered > Runs > Outputs Result > Dies
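For reference, deploying and invoking a function follows this same cycle; a minimal sketch, where the function name hello and the Node.js 8 runtime are placeholder choices:

# Deploy a single HTTP-triggered function; Google supplies the runtime image:
gcloud functions deploy hello --runtime nodejs8 --trigger-http

# Invoke it once: an instance spins up, handles the call, returns a result,
# and is eventually torn down:
gcloud functions call hello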
App Engine Flex applications
Applications running in the App Engine flexible environment are deployed to virtual machines, i.e. Google Cloud Compute Engine instances. You can choose the type of machine you want to use and its resources (CPU, RAM, disk space). The App Engine flexible environment automatically scales your app up and down while balancing the load.
As with Cloud Functions, there are runtimes provided by Google, but if you would like to use an alternative implementation of Python, Java, Node.js, Go, Ruby, PHP, or .NET, you can use custom runtimes. You can even work with another language like C++, Dart..., you just need to provide a Docker image for your application.
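By contrast with the function deployment above, a custom-runtime Flex deployment is driven by an app.yaml that points App Engine at your own Dockerfile. A minimal sketch, assuming a Dockerfile sits in the same directory (the file contents shown are the bare skeleton, not a complete app):

# app.yaml: request the flexible environment with a user-supplied image:
cat > app.yaml <<'EOF'
runtime: custom
env: flex
EOF

# Deploy; App Engine builds the adjacent Dockerfile and runs it on VMs:
gcloud app deploy app.yaml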
What are the differences between Cloud Functions and App Engine Flex apps?
The main differences between them are their life cycles and use cases.
As noted above, a Cloud Function has a defined life cycle and dies when its task concludes. Cloud Functions should be used to do one thing and do it well.
On the other hand, an application running in the GAE Flex environment will always have at least one instance running. The typical use case for these applications is serving several endpoints where users make REST API calls. But they provide more flexibility, as you have full control over the Docker image provided. You can do "almost" whatever you want there.
What is a Serverless Container?
As stated in the official blog post (search for "serverless containers"), it's basically a Cloud Function running inside a custom environment defined by the Dockerfile.
From the blog post:
With serverless containers, we are providing the same underlying infrastructure that powers Cloud Functions, but you’ll be able to simply provide a Docker image as input.
So, instead of deploying your code to Cloud Functions, you could also just deploy a Docker image containing the runtime and the code to execute.
What's the difference between these Cloud Functions with custom runtimes and App Engine flexible?
There are 5 basic differences:
Network: on GAE flexible you can customize the network the instances run in. This lets you add firewall rules to restrict egress and ingress traffic, block specific ports, or specify the SSL configuration you wish to run.
Time-out: Cloud Functions can run for a maximum of 9 minutes; flexible, on the other hand, can run indefinitely.
Read-only environment: the Cloud Functions environment is read-only, while flexible's can be written to (this is only intended for ephemeral data, since once the flexible instance is restarted or terminated, all stored data is lost).
Cold boot: Cloud Functions are fast to deploy and fast to start compared to flexible. This is because flexible runs inside a VM, and that extra time is needed for the VM to start.
How they work: Cloud Functions are event-driven (e.g. an upload of a photo to Cloud Storage executing a function), while flexible is request-driven (e.g. handling a request coming from a browser).
As you can see, being able to deploy a small amount of code without having to take care of all the things listed above is a feature in itself.
Also, take into account that serverless containers are still in alpha; many things could change in the future, and there is still not a lot of documentation explaining their behavior in depth.

Ethereum DAPP - understanding

I have begun to understand how to develop smart contracts on the Ethereum blockchain and how to write a web script for interacting with a smart contract (buying, selling, statistics...), and I came to a conclusion about what to do. I wanted to know whether I understood everything correctly.
We write the contract on http://remix.ethereum.org and check whether all functions work correctly.
We set up Truffle + Ganache to test the contract on our own private blockchain.
We write a simple front-end to interact with the contract; we do everything through Metamask.
We deploy everything to the Ropsten Ethereum test network and test everything there.
After successful testing on the test network, we deploy everything to the Ethereum mainnet.
Did I understand everything correctly, and did I take the right steps?
The steps you outlined look good. I would actually say that you don't need to do the first step, as you can use truffle during all steps of the development process.
Create a new Truffle project (truffle init) and write the smart contracts and migration scripts.
Write thorough unit tests using JavaScript (and/or Solidity) and run these tests on a local Ganache instance (truffle test). My library truffle-assertions can be used to assist in writing these unit tests.
Write a frontend to the contract which uses the artefacts generated by Truffle (truffle compile and truffle migrate). This frontend can be manually tested in the browser with Metamask.
Add connection configuration to the truffle.js file to connect with Ethereum Testnets (Rinkeby, Kovan, Ropsten) and Mainnet through truffle-hdwallet-provider and Infura, so the contracts can be deployed to these networks. Further explanation.
Deploy to a testnet of choice (truffle migrate --network ropsten) and do more testing as in step 3.
After you've thoroughly tested all functionality across the multiple development steps, deploy to the mainnet (truffle migrate --network mainnet).
Of course most of these steps can still be completed without Truffle, but Truffle really simplifies a big part of the process, and there is a lot of documentation/resources available for it.
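Condensed into commands, the flow above looks roughly like this (the network names assume you added matching entries to truffle.js in step 4):

truffle init                        # step 1: scaffold contracts/ and migrations/
truffle test                        # step 2: run unit tests against Ganache
truffle compile && truffle migrate  # step 3: generate artifacts for the frontend
truffle migrate --network ropsten   # step 5: deploy to the Ropsten testnet
truffle migrate --network mainnet   # step 6: final mainnet deployment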

Hyperledger Fabric v1 Network Configuration on Physical Machines

I'm trying to deploy a Hyperledger Fabric network on two physical machines.
I've deployed it on a single machine using this Guide.
But it is not clear to me what I should change in the configuration files to deploy it on different nodes.
What does the Host field in configtx.yaml mean? (for example, Host: peer0.org1.example.com) Is it only a virtual host name, or should I replace it with a real IP?
Please consider using the following script to roll out the Fabric network on bare-metal machines. In fact, you can also use it to learn how to set up your network and which parameters and configuration you need to take care of.
UPDATE
The Host value in the configtx.yaml file is the endpoint of an organization's anchor peers. The key role of anchor peers is to get two organizations participating in one channel connected together; basically, anchor peers are used as an advertisement billboard, allowing an organization to share its membership within the scope of a particular channel.
When you deploy your network on bare metal, you have to use real host names or IP addresses and make sure they are reachable. Basically, this is not that different from the docker-compose configuration available in fabric-samples.
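For orientation, this is the shape of the fragment in question inside an organization's definition in the sample configtx.yaml; peer0.org1.example.com stands for whatever resolvable host name (or real IP) you use, and 7051 is the conventional peer port:

AnchorPeers:
  - Host: peer0.org1.example.com   # must be reachable from the other machines
    Port: 7051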
The names in configtx.yaml are the domains your nodes (peer, orderer, CA) will have; they will be hosted in your Docker containers under those domains and will communicate using them.
You can replace them with your IP addresses if that's easier to understand, but I recommend leaving them as they are.
Also, if you are preparing a multi-host solution, you will need to add an extra_hosts section to your docker-compose files:
extra_hosts:
  - "peer1.org1.example.com:<Second machine IP address>"
I don't know if you managed to start your Hyperledger Fabric network, but I figured out how to set up a multi-host Hyperledger Fabric without using Docker Swarm, using the basic-network example included in the Hyperledger Fabric samples.
You can review it here, hope it helps you.
https://medium.com/1950labs/setup-hyperledger-fabric-in-multiple-physical-machines-d8f3710ed9b4

Differences between OpenShift and Kubernetes

What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the additional API entities, as mentioned by @SteveS, OpenShift also has advanced security concepts.
This can be very helpful when running in an Enterprise context with specific requirements regarding security.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as root in Kubernetes, but run under an arbitrary user with a high ID (e.g. 1000090000) in OpenShift. This means that many containers from Docker Hub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images with this feature/limitation in mind. However, this catalog contains only a subset of popular containers.
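If a third-party image insists on a fixed UID, one common (if blunt) OpenShift workaround is to relax the project's security context constraints; a sketch, assuming cluster-admin rights and a project named my-project:

# Let pods using the default service account in my-project run as any UID:
oc adm policy add-scc-to-user anyuid -z default -n my-project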
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. When you are familiar with the basic concepts, you will better understand the implications of the OpenShift features and design decisions.
OpenShift includes a distribution of Kubernetes, so if you don't need any of OpenShift's added features, such as the web console, Builds, advanced deployment models, and much more, you can choose to ignore them.
Here's a summary of items available on the OpenShift website.
Kubernetes comes with Ingress rules, but OpenShift comes with Routes.
Kubernetes has an IngressController, but OpenShift has a Router (an HAProxy instance).
Switching namespaces in the OpenShift CLI is very easy, but in Kubernetes you need to create contexts and switch between them (see the sketch after this list).
The OpenShift UI is more interactive and informative than the Kubernetes one.
To bake a Docker image inside the cluster, OpenShift has BuildConfig, but Kubernetes has nothing comparable; you need to build the image and push it to a registry yourself.
OpenShift has Pipelines, with which you don't need a separate Jenkins setup to deploy an app, but Kubernetes has no equivalent.
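To make the CLI differences concrete, a hedged sketch of equivalent day-to-day operations (my-project, my-svc, my-app and ingress.yaml are placeholders):

# Switch the active namespace/project:
oc project my-project                                         # OpenShift
kubectl config set-context --current --namespace=my-project   # Kubernetes (kubectl 1.12+)

# Expose a service to external traffic:
oc expose service my-svc        # OpenShift: creates a Route served by the HAProxy router
kubectl apply -f ingress.yaml   # Kubernetes: you write the Ingress resource yourself

# Build a Docker image inside the cluster (OpenShift only):
oc new-build --name=my-app --binary --strategy=docker   # creates a BuildConfig
oc start-build my-app --from-dir=. --follow             # uploads . and builds its Dockerfile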
The easiest way to differentiate between them is to understand that while vanilla K8s is a community project, OpenShift is focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs, Builds, DeploymentConfigs, and Routes, along with functionality like S2I and the Router, make it easier for developers and admins alike to use OCP for development, deployment, and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP makes your life much easier with convenient actions through the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of its features at https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.