Organizations in configtx.yaml - configuration

I am currently automating the creation of a Hyperledger Fabric network. The fabric-ca-server has no affiliations when it starts, and I then add them one by one.
Can the orderer get the organizations from somewhere other than configtx.yaml (e.g. by querying the affiliations in the fabric-ca-server)?

No, you have to define your organizations in configtx.yaml, because that file is used to generate the genesis blocks for your system and application channels. Whichever organizations you use in your network must be part of the orderer genesis block, which is generated only from the configtx.yaml file.
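For reference, an organization entry in configtx.yaml looks roughly like this; the org name, MSP ID, hostnames and paths below are illustrative placeholders, not values from the question:

```yaml
# Sketch of an Organizations entry in configtx.yaml.
# All names and paths are placeholders for your own setup.
Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051
```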

Related

How is a DAML application adapted to a Fabric network?

I was reading this documentation https://github.com/digital-asset/daml-on-fabric but I am confused. I am trying to understand the workflow and the architecture of the DAML-on-Fabric network. The network has 2 orgs and each org has 1 peer. In step 4, 5 DAML parties are allocated, while the quickstart example has 3 signatories (issuer, owner, buyer). To sum up: what is the mapping between DAML parties and Fabric orgs? I think even more DAML parties could be allocated without changing the Fabric network. Do they all interact through one node? What is the purpose of adding the other nodes in step 11?
Parties don't have a cryptographic identity in DAML Ledgers, only nodes do. As part of the shared ledger state, every DAML-enabled infrastructure maintains a mapping from Party to Node. This relationship is usually described as a Node "hosting" a Party. The Node hosting a Party is able to submit transactions using that Party's authority and is guaranteed to receive any transactions visible to that Party.
In the tutorial you are referring to, all parties are allocated on a single node, which then hosts them. Admittedly, that does not make much sense in practice. It would be more sensible to set up a network with three orgs and allocate the three parties on the peer nodes of the three orgs, respectively. Given the way the example is set up, that should be straightforward.
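As a sketch, parties can be allocated against a specific node's ledger endpoint with the DAML assistant; the hostnames and port below are assumptions about your deployment, not values from the tutorial:

```shell
# Allocate one party per org, each against that org's node
# (endpoints are placeholders for the three orgs' ledger APIs).
daml ledger allocate-parties Issuer --host org1-node.example.com --port 6865
daml ledger allocate-parties Owner  --host org2-node.example.com --port 6865
daml ledger allocate-parties Buyer  --host org3-node.example.com --port 6865
```

Each node then hosts the party allocated on it and can submit transactions with that party's authority.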

Hyperledger Fabric - channel.tx and genesis.block very unclear

Next week I'm starting on a new blockchain project using Hyperledger Fabric. I have a question regarding the configtxgen binary.
We use this binary to create a channel.tx and a genesis.block. I have read the documentation, watched tutorials and searched the internet, but I still don't understand why the genesis.block and channel.tx are needed and why they are created like this. For example: shouldn't the genesis.block already be in the blockchain, including the channel configuration?
A simplified answer:
A genesis block is simply the first block.
The first (genesis) block in a channel is the channel.tx (channel configuration transaction). It doesn't contain much more than the name of the channel and the consortium that is allowed to use the channel.
The orderer genesis block is what configures an orderer when it starts. It contains MSP IDs for each organization, which MSP IDs are part of a consortium, and a trusted certificate for each MSP ID.
The orderer needs information about organizations, because the orderer approves the creation of new channels. A channel creation request must come from a trusted entity (part of an organization), or else the channel will not be created.
Since you cannot modify (execute a transaction) in a channel without an orderer approving, it makes sense to only let the orderer have the network information. This way you don't risk having inconsistent information between channels/orderers in case anything changes.
All Fabric blocks are encoded/serialized using protobuf, as the internal communication relies on gRPC. A block is thus in a binary format.
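As a concrete sketch, both artifacts are produced with the configtxgen tool; the profile and channel names below are assumptions and come from your own configtx.yaml:

```shell
# Generate the orderer genesis block for the system channel
# (profile name is a placeholder from configtx.yaml).
configtxgen -profile TwoOrgsOrdererGenesis -channelID system-channel \
  -outputBlock ./genesis.block

# Generate the channel creation transaction for an application channel.
configtxgen -profile TwoOrgsChannel -channelID mychannel \
  -outputCreateChannelTx ./channel.tx
```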
Think of it like this: what would you do if you wanted to change the configuration of the blockchain system? Shut down all the hosts, edit their configs and restart them one by one? That would be ridiculous, because blockchain is decentralized and nobody controls all the hosts. The only way to change configuration dynamically is consensus online, and the obvious way to reach consensus online is through transactions (tx). For the initialization of the blockchain in Fabric we can use the same mechanism, namely channel.tx and genesis.block, eliminating the cost of a separate initialization path by reusing the configuration-editing logic. And since a tx must be placed in a block, that is why genesis.block exists.

How will DAML code affecting two distinct DA nodes be deployed, and how will its integrity be maintained?

I am looking for DA’s recommendation/best practices regarding writing and deploying DAML code and object (.daml and .dar) in production grade solutions.
Let us take a scenario: a Central Authority (CA) operating a node may issue a new role under a contract to Participant 1 (P1) by writing a simple piece of DAML code. Below are a few questions related to the DAML deployment:
a. Assuming the DAML code is written by CA, can we say that only CA needs to have this code and its build on its node, and that CA will simply execute the contract workflow, allowing the Party on the P1 node to accept/reject the role without having to know the content of the DAML code (business logic and other contract templates) written by CA?
b. Will the DAML code file (.daml) written by the CA node need to be transmitted to the Participant 1 (P1) node so that P1 can verify and agree with the DAML code (contract templates, parties and choices) and put the code and its build (.dar) onto its node as well?
c. If the answer to the above question is yes, how will the integrity of the DAML code be maintained? E.g., what if the DAML code is changed by P1 or CA at the time of deployment, which may cause conflicts later?
The contract model, in the form of a .dar file, has to be supplied to all nodes that participate in the workflows modeled in that .dar file.
A dar file can contain multiple DAML "packages" and each package is identified by its name and a hash.
On the ledger, contract types (called templates) are fully qualified, including the package hash. If you change your templates, the package hash changes, and thus the new templates are seen by the ledger as completely different from the old ones.
To change an existing contract model, you have to upgrade existing contracts using a DAML workflow. Of course, all signatories of the existing contracts need to agree to the upgrade workflow. You can only unilaterally upgrade data that you are in full control of. In the crypto currency world, you can consider all miners as signatories. Either they all agree on an upgrade or they hard fork, leading to two slightly different models of the same currency.
This sort of model upgrade process in DAML is described in detail here: https://github.com/digital-asset/ex-upgrade
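To see the package hashes that identify templates on the ledger, you can inspect a built .dar with the DAML assistant; the .dar path below is a placeholder for your own project's build output:

```shell
# List the packages (name + hash) contained in a dar file;
# the hash is what fully qualifies templates on the ledger.
daml damlc inspect-dar .daml/dist/my-project-0.0.1.dar
```

Both CA and P1 can run this against the .dar they received and compare the hashes, which answers the integrity question: a modified model produces a different package hash.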

What is the difference between composer network and composer identity and also registries and networks

In hyperledger composer network permissions, we always come across these terminologies
Composer Network and Composer Identity.
Network access and Business access.
Registries and Networks.
What is the difference between them? As far as I know, registries are for participants and assets; you access participants and assets through registries and define permissions on them. Is that right?
For permissions, you can read about the ACLs here -> https://hyperledger.github.io/composer/reference/acl_language.html
'Composer Network' represents the business network entity. 'Composer Identity' refers to a specific blockchain identity that is mapped to a single Participant - defined in a Participant Registry that is contained within the business network in question.
Registries maintain a view of a particular type of Asset or Participant. Registries are also maintained by Composer for Identities and for historical transactions. They allow someone in that business network (given the right authority) to see the current status and history of the ledger, and registries classify that much like a database table might, i.e. depending on the level of detail required (e.g. Backroom Traders (Participant), Front Office Traders (Participant), Metal Commodities (Asset), Agricultural Commodities (Asset), etc.), or things could simply be rolled up as 'Traders' (Participant) and 'Commodities' (Asset) types if less detail is required. The salient point is that you store Participant or Asset instances in their respective type registries.
See the tutorials for examples of Assets and Participants in action:
https://hyperledger.github.io/composer/tutorials/queries.html
https://hyperledger.github.io/composer/tutorials/developer-guide.html
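Tying this back to the ACL language linked above, a permission rule granting one participant type read access to one asset type looks roughly like this; the namespaces and type names are placeholder examples, not from the question:

```
// Sketch of a Composer ACL rule; names are illustrative placeholders.
rule TradersCanReadCommodities {
    description: "Backroom Traders can read Commodity assets"
    participant: "org.example.trading.BackroomTrader"
    operation: READ
    resource: "org.example.trading.Commodity"
    action: ALLOW
}
```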

What would be an ideal way to share writable volume across containers for a web server?

The application in question is Wordpress, I need to create replicas for rolling deployment / scaling purposes.
It seems I can't create more than one instance of the same container if it uses a persistent volume (GCP term):
The Deployment "wordpress" is invalid: spec.template.spec.volumes[0].gcePersistentDisk.readOnly: Invalid value: false: must be true for replicated pods > 1; GCE PD can only be mounted on multiple machines if it is read-only
What are my options? There will be occasional writes and many reads. Ideally writable by all containers. I'm hesitant to use the network file systems as I'm not sure whether they'll provide sufficient performance for a web application (where page load is rather critical).
One idea I have is, create a master container (write and read permission) and slaves (read only permission), this could work - I'll just need to figure out the Kubernetes configuration required.
In https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes you can see a table of the volume types that support ReadWriteMany (the access mode you are looking for).
AzureFile (not suitable if you are using GCP)
CephFS
Glusterfs
Quobyte
NFS
PortworxVolume
The one that I've tried is NFS. I had no issues with it, but you should also consider potential performance issues. However, if the writes are only occasional, it shouldn't be much of a problem.
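For example, an NFS-backed PersistentVolume plus a ReadWriteMany claim could be sketched like this; the server address, export path, names and sizes are placeholders for your environment:

```yaml
# PersistentVolume backed by an NFS export; many pods may mount it RW.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-uploads
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10          # placeholder NFS server address
    path: /exports/wordpress   # placeholder export path
---
# Claim that all Deployment replicas can reference in their pod spec.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-uploads-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
```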
I think what you are trying to solve is having a central location for WordPress media files; in that case this would be a better solution: https://wordpress.org/plugins/gcs/
That makes your Kubernetes workload truly stateless, so you can scale horizontally.
You can use a Regional Persistent Disk. It can be mounted to many nodes (hence pods) in RW mode. These nodes can be spread across two zones within one region. Regional PDs can be backed by standard or SSD disks. Just note that as of now (September 2018) they are still in beta and may be subject to backward-incompatible changes.
Check the complete spec here:
https://cloud.google.com/compute/docs/disks/#repds
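Creating one could look like the following; the disk name, region and replica zones are placeholders, and note the beta caveat above:

```shell
# Create a regional PD replicated across two zones
# (beta command as of September 2018; names/zones are placeholders).
gcloud beta compute disks create wordpress-disk \
  --size 200GB \
  --region us-central1 \
  --replica-zones us-central1-a,us-central1-b
```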