How will DAML code affecting two distinct DA Nodes be deployed, and how will its integrity be maintained? - daml

I am looking for DA's recommendations/best practices for writing and deploying DAML code and artifacts (.daml and .dar) in production-grade solutions.
Let us take a scenario: a Central Authority (CA) operating a node issues a new role under a contract to Participant 1 (P1) by writing some simple DAML code. Below are a few questions related to DAML deployment:
a. Assuming the DAML code is written by the CA, can we say that only the CA needs to have this code and its build on its node, and that the CA will simply execute the contract workflow, allowing the party on the P1 node to accept/reject the role without having to know the content of the DAML code (business logic and other contract templates) written by the CA?
b. Does the DAML source file (.daml) written on the CA node need to be transmitted to the Participant 1 (P1) node so that P1 can verify and agree with the code (contract templates, parties and choices) and deploy the code and its build (.dar) on its node as well?
c. If the answer to the above question is yes, how will the integrity of the DAML code be maintained? For example, what if the DAML code is changed by P1 or the CA at the time of deployment, which may cause conflicts later?

The contract model, in the form of a dar file, has to be supplied to all nodes that participate in the workflows modeled in that dar file.
A dar file can contain multiple DAML "packages", and each package is identified by its name and a hash.
On ledger, contract types (called templates) are fully qualified, including the package hash. If you change your templates, the package hash changes, and thus the new templates are seen as completely different from the old ones by the ledger.
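In practice, each organization can build the dar from the agreed source and compare package hashes before uploading it to its own node; because the hash is derived from the compiled content, the CA and P1 can verify that they deployed exactly the same model, which also addresses question c. A minimal sketch using the daml assistant CLI (file name, host and port are illustrative):

    # Build the model from the agreed .daml source.
    daml build -o .daml/dist/role-model.dar

    # List the packages in the dar together with their hashes;
    # CA and P1 can compare these to confirm they hold the same model.
    daml damlc inspect-dar .daml/dist/role-model.dar

    # Upload the dar to the local participant node.
    daml ledger upload-dar --host localhost --port 6865 .daml/dist/role-model.dar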
To change an existing contract model, you have to upgrade existing contracts using a DAML workflow. Of course, all signatories of the existing contracts need to agree to the upgrade workflow; you can only unilaterally upgrade data that you are in full control of. In the cryptocurrency world, you can consider all miners as signatories: either they all agree on an upgrade, or they hard fork, leading to two slightly different models of the same currency.
This sort of model upgrade process in DAML is described in detail here: https://github.com/digital-asset/ex-upgrade
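For illustration, such an upgrade is typically modeled as a proposal contract that the counterparty must explicitly accept; only inside the accepting choice are both signatories' authorities available to archive the old contract and create the new one. A minimal sketch, with illustrative template and field names (this is not the code from the ex-upgrade repository):

    module Upgrade where

    -- The original role contract, signed by both parties.
    template RoleV1
      with
        ca : Party
        p1 : Party
      where
        signatory ca, p1

    -- The upgraded version, here with an additional field.
    template RoleV2
      with
        ca      : Party
        p1      : Party
        comment : Text
      where
        signatory ca, p1

    -- CA proposes the upgrade; P1 must explicitly accept it.
    template UpgradeProposal
      with
        ca     : Party
        p1     : Party
        roleId : ContractId RoleV1
      where
        signatory ca
        observer p1
        choice Accept : ContractId RoleV2
          controller p1
          do
            -- Both authorities (ca as proposal signatory, p1 as
            -- controller) are in scope here, so the old contract
            -- can be archived and the new one created.
            archive roleId
            create RoleV2 with
              ca = ca
              p1 = p1
              comment = "upgraded by mutual agreement"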

Related

Detect data anomalies in a data pipeline and trigger a scheduled pipeline run

In Foundry, we have a data pipeline where we want to insert a code node (repo or workbook) that detects anomalies and then sends an email or some other alert about the problem.
I'm having trouble finding this in the documentation; can someone point me to it?
Ideally, we would love to have the code trigger the Scheduler to do a pipeline run to create a report (maybe even in Quiver, to do some timeline analysis). Is this possible? Are there examples in the documentation?
Check out the Data Health section of the platform documentation. There are a number of possible patterns, including defining data expectations in your code.
Whether defined as expectations or as dataset health checks, failures can be set up to create Issues within the platform. Issues can have default assignees (individuals or groups) and send notifications, both in-platform and over email (depending on per-user configuration).
Health check failures will also automatically populate the Data Health tab in the Project Catalog view, which can serve as a dashboard for the overall health of the project. You can also surface these in the Data Lineage view, colored by data health, to understand issues across the breadth of the pipeline.
For a comprehensive approach to pipeline health, review the Pipelines and best practices section in the Code Repositories documentation.
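As a concrete starting point, here is a minimal sketch of an expectation in a Python transform, assuming the transforms expectations API from the Data Health documentation; the dataset paths, column name and check name are all illustrative:

    from transforms.api import transform_df, Input, Output, Check
    from transforms import expectations as E

    @transform_df(
        Output(
            "/Project/datasets/clean_events",  # illustrative path
            checks=Check(E.primary_key("event_id"), "event_id is unique", on_error="FAIL"),
        ),
        raw=Input("/Project/datasets/raw_events"),  # illustrative path
    )
    def clean_events(raw):
        # A failing check marks the build unhealthy, which can in turn
        # open an Issue and notify its default assignees as described above.
        return raw.dropDuplicates(["event_id"])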

How is a DAML application adapted to a Fabric network?

I was reading this documentation https://github.com/digital-asset/daml-on-fabric but I am confused. I am trying to understand the workflow and the architecture for DAML on a Fabric network. The network has 2 orgs and each org has 1 peer. In step 4, 5 DAML parties are allocated, while the quickstart example has 3 signatories (issuer, owner, buyer). To sum up, how do DAML parties map to Fabric orgs? I think even more DAML parties could be allocated without changing the Fabric network. Do they all interact from one node? What is the purpose of adding other nodes in step 11?
Parties don't have a cryptographic identity in DAML Ledgers, only nodes do. As part of the shared ledger state, every DAML-enabled infrastructure maintains a mapping from Party to Node. This relationship is usually described as a Node "hosting" a Party. The Node hosting a Party is able to submit transactions using that Party's authority and is guaranteed to receive any transactions visible to that Party.
In the tutorial you are referring to, all parties are allocated on a single Node, which then hosts them. Admittedly, that does not make much sense in practice. It would be more sensible to set up a network with three orgs and allocate the three parties on the peer nodes of the three orgs, respectively. Given the way the example is set up, that should be straightforward.
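For illustration, allocating parties is a ledger API operation rather than part of the contract model. A minimal sketch in DAML Script (party and participant names are illustrative; the tutorial itself may use a different mechanism):

    module Setup where

    import Daml.Script

    setup : Script ()
    setup = do
      -- On a single node, all allocated parties end up hosted there.
      issuer <- allocateParty "Issuer"
      owner  <- allocateParty "Owner"
      buyer  <- allocateParty "Buyer"
      -- In a multi-participant setup you can target specific nodes,
      -- e.g. one peer per org:
      -- owner <- allocatePartyOn "Owner" (ParticipantName "p1")
      pure ()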

Hyperledger Fabric - channel.tx and genesis.block very unclear

Next week I'm starting on a new blockchain project using Hyperledger Fabric. I have a question regarding the configtxgen binary.
We use this binary to create a channel.tx and a genesis.block. I have read the documentation, watched tutorials and looked around the internet, but I still don't understand why the genesis.block and channel.tx are needed and why they're created like this. For example: shouldn't the genesis.block be in the blockchain, including the channel configuration?
A simplified answer:
A genesis block is simply the first block in a chain.
The first (genesis) block in a channel is the channel.tx (channel configuration transaction). It doesn't contain much more than the name of the channel and the consortium that is allowed to use the channel.
The orderer genesis block is what configures an orderer when it starts. It contains MSP IDs for each organization, which MSP IDs are part of a consortium, and a trusted certificate for each MSP ID.
The orderer needs information about organizations, because the orderer approves the creation of new channels. A channel creation request must come from a trusted entity (part of an organization), or else the channel will not be created.
Since you cannot modify a channel (execute a transaction in it) without an orderer approving, it makes sense to let only the orderer have the network information. This way you don't risk having inconsistent information between channels/orderers if anything changes.
All Fabric blocks are encoded/serialized using protobuf, as the internal communication relies on gRPC. A block is thus in a binary format.
Think of it like this: what would you do if you wanted to change the configuration of the blockchain system? Shut down all the hosts, edit their configs and restart them one by one? That would be ridiculous, because blockchain is decentralized and nobody controls all the hosts. The only way to change configuration dynamically is to reach consensus online, and the obvious way to do that is through transactions (tx). The initialization of the blockchain in Fabric works the same way: channel.tx and genesis.block reuse the configuration-editing logic and thereby eliminate the cost of a separate initialization mechanism. And since a tx must be placed in a block, that is why genesis.block exists.
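Concretely, both artifacts are produced by configtxgen from profiles defined in configtx.yaml. A minimal sketch, with profile and channel names in the style of the first-network sample (they are illustrative, not taken from the question):

    # Create the orderer genesis block from a profile in configtx.yaml.
    configtxgen -profile TwoOrgsOrdererGenesis -channelID sys-channel \
      -outputBlock ./channel-artifacts/genesis.block

    # Create the channel configuration transaction for an application channel.
    configtxgen -profile TwoOrgsChannel -channelID mychannel \
      -outputCreateChannelTx ./channel-artifacts/channel.tx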

How to write a Hyperledger Composer function that requires two parties to confirm

I want to construct a function in Hyperledger Composer that won't accept a transaction until both parties confirm. For example, a supply chain transaction would not go through until the data comes in that says the shipment was fulfilled and then the supplier confirms that they got it. Is this possible in Hyperledger Composer? Or Hyperledger Fabric?
I am sure it is possible in Composer, but I am not familiar with its mechanics. In Fabric, it is a simple chaincode endorsement policy, for example AND(consumer.member, supplier.member); that is, the policy requires 2 signatures (one from the consumer and one from the supplier). In order to sign the transaction proposal, both consumer and supplier make sure the smart contract (chaincode) satisfies the set condition: "the data comes in that says the shipment was fulfilled and then the supplier confirms that they got it." If either or both signatures are not valid, the transaction won't go through.
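For illustration, such a policy is set when the chaincode is instantiated. A sketch using the Fabric 1.x CLI, where the orderer address, channel, chaincode name and MSP IDs are all illustrative:

    # Instantiate the chaincode with an endorsement policy requiring
    # one signature from each organization's MSP.
    peer chaincode instantiate -o orderer.example.com:7050 \
      -C mychannel -n supplycc -v 1.0 \
      -c '{"Args":["init"]}' \
      -P "AND('ConsumerMSP.member','SupplierMSP.member')"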

What do you call a modification that is made in an environment that is not DEV?

In application life-cycle management, it's common to have some environments. For example:
DEV -> Staging -> Production
Normally, you would develop in the DEV environment and stage your developments to Staging and Production.
But it's possible to directly modify the PRD environment (to quickly fix a bug, for instance).
What do you call this procedure (modifying your code in an environment that is not the DEV environment)?
I thought it was called "hotfix" but I see no related search results in Google.
The relevant entity here is not, in my opinion, your reference entity Environment, but rather the Branch within your SCM.
With this in mind, you are absolutely right: in my experience it was always a Hotfix branch. On planet TFS, where I currently reside, this is described in various branching guidelines, including this one - which is considered to be among the best (if not THE best). I had similar experiences on a UNIX/ClearCase planet, again with Hotfix branches; there they were named "MaintenanceRelease" branches. Those contained one or more hotfixes, and occasionally a highly anticipated feature could be merged into them as well.

I wouldn't ever expect to see a "Hotfix" environment in any company. Hotfixes address any possible crisis that a customer has experienced, and that is by definition pretty vague, so having such an environment is probably a utopia. On one occasion, there was a "BLS" lab ("Back Level Support") which was used by support people to reproduce customer scenarios; hotfixes provided by development were deployed in this lab before release. That is, to some extent, a "Hotfix" environment - still, beware that this installation cost millions.