Multi-member governance tools for resources - ethereum

For a consortium with multiple enterprise parties operating a permissioned blockchain, how does governance of the shared infrastructure work with Kaleido?
I assume that one party can launch the blockchain platform (with a fixed set of nodes), invite members, give invited members limited capabilities to manage the shared resources (e.g. they can set up private channels and invite other members, and perhaps add/remove their own nodes/peers?).
Does the party who launches the blockchain consortium instance have more "powers" than invited members (e.g. which AWS region to deploy to)?
Can an invited member add more peers or remote nodes than the rest of the consortium, and then perform something like a 51% attack?
Can payments be split between consortium members?
The encrypted storage: how is this governed between multiple members of a consortium?
I would appreciate any feedback.
Kind Regards,
Zaid

Does the party who launches the blockchain consortium instance have more "powers" than invited members (e.g. which AWS region to deploy to)?
In the current open beta functionality, Kaleido does expose the ability for the original creator of a consortium/environment to delete it, including all nodes owned by all members. This is a convenience feature for PoC-stage consortia. Please drop support@kaleido.io an email directly if you are at a stage with a project where you need to discuss a fully decentralized governance model where this ability is removed.
Can an invited member add more peers or remote nodes than the rest of the consortium, and then perform something like a 51% attack?
Each consensus algorithm has different byzantine fault tolerance characteristics, and you can read about them here:
https://kaleido.io/consensus-algorithms-poa-ibft-or-raft/
In the current open beta, members invited/permissioned into the private chain are able to add multiple nodes that participate in forming consensus (Clique signers / IBFT validators). Again, please contact Kaleido if you have specific requirements in this area.
Can payments be split between consortium members?
Kaleido is not currently charging for the open beta. However, the ownership model for Kaleido cloud resources is that each Kaleido organization owns its own nodes. Each member running nodes in a Kaleido private chain has control over the lifecycle and operations of their own nodes. As such, it would follow that each participant would pay for their own nodes in such a model.
The encrypted storage: how is this governed between multiple members of a consortium?
The Kaleido tenancy model is described here:
https://docs.kaleido.io/getting-started/overview/kaleido-tenancy-model/
A further option for encryption of sensitive key materials is to use per-tenant master encryption keys stored outside of the Kaleido platform in the AWS Key Management Service (KMS):
https://kaleido.io/why-your-keys-are-safe-in-kaleido/
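For illustration only, here is a minimal sketch of the general per-tenant envelope-encryption pattern with the AWS KMS SDK for Java; the key alias is hypothetical, and this is not Kaleido's actual implementation:

```java
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DataKeySpec;
import com.amazonaws.services.kms.model.GenerateDataKeyRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyResult;

import java.nio.ByteBuffer;

public class EnvelopeEncryptionSketch {
    public static void main(String[] args) {
        AWSKMS kms = AWSKMSClientBuilder.defaultClient();

        // Ask KMS to generate a data key under the tenant's master key.
        // "alias/tenant-123-master" is a hypothetical key alias.
        GenerateDataKeyResult dataKey = kms.generateDataKey(
                new GenerateDataKeyRequest()
                        .withKeyId("alias/tenant-123-master")
                        .withKeySpec(DataKeySpec.AES_256));

        // Plaintext key: used to encrypt data locally, then discarded.
        ByteBuffer plaintextKey = dataKey.getPlaintext();

        // Encrypted key: safe to store alongside the ciphertext; only the
        // tenant's master key in KMS can decrypt it later.
        ByteBuffer encryptedKey = dataKey.getCiphertextBlob();

        System.out.println("data key generated; encrypted blob is "
                + encryptedKey.remaining() + " bytes");
    }
}
```

The point of the pattern is that the master key never leaves KMS, so revoking a tenant's master key renders all of that tenant's data keys (and hence its data) unreadable.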
If you are interested in further details of the virtualization technologies Kaleido uses to dedicate isolated storage to each node, please reach out directly to support.
Many thanks for your questions, and I hope this response gives some additional clarity on the features available in the Kaleido open beta.
Please do reach out to support@kaleido.io directly if you'd like to learn more.
Regards, Peter

Related

How is a service mesh different from a 2010 ESB solution like IBM IIB or Oracle ESB

Back in the day, I was an IBM Integration Bus (IIB) developer, when it was still known as IBM WebSphere Message Broker. I would develop message flows to connect various input, output and processing nodes. This development style, of course, extends to other ESB vendors too, so the question does not lose generality.
The messaging engine for IIB is WebSphere MQ (WMQ), which provides communication in the form of messages on a queue or as topics. Together with internal logic in IIB, the nodes communicate with each other by passing messages.
A typical IIB/WMQ installation has a well-documented HA mechanism too. Besides, if a message flow exposes an HTTP(S) endpoint, it can do so behind a load balancer.
Similarly, one can speak about other technologies that comprised the SOA era. Therefore, my question is: if I
developed micro-services that communicated via, say, WMQ,
deployed each micro-service to a container,
used an ESB to orchestrate these micro-services, and
relied on the ESB (and its ancillary technologies) for access control, traffic management, etc.,
then what do I need Istio for, apart from a 'pure containers-based architecture'?
https://developer.ibm.com/integration/blog/2014/07/02/ibm-integration-bus-high-availability-overview/
https://developer.ibm.com/integration/docs/ibm-integration-bus/learn-play/an-introduction-to-ibm-integration-bus/
Istio implements the sidecar pattern, coupling a proxy to each microservice. The microservices are (not necessarily, but usually) deployed on infrastructure that allows elastic scaling, in which the system is delegated the task of adjusting the number of instances of each microservice based on the configured scaling strategy. This means that while the scaling behavior is predictable in aggregate, the number of containers at any given moment is unknown in the short term.
Istio solves the problem of abstracting microservices away from purely infrastructure tasks, leaving them to focus solely on the functional plane, while at the same time scaling elastically together with the containers to which it is attached.
Delegating this task to an ESB is not impossible, but in my opinion it would introduce a fairly high complexity factor. Maybe you've found a business opportunity ;-)
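To make the sidecar idea above concrete, here is a toy, single-feature sketch in Java: a separate proxy process handles an infrastructure concern (here, retries) so the application code stays purely functional. This illustrates the pattern only; it is not Istio's Envoy proxy, and the ports are arbitrary:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ToySidecar {
    public static void main(String[] args) throws IOException {
        HttpClient client = HttpClient.newHttpClient();
        // The "sidecar" listens on its own port next to the app;
        // the platform routes inbound traffic here first.
        HttpServer proxy = HttpServer.create(new InetSocketAddress(15001), 0);
        proxy.createContext("/", exchange -> {
            // Forward to the co-located application (arbitrary port 8080).
            URI target = URI.create("http://localhost:8080" + exchange.getRequestURI());
            for (int attempt = 1; attempt <= 3; attempt++) {
                try {
                    // The retry policy lives in the sidecar, not in the app code.
                    HttpResponse<byte[]> upstream = client.send(
                            HttpRequest.newBuilder(target).build(),
                            HttpResponse.BodyHandlers.ofByteArray());
                    byte[] body = upstream.body();
                    exchange.sendResponseHeaders(upstream.statusCode(),
                            body.length == 0 ? -1 : body.length);
                    if (body.length > 0) {
                        try (OutputStream out = exchange.getResponseBody()) {
                            out.write(body);
                        }
                    } else {
                        exchange.close();
                    }
                    return;
                } catch (Exception e) {
                    // Retry; a real sidecar also adds mTLS, metrics, tracing.
                }
            }
            exchange.sendResponseHeaders(502, -1); // all retries failed
            exchange.close();
        });
        proxy.start();
    }
}
```

Because the sidecar is a separate process deployed alongside every instance, this logic scales up and down automatically with the application containers, which is the property Istio exploits.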
The TL;DR answer is that Istio is more flexible and does not try to make the microservices fully dependent on it, while the IIB stack was mostly "once you go in, you can't get out without a migration project".
IIB previously had a monolithic architecture, and the IIB links you provided describe how to create a high-availability architecture for it. The recent direction for ESB offerings (from any vendor) has been to deploy the ESB itself as microservices. Specifically with IIB, we can run each execution group (Integration Server) as a container. With this you get all the advantages of a microservice architecture. And of course, as mentioned, you can have these ESB microservices perform orchestration as well.
But for any enterprise that has a microservices-based architecture across its various applications, and not just the ESB as containers, it is very difficult to manage, secure, and observe them all, especially as microservices keep growing into the thousands across the enterprise. This is where Istio helps.
https://istio.io/docs/concepts/what-is-istio/

Using Azure Service Bus to communicate between 2 organizations

We wish to decouple the systems between 2 separate organizations (as an example: one could be a set of in house applications and the other a set of 3rd party applications). Although we could do this using REST based APIs, we wish to achieve things like temporal decoupling, scalability, reliable and durable communication, workload decoupling (through fan-out), etc. And it is for these reasons, we wish to use a message bus.
Now one could use Amazon's SNS and SQS as the message bus infrastructure, where our org would have an SNS instance which would publish to the 3rd party SQS instance. Similarly, for messages the 3rd party wished to send to us, they would post to their SNS instance, which would then publish to our SQS instance. See: Cross Account Integration with Amazon SNS
I was thinking of how one would achieve this sort of cross-organization integration using Azure Service Bus (ASB), as we are heavily invested in Azure. But ASB doesn't have the ability to publish from one instance to another instance belonging to a different organization (or even to another instance in the same organization, at least not yet). Given this limitation, the plan is that we would give the 3rd party vendor one set of connection strings that would allow them to listen to and process messages that we posted, and a separate set of connection strings that would let them post messages to a topic which we could then subscribe to and process.
My question is: Is this a good idea? Or would this be considered an anti-pattern? My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree on not just the endpoints, but also how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
Is this a valid concern?
Have you done this?
What other issues might I run into?
Using Azure Service Bus connection strings with different Shared Access Policies for senders and receivers (Send and Listen) is exactly how the service is intended to be used by senders and receivers with limited permissions, just like you intend to use it.
My biggest concern is the fact that, while the point of using a message bus was to achieve decoupling, the infrastructure piece of ASB is making us tightly coupled, to the point that the 2 organizations need to agree on not just the endpoints, but also how the queue/topic was set up (session or no session, duplicate detection, etc.), and the consumer is tightly coupled to how the sender sends messages (what was used as the session ID, message ID, etc.).
The coupling always exists. You're coupled to the language you're using, to the datastore technology used to persist your data, to the cloud vendor you're using. This is not the type of coupling I'd be worried about, unless you're planning to change those on a monthly basis.
The same goes for the communication patterns. Sessions would be a business requirement, not coupling: if you require ordered message delivery, what else would you do? On Amazon you'd also be "coupling" to a FIFO queue to achieve ordering. A message ID is by no means coupling either; it's an attribute on a message, and if the receiver chooses to ignore it, they can. Yes, you're coupled to the BrokeredMessage/Message envelope and serialization, but how else would you send and receive messages? This is more of a contract for the parties to agree upon.
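As a concrete illustration of that contract, here is a minimal sketch using the azure-messaging-servicebus SDK for Java; the queue name, session ID, and message ID are hypothetical values the two parties would agree on:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class SendWithSession {
    public static void main(String[] args) {
        // A Send-only connection string (Shared Access Policy with only the
        // Send claim) is what you would hand to the partner organization.
        String sendOnlyConnectionString = System.getenv("ASB_SEND_CONNECTION_STRING");

        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString(sendOnlyConnectionString)
                .sender()
                .queueName("orders")          // hypothetical session-enabled queue
                .buildClient();

        ServiceBusMessage message = new ServiceBusMessage("{\"orderId\": 42}")
                .setSessionId("customer-42")  // ordering unit: part of the agreed contract
                .setMessageId("order-42-v1"); // enables duplicate detection on the entity

        sender.sendMessage(message);
        sender.close();
    }
}
```

Note how the session ID and message ID are simply message attributes; the receiver is free to use or ignore them, which is why they constitute a contract rather than infrastructure coupling.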
One name for the pattern of connecting service buses between organizations is the "Shovel" (that's what they are called in RabbitMQ):
Sometimes it is necessary to reliably and continually move messages from a source (e.g. a queue) in one broker to a destination in another broker (e.g. an exchange). The Shovel plugin allows you to configure a number of shovels, which do just that and start automatically when the broker starts.
In the case of Azure, one way to achieve "shovels" is by using Logic Apps, as they provide the ability to connect to ASB entities in different namespaces.
See:
What are Logic Apps
Service Bus Connectors
Video: Use Azure Enterprise Integration Services to run cloud apps at scale
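If Logic Apps are not an option, a hand-rolled shovel is also possible. Here is a minimal sketch with the azure-messaging-servicebus SDK for Java that drains a queue in one namespace and forwards into another; the entity names are hypothetical, and a production shovel would add error handling and dead-lettering:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class QueueShovel {
    public static void main(String[] args) {
        // Listen on the source namespace, send to the destination namespace.
        ServiceBusReceiverClient source = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SOURCE_LISTEN_CONNECTION_STRING"))
                .receiver()
                .queueName("outbox")   // hypothetical source queue
                .buildClient();
        ServiceBusSenderClient destination = new ServiceBusClientBuilder()
                .connectionString(System.getenv("DEST_SEND_CONNECTION_STRING"))
                .sender()
                .queueName("inbox")    // hypothetical destination queue
                .buildClient();

        // Receive a batch under the default peek-lock mode, forward, then settle.
        for (ServiceBusReceivedMessage received : source.receiveMessages(10)) {
            destination.sendMessage(new ServiceBusMessage(received.getBody()));
            source.complete(received); // settle only after the forward succeeded
        }

        source.close();
        destination.close();
    }
}
```

Completing the source message only after the forward succeeds gives at-least-once delivery across the two namespaces, which is the same guarantee a RabbitMQ shovel provides.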

Big Fiware Deployments in IoT

Can someone please post a link to examples of bigger Fiware deployments in IoT domain?
Is Santander the biggest deployment of this kind? It has 12k sensors, which is big but not impressive; impressive would be 12M, as then you would need a clustered broker to accept all those connections. I guess 12k connections can be handled by a single PC (no need for clustering).
I am interested in benchmarks (latency and throughput) and in the stability of Orion and other Fiware components, and I want to know whether some bigger commercial system is deployed on Fiware, or whether it is just in an experimental phase and not yet suitable for professional deployments.
BR,
Drasko
In the FIWARE Lab global instance of Orion, we are currently processing about 160,000 instances.
FIWARE components are also part of private companies' commercial portfolios, not only public instances for testing like the FIWARE Lab mentioned above. For instance, Orion CB and the IoT Agents are currently used by Telefonica's Smartcity commercial product.
If you are interested in the components' performance/scalability more than in existing deployments (which depend more on the companies' current customers than on the technology's limits), you may check the performance tests that will be published during this year.
Cheers,

Difference between using Listeners and MBean to send Notifications?

I've been reading about how the GemFire distributed data store/management/cache system performs notifications. While reading, I had this question.
GemFire seems to be using MBeans to create notifications during events. How different/suitable is using MBeans to create notifications compared with implementing a Listener-based approach? (Not just in GemFire, but generally.)
Note: I am very new to the topic of MBeans. I just understand that their main purpose is to expose resources to be managed.
CONTEXT
...topic of MBeans... their main purpose is to expose resources to be managed.
That is correct. (GemFire) Resources exposed as MBeans can both be queried and altered, depending on what the MBean exposes for the resource (e.g. Region, DiskStore, Gateway, AEQ, etc), using JMX.
GemFire's JMX interface can then be consumed by applications and tools that use the JMX API. GemFire's Gfsh (command-line shell and management tool) along with Pulse (web monitoring tool) are both examples of JMX clients and the kinds of applications you could write that use JMX.
You can also use the standard JDK tools like jconsole or jvisualvm to connect to a GemFire Manager (managing node in the cluster that federates the view of all the members in the cluster as well as the ability to control any single member from the Manager). See GemFire's section in the User Guide on Management for more details.
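For example, a plain JMX client can subscribe to notifications from a GemFire Manager along these lines; this is a minimal sketch, assuming the conventional Manager JMX port of 1099 and an illustrative MBean name:

```java
import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GemFireJmxListener {
    public static void main(String[] args) throws Exception {
        // Connect to the Manager's JMX endpoint (host/port are assumptions).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Illustrative ObjectName for the federated distributed-system MBean.
            ObjectName system = new ObjectName("GemFire:service=System,type=Distributed");

            NotificationListener listener = (Notification n, Object handback) ->
                    System.out.println(n.getType() + ": " + n.getMessage());

            // Receive management notifications (member joined, Region created, ...).
            mbs.addNotificationListener(system, listener, null, null);
            Thread.sleep(60_000); // stay connected long enough to see notifications
        }
    }
}
```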
Contrast that with GemFire callbacks: callbacks (e.g. CacheListener) can be used by peer/client cache applications to register interest in certain types of events, like Region entry creation/updates, etc. Other callbacks like CacheLoaders can be used to read through to an external data source (e.g. an RDBMS) on a Cache miss. Likewise, a CacheWriter can be used to write through to an external data source on a Cache (Region) create/update, or perhaps asynchronously with an AEQ/AsyncEventListener performing a write-behind to the external data source.
There are many other callbacks and ways in which these callbacks can be used, but nearly all are used programmatically in a GemFire client/peer Cache application to "receive" notifications of some type, as sketched below.
For more details, see the GemFire User Guide on Events and Event Handling.
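By contrast with the JMX client above, a cache-side callback is registered in application code. Here is a minimal sketch using the Apache Geode API (the open-source core of GemFire); the region name, key, and locator port are illustrative:

```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class CallbackSketch {
    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334) // default locator port
                .create();

        Region<String, String> orders = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
                .addCacheListener(new CacheListenerAdapter<String, String>() {
                    @Override
                    public void afterCreate(EntryEvent<String, String> event) {
                        // Fires in this application when an entry is created.
                        System.out.println("created " + event.getKey()
                                + " = " + event.getNewValue());
                    }
                })
                .create("orders"); // hypothetical region name

        orders.put("order-1", "pending"); // triggers afterCreate above
        cache.close();
    }
}
```

The difference in audience is visible here: the JMX listener observes management events from outside the cluster, while the CacheListener observes data events from inside a cache application.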
ANSWER
Now, when it comes to "sending" notifications, GemFire does a fair amount of distribution on your application's behalf. JMX is primarily used to send notifications about management changes: a Region was added, the eviction policy changed, a Function was deployed, etc. In contrast, GemFire sends distribution events when data changes, to other members in the cluster that are interested in the event. "Interested" members typically include other nodes in the cluster that host the same Region and have the same keys/values, which need to be updated, and in certain cases atomically (in a TX) for consistency's sake.
Now, if you want to send notifications from your application, then you are better off using Spring and Spring Data GemFire to configure and access GemFire. Spring provides exceptional support for application messaging.
Of course, other options are available, including JMS, for which Spring provides integration support.
All in all, the events/notifications that are sent and the distribution mechanism used depend highly on the event/notification type. Likewise, the manner in which to be notified (JMX Notification vs. GemFire callback) depends on the type of message and its purpose.
Sorry for the lengthy explanation; it is a loaded/broad question on a complex subject that can vary greatly depending on the use case.
Hope this helps (a little ;-)

Vulnerability Scan Authorization for Google Compute

What is the official and required process to perform our own independent vulnerability scans against virtual machines in Google Compute Engine? These will be penetration tests (our own) that scan the public IP for open ports and report the results back to us.
Microsoft Azure requires authorization and so does Amazon. Does Google?
No, Google does not need to be notified before you run a security scan on your Google Compute Engine projects. You will have to abide by the Google Cloud Platform Acceptable Use Policy and the Terms of Service.
Please also be aware of Google's Vulnerability Rewards Program Rules.
By default, all incoming traffic from an outside network is blocked. Each customer has the responsibility to create the appropriate rules to allow access to their GCE instances as they consider appropriate:
https://cloud.google.com/compute/docs/networking#firewalls
If you sign up for a trial, you can perform that test on your own project. Overall security configuration is up to the owner of the project and does not reside with Google.
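For illustration, the kind of scan described can be as simple as a TCP connect probe; a minimal sketch in Java follows (the IP is a placeholder, and per the AUP you should only point it at instances in your own project):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        // Placeholder: substitute the external IP of an instance you own.
        String host = args.length > 0 ? args[0] : "203.0.113.10";
        int[] ports = {22, 80, 443, 3389};
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                // 500 ms timeout; a blocking firewall rule shows up as a timeout.
                socket.connect(new InetSocketAddress(host, port), 500);
                System.out.println("port " + port + " open");
            } catch (Exception e) {
                System.out.println("port " + port + " closed or filtered");
            }
        }
    }
}
```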
As regards its internal infrastructure, Google has its own security teams working 24x7 to ensure it stays at the vanguard of security best practices. http://googleonlinesecurity.blogspot.ca/