Big FIWARE Deployments in IoT - fiware

Can someone please post a link to examples of bigger FIWARE deployments in the IoT domain?
Is Santander the biggest deployment of this kind? It has 12k sensors, which is big but not impressive - impressive would be 12M, because then you would need a clustered broker to accept all of these connections. I guess 12k connections can be handled by a single PC (no need for clustering).
I am interested in benchmarks (latency and throughput) and in the stability of Orion and other FIWARE components, and want to know whether some bigger commercial system is deployed on FIWARE, or whether it is still in an experimental phase and not yet suitable for professional deployments.
BR,
Drasko

In the FIWARE Lab global instance of Orion we are currently processing about 160,000 instances.
FIWARE components are also part of private companies' commercial portfolios, not only of public instances for testing like the FIWARE Lab mentioned above. For instance, the Orion CB and the IoT Agents are currently used by Telefonica's Smart City commercial product.
If you are interested in the components' performance/scalability more than in existing deployments (which depend more on the companies' current customers than on the technology's limits), you may check the performance tests that will be published during this year.
Cheers,

Related

How to choose a FIWARE NGSI-LD context broker?

I need some help deciding which FIWARE context broker (Orion-LD, Scorpio, or Stellio) I should choose for a smart-city architecture. There is no existing component which uses NGSI-v2.
Is there any other reason to choose the Orion-LD context broker besides the fact that it is the only one that supports NGSI-v2?
Is it an advantage that the Orion broker is the main component of FIWARE?
The paper "Open-Source Publish-Subscribe Systems: A Comparative Study" says the Scorpio broker is the most complete system overall.
The paper "Enabling Context-Aware Data Analytics in Smart Environments: An Open Source Reference Implementation" says the Orion-LD context broker is the most extended GE.
I also saw a performance comparison where Orion-LD is much faster on small batches of messages, and slower than Scorpio and Stellio on larger batches.
Any suggestions?
Thanks!
My understanding is that Orion-LD is easy to operate: only the service itself and MongoDB. Scorpio uses Kafka, which adds an extra complexity layer operations-wise. I don't have any reference on Stellio.
Orion-LD supports NGSI-v2, as it is a fork of Orion, but that is not its main focus. If you are starting fresh, you can adopt NGSI-LD directly, and you are done.
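To illustrate what "adopting LD directly" looks like in practice, here is a minimal sketch of building an NGSI-LD entity payload. The sensor id, type, and value are made-up examples; the core @context URL is the standard ETSI one:

```python
import json

def make_entity(entity_id, entity_type, properties):
    """Build a minimal NGSI-LD entity payload with Property attributes."""
    entity = {
        "id": entity_id,
        "type": entity_type,
        "@context": [
            "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context-v1.3.jsonld"
        ],
    }
    # In NGSI-LD, each attribute is an object with an explicit type,
    # unlike the flatter NGSI-v2 representation.
    for name, value in properties.items():
        entity[name] = {"type": "Property", "value": value}
    return entity

sensor = make_entity(
    "urn:ngsi-ld:TemperatureSensor:001",
    "TemperatureSensor",
    {"temperature": 21.5},
)
print(json.dumps(sensor, indent=2))
```

Such a payload would then typically be POSTed to the broker's /ngsi-ld/v1/entities endpoint with a Content-Type of application/ld+json; any of the three brokers mentioned should accept it.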

How is a service mesh different from a 2010 ESB solution like IBM IIB or Oracle ESB

Back in the day, I used to be an IBM Integration Bus (IIB) - then known as IBM WebSphere Message Broker - developer. I would develop message flows to connect various input, output, and processing nodes. This development style, of course, extends to other ESB vendors too, so this question does not lose generality.
The messaging engine for IIB is WebSphere MQ (WMQ), which provides communication in the form of messages on queues or topics. Together with IIB's internal logic, the nodes communicate with each other by passing messages.
A typical IIB/WMQ setup has a well-documented HA installation mechanism too. Besides, if a message flow exposes an HTTP(S) endpoint, it can do so behind a load balancer.
Similarly, one can speak about other technologies that comprised the SOA era. Therefore, my question is: if I
- developed micro-services that communicated with, say, WMQ,
- deployed each micro-service to a container,
- used an ESB to orchestrate these micro-services, and
- relied on the ESB (and its ancillary technologies) for access control, traffic management, etc.,
then what do I need Istio for - apart from a 'pure containers-based architecture'?
https://developer.ibm.com/integration/blog/2014/07/02/ibm-integration-bus-high-availability-overview/
https://developer.ibm.com/integration/docs/ibm-integration-bus/learn-play/an-introduction-to-ibm-integration-bus/
Istio implements the sidecar pattern, coupling a proxy to each microservice. The microservices are (not necessarily, but usually) deployed on infrastructures that allow elastic scaling, where the platform is delegated the task of adjusting the number of instances of each microservice based on the configured scaling strategy. This means that the number of containers at any given moment is variable and cannot be known in advance.
Istio solves the problem of abstracting microservices away from purely infrastructural tasks, letting them focus solely on the functional plane, while at the same time scaling elastically together with the containers to which it is attached.
Delegating this task to an ESB is not impossible, but in my opinion it would introduce a fairly high complexity factor. Maybe you've found a business opportunity ;-)
The TL;DR answer is that Istio is more flexible and does not try to make the microservices fully dependent on Istio, while the IIB stack was mostly "once you go in, you can't get out without a migration project".
IIB previously had a monolithic architecture, and the IIB-related links you provided would help in creating a high-availability architecture. The recent trend in ESB offerings (from any vendor) has been to deploy the ESB itself as microservices. Specifically with respect to IIB, we can run each execution group (integration server) as a container. With this you get all the advantages of a microservices architecture. Of course, as mentioned, you can have these ESB microservices do orchestration as well.
But for any enterprise that has a microservices-based architecture across its various applications - not just an ESB running in containers - it is very difficult to manage, secure, and observe them all, especially when the microservices keep growing into the thousands across the enterprise. This is where Istio helps.
https://istio.io/docs/concepts/what-is-istio/
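As a concrete illustration of the sidecar pattern described above, here is a minimal sketch (the namespace name is made up) of how sidecar injection is typically switched on: labeling a Kubernetes namespace so that Istio's admission webhook attaches an Envoy proxy container to every pod created there, with no change to the application code.

```yaml
# Hypothetical namespace manifest; the istio-injection label tells
# Istio to inject an Envoy sidecar into every pod in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-services
  labels:
    istio-injection: enabled
```

This is the key operational difference from an ESB: the proxy attaches per pod and scales with it, rather than traffic being routed through a central broker.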

Multi-member governance tools for resources

For a consortium with multiple enterprise parties operating a permissioned blockchain, how does governance of the shared infrastructure work with Kaleido?
I assume that one party can launch the blockchain platform (with a fixed set of nodes), invite members, and give invited members limited capabilities to manage the shared resources (e.g. they can set up private channels and invite other members, and perhaps add/remove their own nodes/peers?).
Does the party who launches the blockchain consortium instance have more "powers" than invited members (e.g. which AWS region to deploy to)?
Can an invited member add more peers or remote nodes than the rest of the consortium, and then perform something like a 51% attack?
Can payments be split between consortium members?
The encrypted storage: how is this governed between multiple members of a consortium?
I would appreciate any feedback.
Kind Regards,
Zaid
Does the party who launches the blockchain consortium instance have more "powers" than invited members (e.g. which AWS region to deploy to)?
In the current open beta functionality, Kaleido does expose the ability for the original creator of a consortium/environment to delete it, including all nodes owned by all members. This is a convenience feature for PoC-stage consortia. Please drop support@kaleido.io an email note directly if you are at a stage with a project where you need to discuss a fully decentralized governance model where this ability is removed.
Can an invited member add more peers or remote nodes than the rest of the consortium, and then perform something like a 51% attack?
Each consensus algorithm has different Byzantine fault tolerance characteristics, and you can read about them here:
https://kaleido.io/consensus-algorithms-poa-ibft-or-raft/
In the current open beta, members invited/permissioned into the private chain are able to add multiple nodes that participate in forming consensus (Clique signers / IBFT validators). Again, please contact Kaleido if you have specific requirements in this area.
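As background on why the number of validators a member controls matters, here is a minimal sketch of the standard BFT threshold, assuming an IBFT-style algorithm where safety holds as long as the validator set of size n satisfies n >= 3f + 1 for f faulty validators:

```python
def max_faulty_validators(n: int) -> int:
    """IBFT-style Byzantine fault tolerance: a network of n validators
    tolerates f faulty ones as long as n >= 3f + 1,
    i.e. f = floor((n - 1) / 3)."""
    return (n - 1) // 3

# A member would need to control more than a third of validators
# to threaten safety, not a simple 51% of them.
print(max_faulty_validators(4))   # n=4  -> tolerates 1 faulty
print(max_faulty_validators(7))   # n=7  -> tolerates 2 faulty
print(max_faulty_validators(10))  # n=10 -> tolerates 3 faulty
```

So in a BFT chain the relevant attack threshold is one third of the validator set rather than the 51% of hash power familiar from proof-of-work.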
Can payments be split between consortium members?
Kaleido is not currently charging for the open beta. However, the ownership model for Kaleido cloud resources is that each Kaleido organization owns its own nodes. Each member running nodes in a Kaleido private chain has control over the lifecycle and operations of its own nodes. As such, it follows that each participant would pay for its own nodes in such a model.
The encrypted storage: how is this governed between multiple members of a consortium?
The Kaleido tenancy model is described here:
https://docs.kaleido.io/getting-started/overview/kaleido-tenancy-model/
A further option for encrypting sensitive key materials is to use per-tenant master encryption keys stored outside of the Kaleido platform in the AWS Key Management Service (KMS):
https://kaleido.io/why-your-keys-are-safe-in-kaleido/
If you are interested in further details of the virtualization technologies Kaleido uses to dedicate isolated storage to each node, please reach out directly to support.
Many thanks for your questions, and I hope this response gives some additional clarity on the features available in the Kaleido open beta.
Please do reach out to support@kaleido.io directly if you'd like to learn more.
Regards, Peter

Vulnerability Scan Authorization for Google Compute

What is the official and required process to perform our own independent vulnerability scans against virtual machines in Google Compute Engine? These will be penetration tests (our own) that will scan the public IP for open ports and report the results back to us.
Microsoft Azure requires authorization and so does Amazon. Does Google?
No, Google does not need to be notified before you run a security scan on your Google Compute Engine projects. You will, however, have to abide by the Google Cloud Platform Acceptable Use Policy and the Terms of Service.
Please also be aware of Google's Vulnerability Rewards Program Rules.
By default, all incoming traffic from an outside network is blocked. Each customer has the responsibility to create the appropriate firewall rules to allow access to their GCE instances as they consider appropriate:
https://cloud.google.com/compute/docs/networking#firewalls
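For example, opening a port to outside traffic requires creating a firewall rule explicitly; a minimal sketch (the rule name, port, and instance tag below are made-up placeholders):

```shell
# Hypothetical example: allow inbound TCP on port 443 to instances
# tagged "web-server". Without such a rule, the default-deny policy
# keeps the port closed, and a scan would simply see it as filtered.
gcloud compute firewall-rules create allow-https \
    --allow tcp:443 \
    --source-ranges 0.0.0.0/0 \
    --target-tags web-server
```

In other words, whatever your scan finds open is a direct reflection of the rules you yourself created.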
If you sign up for a trial, you can perform that test on your own project. Overall security configuration is up to the owner of the project and does not reside with Google.
With regard to its internal infrastructure, Google has its own security teams working 24x7 to ensure it stays at the vanguard of best security practices. http://googleonlinesecurity.blogspot.ca/

Openshift use for Commercial web sites

Can we use OpenShift Express, which is free right now, for commercial web applications?
And if not, which PaaS services are there that are free and have no vendor lock-in?
You can use OpenShift Express for commercial web apps, but be sure it will meet your requirements. Potential issues include:
currently no outgoing email support
currently applications do not scale to accommodate load
1GB disk space limit
shared hosting
limit of 3 cartridges (DB, metrics, etc.) per app
no official support from Red Hat, though documentation is good and community forum support is very active
OpenShift would meet many commercial site requirements. I think it's a great option. For more info read the FAQ.
OpenShift has opened the SMTP port now.
Check: https://www.redhat.com/openshift/community/blogs/outbound-mail-ports-are-now-open-for-business-on-openshift
You can use Cloudify. It is built for orchestrating any application on any cloud without changing the application code or architecture. Cloudify is open source and free.
Cloudify offers many features such as pluggable monitoring, scaling rules based on any KPI, APIs for sharing runtime information between agents, and even Chef integration.
Due diligence: I'm the product manager for Cloudify at GigaSpaces.
I've been using it for some small services and clients.
There isn't any clause in their terms of use that states you can't use it for commercial web apps. But pay attention to the following line:
"You may not post or transmit through this website advertising or commercial solicitations; promotional materials relating to website or online services which are competitive with Red Hat and/or this website."
Yes, OpenShift has a tier that is completely free to use, even for commercial applications. There are no plans to change this in the future. There are, however, some minor limitations to the FreeShift tier:
Scaling limited to 3 gears
Serves about 15 pages/second
3GB total storage space (1GB per gear)
No SSL certificate for your custom domain name
No support from Red Hat
An alternative is Heroku, which you should definitely check out if you haven't already. Having used both, I can tell you that it's a much more polished platform: the servers are about 4× faster, you can run as many apps as you want, and the Heroku Toolbelt is much more powerful than OpenShift's client tools. Heroku is also completely free until you reach 10k rows in your database.
Red Hat will provide support (and scaling) when they release their MegaShift tier.
(https://openshift.redhat.com/community/developers/pricing)
I don't think there is a date yet for this.
It won't be free, of course.