How can I check the common vulnerabilities in FIWARE components?

I would like to check the common vulnerabilities in some of the FIWARE components that we are using in our platform. The list of components is given below.
Cepheus
Cygnus
Orion
STH-Comet
QuantumLeap
IoT Agent for JSON
IoT Agent Node Lib
Is there a source, such as a FIWARE website or some other resource, where we can verify the vulnerabilities in FIWARE components? Please provide the information if it is available.

For the Docker baseline images we are using Anchore and Clair checks. For a typical running Docker container based on a Docker Compose file, the Docker Bench for Security recommendations are executed. Additionally, we are running SAST code analysis over the corresponding repositories, plus npm audit for the Node.js ones.
We are defining corresponding GitHub Actions to use inside the repositories.
There is also an ongoing project to provide security analysis of the components; the first version is not released yet. You can take a look at it in this repository: FIWARE Security Scan
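As a rough illustration, this is how such checks could be wired into a GitHub Actions workflow for one of the Node.js components. The image name and audit level below are placeholders, and the anchore/scan-action inputs should be checked against its current documentation:

    # .github/workflows/security-scan.yml (illustrative sketch)
    name: security-scan
    on: [push]
    jobs:
      scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          # Dependency audit for the Node.js based components (e.g. the IoT Agents)
          - run: npm ci
          - run: npm audit --audit-level=high
          # Image vulnerability scan with Anchore's scan action
          - uses: anchore/scan-action@v3
            with:
              image: "fiware/orion:latest"   # placeholder: the image you actually build or use
              fail-build: false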

Related

Install AppDynamics in OpenShift 4.X

I am looking for a way to install AppDynamics in an OpenShift cluster.
I am unable to find proper documentation on how to install it and what tools need to be installed.
Should my application Dockerfile also include any images related to AppDynamics?
If anyone is familiar with this, please share some steps or provide references to documents.
Old docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-containers-with-docker-visibility/use-docker-visibility-with-red-hat-openshift
New docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent
Note that there is no single prescribed way to instrument; you need to make some decisions, e.g. (from the second doc link):
The first decision is whether to use the officially released pre-built AppDynamics Operator images published on DockerHub and the Red Hat Registry, or to build a custom AppDynamics Operator image. See Build the Custom Cluster Agent Image.
The second decision is whether to use the officially released pre-built Cluster Agent images published on DockerHub and the Red Hat Registry, or to build a custom Cluster Agent image. See Cluster Agent Container Image.
The third decision is whether to install the Cluster Agent using the Kubernetes CLI or the Cluster Agent Helm Chart. See Install the Cluster Agent with the Kubernetes CLI and Install the Cluster Agent with Helm Charts.
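For the Kubernetes CLI route, the installation essentially boils down to creating the appdynamics namespace, a secret with the controller access key, applying the operator manifest from the linked docs, and then applying a Clusteragent custom resource along these lines. The controller URL, account, app name and image are placeholders, and the exact CRD fields should be checked against your agent version; treat this as an illustrative sketch only:

    # cluster-agent.yaml (illustrative sketch, field names may differ by version)
    apiVersion: cluster.appdynamics.com/v1alpha1
    kind: Clusteragent
    metadata:
      name: k8s-cluster-agent
      namespace: appdynamics
    spec:
      appName: "my-openshift-cluster"                              # placeholder
      controllerUrl: "https://mycontroller.saas.appdynamics.com"   # placeholder
      account: "my-account"                                        # placeholder
      image: "docker.io/appdynamics/cluster-agent:latest"
      serviceAccountName: appdynamics-cluster-agent

This is applied with kubectl apply -f cluster-agent.yaml (or oc apply on OpenShift). Regarding the application Dockerfile: the Cluster Agent runs as its own workload, so it generally does not need to be baked into your application images; agent images only come into play if you later enable application (APM) instrumentation.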

What is the difference between application console vs cluster console?

What is the difference between the application console and the cluster console in the OpenShift enterprise version? I am new to OpenShift and confused by the terminology. I feel that OpenShift is like the Linux kernel in our system (an analogy): on top of that are containers, and to orchestrate them we have Kubernetes. However, the architecture of OpenShift seems to be the exact opposite. Please correct me.
OpenShift is just one of the available Kubernetes distributions, which adds enterprise-level services like authentication, authorization and multitenancy.
The web console provides two perspectives: Administrator and Developer. The Developer perspective provides workflows specific to developer use cases, like creating, deploying and monitoring applications, while the Administrator perspective is for managing cluster resources, users, and projects. Depending on your role, you will see a different set of views in the main menu.

Using VSTS with multiple Azure Environments

We are new to VSTS and will be using the online service and integrating with our production Azure AD tenant. Since we do development that involves Office 365, this means we have both production and development Office 365/Azure AD environments. We understand that our authentication can only be tied to one of these (which is fine), but can we use VSTS to perform tasks against both environments (e.g. staging, deploy, etc.)? Are there certain things that do or don't work that we should consider, or are there other suggestions on how we would leverage VSTS across these environments as we take code tested against development to production? Thanks!
One option to do this would be using PowerShell and service principal authentication. There is no point in copy/pasting the documentation, so I'll leave a link.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal
You can also just authenticate to the API, get an OAuth token and do pretty much anything with that. Not super straightforward, but it can be done ;)
You can add multiple Azure service endpoints, then deploy the app through a release. Simple steps:
Refer to this blog: Automating Azure Resource Group deployment using a Service Principal in Visual Studio Online to manually configure an Azure service endpoint (Manual Configuration section)
Create a release definition in VSTS
Add environments (e.g. Staging, Deploy)
Add an Azure App Service Deployment task for each environment
Specify the corresponding Azure subscription for these tasks (a sketch of the same idea follows below)
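In current Azure DevOps (the successor to VSTS) the same idea can also be expressed as a YAML pipeline, with each stage pointing at a different service connection, one per tenant/environment. The service connection names, app names and package path below are placeholders:

    # azure-pipelines.yml (illustrative sketch)
    trigger:
      - main
    stages:
      - stage: Staging
        jobs:
          - job: deploy_staging
            pool:
              vmImage: ubuntu-latest
            steps:
              - download: current
              - task: AzureWebApp@1
                inputs:
                  azureSubscription: "dev-tenant-connection"    # service connection for the dev tenant
                  appName: "my-app-staging"                     # placeholder
                  package: "$(Pipeline.Workspace)/drop/*.zip"   # placeholder artifact path
      - stage: Production
        dependsOn: Staging
        jobs:
          - job: deploy_prod
            pool:
              vmImage: ubuntu-latest
            steps:
              - download: current
              - task: AzureWebApp@1
                inputs:
                  azureSubscription: "prod-tenant-connection"   # service connection for the prod tenant
                  appName: "my-app-prod"                        # placeholder
                  package: "$(Pipeline.Workspace)/drop/*.zip"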

Differences between OpenShift and Kubernetes

What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the extra API entities, as mentioned by @SteveS, OpenShift also has advanced security concepts.
This can be very helpful when running in an Enterprise context with specific requirements regarding security.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as root in Kubernetes, but run as an arbitrary user with a high ID (e.g. 1000090000) in OpenShift. This means that many containers from DockerHub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images with this feature/limitation in mind. However, this catalog contains only a subset of popular containers.
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. When you are familiar with the basic concepts, you will better understand the implications of the OpenShift features and design decisions.
OpenShift includes a distribution of Kubernetes, so if you don't need any of OpenShift's added features, such as the web console, Builds, advanced deployment models and much, much more, you can choose to ignore them.
Here's a summary of items available on the OpenShift website.
Kubernetes comes with Ingress rules, but OpenShift comes with Routes (see the sketch just after this list)
Kubernetes has an IngressController, but OpenShift has the Router (HAProxy)
Switching namespaces in the OpenShift CLI is very easy, but in Kubernetes you need to create a context and switch between contexts
The OpenShift UI is more interactive and informative than the Kubernetes one
To build a Docker image inside the cluster, OpenShift has BuildConfig, but Kubernetes has nothing built in; you need to build the image and push it to a registry yourself
OpenShift has Pipelines, so you don't need Jenkins to deploy an app, but Kubernetes does not
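To make the Routes vs Ingress point concrete, here is roughly what exposing the same service looks like in each; the hostnames and service names are made up:

    # Kubernetes Ingress
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
        - host: my-app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 8080
    ---
    # OpenShift Route
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: my-app
    spec:
      host: my-app.example.com
      to:
        kind: Service
        name: my-app
      port:
        targetPort: 8080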
The easiest way to differentiate between them is to understand that while vanilla K8s is a community project, OpenShift is focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs, Builds, DeploymentConfigs, Routes, etc., along with functionality like S2I and the Router, make it easier for developers and admins alike to use OCP for development, deployment and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP makes your life much easier by providing easy actions via the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of the features using https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.

Fiware IDAS & Orion Production Deployment

I would like to know what the common deployment pattern is for IDAS and Orion in a production environment. Are they usually deployed as Docker images or as native services? If they are deployed as Docker images, do they usually go together in one container or in separate containers?
Thank you.
I can provide an answer from the point of view of the Orion Context Broker (I hope that some of my colleagues from the IDAS team can also answer that part).
Deployment options (look for the "How to get Orion" slides in this presentation) are the following:
Image in FIWARE Lab cloud
Docker container (see the docker-compose sketch just after this list)
VirtualBox image
RPM installation (from FIWARE repositories)
Compiling from sources
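For the Docker option, a minimal docker-compose sketch for Orion with its MongoDB backend looks roughly like the following; the image tags are illustrative, and newer Orion releases use -dbURI instead of -dbhost:

    # docker-compose.yml (illustrative sketch)
    version: "3"
    services:
      mongo:
        image: mongo:4.4
        command: --nojournal
      orion:
        image: fiware/orion:3.7.0
        depends_on:
          - mongo
        ports:
          - "1026:1026"
        command: -dbhost mongo

An IoT Agent would typically run as an additional, separate container that talks to Orion over the network, rather than being bundled into the same container as Orion.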
For IDAS it depends on the specific IoT-Agent you are using.
If you are using Ultralight2.0/HTTP or MQTT to connect devices, all the information for installation is available here:
https://github.com/telefonicaid/fiware-IoTAgent-Cplusplus/blob/release/1.3.0/README.md
On the other hand, if you will use OMA LWM2M/CoAP to connect devices, this info is here:
https://github.com/telefonicaid/lightweightm2m-iotagent/blob/master/docs/administrationGuide.md
Also, docker files are available here:
http://catalogue.fiware.org/enablers/backend-device-management-idas/creating-instances
Hope this helps.
Cheers,