What is the purpose of the multiagent parameter in the configuration of the Cygnus component?
According to the Cygnus documentation, it is an option available in docker-based deployments, whose purpose is as follows:
Enable multiagent cygnus: CYGNUS_MULTIAGENT environment variable. If enabled, each sink will run on a different port
So it seems to be a way of executing different sinks (MySQL, CKAN, HDFS, etc.) using the same docker container.
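For illustration, a minimal docker-compose sketch could look like the following. The image name is the real fiware/cygnus-ngsi image, but the per-sink variables and the port range are assumptions that should be checked against the Cygnus docker documentation:

services:
  cygnus:
    image: fiware/cygnus-ngsi
    environment:
      - CYGNUS_MULTIAGENT=true          # one agent (and one port) per enabled sink
      - CYGNUS_MYSQL_HOST=mysql         # assumed variable name for the MySQL sink
      - CYGNUS_MONGO_HOSTS=mongo:27017  # assumed variable name for the MongoDB sink
    ports:
      - "5050-5055:5050-5055"           # illustrative range covering the per-sink ports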
I would like to check for common vulnerabilities in some of the FIWARE components that we are using in our platform; the component list is given below.
Cepheus
Cygnus
Orion
STH-Comet
QuantumLeap
IoT Agent for JSON
IoT Agent Node Lib
Is there any source, on a FIWARE website or elsewhere, where we can verify the vulnerabilities in a FIWARE component? Please provide the information if it is available.
For a given Docker baseline we run Anchore and Clair checks. For a typical running Docker container based on a Docker Compose file, the Docker Benchmark Security recommendations are executed. Additionally, we run SAST code analysis over the corresponding repositories, plus npm audit for the Node.js ones.
We are defining corresponding GitHub Actions to use inside the repositories.
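As a rough sketch of what such a workflow could look like for one of the Node.js repositories (this is not the official FIWARE workflow, just a minimal illustration based on npm audit):

name: security-checks
on: [push, pull_request]

jobs:
  npm-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci
      # fail the job when vulnerabilities of the chosen severity (or higher) are found
      - run: npm audit --audit-level=high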
There is a work-in-progress project to provide security analysis of the components; the first version is not released yet. You can take a look at it in this repository: FIWARE Security Scan
I have been testing a MongoDB sharded cluster installed on Kubernetes with Helm, but I found that those Helm charts do not really produce a proper MongoDB shard. The charts correctly create Pods with names like mongos-1, mongod-server-1, and mongod-shard-1, which looks like a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just create a plain mongod instance on each Pod, and there is no connection between them. Do I need to add scripts that execute commands similar to rs.addShard(config)? I encountered the same problem when installing a MySQL cluster using Helm.
What I want to know is: is it inappropriate in general to install a MySQL/MongoDB cluster on Kubernetes? Should the database be installed independently or deployed on Kubernetes?
Yes, you can deploy MongoDB instances on Kubernetes clusters.
Use a standalone instance if you want to test and develop, and a replica set for production-like deployments.
Also, to make things easier, you can use the MongoDB Enterprise Kubernetes Operator:
The Operator enables easy deploys of MongoDB into Kubernetes clusters, using our management, monitoring and backup platforms, Ops Manager and Cloud Manager. By installing this integration, you will be able to deploy MongoDB instances with a single simple command.
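As an illustration of what such a deployment looks like, here is a sketch of a replica set resource for the Operator. The field names follow the Operator documentation but should be checked against your Operator version, and my-project / my-credentials are placeholders for your own Ops Manager project ConfigMap and credentials Secret:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: "4.4.0"
  opsManager:
    configMapRef:
      name: my-project         # ConfigMap describing the Ops Manager / Cloud Manager project
  credentials: my-credentials  # Secret holding the Ops Manager API credentials

Applying it with kubectl apply -f replica-set.yaml is then the "single simple command" mentioned above.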
This guide references the official MongoDB documentation with further details regarding:
Install Kubernetes Operator
Deploy Standalone
Deploy Replica Set
Deploy Sharded Cluster
Edit Deployment
Kubernetes Resource Specification
Troubleshooting Kubernetes Operator
Known Issues for Kubernetes Operator
So that is basically all you need to know on this topic.
Please let me know if that helped.
First, let me give some background.
We have our own VPS, so we do not wish to use Azure to host our web applications.
We have already successfully created a CI/CD pipeline to our VPS by installing an agent on it for a .NET Core project.
We use Azure DevOps (formerly known as VSTS) to host our code in GIT and handle our backlogs and CI/CD pipelines.
We have several .NET Framework projects where we use XDT transforms to transform our web.config files on delivery/deployment so that they have the correct connection strings and other configuration properties.
This makes it possible to pull the master branch from our remote repo and have it working in seconds on a previously unused (for this application) development environment without the need for any configuration.
Now to get to my question
The master branch of the .NET Core project for which we already have the CI/CD pipeline in place holds, in its JSON files, the configuration for the staging environment it is continuously delivered to. When a developer pulls the master branch, he/she first needs to configure these to suit the local debug environment.
This is an undesirable situation for us.
How can we, using .NET Core, set up a mechanism that allows the project to work in a local debug environment without any configuration, and also in the CI/CD pipeline?
What have we already tried?
We have found that we can have multiple versions of the appsettings.json file for the different environments, like appsettings.debug.json, and then in the static CreateWebHost method of the Program class we can load one or the other. But how we can automate this is something that we haven't been able to figure out or find documentation about.
Okay, so here are some options you can take advantage of today (there are, I'm sure, more options/approaches).
Option A
Configure the master branch to have appsettings.development.json with the connection string to the DEV database (or lowest environment)
remove any connection string from appsettings.json
Merge master accordingly.
Create environment variables on each of the backend servers for the connection string; e.g., a system environment variable named ConnectionStrings:cartDB with the connection string to the database for the environment that backend server is used for.
The result of this will be that when running with DEVELOPMENT as the environment, the application will connect to a database everyone can access.
However, since all OTHER web servers have environment variables with the connection string, those take the highest level of precedence and will therefore be the values returned when calling something such as
string connectionString = Configuration.GetConnectionString("cartDB");
This will satisfy the requirements you mentioned above.
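For reference, the precedence Option A relies on comes from the default configuration setup in Program.cs. Here is a sketch of the standard ASP.NET Core 2.x template (assuming the usual Startup class, and with cartDB simply continuing the example name from above):

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    // CreateDefaultBuilder layers configuration sources, lowest to highest precedence:
    //   1. appsettings.json
    //   2. appsettings.{ASPNETCORE_ENVIRONMENT}.json (e.g. appsettings.Development.json)
    //   3. environment variables
    // so a server-level variable such as ConnectionStrings:cartDB
    // (ConnectionStrings__cartDB on Linux) overrides the JSON files, and
    // Configuration.GetConnectionString("cartDB") returns the overridden value.
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}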
Option B:
Configure the master branch to have appsettings.development.json with the connection string to the DEV database (or lowest environment)
remove any connection string from appsettings.json
Place appsettings.staging.json and appsettings.prod.json in source control, and set the environment name variable on the web servers. :/ Not the best of options / advised against.
(It's worth mentioning since I have seen this happen; we all have.)
Option C
Add appsettings.staging.json and appsettings.prod.json to source control and use a token in place of the connection string value. Then leverage some type of tokenization task to replace those tokens with the appropriate values, for example as sketched below.
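For example, the tokenized appsettings.staging.json could contain a placeholder that the release pipeline replaces; the #{...}# syntax is just one common convention used by token-replacement tasks, and CartDbConnectionString is a made-up variable name:

{
  "ConnectionStrings": {
    "cartDB": "#{CartDbConnectionString}#"
  }
}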
I'm building my staging environment using docker-compose, with an application that previously ran in Google Cloud using Kubernetes.
My application was configured using ENV properties provided inside the Kubernetes container, and now, after switching to docker-compose, I have a different naming convention for the linked services.
I can think of a few solutions to my problem:
Change my application to support alternative configurations, so it would support both docker-compose & Kubernetes
Create aliases in docker-compose or Kubernetes so that the configuration would always be available in a single format in both environments, and I would not need to touch my application configuration.
Maybe some other way, which I don't see
I want to go with the 2nd solution, but I don't know exactly how to configure it. Any ideas?
You could use the environment section to define docker-compose variables like PARAM1=${PARAM2}. In this case, docker-compose will provide the same variables that Kubernetes does; see the sketch below.
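For example (the service and variable names here are made up for illustration), the application keeps reading the Kubernetes-style names while docker-compose supplies the values:

services:
  app:
    image: my-app:latest
    environment:
      # the app still reads MYSQL_SERVICE_HOST / MYSQL_SERVICE_PORT as it did on Kubernetes;
      # the values come from docker-compose level variables (e.g. from an .env file)
      - MYSQL_SERVICE_HOST=${DB_HOST}
      - MYSQL_SERVICE_PORT=${DB_PORT:-3306}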
Is there a way for my application to access the labels assigned to the pod / service during runtime?
Either via client API or via ENV / passed variables to the docker container?
The Downward API is designed to automatically expose information about the pod's configuration to the pod using environment variables. As of Kubernetes 1.0 it only exposes the pod's name and namespace. Adding labels to the Downward API is being discussed in #560 but isn't currently implemented.
In the meantime, your application can query the Kubernetes apiserver and introspect its configuration to determine what labels have been set.
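For the Downward API route, here is a pod spec sketch (names like downward-example are placeholders) exposing the two fields that are available, pod name and namespace, as environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: downward-example
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace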