I want to install the FIWARE Sanity Checks in order to check my own FIWARE Lab node, and I see that there are two subcomponents in the repository. Why are there two subcomponents in the FIWARE Health repository? Which one should I install?
FIWARE Health consists of two subcomponents:
a test execution engine, which runs all Sanity Checks on each FIWARE Lab node in order to validate its capabilities and obtain the 'Sanity Status', and
a web dashboard, which shows all generated test reports and the status of each node.
The execution engine is the FIHealth Sanity Checks subcomponent and the web dashboard is the FIHealth Dashboard subcomponent. The two components are independent of each other: you can install only the FIHealth Sanity Checks if you are not interested in publishing your results on a web server (dashboard). Running the FIHealth Sanity Checks will give you test results in xUnit and/or HTML format. You can use the optional scripts included in the repository to process these results and obtain the 'Sanity Status' of your node.
Of course, you can run all sanity tests on just your node (after setting the OpenStack credential variables in your environment) by executing the following script:
$ ./sanity_checks MyNodeName
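The credential variables are the standard OpenStack environment ones; the exact values depend on your node, so treat the following as a hypothetical illustration, not the component's documented setup:
$ export OS_USERNAME=my-user                                # placeholder credentials
$ export OS_PASSWORD=my-password
$ export OS_TENANT_NAME=my-tenant
$ export OS_AUTH_URL=http://keystone.example.org:5000/v2.0/ # your node's Keystone endpoint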
To learn more about these subcomponents and how to install them, take a look at the FIHealth Sanity Checks and FIHealth Dashboard documentation.
Related
I would like to check for common vulnerabilities in some of the FIWARE components that we are using in our platform; the list of components is given below.
Cepheus
Cygnus
Orion
STH-Comet
QuantumLeap
IoT Agent for JSON
IoT Agent Node Lib
Is there any source, on a FIWARE website or elsewhere, where we can verify the vulnerabilities in a FIWARE component? Please provide the information if it is available.
For a given Docker baseline we run Anchore and Clair checks. For a typical running Docker container based on a Docker Compose file, the Docker Bench for Security recommendations are executed. Additionally, we run SAST code analysis over the corresponding repositories, plus npm audit for the Node.js ones.
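As a rough sketch of what such checks look like from the command line (the image name is just an example, and none of this reflects the project's actual CI configuration):
$ npm audit                      # known-vulnerability report for Node.js dependencies
$ grype fiware/orion             # Anchore's CLI image scanner; image name is illustrative
$ docker run --rm --net host --pid host --cap-add audit_control \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/docker-bench-security # Docker Bench for Security recommendations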
We are defining corresponding GitHub Actions to use inside the repositories.
There is an ongoing project to provide security analysis of the components; the first version has not been released yet. You can take a look at it in the FIWARE Security Scan repository.
I have two workflows: workflow 1 in integration service 1 and workflow 2 in integration service 2.
How do I call workflow 2 from workflow 1? I am currently trying to call it from the command prompt, but it didn't work.
Just to let you know, integration service 1 is Informatica 9.2
and integration service 2 is Informatica version 10.2.
PowerCenter does not provide support for cross-workflow dependencies, regardless of whether the workflows are configured to use the same or a different Integration Service.
The best way to solve this kind of challenge is to use a separate scheduling tool, such as Airflow, Control-M, Autosys, or any other.
It is also possible to expose a workflow as a web service and call it from different workflows, if needed. Not really convenient, but possible.
Lastly, it's possible to use the command-line interface, pmcmd startworkflow, in a Command task of one workflow to have the other one started, as sketched below.
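A hedged sketch of the pmcmd call such a Command task could run (the domain, service, folder, credentials, and workflow names are placeholders):
$ pmcmd startworkflow -sv IS_Service2 -d MyDomain -u Administrator -p secret \
    -f MyFolder -wait wf_Workflow2
The -wait flag makes the call block until workflow 2 finishes, so the calling workflow can react to its outcome.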
I have done something similar this way:
The other WF is a web service workflow, or is executed alongside a web service.
Add an application connection.
The Web Services Hub (WSH) where your WF runs should be the endpoint of that connection.
Add this WF inside the mapping of the other one as a Web Service transformation.
I was checking the FIWARE Sanity Checks status portal and I was wondering if it is possible to execute a single test on a FIWARE Lab node.
The feature available on the FIWARE Sanity Checks Dashboard for node administrators to re-run Sanity Checks will always run all tests on the given node, but you can execute individual tests yourself in your own local environment.
The provided sanity_test script lets you run all designed Sanity Checks on a given node or on a set of nodes. This script runs all tests using the nosetests tool and, so far, you cannot specify which ones to execute as parameters of the script; that is not implemented in the current version of the component.
If you want to run a specific test, you will have to execute the tool manually (after setting the OpenStack credential variables for your node in your environment and configuring all the needed Sanity Check properties). For instance:
$ nosetests tests.regions.test_spain2:TestSuite.test_deploy_instance_with_new_network_and_e2e_connection_using_public_ip
The command above will execute only the test test_deploy_instance_with_new_network_and_e2e_connection_using_public_ip on Spain2 and will show on the console whether any error or exception is raised. You can use all the options nosetests provides to run a custom set of sanity tests with your own configuration (output formats and reports, logger, etc.).
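For example, nosetests' standard reporting options can be combined with the test path to produce an xUnit report for that single test (the report file name is arbitrary):
$ nosetests --with-xunit --xunit-file=spain2_report.xml \
    tests.regions.test_spain2:TestSuite.test_deploy_instance_with_new_network_and_e2e_connection_using_public_ip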
We are going to use a Hudson/Jenkins build server both to build our server applications (just calling Maven) and to run integration tests against them. We are going to prepare three Hudson/Jenkins jobs (build, deploy, and run integration tests) that call each other in this order. All these jobs will run nightly.
The integration tests are written with JUnit and are invoked by mvn test (which will in turn be invoked by the "test" Hudson/Jenkins job). Since they require the server to be up and running, we have to run the "deploy" job first.
Does it make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
It definitely makes sense; basically you are describing a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).
As for the deployment of the server component, this depends on what technology/stack you are running on. For instance, you could write a script that deploys the application to a test environment using a post-build step in Jenkins.
Another option is to use a Maven plugin to deploy the application. You can separate the deployment step into its own profile and run only the deploy goal in the deploy step, as sketched below.
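For example, assuming a Cargo-based profile named integration (both the profile name and the environment behind it are illustrative, not something your build already has):
$ mvn package cargo:redeploy -Pintegration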
Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.
For more information related to Jenkins, have a look at this video.
Does it make sense? Is there any special server to deploy
the application and run tests, or is Hudson/Jenkins OK for that?
You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head, there are Puppet, Chef, and Vagrant.
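For instance, standing up a throwaway test VM with Vagrant takes only a couple of commands (the box name is just an example):
$ vagrant init hashicorp/bionic64   # writes a Vagrantfile you can keep in source control
$ vagrant up                        # boots the VM; provision it with Puppet/Chef as needed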
Depending on the technology of your server, you could do all of this in a single Hudson project, executing your integration tests using Maven's Failsafe plugin instead of Surefire.
This allows you to start and deploy prior to executing the integration tests, and shut down your server after they have completed. It also allows you to separate your integration tests from your unit tests.
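With Failsafe bound to the default lifecycle and using its default naming convention (*IT.java for integration tests, which is an assumption about your setup), a single command runs the unit tests, executes the integration tests, and verifies the results:
$ mvn clean verify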
For Java EE applications, you can perform the start/deploy/stop steps using Cargo, or use an embedded Jetty container and the Jetty Maven plugin.
I have different sites, with 4 to 5 servers at each location. Each location has one monitoring server running Nagios. Now I want to create a central location and combine all the Nagios services running at each location. Can anyone point me to some documentation for this type of job?
There are two approaches that you can take.
Install a new Nagios core, as you did at each location, and perform active checks on each of the remote hosts. You'll likely end up installing NRPE on each of the remote hosts at each location; you can read this document for the details: http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf. If your remote servers are Windows servers, you can use NSClient to do much of what NRPE does for Linux hosts. This effectively centralizes your monitoring server. I also wrote some how-to style entries for using NRPE to run privileged commands http://blog.gnucom.cc/?p=479 or to run event handlers http://blog.gnucom.cc/?p=458. If you get tired of installing NRPE, you can use my script here http://blog.gnucom.cc/?p=185. I also have instructions to install NSClient here http://blog.gnucom.cc/?p=201.
Install a new Nagios core, as you did at each location, and perform passive checks by instructing the remote Nagios cores to feed their results into the new central Nagios core's passive command file. I haven't done this myself, so I'm going to point you to the community's documentation here: http://nagios.sourceforge.net/docs/2_0/passivechecks.html. You could probably look at my event handler post to set up event handlers that send check results to the main server.
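As a rough illustration of the difference between the two approaches (the host names, check command, and paths are placeholders): an active NRPE check is pulled by the central core, while a passive result is pushed into the core's external command file:
$ /usr/local/nagios/libexec/check_nrpe -H remote.host.example -c check_disk
$ printf "[%s] PROCESS_SERVICE_CHECK_RESULT;web01;Disk;0;DISK OK\n" "$(date +%s)" \
    >> /usr/local/nagios/var/rw/nagios.cmd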
From my personal experience, the first option is easier to implement and far easier to administer. However, as your server fleet grows you'll start seeing major CPU bottlenecks on the main Nagios core. This is where passive checks become beneficial, as the main Nagios core simply waits for check results to be sent to it rather than having to run the checks itself.
Hope this helps. :)
A centralized view tool may be what you are looking for. There are a number of different options available.
Nagiosfusion
MK Livestatus
Nagcen
Thruk