What is the best way to move IBM Message Broker components between DEV, QA and PROD environments?

On a SOA project we are starting to employ IBM WebSphere Message Broker to orchestrate .NET-based web services. We have distinct DEV, QA and PROD environments for the system being developed.
WebSphere Message Broker Toolkit would be used to develop message flows in DEV. And with DEV everything is more or less clear.
For QA and PROD we aim to have a repeatable and as automated as possible deployment procedure. With the .NET portion it is almost a no-brainer, but deployment to Message Broker seems to require substantial manual effort, which is not good.
What are recommendations for deployment to WebSphere Message Broker? What is the best way to package Message Broker components?

Your components (flows and so forth) will be packaged up as Broker Archive (.bar) files. You can use Ant to script the deployment of these components between environments, for example.
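For illustration, here is a minimal sketch of scripting such a deployment by invoking the mqsideploy command from Java (the same command an Ant exec task would wrap). The host, port, queue manager, execution group and BAR file name below are placeholders, and the exact flags vary by broker version, so treat this as a sketch rather than a definitive script:

import java.io.IOException;

public class DeployBar {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder connection details and BAR file; adjust flags to your broker version.
        ProcessBuilder pb = new ProcessBuilder(
                "mqsideploy",
                "-i", "qa-broker-host",   // broker host (hypothetical)
                "-p", "1414",             // listener port
                "-q", "QA_QMGR",          // queue manager
                "-e", "default",          // execution group
                "-a", "OrderFlows.bar",   // BAR file to deploy (hypothetical)
                "-w", "300");             // wait up to 300 seconds for completion
        pb.inheritIO();                   // stream the command's output to this console
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("mqsideploy failed with exit code " + exitCode);
        }
    }
}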

Use scripts.
As Andy Piper says, Ant works quite well.
Also be aware that you can use the CMP API, which has been rebadged as the Message Broker API. It is quite comprehensive and lets you get at and modify information in your Broker Archive (BAR) files to a much more significant extent than the various broker commands you can invoke from a script.
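As a rough sketch of what the CMP (Message Broker) API looks like, the following connects to a broker over MQ and deploys a BAR file to an execution group. The class and method names come from the com.ibm.broker.config.proxy package, but check them against your broker version; the connection details and file names are placeholders:

import com.ibm.broker.config.proxy.BrokerConnectionParameters;
import com.ibm.broker.config.proxy.BrokerProxy;
import com.ibm.broker.config.proxy.DeployResult;
import com.ibm.broker.config.proxy.ExecutionGroupProxy;
import com.ibm.broker.config.proxy.MQBrokerConnectionParameters;

public class CmpDeploy {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the target environment.
        BrokerConnectionParameters params =
                new MQBrokerConnectionParameters("qa-broker-host", 1414, "QA_QMGR");
        BrokerProxy broker = BrokerProxy.getInstance(params);
        try {
            // Look up the target execution group by name (hypothetical name).
            ExecutionGroupProxy eg = broker.getExecutionGroupByName("default");
            // Deploy the BAR file; true = complete (not delta) deploy, wait up to 5 minutes.
            DeployResult result = eg.deploy("OrderFlows.bar", true, 300_000);
            System.out.println("Deploy completed with state: " + result.getCompletionCode());
        } finally {
            broker.disconnect();
        }
    }
}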

Related

How can I check for common vulnerabilities in FIWARE components?

I would like to check for common vulnerabilities in some of the FIWARE components that we are using in our platform; the component list is given below.
Cepheus
Cygnus
Orion
STH-Comet
QuantumLeap
IoT Agent for JSON
IoT Agent Node Lib
Is there a source available, on a FIWARE website or elsewhere, where we can verify the vulnerabilities in these FIWARE components? Please provide the information if it is available.
For a given Docker baseline we are using Anchore and Clair checks. For a typical running Docker container based on a Docker Compose file, the Docker Bench for Security recommendations are executed. Additionally, we are running SAST code analysis over the corresponding repositories, plus npm audit for the Node.js ones.
We are defining corresponding GitHub Actions to use inside the repositories.
There is a project under way to provide security analysis of the components; the first version has not been released yet. You can take a look at it in this repository: FIWARE Security Scan

What is the difference between application console vs cluster console?

What is the difference between the application console and the cluster console in the OpenShift enterprise version? I am new to OpenShift and confused by the terminology. My mental model (an analogy) is that OpenShift is like the Linux kernel of the system: on top of that sit containers, and to orchestrate them we have Kubernetes. However, the actual architecture of OpenShift seems to be the exact opposite. Please correct me.
OpenShift is just one of the available Kubernetes distributions, which adds enterprise-level services like authentication, authorization and multitenancy.
The web console provides two perspectives: Administrator and Developer. The Developer perspective provides workflows specific to developer use cases such as creating, deploying and monitoring applications, while the Administrator perspective is responsible for managing cluster resources, users, and projects. Depending on the user's role, you will see a different set of views available in the main menu.

Using VSTS with multiple Azure Environments

We are new to VSTS and will be using the online service, integrating with our production Azure AD tenant. Since we do development that involves Office 365, this means we have both production and development Office 365/Azure AD environments. We understand that our authentication can only be tied to one of these (which is fine), but can we use VSTS to perform tasks against both environments (e.g. staging, deploy, etc.)? Are there certain things that do or don't work that we should consider, or are there other suggestions on how we could leverage VSTS across these environments as we take code tested against development to production? Thanks!
One option would be to use PowerShell and service principal authentication. There is no point in copy/pasting documentation, so I'll leave a link:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal
You can also just authenticate to the API directly, get an OAuth token, and do pretty much anything with that. Not super straightforward, but it can be done ;)
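For illustration, here is a minimal sketch of that token acquisition in Java using the Azure AD client-credentials flow against the v1 token endpoint. The tenant, client ID and secret are placeholders, and in practice you would more likely use a library (ADAL/MSAL) and proper URL-encoding rather than raw HTTP:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AzureToken {
    public static void main(String[] args) throws Exception {
        // Placeholder tenant and service principal credentials.
        String tenant = "contoso.onmicrosoft.com";
        String body = "grant_type=client_credentials"
                + "&client_id=YOUR_CLIENT_ID"
                + "&client_secret=YOUR_CLIENT_SECRET"
                + "&resource=https://management.azure.com/";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://login.microsoftonline.com/" + tenant + "/oauth2/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains an access_token to pass as a Bearer header
        // on subsequent Azure Resource Manager API calls.
        System.out.println(response.body());
    }
}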
You can add multiple Azure service endpoints, then deploy the app through a release. The steps are simple:
Refer to this blog: Automating Azure Resource Group deployment using a Service Principal in Visual Studio Online to manually configure an Azure service endpoint (see the Manual Configuration section)
Create a release definition in VSTS
Add environments (e.g. Staging, Deploy)
Add an Azure App Service Deployment task for each environment
Specify the corresponding Azure subscription for these tasks

Using JUnit, Maven and Hudson/Jenkins for integration tests

We are going to use a Hudson/Jenkins build server both to build our server applications (just calling Maven) and to run integration tests against them. We are going to prepare three Hudson/Jenkins jobs (build, deploy and integration tests) which call each other in this order. All of these jobs will run nightly.
The integration tests are written with JUnit and are invoked by mvn test (which will be invoked by the "test" Hudson/Jenkins job in turn). Since they require the server to be up and running, we have to run the "deploy" job first.
Does this make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
It definitely makes sense; basically you are describing a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).
As for the deployment of the server component, this depends on what technology/stack you are running on. For instance, you could write a script that deploys the application to a test environment using a post-build step in Jenkins.
Another option is to use a Maven plugin to deploy the application. You can separate the deployment step into a profile and run only the deploy goal in the deploy step, etc.
Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.
For more information related to Jenkins, have a look at this video.
Does this make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head there are Puppet, Chef and Vagrant.
Depending on the technology of your server, you could do all of this in a single Hudson project, executing your integration tests using Maven's Failsafe plugin instead of Surefire.
This allows you to start and deploy prior to executing the integration tests, and shutdown your server after they have completed. It also allows you to separate your integration tests from your unit tests.
For Java EE applications, you can perform the start/deploy/stop steps using Cargo, or use an embedded Jetty container and the Jetty Maven plugin.
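As a sketch of the Failsafe convention: integration test classes named *IT.java are picked up by mvn verify (Failsafe) rather than mvn test (Surefire), so they run after the server has been started and deployed. The URL below is a placeholder for whatever the "deploy" job set up:

import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

// Failsafe picks this class up during "mvn verify" because of the *IT suffix,
// so it runs against the already-deployed server, separate from the unit tests.
public class ServerHealthIT {

    @Test
    public void serverRespondsToHealthCheck() throws Exception {
        // Placeholder URL for the test environment the "deploy" job prepared.
        URL url = new URL("http://test-server:8080/myapp/health");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        assertEquals(200, connection.getResponseCode());
    }
}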

SSIS and IBM MQ Series integration

I am in the process of exploring integration between SSIS and IBM MQ Series. Can someone point me to an article about this integration?
You might be able to use the "Message Queue Task" in conjunction with some other host bridge. Here's what I found while trolling the Internet:
SSIS does not support loading into/from IBM MQ Series directly out of the box. ETI is the only known Microsoft partner or other third party that can handle reading and writing to IBM MQ Series. This is handled via native sequential access (MQSeries queue messages) through their Built-to-Order connectivity process. SSIS customers also have the following ways of integrating their solutions with IBM MQ Series:
1. Microsoft's MSMQ-MQ Series Bridge that comes with Host Integration Server. The MSMQ Task in the SSIS Control Flow can be used along with the bridge, which provides connectionless, store-and-forward messaging across messaging systems and computing platforms throughout the network.
2. Using BizTalk's MQ Series Adapter to push/pull messages into/out of the SSIS Data Flow Engine via various exchange mechanisms: in-memory (a script component), or staging (exchanging messages via the file system, or writing them directly into SQL Server tables and using a data source component to read them; see the sketch after this quote).
3. Using the extensibility story in SSIS: build an MQ Series source/destination custom/script component directly interfacing with the MQ Series native libraries.
You can find the original website here.
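To illustrate the staging approach from option 2, here is a hedged sketch using the IBM MQ classes for Java to drain messages from a queue into a SQL Server staging table that an SSIS data source component could then read. The queue manager, queue, database and table names are placeholders; check the MQ class names against your client version:

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MqToSqlStaging {
    public static void main(String[] args) throws Exception {
        // Placeholder queue manager and queue names.
        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue queue = qmgr.accessQueue("SSIS.STAGING.IN", CMQC.MQOO_INPUT_AS_Q_DEF);
        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=Staging", "user", "password");
             PreparedStatement insert = db.prepareStatement(
                "INSERT INTO MqStaging (MessageBody) VALUES (?)")) {
            while (true) {
                MQMessage message = new MQMessage();
                try {
                    queue.get(message, new MQGetMessageOptions());
                } catch (MQException e) {
                    if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                        break; // queue drained; SSIS reads the staging table next
                    }
                    throw e;
                }
                // Stage the message body as a row for the SSIS data flow to pick up.
                insert.setString(1, message.readStringOfByteLength(message.getMessageLength()));
                insert.executeUpdate();
            }
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }
}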
Here's a description of the "Message Queue Task" that I came across in a PDF comparing how SSIS and Informatica handle messaging.
SSIS includes a Microsoft Message Queue (MSMQ) connection manager that enables connectivity to MSMQ message queues. Coupled with this is a bidirectional Message Queue task that enables an SSIS package to push messages onto and pop messages off a message queue. The messages can be in the form of text, files or the contents of SSIS variables.
SQL Server Service Broker, which is included with SQL Server 2005 (and therefore with SSIS), adds asynchronous messaging functionality to database applications. Service Broker queues are accessible from any application able to issue T-SQL code, which includes SSIS.