How to call an Informatica workflow running in a different Integration Service

I have 2 workflows: workflow 1 on Integration Service 1 and workflow 2 on Integration Service 2.
How do I call workflow 2 from workflow 1? I am currently trying to call it using the command prompt, but it didn't work.
Just to let you know, Integration Service 1 is Informatica 9.2
and Integration Service 2 is Informatica version 10.2.

PowerCenter does not provide support for cross-workflow dependencies, regardless of whether the workflows are configured to use the same or a different Integration Service.
The best way to solve this kind of challenge is to use a separate scheduling tool, such as Airflow, Control-M, Autosys - or any other.
It is also possible to expose the workflow as a web service and call it from different workflows, if needed. Not really convenient, but possible.
Lastly, it's possible to use the command-line interface pmcmd startworkflow in a Command task of one workflow to have the other one started.
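A minimal sketch of that last option, as a Command task script in workflow 1; the domain, service, user, folder and workflow names below are all placeholders for your own. The command is echoed here so the sketch runs anywhere; in the actual Command task you would execute it directly, and the pmcmd client should match the version of the target domain (10.2 here).

```shell
#!/bin/sh
# Hypothetical names throughout: IS_Service2, Domain_10x, Folder2, wf_Workflow2.
# -wait makes the call block until workflow 2 finishes, so workflow 1's
# Command task succeeds or fails based on workflow 2's outcome.
CMD="pmcmd startworkflow -sv IS_Service2 -d Domain_10x -u admin -p secret -f Folder2 -wait wf_Workflow2"
echo "$CMD"
```

Passing credentials on the command line is shown for brevity only; pmcmd also supports reading the password from an environment variable, which is preferable in practice.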

I have done something similar this way:
The other WF is a web service one, or is executed alongside a web service.
Add an application connection.
The Web Services Hub (WSH) where your WF runs should be the endpoint of that connection.
Add this WF inside the mapping of the other one as a Web Service transformation.


Spring batch deployed on openshift using several pods

I deploy an application on Openshift and I use at least 2 pods.
My war contains a Spring Batch application, scheduled by a Spring cron.
Of course, each pod starts the same batch at the same time, and that's my problem/question.
Is there a way to avoid this behaviour? I would like only one batch instance to start (or is there a way to configure Spring Batch to check whether a batch is already running?)
Thanks in advance.
Assuming you use a Deployment, it's not trivial, but here are some ideas that can help you.
Use ScheduledJobs/CronJobs from Kubernetes. This means you would ditch controlling the batch launch from your app completely, and have a dedicated pod launched to perform the batch job and die.
Use a master-elector sidecar to establish the right to execute the batch (https://github.com/kubernetes/contrib/tree/master/election)
Implement some locking mechanism on your own.
Use a StatefulSet and bind the batch to run only on a particular hostname (e.g. via a config var passed to the Pods, like BATCH_HOSTNAME). StatefulSets have deterministic names, so you could say that the batch should run only on my-pods-0.
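The first option above could be sketched roughly like this; the image name, job name and schedule are placeholders, and concurrencyPolicy: Forbid keeps Kubernetes from starting a new run while one is still active:

```yaml
# Sketch of a Kubernetes CronJob replacing the in-app Spring cron.
# Image, name and schedule are hypothetical; adjust to your cluster.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-spring-batch
spec:
  schedule: "0 2 * * *"         # cron expression, moved out of the app
  concurrencyPolicy: Forbid     # never overlap two runs of the job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: batch-runner
            image: registry.example.com/my-batch:latest   # placeholder image
            args: ["--spring.batch.job.names=myNightlyJob"]
          restartPolicy: Never
```

With this approach the regular Deployment pods never trigger the batch at all; only the pod launched by the CronJob does.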
It sounds like you need leader election in your situation. Spring Integration provides leader election functionality you can use to determine who is the master. That master would be the one that actually launches the jobs. The others would just ignore the scheduled event. You can read more about Spring Integration's leader election in the documentation here: https://docs.spring.io/spring-integration/api/org/springframework/integration/support/leader/LockRegistryLeaderInitiator.html
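The shape of that idea can be sketched in plain Java. This is a simplified stand-in for Spring Integration's LockRegistryLeaderInitiator: here the shared lock is just a static AtomicReference, whereas in a real cluster it must live outside the JVM (a JDBC- or Redis-backed LockRegistry); all class and method names below are hypothetical.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: every pod's scheduler fires, but only the leader launches the job.
public class LeaderGatedScheduler {
    // Stand-in for a shared lock registry; first pod to claim it leads.
    // In production this must be external state shared by all pods.
    static final AtomicReference<String> leaderLock = new AtomicReference<>();

    final String podName;

    LeaderGatedScheduler(String podName) { this.podName = podName; }

    boolean isLeader() {
        leaderLock.compareAndSet(null, podName);  // claim if unclaimed
        return podName.equals(leaderLock.get());
    }

    /** Called by the cron trigger on every pod; non-leaders skip the run. */
    String onScheduledTrigger() {
        if (!isLeader()) {
            return podName + ": not leader, skipping scheduled run";
        }
        return podName + ": leader, launching batch job";
    }

    public static void main(String[] args) {
        LeaderGatedScheduler pod1 = new LeaderGatedScheduler("pod-1");
        LeaderGatedScheduler pod2 = new LeaderGatedScheduler("pod-2");
        System.out.println(pod1.onScheduledTrigger()); // pod-1 launches
        System.out.println(pod2.onScheduledTrigger()); // pod-2 skips
    }
}
```

The real LockRegistryLeaderInitiator also handles leader failover, which this sketch deliberately omits.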

Using VSTS with multiple Azure Environments

We are new to VSTS and will be using the online service and integrating with our production Azure AD tenant. Since we do development that involves Office 365, this means that we have both production and development Office 365/Azure AD environments. We understand that our authentication can only be tied to one of these (which is fine), but can we use VSTS to perform tasks against both environments (e.g. staging, deploy, etc.)? Are there certain things that do/don't work we should consider, or are there other suggestions on how we would leverage VSTS across these environments as we take code tested against development to production? Thanks!
One option to do this would be using PowerShell and service principal authentication. No point in copy/pasting documentation, so I'll leave a link.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal
You can also just authenticate to the API, get an OAuth token, and do pretty much anything with that. Not super straightforward, but it can be done ;)
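A rough sketch of the service-principal sign-in, shown here with the Azure CLI equivalent of the linked PowerShell approach. All IDs are placeholders, and the commands are echoed so the sketch runs anywhere; in a real pipeline step you would execute them directly, once per environment with that environment's service principal.

```shell
#!/bin/sh
# Placeholders: substitute your own service principal and subscription IDs.
APP_ID="<service-principal-app-id>"
TENANT_ID="<azure-ad-tenant-id>"
SUB_ID="<target-subscription-id>"   # dev or prod subscription

# Non-interactive sign-in, then pin the subscription all later tasks target.
echo "az login --service-principal -u $APP_ID -p <client-secret> --tenant $TENANT_ID"
echo "az account set --subscription $SUB_ID"
```

Creating one service principal per environment keeps the dev and prod credentials separate, which is the point of this approach.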
You can add multiple Azure service endpoints, then deploy the app through a release. Simple steps:
Refer to this blog: Automating Azure Resource Group deployment using a Service Principal in Visual Studio Online to manually configure an Azure service endpoint (Manual Configuration section)
Create a Release Definition in VSTS
Add environments (e.g. Staging, Deploy)
Add Azure App Service Deployment task for each environment
Specify corresponding Azure Subscription for these tasks

Running Mule Standalone vs Tomcat in Production

There are many ways of deploying Mule ESB into a production environment. According to the documentation, it appears that running Mule as a standalone service is the recommended way of doing so.
Are there any reasons for NOT running Mule standalone in production? I'm sure it's stable, but how does it compare to Tomcat as far as performance, reliability, and resource utilization go?
Should I still consider running it within Tomcat for any reason?
Using Tomcat, or any other web container, allows you to use the web tier of that container for HTTP inbound endpoints (via the Servlet transport) instead of either Mule's HTTP or Jetty transports.
Other differences are found in class loading, handling of hot redeployment and logging.
Now the main reason why people do not use Mule standalone is corporate policy, i.e. "thou shalt deploy on _". When production teams have gained experience babysitting a particular Java app/web server, they want you to deploy your Mule project in that context so they can administer/monitor it in a well-known and consistent manner.
But if you're happy with the inbound HTTP layer you get in Mule standalone and you are allowed to deploy it in production, then go for it. It's production ready.
Mule actually recommends deploying standalone. Inside a container like e.g. Tomcat, it has to share the thread pool, heap, etc. This can obviously prevent it from performing at its best.
The main reason you'd want to run inside a container like Tomcat is to get automatic deployment. I.e., you can just update your Mule application .war and the container will restart Mule with the new application. This helps in testing.
Also, some transports are specific to running inside a container, like the servlet transport. On the other hand, if you design your solution so that Mule transports between your container and your servlets, you're doing it wrong.

Using JUnit, Maven and Hudson/Jenkins for integration tests

We are going to use a Hudson/Jenkins build server to both build our server applications (just calling Maven) and run integration tests against them. We are going to prepare 3 Hudson/Jenkins jobs (build, deploy, and integration tests), which call each other in this order. All these jobs (build, deploy, integration tests) will be running nightly.
The integration tests are written with JUnit and are invoked by mvn test (which will in turn be invoked by the "test" Hudson/Jenkins job). Since they require the server to be up and running, we have to run that "deploy" job first.
Does it make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
It definitely makes sense; basically you are referring to a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).
As for the deployment of the server component, this depends on what technology/stack you are running on. For instance you could write a script that deploys the application to a test environment using a post-build step in jenkins.
Another option is to use a Maven plugin to deploy the application. You can separate the deployment step into a profile, and run only the deploy goal in the deploy step, etc.
Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.
For more information related to Jenkins, have a look at this video.
Does it make sense? Is there any special server to deploy
the application and run tests, or is Hudson/Jenkins OK for that?
You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head you have Puppet, Chef and Vagrant.
Depending on the technology of your server, you could do all of this in a single Hudson project, executing your integration tests using Maven's Failsafe plugin instead of Surefire.
This allows you to start and deploy prior to executing the integration tests, and shutdown your server after they have completed. It also allows you to separate your integration tests from your unit tests.
For Java EE applications, you can perform the start/deploy/stop steps using Cargo, or use an embedded Jetty container and the Jetty Maven plugin.
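The Surefire/Failsafe split described above can be sketched as a pom.xml fragment; the plugin version is omitted, and Failsafe's default convention of picking up *IT.java classes is assumed:

```xml
<!-- pom.xml sketch: run integration tests in the integration-test phase
     via Failsafe, leaving unit tests to Surefire in the test phase.
     Failsafe's verify goal fails the build after any server shutdown
     steps have run, instead of aborting mid-phase. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Server start/stop (e.g. via Cargo) would then be bound to the pre-integration-test and post-integration-test phases, bracketing the Failsafe run.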

What is the best way to move IBM Message Broker components between DEV, QA and PROD environments?

On a SOA project we are starting to employ IBM WebSphere Message Broker to orchestrate .NET-based web services. We have distinct DEV, QA and PROD environments for the system being developed.
WebSphere Message Broker Toolkit would be used to develop message flows in DEV. And with DEV everything is more or less clear.
For QA and PROD we aim to have a repeatable and as-automated-as-possible deployment procedure. With the .NET portion it is almost a no-brainer, but deployment to Message Broker seems to require substantial manual effort, which is not good.
What are recommendations for deployment to WebSphere Message Broker? What is the best way to package Message Broker components?
Your components (flows and so forth) will be packaged up as Broker Archive (.bar) files. You can use Ant to script the deployment of these components between environments, for example.
Use scripts.
As Andy Piper says Ant works quite well.
Also be aware that you can use the CMP API, which has been rebadged as the Message Broker API. It's quite comprehensive, and lets you get at and modify information in your Broker Archive (BAR) files to a much more significant extent than just the various Broker commands you can invoke from a script.
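As a rough sketch of the scripted approach, an Ant target can shell out to the mqsideploy command to push a BAR file. The broker name, execution group and BAR path below are placeholders, and the exact mqsideploy arguments vary between Broker versions, so check the reference for yours:

```xml
<!-- build.xml sketch: deploy a Broker Archive via mqsideploy.
     MYBROKER, "default" and build/MyFlows.bar are all placeholders. -->
<project name="broker-deploy" default="deploy-bar">
  <target name="deploy-bar">
    <exec executable="mqsideploy" failonerror="true">
      <arg value="MYBROKER"/>              <!-- target broker -->
      <arg line="-e default"/>             <!-- execution group -->
      <arg line="-a build/MyFlows.bar"/>   <!-- BAR file to deploy -->
      <arg line="-w 300"/>                 <!-- wait up to 300s for completion -->
    </exec>
  </target>
</project>
```

Driving the same target from a per-environment properties file (QA vs PROD broker names and execution groups) gives the repeatable procedure asked about above.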