castle scheduler - cluster - castle-windsor

We're using the Castle Scheduler component: http://using.castleproject.org/display/Comp/Castle.Components.Scheduler?showChildren=false
I have a WCF service which creates the tasks, and that does its job fine.
I then have a console app running (it will be a Windows service eventually) which should keep an eye out for tasks to run.
The thing is, they each create their own scheduler in the DB, but they both have the same cluster ID.
Should the console app be able to run the tasks created by the WCF service? If not, how can I make it do that?
Cheers
w://

We have been advised not to use the Castle scheduler.
The maintainers are building a Quartz integration, and I have offered my help.
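For context, the behaviour being asked about (two processes sharing one set of persisted tasks) is what Quartz's clustered JDBC job store is designed for, so it is presumably what the planned integration will expose. As a rough sketch of that model only, here is a minimal clustered scheduler using the Java Quartz API; the connection settings are placeholders, and the eventual Castle/Quartz integration will have its own configuration surface:

```java
// A minimal sketch of a clustered Quartz (Java) scheduler: every process that
// starts with the same instanceName and points at the same database joins the
// cluster, and each persisted job fires on exactly one node.
// The data source details below are placeholders.
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerBootstrap {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "SharedScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
        // Placeholder connection settings; the task-creating process and the
        // task-running process would both point at the same schema.
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "org.postgresql.Driver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:postgresql://dbhost/quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start(); // jobs scheduled by any node in the cluster can now run on this one
    }
}
```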

Related

How to call an Informatica workflow running on a different Integration Service

I have two workflows: workflow 1 on Integration Service 1 and workflow 2 on Integration Service 2.
How do I call workflow 2 from workflow 1? I am currently trying to call it using the command prompt, but it didn't work.
Just to let you know, Integration Service 1 is Informatica 9.2 and Integration Service 2 is Informatica version 10.2.
PowerCenter does not provide support for cross-workflow dependencies, regardless of whether the workflows are configured to use the same or a different Integration Service.
The best way to solve this kind of challenge is to use a separate scheduling tool, such as Airflow, Control-M, Autosys, or any other.
It is also possible to expose the workflow as a web service and call it from other workflows, if needed. Not really convenient, but possible.
Lastly, it's possible to use the command-line interface (pmcmd startworkflow) in a Command task of one workflow to have the other one started.
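Purely to illustrate the syntax, here is what that call could look like. In practice the pmcmd line goes straight into the Command task; the Java wrapper and all service, domain, folder, workflow and credential values below are placeholders:

```java
// Hypothetical illustration: launches workflow 2 on the remote (10.2) Integration
// Service via pmcmd. In PowerCenter itself you would put the same pmcmd line
// directly into a Command task; every name and credential here is a placeholder.
import java.util.Arrays;

public class StartRemoteWorkflow {
    public static void main(String[] args) throws Exception {
        Process process = new ProcessBuilder(Arrays.asList(
                "pmcmd", "startworkflow",
                "-sv", "IntegrationService2",   // target Integration Service (placeholder)
                "-d", "Domain_IS2",             // its domain (placeholder)
                "-u", "pc_user",                // credentials (placeholders)
                "-p", "pc_password",
                "-f", "Folder_WF2",             // folder containing the workflow (placeholder)
                "wf_workflow2"))                // workflow name (placeholder)
                .inheritIO()
                .start();
        System.exit(process.waitFor());         // non-zero exit code means the start failed
    }
}
```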
I have done something similar this way:
The other workflow is a web service workflow, or is executed along with a web service.
Add an application connection.
The Web Services Hub where your workflow runs should be the endpoint of that connection.
Add this workflow inside the mapping of the other one as a Web Service transformation.

Spring Batch deployed on OpenShift using several pods

I deploy an application on OpenShift and I use at least 2 pods.
My WAR contains a Spring Batch application, scheduled by a Spring cron expression.
Of course, each pod starts the same batch at the same time, and that's my problem/question.
Is there a way to avoid this behaviour? I would like to start only one batch instance (or is there a way to configure Spring Batch to check if a batch is already running?).
Thanks in advance.
Assuming you use a Deployment, it's not trivial, but here are some ideas that can help you.
Use ScheduledJobs/CronJobs from Kubernetes. This means you ditch controlling the batch launch from your app completely and instead have a dedicated pod launched to perform the batch job and then die.
Use a master-elector sidecar for establishing the right to execute the batch (https://github.com/kubernetes/contrib/tree/master/election).
Implement some locking mechanism on your own (see the sketch after this list).
Use a StatefulSet and bind the batch to run only on a particular hostname (i.e. by a config var passed to the pods, like BATCH_HOSTNAME). StatefulSets have deterministic names, so you could say that the batch should run only on my-pods-0.
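One minimal way to roll your own lock, assuming you are willing to add a small table with a unique key on (job_name, run_date): whichever pod inserts the row for today's run first wins and launches the batch. The table and class names below are hypothetical:

```java
// Hypothetical DIY lock: relies on an assumed table
//   CREATE TABLE batch_run_lock (job_name VARCHAR(100), run_date DATE,
//                                PRIMARY KEY (job_name, run_date));
// The pod that manages the INSERT owns today's run; all others get a
// constraint violation and skip the launch.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.time.LocalDate;
import javax.sql.DataSource;

public class BatchRunLock {

    private final DataSource dataSource;

    public BatchRunLock(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true only for the single pod that claimed this job's run for the given date. */
    public boolean tryClaim(String jobName, LocalDate runDate) {
        String sql = "INSERT INTO batch_run_lock (job_name, run_date) VALUES (?, ?)";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, jobName);
            ps.setDate(2, java.sql.Date.valueOf(runDate));
            ps.executeUpdate();
            return true;                      // insert succeeded: this pod runs the batch
        } catch (SQLException alreadyClaimed) {
            return false;                     // duplicate key: another pod got there first
        }
    }
}
```
The cron-triggered method then simply returns early whenever tryClaim(...) comes back false.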
It sounds like you need leader election in your situation. Spring Integration provides leader election functionality you can use to determine who is the master. That master would be the one that actually launches the jobs. The others would just ignore the scheduled event. You can read more about Spring Integration's leader election in the documentation here: https://docs.spring.io/spring-integration/api/org/springframework/integration/support/leader/LockRegistryLeaderInitiator.html
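A minimal sketch of that pattern, assuming @EnableScheduling is active, Spring Integration's JDBC module (which ships the INT_LOCK table schema) is on the classpath, and a shared DataSource is available; the bean names, cron expression and job wiring are placeholders:

```java
// Sketch only: one LockRegistryLeaderInitiator per pod competes for leadership
// through a shared JdbcLockRegistry; only the current leader launches the job.
import javax.sql.DataSource;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.jdbc.lock.DefaultLockRepository;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.integration.support.leader.LockRegistryLeaderInitiator;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
public class LeaderOnlyBatchConfig {

    @Bean
    public DefaultLockRepository lockRepository(DataSource dataSource) {
        return new DefaultLockRepository(dataSource);   // backed by the INT_LOCK table
    }

    @Bean
    public JdbcLockRegistry lockRegistry(DefaultLockRepository lockRepository) {
        return new JdbcLockRegistry(lockRepository);
    }

    @Bean
    public LockRegistryLeaderInitiator leaderInitiator(JdbcLockRegistry lockRegistry) {
        return new LockRegistryLeaderInitiator(lockRegistry);
    }

    @Bean
    public NightlyLauncher nightlyLauncher(LockRegistryLeaderInitiator leaderInitiator,
                                           JobLauncher jobLauncher, Job nightlyJob) {
        return new NightlyLauncher(leaderInitiator, jobLauncher, nightlyJob);
    }

    public static class NightlyLauncher {
        private final LockRegistryLeaderInitiator leaderInitiator;
        private final JobLauncher jobLauncher;
        private final Job job;

        public NightlyLauncher(LockRegistryLeaderInitiator leaderInitiator,
                               JobLauncher jobLauncher, Job job) {
            this.leaderInitiator = leaderInitiator;
            this.jobLauncher = jobLauncher;
            this.job = job;
        }

        @Scheduled(cron = "0 0 2 * * *")            // placeholder cron expression
        public void launch() throws Exception {
            if (!leaderInitiator.getContext().isLeader()) {
                return;                             // this pod is not the leader: skip the launch
            }
            jobLauncher.run(job, new JobParametersBuilder()
                    .addLong("runId", System.currentTimeMillis())
                    .toJobParameters());
        }
    }
}
```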

OpenShift application lifecycle events - application creation event?

I'm trying to hook into an application created event in OpenShift - if such an event exists.
The reason being, I would like to have a command run (ideally in a new pod) for creating a database schema. It doesn't make sense to have this in the application image, as I only need it to run once - when the application is created.
I have looked into pod lifecycle hooks (https://docs.openshift.com/enterprise/3.1/dev_guide/deployments.html#pod-based-lifecycle-hook); however, these events happen every time there is a new deployment, so this is also too often for my use case.
Is there a way to have an image run just once when an OpenShift application is created?
You're on the right track in the comments here. In the OpenShift v2 days the same scenario existed with lifecycle hooks.
For our WordPress Quickstart in OpenShift v2, for instance, we would check to see if the database was created yet on every new deployment. If not, we initialized an empty database with the same name as the app (in this case letting WordPress create the schema afterwards, but it's the same idea needed here): OpenShift v2 WordPress deploy action hook
In OpenShift v3, there are a few ways to implement a similar lifecycle hook, but the common pattern we're using in our templates now is to leverage the ability to execute a new pod to run database setup steps just prior to the deployment phase: OpenShift v3 CakePHP pre-deploy lifecycle hook
Following this pattern, you would add the code that generates your database schema in a file like the v3 CakePHP migrate-database.sh in your source repo and execute the script with a pre-deploy lifecycle hook (via execNewPod), checking first to see if the database/schema (select * from someknowntable limit 1) exists before loading the schema.
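The guard itself boils down to logic like the following. This is an illustrative Java/JDBC sketch only (the referenced quickstart does it in shell), and both someknowntable and schema.sql are placeholders:

```java
// Illustration of the "check before load" guard the pre-deploy hook performs.
// Both the probe table and the DDL file name are placeholders.
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class SchemaInitializer {

    private final DataSource dataSource;

    public SchemaInitializer(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void ensureSchema() throws Exception {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement()) {
            try {
                // If a known table answers, the schema was loaded by an earlier deploy.
                st.executeQuery("SELECT * FROM someknowntable LIMIT 1");
                return;
            } catch (SQLException schemaMissing) {
                // Fall through and load the schema exactly once.
            }
            String ddl = new String(
                    Files.readAllBytes(Paths.get("schema.sql")), StandardCharsets.UTF_8);
            for (String statement : ddl.split(";")) {   // naive split; fine for simple DDL
                if (!statement.trim().isEmpty()) {
                    st.execute(statement);
                }
            }
        }
    }
}
```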

Using VSTS with multiple Azure Environments

We are new to VSTS and will be using the online service and integrating with our production Azure AD tenant. Since we do development that involves Office 365, this means that we have both production and development Office 365/Azure AD environments. We understand that our authentication can only be tied to one of these (which is fine), but can we use VSTS to perform tasks against both environments (e.g. staging, deploy, etc.)? Are there certain things that do/don't work that we should consider, or are there other suggestions on how we would leverage VSTS across these environments as we take code tested against development to production? Thanks!
One option to do this would be using PowerShell and service principal authentication. No point in copy/pasting the documentation, so I'll leave a link:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal
You can also just authenticate to the API, get an OAuth token, and do pretty much anything with that. Not super straightforward, but it can be done ;)
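For the token route, the request is the standard OAuth2 client-credentials flow against the tenant in question. A minimal sketch, where TENANT_ID, CLIENT_ID and CLIENT_SECRET are placeholders for an existing service principal:

```java
// Sketch of the OAuth2 client-credentials flow against Azure AD. The resulting
// access_token is sent as "Authorization: Bearer ..." to the Azure Resource
// Manager REST API. TENANT_ID, CLIENT_ID and CLIENT_SECRET are placeholders.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ArmTokenRequest {
    public static void main(String[] args) throws Exception {
        String tenantId = "TENANT_ID";
        String body = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode("CLIENT_ID", StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode("CLIENT_SECRET", StandardCharsets.UTF_8)
                + "&resource=" + URLEncoder.encode("https://management.azure.com/", StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://login.microsoftonline.com/" + tenantId + "/oauth2/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains "access_token"; parse it with your JSON library of choice.
        System.out.println(response.body());
    }
}
```
Repeating this with a second service principal (one per tenant) lets you target both the development and production environments.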
You can add multiple Azure service endpoints, then deploy the app through a release. Simple steps:
Refer to this blog: Automating Azure Resource Group deployment using a Service Principal in Visual Studio Online to manually configure an Azure service endpoint (Manual Configuration section)
Create a release definition in VSTS
Add environments (e.g. Staging, Deploy)
Add an Azure App Service Deployment task for each environment
Specify the corresponding Azure subscription for these tasks

Using JUnit, Maven and Hudson/Jenkins for integration tests

We are going to use a Hudson/Jenkins build server to both build our server applications (just calling Maven) and run integration tests against them. We are going to prepare 3 Hudson/Jenkins jobs: build, deploy, and run integration tests, which call each other in this order. All these jobs (build, deploy, integration tests) will run nightly.
The integration tests are written with JUnit and are invoked by mvn test (which will be invoked by the "test" Hudson/Jenkins job in turn). Since they require the server to be up and running, we have to run that "deploy" job first.
Does this make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
It definitely makes sense; basically you are referring to a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).
As for the deployment of the server component, this depends on what technology/stack you are running on. For instance, you could write a script that deploys the application to a test environment using a post-build step in Jenkins.
Another option is to use a Maven plugin to deploy the application. You can separate the deployment step into a profile and run only the deploy goal in the deploy job, etc.
Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.
For more information related to Jenkins, have a look at this video.
Does this make sense? Is there any special server to deploy the application and run tests, or is Hudson/Jenkins OK for that?
You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head you have Puppet, Chef and Vagrant.
Depending on the technology of your server, you could do all of this in a single Hudson project, executing your integration tests using Maven's Failsafe plugin instead of Surefire.
This allows you to start and deploy prior to executing the integration tests, and shut down your server after they have completed. It also allows you to separate your integration tests from your unit tests.
For Java EE applications, you can perform the start/deploy/stop steps using Cargo, or use an embedded Jetty container and the Jetty Maven plugin.
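For reference, here is a minimal sketch of a test class that Failsafe would pick up in the integration-test phase under its default naming conventions; the base URL and the /health endpoint are placeholders for whatever your deployed server actually exposes:

```java
// Failsafe runs classes matching *IT by default during the integration-test
// phase, while Surefire's unit-test run ignores them. JUnit 4 shown here.
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;

public class ServerSmokeIT {

    // Placeholder: point this at wherever the deploy step published the server.
    private static final String BASE_URL =
            System.getProperty("it.server.url", "http://localhost:8080");

    @Test
    public void serverRespondsToHealthCheck() throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(BASE_URL + "/health").openConnection();
        connection.setRequestMethod("GET");
        assertEquals(200, connection.getResponseCode());
    }
}
```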