How does Hudson generate a job's config.xml?

I need your help!
I want to know how Hudson generates a job's config.xml.
Let me explain: I want to add a Hudson-like build tool to my application. To do this, a user will define some parameters, as in Hudson's GUI, such as the path to the JDK, where the pom.xml is stored, etc., and then the config.xml for this job is generated.
Once I have the config.xml for this job, I will create and build the job.
I searched Hudson's API, but it's all about creating a job, building it, deleting it, and so on; there is no way to pass it parameters (to personalize it). This is a "create" code sample:
private void put(HttpClient client, String hudsonBaseURL, String jobName, File configFile)
        throws IOException, HttpException {
    // POST the job's config.xml to Hudson's createItem URL
    PostMethod postMethod = new PostMethod(hudsonBaseURL + "/createItem?name=" + jobName);
    postMethod.setRequestHeader("Content-type", "application/xml; charset=ISO-8859-1");
    postMethod.setRequestBody(new FileInputStream(configFile));
    postMethod.setDoAuthentication(true);
    try {
        int status = client.executeMethod(postMethod);
        // "Project already exists" is reported when the job name is already taken
        System.out.println("Project already exists\n" + status + "\n" + postMethod.getResponseBodyAsString());
    } finally {
        postMethod.releaseConnection();
    }
}
This method requires the config.xml to create a job.
I'm now trying to look at the contents of hudson.war and its classes, but I have to say this is not easy.
I hope I was clear.
Any idea would be welcome.
Nacef.

I recommend using Hudson's remote API for automating creation of a job.
Have a look at http://your.hudson.server/api. Hudson will return HTML documentation for the remote API. Under Create Job you'll see that you can POST a config.xml to a Hudson URL in order to create a job. You should be able to create a template job manually, then use that config.xml as a template in your automated system.
As described in this previous answer, job configuration can be found in HUDSON_HOME/jobs/[name]/config.xml.
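To make the "define some parameters, then generate config.xml" part concrete, one simple approach is string substitution on a template config.xml saved from a manually configured job. Here is a minimal sketch, assuming a template file that contains placeholder tokens such as @JDK@ and @POM@ (the tokens, method name, and encoding are hypothetical choices, not part of Hudson's API):

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class JobConfigTemplater {

    // Reads a template config.xml, replaces the placeholder tokens with the
    // user-supplied values, and writes the result to a per-job file.
    static File renderConfig(File template, String jdkName, String pomPath) throws IOException {
        String xml = new String(Files.readAllBytes(template.toPath()), StandardCharsets.ISO_8859_1);
        xml = xml.replace("@JDK@", jdkName)
                 .replace("@POM@", pomPath);
        File jobConfig = File.createTempFile("job-config", ".xml");
        Files.write(jobConfig.toPath(), xml.getBytes(StandardCharsets.ISO_8859_1));
        return jobConfig;
    }
}

The resulting file can then be passed to the put() method from the question to create the job via /createItem.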

Related

How can I externalize IScheduledExecutorService to run tasks in an external Hazelcast cluster (Hazelcast 5.2) without using UserCodeDeployment?

I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I want to change this task at all and run the tests again, I get the below in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task, redeploy, and have Hazelcast just handle it. I do this kind of thing with our external maps now; compact serialization can handle different versions of our objects.
Am I stuck using user code deployment for these functional objects? If I need to make a change, I have to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production, so I'll have to add the following to each member:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local Docker file:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also, in the code that creates a Hazelcast client connecting to my external cluster:
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
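(For reference, the HZ_USERCODEDEPLOYMENT_ENABLED=true environment variable corresponds to the member-side user code deployment setting. A minimal sketch of the programmatic equivalent, assuming Hazelcast 5.x members started from code rather than the official Docker image:)

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberWithUserCodeDeployment {
    public static void main(String[] args) {
        // Equivalent of HZ_USERCODEDEPLOYMENT_ENABLED=true on a programmatically started member
        Config config = new Config();
        config.getUserCodeDeploymentConfig().setEnabled(true);
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}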
However, if I remove those two pieces (the member environment variable and the client deployment config), I get the following exception and a failing test. The cluster doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side note:
We are already using compact serialization for several maps, and when I try to configure this Runnable task via compact serialization I get the error below. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]

How to configure the bootBuildImage task to be up-to-date when there are no changes to source code

I'm trying to use Spring Boot 2.3's new support for creating Docker images, but the bootBuildImage Gradle task is never up-to-date. This unfortunately causes a new Docker image to be generated even if no source code was changed.
My goal is to have a static build command that doesn't result in new images being produced unnecessarily. So something like one of the two scenarios below:
./gradlew bootBuildImage (but does nothing if no source code has changed)
OR
./gradlew someOtherTask (if this task is not up-to-date, it triggers bootBuildImage)
My latest effort was to configure bootBuildImage to only run if the bootJar task is not up to date:
tasks {
    val bootJarTask: TaskProvider<BootJar> = this.bootJar
    bootBuildImage {
        outputs.upToDateWhen {
            bootJarTask.get().state.upToDate
        }
    }
}
But this fails with the error below (for some reason this particular task hates jars as inputs):
> Unable to store input properties for task ':bootBuildImage'. Property 'jar' with value '/demo/build/libs/demo-0.0.1-SNAPSHOT.jar' cannot be serialized.
Surely I'm missing something obvious here! The reason I need bootBuildImage to only produce an image when necessary is because I've got a multi-project build. I don't want subprojects to generate and push a new image even when nothing in them changed.
Using Spring Boot 2.3.4, Gradle 6.6.1, Java 11.
This seems to work:
val bootJarTask: TaskProvider<BootJar> = this.bootJar
bootBuildImage {
    onlyIf {
        !bootJarTask.get().state.skipped
    }
}

How do I retrieve the JSON representation of an Azure Data Factory pipeline?

I want to track pipeline changes in source control, and I'm looking for a way to programmatically retrieve the JSON representation from ADF.
The .NET routines return the objects, but sadly ToString() does not return JSON (wouldn't THAT be convenient?), so right now I'm looking at copying the JSON down by hand (shoot me now!), or possibly trying to recreate the JSON from the .NET objects (shoot me later!).
Please tell me I'm being dense and there is an obvious way to do this.
You can serialize the object using Newtonsoft Json.NET.
See https://azure.microsoft.com/en-us/documentation/articles/data-factory-create-data-factories-programmatically/ for how to connect via the ADF SDK:
// Authenticate and create the management client
var aadTokenCredentials = new TokenCloudCredentials(ConfigurationManager.AppSettings["SubscriptionId"], GetAuthorizationHeader());
var resourceManagerUri = new Uri(ConfigurationManager.AppSettings["ResourceManagerEndpoint"]);
var manager = new DataFactoryManagementClient(aadTokenCredentials, resourceManagerUri);

// Fetch the pipeline and serialize its definition as indented JSON
var pipeline = manager.Pipelines.Get(resourceGroupName, dataFactoryName, pipelineName);
var pipelineAsJson = JsonConvert.SerializeObject(pipeline.Pipeline, Formatting.Indented);
I was expecting something more complex, but looking at the SDK source on GitHub, it is not doing anything special.
Our team has a deployment tool that takes Git changes and deploys them appropriately. Everything is done asynchronously and is controlled and versioned through Git.
In a nutshell our deployment has the following flow:
Any completed Git merge request triggers a VSO build. This simply builds the whole solution via MSBuild.
Every successful build gets a Git tag for tracking of Last Known Good.
Next (if the build succeeded), our .NET ADFPublisher takes only the changed data factory files and asynchronously publishes them based on their Git operation (modified, add, delete, etc.).
For some failure cases our ADFPublisher will perform a retry.
This whole process (build + publish) takes ~65 seconds and has already saved us from several bugs. It also allows us to move definitions from one environment to another very easily.
Let me know if this is something you would be interested in, and I will set up a way to share it with you.

Warn (or fail) if a package is run without having overridden every package connection string with a config file entry

A very common issue with SSIS packages is releasing a package to production that ends up running with the wrong connection string parameters. This can happen through any one of many mistakes or omissions. As a result, I find it helpful to dump all ConnectionString values to a log file. This helps me understand which connection strings were actually applied to the package at run time.
Now, I am considering having my packages check whether every connection object in the package had its connection string overridden by an entry in the config file and, if not, return a warning or even fail the package. This is to allow easier configuration by extracting all environment variables to a config file. If a connection string is never overridden, there is a risk that a package run in production may use development settings, or that a package run in a non-production setting for testing may accidentally be run against production.
I'd like to borrow from anyone who may have tried to do this. I'd also be interested in suggestions on how to accomplish this with minimal work.
Thanks
Technical question 1 - what are my connection strings?
This is an easy question to answer. In your package, add a Script Task and enumerate through the Connections collection. I fire the OnInformation event, and if I had this scheduled, I'd be sure to use the /Rep IEW option in my dtexec call to ensure I record Information, Errors and Warnings.
namespace TurnDownForWhat
{
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Runtime;
    using System.Windows.Forms;

    /// <summary>
    /// ScriptMain is the entry point class of the script. Do not change the name, attributes,
    /// or parent of this class.
    /// </summary>
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            bool fireAgain = false;
            foreach (var item in Dts.Connections)
            {
                // Log each connection manager's name and its effective connection string
                Dts.Events.FireInformation(0, "SCR Enumerate Connections", string.Format("{0}->{1}", item.Name, item.ConnectionString), string.Empty, 0, ref fireAgain);
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Running that on my package, I can see I had two Connection managers, CM_FF and CM_OLE along with their connection strings.
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_FF->C:\ssisdata\dba_72929.csv
Information: 0x0 at SCR Enum, SCR Enumerate Connections: CM_OLE->Data Source=localhost\dev2012;Initial Catalog=tempdb;Provider=SQLNCLI11;Integrated Security=SSPI;
Add that to ... your OnPreExecute event for all the packages, and no one sees it but everything reports back.
Technical question 2 - Missed configurations
I'm not aware of anything that will allow a package to know it's under configuration. I'm sure there's an event, as you will see in your Information/Warning messages, when a package attempted to apply a configuration, didn't find one, and is going to retain its design-time value. Information - I'm configuring X via Y. Warning - tried to configure X but didn't find Y. But how to have a package inspect itself to find that out, I have no idea.
That said, I've seen reference to a property that fails a package on a missed configuration. I'm not seeing it now, but I'm certain it exists in some crevice. You can supply the /w parameter to dtexec, which treats warnings as errors; really, warnings are just errors that haven't grown up yet.
Unspoken issue 1 - Permissions
I had a friend who botched an XML config file as part of their production deploy. Their production server started consuming data from a dev server. Bad things happened. It sounds like you have had a similar situation. The resolution is easy, insulate your environments. Are you using the same service account for your production class SQL Server boxes and dev/test/uat/qa/load/etc? STOP. Make a new one. Don't allow prod to talk to any boxes that aren't in their tier of service. Someone bones a package and doesn't set a configuration? First of all, you'll catch it when it goes from dev to something-before-production because that tier wouldn't be able to talk to anything else that's not that level. But if you're in the ultra cheap shop and you've only got dev and prod, so be it. Non-configured package goes to prod. Prod SQL Agent fires off the package. Package uses default connection manager and fails validation because it can't talk to the dev sales database.
Unspoken issue 2 - template
What's your process when you have a new package to build? Does your team really start from scratch? There are so many ways to solve this problem, but the core concept is to define your best practices for Configuration, Logging, Package Protection Level, Transaction levels, etc. in some easily consumable form. Maybe that's 3 starter packages: one for raw acquisition, one that stages and conforms the data, and a last one that moves data from conformed into the final destination. Teammates then simply have to pick one to start from and fill in the spots that need it. If they choose to do their own thing, that's the stick you beat them with when their package fails to run in production because they didn't follow the standard path.
There are other approaches here. If you're a strong .NET crew, you can gen your template packages that way. At this point, I create my templates with Biml and use that to drive basic package creation.
If I am understanding you correctly, the solution below should work.
My suggestion is to turn on the DontSaveSensitive option for the ProtectionLevel property at the top level of the package.
This will require you to use package configurations for every connection; otherwise the package will not have the credentials to make a connection.

How do I add a test project to the Eclipse environment created by JUnit for plug-in testing?

I am currently working on an Eclipse plug-in which needs to access the selected project in the Project Explorer. I have to provide JUnit tests, but I'm very unsure how to write proper tests for an Eclipse plug-in.
I think JUnit is at least properly creating a test Eclipse instance, since I can use calls like "PlatformUI.getWorkbench()" inside the test. But how do I set up a test project inside this test Eclipse instance that my JUnit tests can work with? (I also need to set some of the project's more internal settings, since I'm checking natureIds and builderNames.)
Thanks in advance for your answers! I would also be glad for links to a walkthrough of writing tests for an Eclipse plug-in ;)
You write your tests in a plug-in as well, so that they're part of the executing Eclipse runtime. Then you have access to the APIs from org.eclipse.core.resources to create projects, folders, and files.
For creating a project specifically:
IProjectDescription description = ResourcesPlugin.getWorkspace().newProjectDescription(name);
IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject(name);
// set nature IDs on the description here
try {
    project.create(description, new NullProgressMonitor());
    project.open(new NullProgressMonitor());
}
catch (CoreException e) {
    e.printStackTrace();
}
return project;
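Since you mention checking natureIds and builderNames, here is a minimal sketch of how the description could be populated before creating the project. The specific IDs used here (JavaCore.NATURE_ID and org.eclipse.jdt.core.javabuilder) are only examples; substitute whatever nature and builder IDs your plug-in actually checks:

import org.eclipse.core.resources.ICommand;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IProjectDescription;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.jdt.core.JavaCore;

// Creates a test project with example natures and builders configured on its description.
static IProject createTestProject(String name) throws CoreException {
    IProjectDescription description = ResourcesPlugin.getWorkspace().newProjectDescription(name);

    // Nature IDs your code under test expects to find (example: the Java nature)
    description.setNatureIds(new String[] { JavaCore.NATURE_ID });

    // Builder names are configured as build commands on the description (example: the Java builder)
    ICommand builder = description.newCommand();
    builder.setBuilderName("org.eclipse.jdt.core.javabuilder");
    description.setBuildSpec(new ICommand[] { builder });

    IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject(name);
    project.create(description, new NullProgressMonitor());
    project.open(new NullProgressMonitor());
    return project;
}

Your tests can then call createTestProject(...) in a setUp method and delete the project again in tearDown.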