Pass information between two Jenkins jobs in a Pipeline - parameter-passing

I want to migrate an existing job to Jenkins Pipeline, and as part of this process I am trying to migrate the different plugins attached to this job to the new syntax.
However, not all plugins provide a corresponding wrapper syntax.
In the current case, I want to allocate three separate ports (embedded DB, container, and process engine) so that the build can run independently from other builds on the same machine. In the classic Jenkins job, we could use the Port Allocator Plugin, but it's not (yet) available via Pipeline Syntax.
My idea is to trigger a classic build that uses the Port Allocator Plugin and returns the free ports, which I can then use in a later stage to start up the required services.
node {
    stage("Allocate Ports") {
        build job: "allocate-port", parameters: [ // <- the classic build
            string(name: "name1", value: "PORT_DB"),
            string(name: "name2", value: "PORT_CONTAINER"),
            string(name: "name3", value: "PORT_ENGINE")
        ]
    }
    stage("Integration Tests") {
        sh """
            run-test \
                -db=${PORT_DB} \
                -container=${PORT_CONTAINER} \
                -engine=${PORT_ENGINE}
        """
    }
}
Is there a good way to store the results from the allocate-port build and return them to the enclosing pipeline?
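One possible approach (a sketch only - it assumes the allocate-port job ends up exposing the allocated ports as build variables, which depends on how the Port Allocator Plugin publishes them) would be to capture the RunWrapper returned by the build step:
node {
    def ports
    stage("Allocate Ports") {
        // 'build' returns a RunWrapper for the downstream build
        def allocation = build job: "allocate-port", parameters: [
            string(name: "name1", value: "PORT_DB"),
            string(name: "name2", value: "PORT_CONTAINER"),
            string(name: "name3", value: "PORT_ENGINE")
        ]
        // buildVariables holds variables exported by the downstream build -
        // assuming the allocated ports are published there
        ports = allocation.buildVariables
    }
    stage("Integration Tests") {
        sh """
            run-test \
                -db=${ports.PORT_DB} \
                -container=${ports.PORT_CONTAINER} \
                -engine=${ports.PORT_ENGINE}
        """
    }
}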

Related

key-value of JSON object not stored in Azure App Config as expected when reading from App Config

I'm developing an Azure Function which has to consume JSON as input and then trigger a hybrid CI/CD pipeline split between on-prem and Azure DevOps. To split configuration from code I intend to use an Azure App Configuration store to retrieve configuration settings that the Function will use to trigger the correct pipeline depending on JSON input. I'm completely new to App Config but have tried to investigate how to properly use it. However, I have stumbled into a perplexing issue and can't find an explanation for it. I apologize if I have missed something obvious out there.
For the purpose of this question I have abstracted away any business-related terminology.
Imagine I have a JSON object stored in a file TestStructure.json that looks like this:
{
    "TestStructure": {
        "Repository1": {
            "RepositoryName": "Repository1",
            "RepositoryUrl": "https://url.repository1.com/"
        },
        "Repository2": {
            "RepositoryName": "Repository2",
            "RepositoryUrl": "https://url.repository2.com/"
        },
        "Repository3": {
            "RepositoryName": "Repository3",
            "RepositoryUrl": "https://url.repository3.com/"
        }
    }
}
I store this in my App Config using the Azure CLI with the following command:
az appconfig kv import -n <myAppConfigName> -s file --format json --path "C:\workspace\TestStructure.json" --content-type "application/json" --separator . --depth 2
The command yields the following key-value pairings:
---------------- Key Values Preview ----------------
Adding:
{"key": "TestStructure.Repository1", "value": "{\"RepositoryName\": \"Repository1\", \"RepositoryUrl\": \"https://url.repository1.com/\"}"}
{"key": "TestStructure.Repository2", "value": "{\"RepositoryName\": \"Repository2\", \"RepositoryUrl\": \"https://url.repository2.com/\"}"}
{"key": "TestStructure.Repository3", "value": "{\"RepositoryName\": \"Repository3\", \"RepositoryUrl\": \"https://url.repository3.com/\"}"}
These keys are what I expect to find in my App Config store.
Going to the App Config in the Azure Portal, I find that the JSON object has been stored correctly, i.e. the keys are TestStructure.Repository1, TestStructure.Repository2 and so forth, all with their corresponding values as the Azure CLI command reported back; a screenshot of the portal confirms this.
Now, to the actual problem. When I try to fetch a key from my App Config I get some weird behavior.
I have put together a simple Console App in .NET 6 to test how to read from the App Config:
1 using Microsoft.Extensions.Configuration;
2
3 var config = new ConfigurationBuilder()
4     .AddAzureAppConfiguration("MyConnectionString")
5     .Build();
6
7 var repository = config["TestStructure.Repository1"]; // Returns null
It doesn't make sense to me why line 7 returns null, so I attached a debugger to inspect the ConfigurationRoot object a bit further.
What is going on here? Inspecting the config object reveals that the actual keys to index with are stored as TestStructure.Repository1:RepositoryName, with the corresponding values, rather than TestStructure.Repository1.
Thank you for taking your time to read my question. I hope I have expressed clearly what I am trying to achieve and what my problem is.
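Based on what the debugger shows, the imported JSON values appear to be flattened into configuration keys that use ':' as the hierarchy separator. A minimal sketch of reading them back under that assumption (same store and keys as above):
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddAzureAppConfiguration("MyConnectionString")
    .Build();

// Index with the ':'-separated path the provider actually created
var repositoryName = config["TestStructure.Repository1:RepositoryName"]; // "Repository1"

// Or read the whole flattened object as a configuration section
var section = config.GetSection("TestStructure.Repository1");
var repositoryUrl = section["RepositoryUrl"]; // "https://url.repository1.com/"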

Unrecognized content type parameters: format when serving a model on a Databricks experiment

I got this error when serving a model on Databricks using MLflow:
Unrecognized content type parameters: format. IMPORTANT: The MLflow Model scoring protocol has changed in MLflow version 2.0. If you are seeing this error, you are likely using an outdated scoring request format. To resolve the error, either update your request format or adjust your MLflow Model's requirements file to specify an older version of MLflow (for example, change the 'mlflow' requirement specifier to 'mlflow==1.30.0'). If you are making a request using the MLflow client (e.g. via mlflow.pyfunc.spark_udf()), upgrade your MLflow client to a version >= 2.0 in order to use the new request format. For more information about the updated MLflow Model scoring protocol in MLflow 2.0, see https://mlflow.org/docs/latest/models.html#deploy-mlflow-models.
I'm looking for the right format to use for my JSON input, as the format I am currently using looks like this example:
[
    {
        "input1": 12,
        "input2": 290.0,
        "input3": 'red'
    }
]
I don't really know if it's related to the version of MLflow I'm using (currently mlflow==1.24.0); I cannot update the version as I do not have the required privileges.
I have also tried the solution suggested here and got:
TypeError: spark_udf() got an unexpected keyword argument 'env_manager'
So far I have not found any documentation that solves this issue.
Thank you in advance for your help.
When you are logging the model, your MLflow version is 1.24, but when you serve it as an API in Databricks, a new environment is created for it. This new environment installs a 2.0+ version of MLflow. As the error message suggests, you can either specify the MLflow version or update the request format.
If you are using Classic Model Serving, you should specify the version; if you are using Serverless Model Serving, you should update the request format. If you must use Classic Model Serving and do not want to specify the version, see the last section below.
Specify the MLflow version
When logging the model, you can specify a new Conda environment or add additional pip requirements that are used when the model is being served.
pip
# log model with mlflow==1.* specified
mlflow.<flavor>.log_model(..., extra_pip_requirements=["mlflow==1.*"])
Conda
# get default conda env
conda_env = mlflow.<flavor>.get_default_conda_env()
print(conda_env)
# specify mlflow==1.*
conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.9.5",
        "pip<=21.2.4",
        {"pip": ["mlflow==1.*", "cloudpickle==2.0.0"]},
    ],
    "name": "mlflow-env",
}
# log model with new conda_env
mlflow.<flavor>.log_model(..., conda_env=conda_env)
Update the request
An alternative is to update the JSON request format, but this will only work if you are using Databricks Serverless Model Serving.
In the MLflow docs linked at the end of the error message, you can see all the accepted formats. From the data you provided, I would suggest using dataframe_split or dataframe_records.
{
    "dataframe_split": {
        "columns": ["input1", "input2", "input3"],
        "data": [[1, 2, "red"]]
    }
}
{
    "dataframe_records": [
        {
            "input1": 12,
            "input2": 290,
            "input3": "red"
        }
    ]
}
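For example, a minimal Python sketch of sending the dataframe_records variant to the serving endpoint (the URL is a placeholder, the token is read from an environment variable, and the requests library is assumed to be available):
import os
import requests

# placeholder endpoint of the served model
url = "https://<databricks-instance>/model/<model>/<version>/invocations"

headers = {
    "Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}",
    "Content-Type": "application/json",
}

# MLflow 2.0 scoring format: a list of records under 'dataframe_records'
payload = {
    "dataframe_records": [
        {"input1": 12, "input2": 290.0, "input3": "red"}
    ]
}

response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.json())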
Classic model serving with MLflow 2.0+
If you are using Classic Model Serving, don't want to specify the MLflow version, and want to use the UI for inference, DO NOT log an input_example when you log the model. I know this does not follow "best practice" for MLflow, but based on some investigation, I believe there is an issue with Databricks when you do this.
When you log an input_example, MLflow logs information about the example, including its type and pandas_orient. This information is used to generate the inference recipe. As you can see in the generated curl command below, it sets format=pandas-records (the JSON body itself is not generated). But this returns the Unrecognized content type... error.
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json; format=pandas-records" \
  -d '{
        "dataframe_split": {
          "columns": ["input1", "input2", "input3"],
          "data": [[12, 290, 3]]
        }
      }' \
  https://<url>/model/<model>/<version>/invocations
For me, when I removed format=pandas-records entirely, everything worked as expected. Because of this, I believe that if you log an example and use the UI, Databricks adds this format parameter to the request for you, which results in an error even if you did everything correctly. In Serverless, the generated curl does not include this parameter at all.
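For reference, the same request without the format parameter (placeholders as in the generated command above):
curl \
  -u token:$DATABRICKS_TOKEN \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
        "dataframe_split": {
          "columns": ["input1", "input2", "input3"],
          "data": [[12, 290, 3]]
        }
      }' \
  https://<url>/model/<model>/<version>/invocations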

How to use the output of invokeLambda in a Jenkins pipeline?

I am invoking a Lambda function from a Jenkins declarative pipeline. Now I want to use its output elsewhere in the pipeline. I am trying the following code:
def health=invokeLambda([awsAccessKeyId: 'xxxx', awsRegion: 'rrrrr', awsSecretKey: 'kkkkk', functionName: 'yyyyyy', payload: '', synchronous: true]);
When I try echo "$health" I am getting null.
Does anyone know how to use the output of a Lambda function in a Jenkinsfile?
I assume you are using the AWS Lambda Plugin, which is a relatively old plugin designed for freestyle jobs rather than pipelines.
While you can use it in a pipeline script, you will not be able to get the returned value, as the plugin is designed to update environment variables with the results - which works great in freestyle jobs but is not supported in pipelines.
To achieve what you want, you can use the newer, pipeline-oriented AWS Steps Plugin, which contains many AWS-related steps designed for pipeline usage and allows easy access to each step's output. It is also more secure and provides many more capabilities than the old plugin - especially for pipelines.
In your case you can use the invokeLambda step, which returns the output as you expect:
withAWS(region: 'eu-central-1', credentials: 'nameOfSystemCredentials') {
    def result = invokeLambda(
        functionName: 'myLambdaFunction',
        payload: [ "key": "value", "anotherkey": [ "another", "value" ] ]
    )
}
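The returned value can then be used like any other pipeline variable. A small sketch (assuming the Lambda responds with a JSON payload, and that the Pipeline Utility Steps plugin is available for readJSON):
withAWS(region: 'eu-central-1', credentials: 'nameOfSystemCredentials') {
    def result = invokeLambda(
        functionName: 'myLambdaFunction',
        payload: [ "key": "value", "anotherkey": [ "another", "value" ] ]
    )
    echo "Lambda returned: ${result}"
    // if the response comes back as a JSON string, it can be parsed, e.g.:
    // def parsed = readJSON text: result.toString()
}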

Nested JSON path with dot/period in Azure

The goal is to use the az iot edge deployment update command to change a module in an Azure IoT Hub/Edge deployment. The attempt to do this uses the property path within the deployment configuration JSON to replace the image path. The problem is that there is a dot in the JSON property properties.desired, and attempts at escaping it have been futile. The file is a default Azure deployment configuration file.
Command format
az iot edge deployment update --deployment-id <name-of-deployment> --hub-name <name-of-iot-hub> --set <json-path>=<new-value>
First part of the deployment configuration (JSON)
The goal is to change the value of image:
{
    "content": {
        "modulesContent": {
            "$edgeAgent": {
                "properties.desired": {
                    "modules": {
                        "demoimage1-latest": {
                            "settings": {
                                "image": "demoworkspac2478a907.azurecr.io/demoimage1:6",
The most obvious attempt
az iot edge deployment update --deployment-id demoimage1-6 --hub-name iot-hubski --set content.modulesContent.'$edgeAgent'.'properties.desired'.modules.'demoimage1-latest'.settings.image=demoworkspac2478a907.azurecr.io/demoimage1:5
Gives
Couldn't find 'properties' in 'content.modulesContent.$edgeAgent.properties.desired.modules.demoimage1-latest'. Available options: ['properties.desired']
Status
Many escaping variants have been tried, using both bash (Ubuntu LTS VM) and PowerShell (Win10):
[properties.desired]
'[properties.desired]'
['properties.desired']
properties\.desired
properties.desired`
properties.desired
'..."properties.desired"...'
'...\"properties.desired\"...'
'$edgeAgent'[properties.desired]
'$edgeAgent'['properties.desired']
^[properties.desired^]
^^[properties.desired^^]
``[properties.desired]
```[properties.desired``]`
You need to manually stringify the $edgeHub JSON:
az iot edge deployment update --deployment-id testedge --hub-name microwaves --set content.modulesContent.'$edgeHub'="{'properties.desired': {'routes': {'route': 'FROM /messages/* INTO $upstream'},'schemaVersion': '1.0','storeAndForwardConfiguration': {'timeToLiveSecs': 7201}}}"
However, it doesn't do anything, because content is immutable. The items that can be updated by the az iot edge deployment update command are labels, metrics, priority and targetCondition; labels and metrics do not allow values with '.' in the name.
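Since content cannot be updated in place, one workaround (a sketch using the same placeholders as the commands above, and assuming the full deployment JSON is kept in a local file) is to change the image value in that file and create a replacement deployment:
# edit the image value in the local deployment file, then create a new deployment
az iot edge deployment create \
  --deployment-id <new-name-of-deployment> \
  --hub-name <name-of-iot-hub> \
  --content <path-to-updated-deployment-json> \
  --target-condition "<same-target-condition>" \
  --priority <priority>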

How to expose OpenShift environment variables in a JSON file

I have installed node-push-server. The configuration is loaded from a JSON file like this:
{
    "webPort": 8000,
    "mongodbUrl": "mongodb://username:password@localhost/database",
    "gcm": {
        "apiKey": "YOUR_API_KEY_HERE"
    },
    "apn": {
        "connection": {
            "gateway": "gateway.sandbox.push.apple.com",
            "cert": "/path/to/cert.pem",
            "key": "/path/to/key.pem"
        },
        "feedback": {
            "address": "feedback.sandbox.push.apple.com",
            "cert": "/path/to/cert.pem",
            "key": "/path/to/key.pem",
            "interval": 43200,
            "batchFeedback": true
        }
    }
}
How can I set the environment variables for my application in this JSON file?
I don't think it's possible. You should be able to change all these settings in the code, though. For example, in Node you can read an environment variable with process.env.OPENSHIFT_VARIABLENAME.
Example for MongoDB connection string from docs:
//provide a sensible default for local development
mongodb_connection_string = 'mongodb://127.0.0.1:27017/' + db_name;
//take advantage of openshift env vars when available:
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
    mongodb_connection_string = process.env.OPENSHIFT_MONGODB_DB_URL + db_name;
}
As an alternative, there is a quick and easy deployable gear called AeroGear Push that might serve your needs.
Config files can be awkward because including them in your source repo isn't always a good move.
OpenShift deployments are mostly git push-driven, so there are several options for helping you correctly resolve your configs on the server.
Configuring your service using ENV vars is the most common approach, but since this one requires a flat file, you'll need to find a way to update the file with the correct values.
If you know what keys and values are needed, you should be able to write a script that updates the example json, or merges two json objects to produce a flat config file including the strings node-pushserver will expect.
It looks like mongodbUrl, webPort, (and domain?) would need to be populated with OpenShift-provided values (when available). config-multipaas might be able to help with that.
I would probably implement the config bootstrapping / merging work as a build step, allowing you to prep the config file and start node-pushserver in its usual way.
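For example, a minimal sketch of such a merge step, run before starting node-pushserver (the file names, the database name, and the OPENSHIFT_* variable names are assumptions based on the defaults shown above):
// merge-config.js - fill the example config with OpenShift-provided values
var fs = require('fs');

var db_name = 'pushserver'; // placeholder database name
var config = JSON.parse(fs.readFileSync('config.example.json', 'utf8'));

// prefer OpenShift env vars when they are set, otherwise keep the defaults
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
    config.mongodbUrl = process.env.OPENSHIFT_MONGODB_DB_URL + db_name;
}
if (process.env.OPENSHIFT_NODEJS_PORT) {
    config.webPort = parseInt(process.env.OPENSHIFT_NODEJS_PORT, 10);
}

fs.writeFileSync('config.json', JSON.stringify(config, null, 2));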