I am new to Jenkins. I have a pipeline job with parameters.
I want to create a JSON file and write my parameters into it
(and then let my jar file read that JSON file and run accordingly).
How can I do this in Groovy?
This is my Jenkinsfile:
pipeline {
    agent {
        label "create_pass_criteria"
    }
    parameters {
        string(name: 'IP', description: 'Please enter your ip')
        password(name: 'PASSWORD', description: 'Please enter your mx password')
        string(name: 'NAME', description: 'Please enter the name')
    }
    tools {
        maven 'maven-3.3.9'
    }
    options {
        buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '20'))
        gitLabConnection('gitlab')
    }
    stages {
        stage('Git Clone') {
            steps {
                updateGitlabCommitStatus name: 'Build', state: 'running'
                checkout([
                    $class                           : 'GitSCM',
                    branches                         : [[name: '*/master']],
                    doGenerateSubmoduleConfigurations: false,
                    extensions                       : [],
                    submoduleCfg                     : [],
                    userRemoteConfigs                : [[credentialsId: GIT_CRED, url: GIT_PATH]]
                ])
            }
        }
        stage('Build') {
            steps {
                sh 'mvn install'
            }
        }
        stage('run') {
            steps {
                sh 'java -jar /var/lib/jenkins/workspace/create_pass_criteria/target/create_pass_criteria-8.0.125-SNAPSHOT.jar'
            }
        }
    }
    post {
        success {
            updateGitlabCommitStatus name: 'Build', state: 'success'
            emailext(
                to: EMAIL_ADDR,
                subject: "Success Pipeline: ${currentBuild.fullDisplayName}",
                body: "Pipeline URL: ${env.BUILD_URL}",
                mimeType: 'text/html'
            )
        }
        failure {
            updateGitlabCommitStatus name: 'Build', state: 'failed'
            emailext(
                to: EMAIL_ADDR,
                subject: "Failed Pipeline: ${currentBuild.fullDisplayName}",
                body: "Pipeline URL: ${env.BUILD_URL}",
                mimeType: 'text/html'
            )
        }
    }
} // pipeline
I don't know if it is correct, but is this what I need to add to my Jenkinsfile?
node {
    // to create JSON, declare a sequence of maps/arrays in Groovy
    // here is the data according to your sample
    def data = [
        attachments: [
            [
                mxIp      : params.MX_IP,
                mxPassword: params.MX_PASSWORD,
                policyName: params.POLICY_NAME,
            ]
        ]
    ]
    writeJSON(file: 'parameters.json', json: data)
}
If yes, where in the file does it have to go?
You could put this code in a script block like this:
stage('run') {
    steps {
        script {
            def data = [
                attachments: [
                    [
                        mxIp      : params.MX_IP,
                        mxPassword: params.MX_PASSWORD,
                        policyName: params.POLICY_NAME,
                    ]
                ]
            ]
            writeJSON(file: 'parameters.json', json: data)
        }
        sh 'java -jar /var/lib/jenkins/workspace/create_pass_criteria/target/create_pass_criteria-8.0.125-SNAPSHOT.jar'
    }
}
In complex pipelines I try to create clean code by adhering to the single level of abstraction principle. In this case I would extract the script and sh steps into a separate function, which could then be called from the pipeline section as a single step:
stage('run') {
    steps {
        createPassCriteria()
    }
}
Define the function after the closing } of the pipeline section:
void createPassCriteria() {
    def data = [
        attachments: [
            [
                mxIp      : params.MX_IP,
                mxPassword: params.MX_PASSWORD,
                policyName: params.POLICY_NAME,
            ]
        ]
    ]
    writeJSON(file: 'parameters.json', json: data)
    sh 'java -jar /var/lib/jenkins/workspace/create_pass_criteria/target/create_pass_criteria-8.0.125-SNAPSHOT.jar'
}
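Note that writeJSON comes from the Pipeline Utility Steps plugin and writes the file relative to the current workspace, which is where your jar will have to look for it. If you prefer not to rely on the working directory, one option (assuming your jar accepts the JSON path as a command-line argument, which is only a hypothetical here) is to pass the path explicitly:
// hypothetical: assumes the jar takes the path to the JSON file as its first argument
sh "java -jar target/create_pass_criteria-8.0.125-SNAPSHOT.jar ${env.WORKSPACE}/parameters.json"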
Related
I got stuck on splitting the values in JSON. I want to split the values for the key ServerIPList. Here's the JSON format:
{
ServerIPList : [
<PrivateIP> <HOSTNAME> <REGION> <AWS ACCOUNT>
{ 172.00.00.00 ,ip-172-00-00-00.ec2.internal ,us-east-1,123456789123 } ,
] ,
Operation : start,
SNowTicket : RITM00001
}
Here's my code:
import groovy.json.*
def message
pipeline {
    agent any
    options {
        buildDiscarder logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: '', numToKeepStr: '5')
    }
    stages {
        stage('Receive SQS Message') {
            steps {
                script {
                    def aws = new AwsUtil()
                    def success = aws.receiveSqsMessages(env.AWS_SQS_QUEUE, 1, { json ->
                        message = new Util().jsonSlurper(json.Body)
                        println message
                        println message.ServerIPList
                        println message.Operation
                        result = sh(
                            script: "python ec2InstanceState.py ${message.ServerIPList} ${message.Operation}",
                            returnStdout: true
                        ).trim()
                    })
                }
            }
        }
    }
}
Output of my code:
Body:{
"ServerIPList" : "172.24.8.73","ip-172-00-00-00.ec2.internal","us-east-1","123456789123"
"Operation" : "start",
"SNowTicket" : "RITM1062357"
},
"172.24.8.73","ip-172-00-00-00.ec2.internal","us-east-1","123456789123"
"start"
"python ec2InstanceState.py "172.24.8.73","ip-172-00-00-00.ec2.internal","us-east-1","123456789123" start,
Now I would like to extract only the PrivateIP from the key "ServerIPList", so that the call becomes:
python ec2InstanceState.py 172.24.8.73 start
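A minimal Groovy sketch, assuming message.ServerIPList arrives as a single comma-separated String (as the printed output above suggests): split it on the comma, strip the quotes, and keep only the first field before calling the Python script:
// sketch: keep only the PrivateIP field, assuming ServerIPList is one
// comma-separated String such as '"172.24.8.73","ip-172-00-00-00.ec2.internal",...'
def privateIp = message.ServerIPList.split(',')[0].replaceAll('"', '').trim()
result = sh(
    script: "python ec2InstanceState.py ${privateIp} ${message.Operation}",
    returnStdout: true
).trim()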
I've been using VeeValidate v2 and had something like this:
const customErrors = {
  custom: {
    someField: {
      required: 'error.required',
    },
    ...
  }
};
VeeValidate.Validator.localize('en', customErrors);
I have JSON files, for example en.json, de.json, fr.json, etc., which look like this:
// en.json
{
  "something": {
    "something1": "phrase1",
    "something2": "phrase2"
  },
  "error": {
    "required": "Field is required"
  }
}
In v2 this worked and errors were translated.
I updated vee-validate to v4 because of the Vue update to v3, and I don't know how to achieve the same effect. Now I have:
import { configure } from 'vee-validate';
import { localize } from '@vee-validate/i18n';
// VeeValidate.Validator.localize('en', customErrors);
configure({
  generateMessage: localize('en', customErrors),
});
I also changed customErrors:
const customErrors = {
  fields: {
    someField: {
      required: 'error.required',
    },
    ...
  }
};
With this config, my error message is just error.required instead of the value of that field from the JSON file. Can somebody help?
What I would like to achieve is to generate Event Grid subscriptions easily using Bicep, because doing this manually costs a lot of time; I have to create over a dozen each day.
I have the following Bicep file called main.bicep:
param eventSubscriptionName string = 'eventSubName'
param storageAccountName string = 'storeAccountName'
param deadLetterAccountName string = 'deadlttrstore'
param serviceBusQueueName string = 'queue.name.enter'
param onrampName string = 'storagecontainername'
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: storageAccountName
}
resource deadLetterAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: deadLetterAccountName
}
resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2021-11-01' existing = {
  name: serviceBusQueueName
}
resource eventgridsubscription 'Microsoft.EventGrid/eventSubscriptions@2021-12-01' = {
  name: eventSubscriptionName
  scope: storageAccount
  properties: {
    deadLetterDestination: {
      endpointType: 'StorageBlob'
      properties: {
        blobContainerName: 'storage-deadletters'
        resourceId: deadLetterAccount.id
      }
    }
    destination: {
      endpointType: 'ServiceBusQueue'
      properties: {
        deliveryAttributeMappings: [
          {
            name: serviceBusQueueName
            type: 'Static'
            properties: {
              isSecret: false
              value: ''
            }
          }
        ]
        resourceId: serviceBusQueue.id
      }
    }
    eventDeliverySchema: 'EventGridSchema'
    filter: {
      enableAdvancedFilteringOnArrays: false
      includedEventTypes: [
        'Microsoft.Storage.BlobCreated'
      ]
      isSubjectCaseSensitive: false
      subjectBeginsWith: '/blobServices/default/containers/${onrampName}'
      subjectEndsWith: '.json'
    }
    retryPolicy: {
      eventTimeToLiveInMinutes: 1440
      maxDeliveryAttempts: 5
    }
  }
}
When I try to create the event subscription using the Azure CLI with:
az deployment group create -f main.bicep -g <resource-group>
I get the following error:
{
"status": "Failed",
"error":
{
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details":
[
{
"code": "BadRequest",
"message": "{\r\n \"error\":
{\r\n \"code\": \"InvalidTemplate\",\r\n \"message\": \"Unable to process template language expressions for resource '/subscriptions/x1234456-f9cc-44e5-bc40-5f02d962f2d7/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/providers/Microsoft.EventGrid/eventSubscriptions/eventSubName' at line '34' and column '5'. 'The language expression property array index '1' is out of bounds.'\",\r\n \"additionalInfo\":
[\r\n {\r\n \"type\": \"TemplateViolation\",\r\n \"info\": {\r\n \"lineNumber\": 34,\r\n \"linePosition\": 5,\r\n \"path\": \"\"\r\n }\r\n }\r\n ]\r\n }\r\n}"
}
]
}
}
I am working according to the template documented at MS here:
https://learn.microsoft.com/en-us/azure/templates/microsoft.eventgrid/eventsubscriptions?tabs=bicep
Eventually the solution was quite simple: the Service Bus queue resource was missing its parent resource, namely the Service Bus namespace. Once that was added, it worked.
resource serviceBus 'Microsoft.ServiceBus/namespaces@2021-11-01' existing = {
  name: serviceBusName
}
and
resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2021-11-01' existing = {
  parent: serviceBus
  name: serviceBusQueueName
}
so the full main.bicep becomes:
param eventSubscriptionName string = 'eventSubName'
param storageAccountName string = 'storeAccountName'
param deadLetterAccountName string = 'deadlttrstore'
param serviceBusQueueName string = 'queue.name.enter'
param onrampName string = 'storagecontainername'
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: storageAccountName
}
resource deadLetterAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: deadLetterAccountName
}
resource serviceBus 'Microsoft.ServiceBus/namespaces@2021-11-01' existing = {
  name: serviceBusName
}
resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2021-11-01' existing = {
  parent: serviceBus
  name: serviceBusQueueName
}
resource eventgridsubscription 'Microsoft.EventGrid/eventSubscriptions@2021-12-01' = {
  name: eventSubscriptionName
  scope: storageAccount
  properties: {
    deadLetterDestination: {
      endpointType: 'StorageBlob'
      properties: {
        blobContainerName: 'storage-deadletters'
        resourceId: deadLetterAccount.id
      }
    }
    destination: {
      endpointType: 'ServiceBusQueue'
      properties: {
        deliveryAttributeMappings: [
          {
            name: serviceBusQueueName
            type: 'Static'
            properties: {
              isSecret: false
              value: 'some-value'
            }
          }
        ]
        resourceId: serviceBusQueue.id
      }
    }
    eventDeliverySchema: 'EventGridSchema'
    filter: {
      enableAdvancedFilteringOnArrays: false
      includedEventTypes: [
        'Microsoft.Storage.BlobCreated'
      ]
      isSubjectCaseSensitive: false
      subjectBeginsWith: '/blobServices/default/containers/${onrampName}'
      subjectEndsWith: '.json'
    }
    retryPolicy: {
      eventTimeToLiveInMinutes: 1440
      maxDeliveryAttempts: 5
    }
  }
}
I tried to reproduce the error message you posted but could not get the same result. I did get an error message because of value: '':
{
  name: serviceBusQueueName
  type: 'Static'
  properties: {
    isSecret: false
    value: ''
  }
}
When I updated to the following, it worked:
{
  name: serviceBusQueueName
  type: 'Static'
  properties: {
    isSecret: false
    value: 'some-value'
  }
}
The error message I saw with the empty string was:
Null or empty value for static delivery attribute queue-name-enter. Static delivery attribute value must be a non-empty string.
After adding some random text, the deployment completed successfully.
Is there a shorthand way of figuring out the parameter type based on the JSON evaluated by the readJSON Groovy step? I am using the resulting event_processor_parameters in a job like this:
build job: "dvmt-event-processor-dev", wait: false, parameters: event_processor_parameters
I have this working, but I would like a cleaner way:
props = readJSON text: env.hb_job_params
for (param in props.get(application_server)) {
    if (param.value.getClass() == Boolean) {
        event_processor_parameters.add([$class: 'BooleanParameterValue', name: param.key, value: param.value])
    }
    else if (param.value.getClass() == String) {
        event_processor_parameters.add([$class: 'StringParameterValue', name: param.key, value: param.value])
    }
}
env.hb_job_params ==>
{
"server1": {
"ENV": "DEV",
"dev_xbar_host": "xbarserver1",
"platform_type" : "o2",
"dev_app_host" : "server1",
"VERSION" : "1.0.0.23",
force_build: false
}
}
As a variant:
for (param in props.get(application_server)) {
    def clazz = "${param.value.getClass().getSimpleName()}ParameterValue"
    event_processor_parameters.add([$class: clazz, name: param.key, value: param.value])
}
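A caveat with this variant: it only works while every value's runtime type has a matching *ParameterValue class (Boolean maps to BooleanParameterValue, String to StringParameterValue); other JSON types would need explicit handling. A usage sketch, assuming props, application_server and the downstream job name from the question:
// build the parameter list from the parsed JSON and trigger the downstream job
def event_processor_parameters = []
for (param in props.get(application_server)) {
    def clazz = "${param.value.getClass().getSimpleName()}ParameterValue"
    event_processor_parameters.add([$class: clazz, name: param.key, value: param.value])
}
build job: 'dvmt-event-processor-dev', wait: false, parameters: event_processor_parameters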
I have a couple of JSON objects and I need to delete one of them if it contains specific information. For example, I need to delete an object if its state is RUNNING.
INPUT
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "alexandria.opt"
value {
name: "alexandria.opt"
state: RUNNING
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "android.opt"
state: COMPLETE
result: PASSED
}
}
OUTPUT
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "androids.opt"
state: COMPLETE
result: PASSED
}
}
Your structure isn't valid JSON. For such structures you need a more relaxed parser. Fortunately, the JSONY Perl module can parse it. From the docs:
JSONY is a data language that is similar to JSON, just more chill. All
valid JSON is also valid JSONY (and represents the same thing when
loaded), but JSONY lets you omit a lot of the syntax that makes JSON a
pain to write.
The following Perl code does what you want.
#!/usr/bin/env perl
use 5.014;
use warnings;
use JSONY;
my $string = slurp_file();
my $data   = JSONY->new->load($string);
for my $proj (@{$data}) {
    next unless ref($proj);
    next if $proj->{value}->{state} eq 'RUNNING';
    pretty_print_proj($proj);
}
sub pretty_print_proj {
    my $p = shift;
    say "project {";
    say qq{\tkey: "$p->{key}"};
    say "\tvalue {";
    say "\t\t$_: ", $p->{value}->{$_} for (qw(name state result));
    say "\t}";
    say "}";
}
sub slurp_file {
    # change this for your real case...
    return do { local $/; <DATA> };
}
__DATA__
projects {
key: "ads_evenflow.opt"
value {
name: "ads_evenflow.opt"
state: COMPLETE
result: PASSED
}
}
projects {
key: "alexandria.opt"
value {
name: "alexandria.opt"
state: RUNNING
result: PASSED
}
}
projects {
key: "android.opt"
value {
name: "android.opt"
state: COMPLETE
result: PASSED
}
}
prints:
project {
key: "ads_evenflow.opt"
value {
name: ads_evenflow.opt
state: COMPLETE
result: PASSED
}
}
project {
key: "android.opt"
value {
name: android.opt
state: COMPLETE
result: PASSED
}
}