I have installed the Pipeline Utility Steps plugin in my Jenkins, and I used to use readJSON and readYaml without any issue.
A month later, when I tried them again, I got the following error for both:
groovy.lang.MissingMethodException: No signature of method: Script1.readJSON() is applicable for argument types: (java.util.LinkedHashMap) values: [[file:/data/ecsnames/dev_ECSNames.json]]
The error is similar for readYaml step as well.
I am not sure why it suddenly stopped working. The only thing I got from one of my teammates is that Jenkins was updated to version 2.235.5 a couple of weeks ago.
I used the following command:
def clstrndsrvcnme = readJSON file: "/data/ecsnames/dev_ECSNames.json"
Can anyone help me with this? And what does this error mean?
Update:
I was trying the above command at JenkinsURL/script, the little IDE for running Groovy scripts, where I do all kinds of debugging. At that location it was throwing the error.
But when I run the same commands from a Jenkins job, they work perfectly fine and I am able to read values from YAML and JSON. So what I believe is that somehow JenkinsURL/script is not able to use the Pipeline Utility Steps plugin.
I am able to do my work, but I still wanted to understand why it fails at JenkinsURL/script.
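For what it's worth, readJSON and readYaml are Pipeline steps: as far as I can tell, steps contributed by Pipeline plugins are only bound inside a running Pipeline build, which is why the Script Console (plain Groovy, no Pipeline context) cannot see them. For console debugging, standard Groovy's JsonSlurper is a workable substitute. A sketch (the sample file contents here are made up; in the question the path was /data/ecsnames/dev_ECSNames.json):

```groovy
import groovy.json.JsonSlurper

// Plain-Groovy substitute for the readJSON step, usable in the Script Console.
// Create a throwaway file just to make the sketch self-contained.
def f = File.createTempFile('dev_ECSNames', '.json')
f.text = '{"cluster": "dev", "service": "ecs-app"}'   // sample contents (made up)

// JsonSlurper.parse(File) returns maps/lists, much like readJSON does.
def clstrndsrvcnme = new JsonSlurper().parse(f)
assert clstrndsrvcnme.cluster == 'dev'
println clstrndsrvcnme
```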
I spent most of a day on this same problem. This fails in the Script Console:
def jstring = '{"one":1, "two":2}'
def jobj = readJSON text: jstring
Thanks to your post, I tried it in a test pipeline and it works:
pipeline {
    agent any
    stages {
        stage('readjson') {
            steps {
                script {
                    def jstring = '{"one":1, "two":2}'
                    def jobj = readJSON text: jstring
                    echo jobj.toString()
                }
            }
        }
    }
}
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in C:\Program Files (x86)\Jenkins\workspace\readJSON
[Pipeline] {
[Pipeline] stage
[Pipeline] { (readjson)
[Pipeline] script
[Pipeline] {
[Pipeline] readJSON
[Pipeline] echo
{"one":1,"two":2}
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I've logged an issue in the Jenkins Jira for this plugin: https://issues.jenkins.io/browse/JENKINS-65910
I'll update this post with any responses from Jenkins.
Related
I am getting the below error when trying to run md-to-pdf (see https://www.npmjs.com/package/md-to-pdf) in a Bitbucket Pipeline script (see script below).
Error
ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported.
See https://crbug.com/638180.
bitbucket-pipelines.yaml file
image: buildkite/puppeteer:v1.15.0
pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install -g md-to-pdf
          - doc="appendix"
          - md-to-pdf --config-file config.json ${doc}.md
config.json file
I tried to follow the instructions for this. Is this config.json malformed?
{
  "launch_options": {
    "args": ["no-sandbox"]
  }
}
The correct syntax is:
{
  "launch_options": {
    "args": ["--no-sandbox"]
  }
}
I am defining my environment variables in my Jenkinsfile.
I am using the Pipeline Utility Steps plugin to read a JSON file in the directory that holds the configuration.
When I echo out the parsed JSON, the output is correct: it reads and prints the JSON file correctly.
When I try to access the value associated with a key in that JSON object, I get the error: "No such property: internalCentralConsoleUrl for class: java.lang.String".
The JSON config file looks like the following:
{
  "activeVersion": "19.01.303",
  "internalCentralConsoleUrl": "https://11.111.111:8083/api/v1",
  "dataType": "APPLICATION_JSON"
}
I am reading that file using readJSON in the pipeline, and in the following lines I try to access the value inside the JSON object using its key, which gives the error I mentioned above.
pipeline {
    agent any
    environment {
        config = readJSON file: 'config.json'
        baseUrl = "${config.internalCentralConsoleUrl}"
        baseUrl2 = config['internalCentralConsoleUrl']
    }
    stages {}
}
Both of the ways I tried above to read the JSON value are documented on the Jenkins page linked here.
I cannot wrap my head around what is causing an issue in this straightforward task.
Edit 1: Just corrected a formatting mistake in the pipeline.
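For reference, a common workaround (an assumption on my part, not something from the linked docs) is to do the parsing in a script block rather than the environment block, so key access happens on the real JSON object before anything is converted to an environment-variable string. A minimal sketch (the stage name is mine):

```groovy
pipeline {
    agent any
    stages {
        stage('Load config') {
            steps {
                script {
                    // readJSON returns a JSON object, so key access works directly.
                    def config = readJSON file: 'config.json'
                    env.BASE_URL = config.internalCentralConsoleUrl
                    echo env.BASE_URL
                }
            }
        }
    }
}
```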
I copied your example and added a stage to print the variable:
pipeline {
    agent any
    environment {
        def config = readJSON file: 'config.json'
        baseUrl = "${config.internalCentralConsoleUrl}"
    }
    stages {
        stage('Test') {
            steps {
                echo baseUrl
            }
        }
    }
}
And it prints the variable correctly without any exception:
[Pipeline] {
[Pipeline] readJSON
[Pipeline] readJSON
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
https://11.111.111:8083/api/v1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
After RNoB's comment above that it works fine in his Jenkins, I came to the conclusion that it has nothing to do with the pipeline, and that it might be:
a. Jenkins Plugin issue.
b. Jenkins itself.
c. Host where Jenkins is running.
So, I took the following approach:
a. I upgraded the plugins and reinstalled them. This did not fix the problem.
b. I uninstalled Jenkins, removed all Jenkins-related files, reinstalled it, and installed all the plugins again. This fixed the problem.
I still don't know what exactly was wrong; it might just have been a corrupt file. I am not a Jenkins expert, but this solved the issue for me. I hope this will be helpful for somebody who is having a similar issue.
I am trying to run the gulp build task for the dev environment on the server, but it is failing. However, the same gulp build works on my local machine. The function and error are given below.
Function:
// S3 upload for dev
gulp.task('s3sync:dev', function () {
    var config = {
        accessKeyId: "-Key-",
        secretAccessKey: "-Key-"
    };
    var s3 = require('gulp-s3-upload')(config);
    return gulp.src("./dist/**")
        .pipe(s3({
            Bucket: 'example',
            ACL: 'public-read'
        }, {
            maxRetries: 5
        }));
});
Command:
gulp build:development
Error:
[09:01:04] Starting 's3sync:dev'...
events.js:160
throw er; // Unhandled 'error' event
^
Error: EISDIR: illegal operation on a directory, read
at Error (native)
Any idea?
Finally, this problem was solved by removing a symlink that had been created during deployment by Capistrano, which also runs the npm commands below.
npm run clean && npm run build
After removing the symlink, I ran the command below and it works fine.
gulp build:development
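For anyone hitting EISDIR without a stray symlink to remove: the error typically means something in the stream tried to read() a directory. One hedged workaround (my assumption, not from the answer above) is to exclude directory matches at the source; gulp.src passes extra options through to node-glob, and nodir is a node-glob option, though whether it is honored depends on the glob version your gulp pulls in, so verify before relying on it:

```javascript
var gulp = require('gulp');

gulp.task('s3sync:dev', function () {
    var s3 = require('gulp-s3-upload')({
        accessKeyId: "-Key-",
        secretAccessKey: "-Key-"
    });
    // nodir tells the underlying glob to skip directory matches, so the
    // stream never tries to read() a directory (the cause of EISDIR here).
    return gulp.src("./dist/**", { nodir: true })
        .pipe(s3({ Bucket: 'example', ACL: 'public-read' }, { maxRetries: 5 }));
});
```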
I have a simple Groovy script that I am using with a Jenkins pipeline, and it fails on the git merge operation with a rather strange exception:
The script:
node("master") {
    ws(env.BUILD_NUMBER.toString()) { // workspace
        withCredentials([
                [$class: 'UsernamePasswordBinding', credentialsId: 'bitbucket', variable: 'BITBUCKET_AUTH'],
                [$class: 'UsernamePasswordBinding', credentialsId: 'bitbucket-https', variable: 'BITBUCKET_HTTPS_AUTH']]) {
            def applicationName = env.CUSTOMER_NAME
            def packageName = "no.bstcm.loyaltyapp.${env.REPO_NAME}"
            def googleServicesJsonContents = env.GOOGLE_SERVICES_JSON
            def bitbucketRepoName = "android_loyalty_app_${env.REPO_NAME}"
            def bitbucketRepoApiUrl = "https://api.bitbucket.org/2.0/repositories/boost-development/${bitbucketRepoName}"
            def starterBranch = "shopping_mall"
            def projectPath = "jenkins-project"

            stage('Create repository on bitbucket') {
                sh "curl -X POST -v -u $BITBUCKET_AUTH $bitbucketRepoApiUrl -H 'Content-Type: application/json' -d '{\"is_private\": true, \"project\": {\"key\": \"LOY\"}}'"
            }

            stage('Create local repository') {
                dir(projectPath) {
                    sh "git init"
                    sh "touch README.md"
                    sh "git add README.md"
                    sh "git commit --message='Initial commit'"
                }
            }

            stage('Merge starter') {
                dir(projectPath) {
                    sh "git init"
                    sh "git remote add starter https://$BITBUCKET_HTTPS_AUTH@bitbucket.org/boost-development/app_designer_starter_android.git"
                    sh "git fetch starter"
                    sh "git checkout master" // <--- FAILS HERE
                    sh "git remote add origin https://$BITBUCKET_HTTPS_AUTH@bitbucket.org/boost-development/$bitbucketRepoName.git"
                    sh "git push -u origin master"
                    sh "git remote remove starter"
                }
            }
        }
    }
}
And the exception I receive (which breaks the pipeline):
[Pipeline] sh
[jenkins-project] Running shell script
+ git fetch starter
From https://bitbucket.org/***/***
* [new branch] master -> starter/master
[Pipeline] sh
[jenkins-project] Running shell script
+ git checkout master
Already on 'master'
Branch master set up to track remote branch master from starter.
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // ws
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: git for class: org.codehaus.groovy.runtime.GStringImpl
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
    at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:458)
    at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:243)
    at org.kohsuke.groovy.sandbox.GroovyInterceptor.onGetProperty(GroovyInterceptor.java:52)
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:308)
    at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
    at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
    at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
    at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
    at WorkflowScript.run(WorkflowScript:36)
    at ___cps.transform___(Native Method)
Do you guys have any idea what could cause this problem? I have no idea, and Google doesn't help much here.
The trouble is in this kind of Groovy string:
sh ".... $bitbucketRepoName.git ...."
In this case Groovy tries to access the property git of the variable bitbucketRepoName.
Just change it to this:
sh ".... ${bitbucketRepoName}.git ...."
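The behavior is easy to reproduce in plain Groovy: in "$name.git" the whole dotted expression name.git is interpolated (a property access), while "${name}.git" ends the interpolation before the literal .git. A minimal sketch, using a made-up variable value:

```groovy
def bitbucketRepoName = 'my_repo'   // a plain String

// "${bitbucketRepoName}.git" interpolates the variable, then appends ".git".
assert "${bitbucketRepoName}.git" == 'my_repo.git'

// "$bitbucketRepoName.git" is parsed as bitbucketRepoName.git, i.e. a
// property access on a String -- which is exactly the exception above.
def thrown = false
try {
    def s = "$bitbucketRepoName.git"
} catch (MissingPropertyException e) {
    thrown = e.message.contains('No such property: git')
}
assert thrown
```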
I am having issues with Mocha finding a module that exists. I have a data folder that contains all my test data. This folder exists and is at the correct directory (confirmed with pwd). When I run the test with mocha and the test filename, the tests pass, but when I use grunt test I get the following error.
Running "mochaTest:tests" (mochaTest) task
>> Mocha exploded!
>> Error: Cannot find module '/Users/user/Documents/project/project_dispatcher/tests/data'
>> at Function.Module._resolveFilename (module.js:339:15)
>> at Function.Module._load (module.js:290:25)
>> at Module.require (module.js:367:17)
>> at require (internal/module.js:16:19)
>> at /Users/user/Documents/project/project_dispatcher/node_modules/mocha/lib/mocha.js:219:27
>> at Array.forEach (native)
>> at Mocha.loadFiles (/Users/user/Documents/project/project_dispatcher/node_modules/mocha/lib/mocha.js:216:14)
>> at MochaWrapper.run (/Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/lib/MochaWrapper.js:51:15)
>> at /Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/mocha-test.js:86:20
>> at capture (/Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/mocha-test.js:33:5)
Warning: Task "mochaTest:tests" failed. Use --force to continue.
I am assuming this is a config issue, but I am lost as to what is causing it. I have the following structure:
->test
-->testfile.js
-->data
---->datafile.json
And I include the json file using:
var requestObj = require('./data/workflowRequest.json');
Is the ./ no longer valid because I am using Grunt, which is located above the test folder? I am also confused because the stack trace only contains Mocha files and none of the test files that I wrote.
Edit: The Grunt code may be useful too:
mochaTest: {
    tests: {
        src: ['tests/**/*'],
        options: {
            reporter: 'spec'
        }
    },
    watch: {
        src: ['tests/**/*'],
        options: {
            reporter: 'nyan'
        }
    },
    tap: {
        src: ['tests/**/*'],
        options: {
            reporter: 'tap'
        }
    }
},
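One likely culprit, judging from the stack trace: the glob tests/**/* matches everything under tests, including the data directory itself, and grunt-mocha-test hands every match to Mocha to require() — requiring a bare directory with no index.js is exactly what produces "Cannot find module .../tests/data". A hedged fix (assuming your test files end in .js) is to restrict the glob to test scripts:

```javascript
// Only hand actual test scripts to Mocha; leave tests/data/** alone.
mochaTest: {
    tests: {
        src: ['tests/**/*.js'],
        options: {
            reporter: 'spec'
        }
    }
}
```

Running mocha by hand works because there you name the test file explicitly, so the data directory is never treated as a test.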