I am defining my environment variables in my Jenkinsfile.
I am using the Pipeline Utility Steps plugin to read a JSON file in the directory that holds the configuration.
When I echo the result of the read, the output is correct: the JSON file is read and printed as expected.
But when I try to access the value associated with a key in that JSON object, I get the error: "No such property: internalCentralConsoleUrl for class: java.lang.String"
The JSON config file looks like the following:
{
    "activeVersion": "19.01.303",
    "internalCentralConsoleUrl": "https://11.111.111:8083/api/v1",
    "dataType": "APPLICATION_JSON"
}
I read that file using readJSON in the pipeline, and in the following lines I try to access the value inside the JSON object by its key, which gives the error mentioned above.
pipeline {
    agent any
    environment {
        config = readJSON file: 'config.json'
        baseUrl = "${config.internalCentralConsoleUrl}"
        baseUrl2 = config['internalCentralConsoleUrl']
    }
    stages {}
}
Both ways of reading the JSON value that I tried above are documented on the Jenkins page linked here.
I cannot wrap my head around what is causing an issue in such a straightforward task.
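For context: values assigned in a declarative environment block are coerced to plain strings, which would explain the java.lang.String in the error message. A minimal sketch of the usual workaround, assuming that coercion is the culprit, is to read the JSON inside a script block and export only the extracted string:
pipeline {
    agent any
    stages {
        stage('Load config') {
            steps {
                script {
                    // readJSON returns a real JSON object here, outside the environment block
                    def config = readJSON file: 'config.json'
                    // environment values are plain strings, so assign the extracted value directly
                    env.BASE_URL = config.internalCentralConsoleUrl
                }
            }
        }
    }
}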
Edit 1: just corrected a formatting mistake in the pipeline.
I copied your example and added a stage to print the variable:
pipeline {
    agent any
    environment {
        def config = readJSON file: 'config.json'
        baseUrl = "${config.internalCentralConsoleUrl}"
    }
    stages {
        stage('Test') {
            steps {
                echo baseUrl
            }
        }
    }
}
And it prints the variable correctly without any exception:
[Pipeline] {
[Pipeline] readJSON
[Pipeline] readJSON
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
https://11.111.111:8083/api/v1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
After RNoB's comment above that it works fine in his Jenkins, I came to the conclusion that it has nothing to do with the pipeline and that it might be:
a. Jenkins Plugin issue.
b. Jenkins itself.
c. Host where Jenkins is running.
So, I took the following approach:
a. I upgraded the plugins and reinstalled them. This did not fix the problem.
b. I uninstalled Jenkins, removed all Jenkins-related files, reinstalled it, and installed all the plugins again. This fixed the problem.
I still don't know what exactly was wrong; it might just have been a corrupt file somewhere. I am not a Jenkins expert, but this solved the issue for me. I hope this is helpful for somebody having a similar issue.
Related
I installed the Pipeline Utility Steps plugin in my Jenkins and used to use readJSON and readYaml without any issue.
A month later, when I tried them, I got the following error for both:
groovy.lang.MissingMethodException: No signature of method: Script1.readJSON() is applicable for argument types: (java.util.LinkedHashMap) values: [[file:/data/ecsnames/dev_ECSNames.json]]
The error is similar for the readYaml step as well.
I am not sure how it suddenly stopped working. The only thing I got from one of my teammates is that Jenkins was updated to version 2.235.5 a couple of weeks ago.
I used the following command:
def clstrndsrvcnme = readJSON file: "/data/ecsnames/dev_ECSNames.json"
Can anyone help me with this? What does this error mean?
Update:
I was trying the above command at JenkinsURL/script, the little IDE for running Groovy scripts where I do all kinds of debugging. That is where it was throwing the error.
But when I run the same commands from a Jenkins job, they work perfectly fine and I am able to read values from YAML and JSON. So I believe that somehow JenkinsURL/script is not able to use the Pipeline Utility Steps plugin.
I am able to do my work, but I still wanted to understand why it fails at JenkinsURL/script.
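For what it's worth, JenkinsURL/script runs plain Groovy on the controller, outside any pipeline build, so the step bindings contributed by the Pipeline Utility Steps plugin (readJSON, readYaml) are not bound there. In that context, stock Groovy can do the same job; a minimal sketch (the key name is a placeholder):
import groovy.json.JsonSlurper

// parse the same file with stock Groovy instead of the pipeline step
def config = new JsonSlurper().parse(new File('/data/ecsnames/dev_ECSNames.json'))
println config.clusterName   // hypothetical key, for illustration only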
I spent most of a day on this same problem. This fails in the Script Console:
def jstring = '{"one":1, "two":2}'
def jobj = readJSON text: jstring
Thanks to your post, I tried it in a test pipeline and it works:
pipeline {
    agent any
    stages {
        stage('readjson') {
            steps {
                script {
                    def jstring = '{"one":1, "two":2}'
                    def jobj = readJSON text: jstring
                    echo jobj.toString()
                }
            }
        }
    }
}
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in C:\Program Files (x86)\Jenkins\workspace\readJSON
[Pipeline] {
[Pipeline] stage
[Pipeline] { (readjson)
[Pipeline] script
[Pipeline] {
[Pipeline] readJSON
[Pipeline] echo
{"one":1,"two":2}
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
I've logged an issue in the Jenkins Jira for this plugin: https://issues.jenkins.io/browse/JENKINS-65910
I'll update this post with any responses from Jenkins.
I am getting the below error when trying to run md-to-pdf (see https://www.npmjs.com/package/md-to-pdf) in a Bitbucket Pipeline script (see script below).
Error
ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported.
See https://crbug.com/638180.
bitbucket-pipelines.yaml file
image: buildkite/puppeteer:v1.15.0

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install -g md-to-pdf
          - doc="appendix"
          - md-to-pdf --config-file config.json ${doc}.md
config.json file
I tried to follow the instructions for this. Is this config.json malformed?
{
    "launch_options": {
        "args": ["no-sandbox"]
    }
}
The correct syntax is:
{
    "launch_options": {
        "args": ["--no-sandbox"]
    }
}
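The entries in args are handed through to Puppeteer's launch call, so each one has to be the complete Chromium flag. Roughly what happens under the hood (a sketch, not md-to-pdf's actual source):
const puppeteer = require('puppeteer');

(async () => {
  // launch_options from config.json end up here; Chromium only recognizes
  // the literal flag string '--no-sandbox', dashes included
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  await browser.close();
})();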
All, I hope this question belongs here.
I am following a Blockgeeks tutorial, trying to set up my environment for Ethereum blockchain development. I have basically gotten to the final step, installing swarm, but I am receiving an error that seems to be related to the structure of a folder on GitHub. How should I fix this?
Handy info:
- OS: Windows 10, running this project within Cygwin with the proper gcc dependencies installed
- Go version: 1.11.4
I have tried to find a solution for days now, but nothing I've found has worked. Any help is appreciated.
Basically, everyone says these steps work for them: https://swarm-guide.readthedocs.io/en/latest/installation.html#generic-linux
Maybe it's something with Cygwin?
When I attempt this command: $ go install -v ./cmd/swarm
I expect it to install properly, but I get this error:
unexpected directory layout:
import path: github.com/naoina/toml
root: C:\cygwin64\home\di203179\go\src
dir: C:\cygwin64\home\di203179\go\src\github.com\ethereum\go-ethereum\vendor\github.com\naoina\toml
expand root: C:\cygwin64\home\di203179\go\src
expand dir: C:\cygwin64\home\di203179\go\src\github.com\ethereum\go-ethereum\vendor\github.com\naoina\toml
separator: \
Update:
I think I found the code that throws this error here: https://golang.org/src/cmd/go/internal/load/pkg.go
And here is the snippet:
// dirAndRoot returns the source directory and workspace root
// for the package p, guaranteeing that root is a path prefix of dir.
func dirAndRoot(p *Package) (dir, root string) {
    dir = filepath.Clean(p.Dir)
    root = filepath.Join(p.Root, "src")
    if !str.HasFilePathPrefix(dir, root) || p.ImportPath != "command-line-arguments" && filepath.Join(root, p.ImportPath) != dir {
        // Look for symlinks before reporting error.
        dir = expandPath(dir)
        root = expandPath(root)
    }
    if !str.HasFilePathPrefix(dir, root) || len(dir) <= len(root) || dir[len(root)] != filepath.Separator || p.ImportPath != "command-line-arguments" && !p.Internal.Local && filepath.Join(root, p.ImportPath) != dir {
        base.Fatalf("unexpected directory layout:\n"+
            "	import path: %s\n"+
            "	root: %s\n"+
            "	dir: %s\n"+
            "	expand root: %s\n"+
            "	expand dir: %s\n"+
            "	separator: %s",
            p.ImportPath,
            filepath.Join(p.Root, "src"),
            filepath.Clean(p.Dir),
            root,
            dir,
            string(filepath.Separator))
    }
    return dir, root
}
It seems there are several path-related issues that could make a Go project throw this error. But I feel my path is correct, so I'm still at a loss...
Update 2:
I have confirmed that the first if-statement in that snippet runs, and that the first three conditions of the second if-statement resolve to false (meaning they are not the cause of the error). That means the last condition, the one composed of multiple AND-ed clauses, must be returning true, since the error is thrown. I still can't tell why, though. Thanks for any help.
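One thing worth checking, given that the failing clause compares filepath.Join(root, p.ImportPath) against the package directory: vendored imports only resolve when the build runs from a path Go recognizes as being inside GOPATH exactly, and symlinks or case differences (both common under Cygwin) can break that. A sketch of the check, assuming the default layout:
# confirm the build directory really is GOPATH/src/... exactly as Go sees it
go env GOPATH
cd "$(go env GOPATH)/src/github.com/ethereum/go-ethereum"
go install -v ./cmd/swarm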
I am having issues with Mocha failing to find a module that exists. I have a data folder that contains all my test data. This folder exists and is in the correct directory (confirmed with pwd). When I run the tests using mocha test filename, the tests pass, but when I use grunt test I get the following error.
Running "mochaTest:tests" (mochaTest) task
>> Mocha exploded!
>> Error: Cannot find module '/Users/user/Documents/project/project_dispatcher/tests/data'
>> at Function.Module._resolveFilename (module.js:339:15)
>> at Function.Module._load (module.js:290:25)
>> at Module.require (module.js:367:17)
>> at require (internal/module.js:16:19)
>> at /Users/user/Documents/project/project_dispatcher/node_modules/mocha/lib/mocha.js:219:27
>> at Array.forEach (native)
>> at Mocha.loadFiles (/Users/user/Documents/project/project_dispatcher/node_modules/mocha/lib/mocha.js:216:14)
>> at MochaWrapper.run (/Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/lib/MochaWrapper.js:51:15)
>> at /Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/mocha-test.js:86:20
>> at capture (/Users/user/Documents/project/project_dispatcher/node_modules/grunt-mocha-test/tasks/mocha-test.js:33:5)
Warning: Task "mochaTest:tests" failed. Use --force to continue.
I am assuming this is a config issue, but I am lost as to what is causing it. I have the following structure:
test
├── testfile.js
└── data
    └── datafile.json
And I include the JSON file using:
var requestObj = require('./data/workflowRequest.json');
Is the ./ no longer valid because I am using Grunt, which is located above the test folder? I am also confused because the stack trace only contains Mocha files and none of the test files I wrote.
Edit: the Grunt config may be useful too:
mochaTest: {
    tests: {
        src: ['tests/**/*'],
        options: {
            reporter: 'spec'
        }
    },
    watch: {
        src: ['tests/**/*'],
        options: {
            reporter: 'nyan'
        }
    },
    tap: {
        src: ['tests/**/*'],
        options: {
            reporter: 'tap'
        }
    }
},
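Incidentally, the stack trace shows Mocha.loadFiles calling require on .../tests/data itself, which suggests the glob is the problem: tests/**/* matches the data directory and its JSON files, so Mocha tries to load them as test files. A sketch of a narrower pattern, assuming all tests are .js files:
mochaTest: {
    tests: {
        // only pick up JavaScript test files; data/ stays available for the tests to require
        src: ['tests/**/*.js'],
        options: {
            reporter: 'spec'
        }
    }
}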
Please shed some light on what I'm doing wrong here. First of all, I'm a newbie with Gradle and Groovy, and for learning purposes I'm playing with them and DBUnit.
I tried the code listed below; my goal is to generate a dataset from the data in a MySQL database.
import groovy.sql.Sql
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;

repositories {
    mavenCentral()
}

configurations {
    dbunit
}

dependencies {
    dbunit 'dbunit:dbunit:2.2',
           'junit:junit:4.11',
           'mysql:mysql-connector-java:5.1.25'
}

URLClassLoader loader = GroovyObject.class.classLoader
configurations.dbunit.each { File file -> loader.addURL(file.toURL()) }

task listJars << {
    configurations.dbunit.each { File file -> println file.name }
}

task listTables << {
    getConnection("mydb").eachRow('show tables') { row -> println row[0] }
}

task generateDataSet << {
    def IDatabaseConnection conn = new DatabaseConnection(getConnection("mydb").connection)
    def IDataSet fullDataSet = conn.createDataSet()
    FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"))
}

static Sql getConnection(db) {
    def props = [user: 'dbuser', password: 'userpass', allowMultiQueries: 'true'] as Properties
    def url = (db) ? 'jdbc:mysql://host:3306/'.plus(db) : 'jdbc:mysql://host:3306/'
    def driver = 'com.mysql.jdbc.Driver'
    Sql.newInstance(url, props, driver)
}
What is weird to me is that all the MySQL methods work well: I can get the list of tables, and the connection is established, so mysql-connector-java.jar is being loaded (I think). But when I add the DBUnit stuff (the imports and the generateDataSet task), it seems the DBUnit jar is not available to the script, and I get the following errors:
FAILURE: Build failed with an exception.
* Where:
Build file '/home/me/tmp/dbunit/build.gradle' line: 5
* What went wrong:
Could not compile build file '/home/me/tmp/dbunit/build.gradle'.
> startup failed:
build file '/home/me/tmp/dbunit/build.gradle': 5: unable to resolve class org.dbunit.dataset.xml.FlatXmlDataSet
 @ line 5, column 1.
   import org.dbunit.dataset.xml.FlatXmlDataSet;
   ^
build file '/home/me/tmp/dbunit/build.gradle': 2: unable to resolve class org.dbunit.database.DatabaseConnection
 @ line 2, column 1.
   import org.dbunit.database.DatabaseConnection;
   ^
build file '/home/me/tmp/dbunit/build.gradle': 3: unable to resolve class org.dbunit.database.IDatabaseConnection
 @ line 3, column 1.
   import org.dbunit.database.IDatabaseConnection;
   ^
build file '/home/me/tmp/dbunit/build.gradle': 4: unable to resolve class org.dbunit.dataset.IDataSet
 @ line 4, column 1.
   import org.dbunit.dataset.IDataSet;
   ^
4 errors
But if I call the listJars task, I get this:
:listJars
junit-4.11.jar
mysql-connector-java-5.1.25.jar
hamcrest-core-1.3.jar
xercesImpl-2.6.2.jar
xmlParserAPIs-2.6.2.jar
junit-addons-1.4.jar
poi-2.5.1-final-20040804.jar
commons-collections-3.1.jar
commons-lang-2.1.jar
commons-logging-1.0.4.jar
dbunit-2.2.jar
BUILD SUCCESSFUL
In my understanding, that means all those jars were loaded and are available to the script. Am I right, or am I doing something wrong with the class loader stuff?
Thanks very much.
The GroovyObject.class.classLoader.addURL hack is not the right way to add a dependency to the build script class path. It's just sometimes necessary to get JDBC drivers to work with Groovy (long story). Here is how you add a dependency to the build script class path:
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "some:library:1.0"
    }
}
// library can be used in the rest of the build script
You can learn more about this in the Gradle User Guide.
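Applied to the build script above, it would look roughly like this (a sketch; per the caveat about JDBC drivers, only the DBUnit coordinates are moved to the build script class path):
import org.dbunit.database.DatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;

// Gradle compiles the whole script with these jars on its class path,
// so the DBUnit imports above now resolve
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'dbunit:dbunit:2.2'
    }
}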