How do you set up StormCrawler to run with ChromeDriver instead of PhantomJS?

The tutorial here describes how to set up StormCrawler to run with PhantomJS, but PhantomJS doesn't seem capable of sourcing and executing JavaScript that is linked to from outside the immediate page's context (e.g., externally referenced script files). ChromeDriver appears to be able to handle this case, however. How can I set up StormCrawler to run with ChromeDriver instead of PhantomJS?

The basic steps you need to follow are:
Install the latest versions of Google Chrome and ChromeDriver (the commands below are based on the tutorial here):
# Install Google Chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb
# Install Chromedriver
PLATFORM=linux64 # Adjust as necessary depending on your system
VERSION=$(curl http://chromedriver.storage.googleapis.com/LATEST_RELEASE)
curl -O http://chromedriver.storage.googleapis.com/$VERSION/chromedriver_$PLATFORM.zip
unzip chromedriver_$PLATFORM.zip
# Move the executable into your PATH
sudo cp chromedriver /usr/bin/
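As a quick optional sanity check, confirm that both binaries are on your PATH and that the ChromeDriver major version matches the installed Chrome major version (mismatched major versions are a common source of session-creation failures):
google-chrome --version
chromedriver --version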
Specify the following selenium settings in your crawler configuration file (based on a snippet from @JulienNioche here), including the address and port at which ChromeDriver will be running:
http.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.selenium.RemoteDriverProtocol"
https.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.selenium.RemoteDriverProtocol"
selenium.addresses: "http://localhost:9515"
selenium.setScriptTimeout: 10000
selenium.pageLoadTimeout: 1000
selenium.implicitlyWait: 1000
selenium.capabilities:
  goog:chromeOptions:
    args:
      - "--no-sandbox"
      - "--disable-dev-shm-usage"
      - "--headless"
      - "--disable-gpu"
Rebuild your StormCrawler Maven package: mvn clean install; mvn clean package
This is only necessary if you modified any of your source configuration files, but it doesn't hurt to rebuild anyway.
Start ChromeDriver in the background (it defaults to port 9515): chromedriver --headless &
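To verify that ChromeDriver is actually up and accepting connections (an optional check; the port below assumes the default 9515), you can query its status endpoint:
# Should return JSON with "ready": true if ChromeDriver can accept new sessions
curl -s http://localhost:9515/status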
[Only if connecting to Elasticsearch] Set up your ES indices, if you haven't already done so
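If you are connecting to Elasticsearch, a quick way to confirm it is reachable before starting the topology (assuming the default http://localhost:9200 address used in es-conf.yaml below) is:
# The cluster should report green or yellow status
curl -s http://localhost:9200/_cluster/health?pretty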
Start your topology (first in local mode as shown here to test your setup; if it doesn't crash, then you should be good to go in remote mode):
storm jar target/stormcrawler-1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --local es-crawler.flux --sleep 600000
If things still don't work for you after following these steps, there may be an issue in one of your configuration files or a version incompatibility between one or more of the tools. In any case, I've provided the set of example configurations below that worked for me at the time of writing, which I hope will help you get things running.
Example Configurations (for stormcrawler-elasticsearch setup with chromedriver)
Versions used at the time of writing this answer:
StormCrawler 1.17
Storm 1.2.3
Selenium 4.0.0-alpha-6 (no need to install this separately; it will be downloaded during the Maven build of StormCrawler 1.17)
ChromeDriver 90.0.4430.24
Google Chrome 90.0.4430.93
crawler-conf.yaml
config:
  topology.workers: 3
  topology.message.timeout.secs: 3000
  topology.max.spout.pending: 100
  topology.debug: true
  fetcher.threads.number: 100
  # override the JVM parameters for the workers
  topology.worker.childopts: "-Xmx2g -Djava.net.preferIPv4Stack=true"
  # mandatory when using Flux
  topology.kryo.register:
    - com.digitalpebble.stormcrawler.Metadata
  # lists the metadata to persist to storage
  # these are not transferred to the outlinks
  metadata.persist:
    - _redirTo
    - error.cause
    - error.source
    - isSitemap
    - isFeed
  http.agent.name: "Anonymous Coward"
  http.agent.version: "1.0"
  http.agent.description: "built with StormCrawler 1.17"
  http.agent.url: "http://someorganization.com/"
  http.agent.email: "someone@someorganization.com"
  # The maximum number of bytes for returned HTTP response bodies.
  # Set -1 to disable the limit (the default is 65536, i.e. pages trimmed to 65KB).
  http.content.limit: -1 # default 65536
  parsefilters.config.file: "parsefilters.json"
  urlfilters.config.file: "urlfilters.json"
  # revisit a page daily (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.default: 1440
  # revisit a page with a fetch error after 2 hours (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.fetch.error: 120
  # never revisit a page with an error (or set a value in minutes)
  fetchInterval.error: -1
  # configuration for the classes extending AbstractIndexerBolt
  # indexer.md.filter: "someKey=aValue"
  indexer.url.fieldname: "url"
  indexer.text.fieldname: "content"
  indexer.canonical.name: "canonical"
  indexer.md.mapping:
    - parse.title=title
    - parse.keywords=keywords
    - parse.description=description
    - domain=domain
  # Metrics consumers:
  topology.metrics.consumer.register:
    - class: "org.apache.storm.metric.LoggingMetricsConsumer"
      parallelism.hint: 1
  http.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.selenium.RemoteDriverProtocol"
  https.protocol.implementation: "com.digitalpebble.stormcrawler.protocol.selenium.RemoteDriverProtocol"
  selenium.addresses: "http://localhost:9515"
  selenium.setScriptTimeout: 10000
  selenium.pageLoadTimeout: 1000
  selenium.implicitlyWait: 1000
  selenium.capabilities:
    goog:chromeOptions:
      args:
        - "--no-sandbox"
        - "--disable-dev-shm-usage"
        - "--headless"
        - "--disable-gpu"
es-conf.yaml
config:
  # ES indexer bolt
  es.indexer.addresses: "localhost"
  es.indexer.index.name: "content"
  # es.indexer.pipeline: "_PIPELINE_"
  es.indexer.create: false
  es.indexer.bulkActions: 100
  es.indexer.flushInterval: "2s"
  es.indexer.concurrentRequests: 1
  # ES metricsConsumer
  es.metrics.addresses: "http://localhost:9200"
  es.metrics.index.name: "metrics"
  # ES spout and persistence bolt
  es.status.addresses: "http://localhost:9200"
  es.status.index.name: "status"
  es.status.routing: true
  es.status.routing.fieldname: "key"
  es.status.bulkActions: 500
  es.status.flushInterval: "5s"
  es.status.concurrentRequests: 1
  # spout config #
  # time in secs for which the URLs will be considered for fetching after an ack or a fail
  spout.ttl.purgatory: 30
  # Min time (in msecs) to allow between 2 successive queries to ES
  spout.min.delay.queries: 2000
  # Delay since previous query date (in secs) after which the nextFetchDate value will be reset to the current time
  spout.reset.fetchdate.after: 120
  es.status.max.buckets: 50
  es.status.max.urls.per.bucket: 2
  # field to group the URLs into buckets
  es.status.bucket.field: "key"
  # fields to sort the URLs within a bucket
  es.status.bucket.sort.field:
    - "nextFetchDate"
    - "url"
  # field to sort the buckets
  es.status.global.sort.field: "nextFetchDate"
  # CollapsingSpout : limits the deep paging by resetting the start offset for the ES query
  es.status.max.start.offset: 500
  # AggregationSpout : sampling improves the performance on large crawls
  es.status.sample: false
  # max allowed duration of a query in sec
  es.status.query.timeout: -1
  # AggregationSpout (expert): adds this value in mins to the latest date returned in the results and
  # use it as nextFetchDate
  es.status.recentDate.increase: -1
  es.status.recentDate.min.gap: -1
  topology.metrics.consumer.register:
    - class: "com.digitalpebble.stormcrawler.elasticsearch.metrics.MetricsConsumer"
      parallelism.hint: 1
es-crawler.flux
name: "crawler"
includes:
- resource: true
file: "/crawler-default.yaml"
override: false
- resource: false
file: "crawler-conf.yaml"
override: true
- resource: false
file: "es-conf.yaml"
override: true
spouts:
- id: "spout"
className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout"
parallelism: 10
- id: "filespout"
className: "com.digitalpebble.stormcrawler.spout.FileSpout"
parallelism: 1
constructorArgs:
- "."
- "seeds.txt"
- true
bolts:
- id: "filter"
className: "com.digitalpebble.stormcrawler.bolt.URLFilterBolt"
parallelism: 3
- id: "partitioner"
className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
parallelism: 3
- id: "fetcher"
className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
parallelism: 3
- id: "sitemap"
className: "com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt"
parallelism: 3
- id: "parse"
className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
parallelism: 12
- id: "index"
className: "com.digitalpebble.stormcrawler.elasticsearch.bolt.IndexerBolt"
parallelism: 3
- id: "status"
className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
parallelism: 3
- id: "status_metrics"
className: "com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt"
parallelism: 3
streams:
- from: "spout"
to: "partitioner"
grouping:
type: SHUFFLE
- from: "spout"
to: "status_metrics"
grouping:
type: SHUFFLE
- from: "partitioner"
to: "fetcher"
grouping:
type: FIELDS
args: ["key"]
- from: "fetcher"
to: "sitemap"
grouping:
type: LOCAL_OR_SHUFFLE
- from: "sitemap"
to: "parse"
grouping:
type: LOCAL_OR_SHUFFLE
- from: "parse"
to: "index"
grouping:
type: LOCAL_OR_SHUFFLE
- from: "fetcher"
to: "status"
grouping:
type: FIELDS
args: ["url"]
streamId: "status"
- from: "sitemap"
to: "status"
grouping:
type: FIELDS
args: ["url"]
streamId: "status"
- from: "parse"
to: "status"
grouping:
type: FIELDS
args: ["url"]
streamId: "status"
- from: "index"
to: "status"
grouping:
type: FIELDS
args: ["url"]
streamId: "status"
- from: "filespout"
to: "filter"
grouping:
type: FIELDS
args: ["url"]
streamId: "status"
- from: "filter"
to: "status"
grouping:
streamId: "status"
type: CUSTOM
customClass:
className: "com.digitalpebble.stormcrawler.util.URLStreamGrouping"
constructorArgs:
- "byDomain"
parsefilters.json
{
  "com.digitalpebble.stormcrawler.parse.ParseFilters": [
    {
      "class": "com.digitalpebble.stormcrawler.parse.filter.XPathFilter",
      "name": "XPathFilter",
      "params": {
        "canonical": "//*[@rel=\"canonical\"]/@href",
        "parse.description": [
          "//*[@name=\"description\"]/@content",
          "//*[@name=\"Description\"]/@content"
        ],
        "parse.title": [
          "//TITLE",
          "//META[@name=\"title\"]/@content"
        ],
        "parse.keywords": "//META[@name=\"keywords\"]/@content"
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.parse.filter.LinkParseFilter",
      "name": "LinkParseFilter",
      "params": {
        "pattern": "//FRAME/@src"
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.parse.filter.DomainParseFilter",
      "name": "DomainParseFilter",
      "params": {
        "key": "domain",
        "byHost": false
      }
    },
    {
      "class": "com.digitalpebble.stormcrawler.parse.filter.CommaSeparatedToMultivaluedMetadata",
      "name": "CommaSeparatedToMultivaluedMetadata",
      "params": {
        "keys": ["parse.keywords"]
      }
    }
  ]
}
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.rcsb.crawler</groupId>
    <artifactId>stormcrawler</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>stormcrawler</name>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <stormcrawler.version>1.17</stormcrawler.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.3.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>exec</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <executable>java</executable>
                    <includeProjectDependencies>true</includeProjectDependencies>
                    <includePluginDependencies>false</includePluginDependencies>
                    <classpathScope>compile</classpathScope>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>1.3.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <createDependencyReducedPom>false</createDependencyReducedPom>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" />
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>org.apache.storm.flux.Flux</mainClass>
                                    <manifestEntries>
                                        <Change></Change>
                                        <Build-Date></Build-Date>
                                    </manifestEntries>
                                </transformer>
                            </transformers>
                            <!-- The filters below are necessary if you want to include the Tika module -->
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                                <filter>
                                    <!-- https://issues.apache.org/jira/browse/STORM-2428 -->
                                    <artifact>org.apache.storm:flux-core</artifact>
                                    <excludes>
                                        <exclude>org/apache/commons/**</exclude>
                                        <exclude>org/apache/http/**</exclude>
                                        <exclude>org/yaml/**</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>com.digitalpebble.stormcrawler</groupId>
            <artifactId>storm-crawler-core</artifactId>
            <version>${stormcrawler.version}</version>
        </dependency>
        <dependency>
            <groupId>com.digitalpebble.stormcrawler</groupId>
            <artifactId>storm-crawler-elasticsearch</artifactId>
            <version>${stormcrawler.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <version>1.2.3</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>flux-core</artifactId>
            <version>1.2.3</version>
        </dependency>
    </dependencies>
</project>

Related

How to get or parse the coverage percentage of a JaCoCo report in GitHub Actions

I have a Spring Boot app that uses Gradle for build and packaging. I have configured the JaCoCo plugin and generate the report locally as HTML. I want to include it in my GitHub build workflow, so that whenever the build runs, the JaCoCo-generated HTML report is uploaded/stored in the branch the build is running on. Is this possible using GitHub Actions?
Also, once the JaCoCo report is created, I want to extract the final code coverage percentage of that build (this percentage is only for my defined coverage rule) and create a badge in the repository in which the build is running.
EDIT
test {
    finalizedBy jacocoTestReport // report is always generated after tests run
}
jacocoTestReport {
    dependsOn test // tests are required to run before generating the report
}
jacoco {
    toolVersion = "0.8.6"
    //reportsDirectory = file("$buildDir/report/")
}
jacocoTestReport {
    dependsOn test
    sourceSets sourceSets.main
    executionData fileTree(project.rootDir.absolutePath).include("**/build/jacoco/*.exec")
    reports {
        xml.enabled false
        csv.enabled true
        html.destination file("${buildDir}/reports/jacoco/Html")
        csv.destination file("${buildDir}/reports/jacoco/jacoco.csv")
    }
}
//Test coverage Rule to make sure code coverage is 100 %
jacocoTestCoverageVerification {
    violationRules {
        rule {
            element = 'CLASS'
            limit {
                counter = 'LINE'
                value = 'COVEREDRATIO'
                minimum = 1.0
            }
            excludes = [
                'com.cicd.herokuautodeploy.model.*',
                'com.cicd.herokuautodeploy.HerokuautodeployApplication',
                'com.cicd.herokuautodeploy.it.*'
            ]
        }
    }
}
WorkFlow File
# This workflow will build a Java project with Gradle whenever Pull and Merge request to main branch
# For more information see: https://help.github.com/actions/language-and-framework-guides/building-and-testing-java-with-gradle
name: Build WorkFlow - Building and Validating Test Coverage
on:
  pull_request:
    branches: [ main, dev ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 1.11
        uses: actions/setup-java@v1
        with:
          java-version: 1.11
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Build with Gradle
        run: ./gradlew build
      - name: Generate JaCoCo Badge
        id: jacoco
        uses: cicirello/jacoco-badge-generator@v2.0.1
      - name: Log coverage percentage
        run: |
          echo "coverage = ${{ steps.jacoco.outputs.coverage }}"
          echo "branch coverage = ${{ steps.jacoco.outputs.branches }}"
      - name: Commit the badge (if it changed)
        run: |
          if [[ `git status --porcelain` ]]; then
            git config --global user.name 'UserName'
            git config --global user.email 'useremail@gmail.com'
            git add -A
            git commit -m "Autogenerated JaCoCo coverage badge"
            git push
          fi
      - name: Upload JaCoCo coverage report
        uses: actions/upload-artifact@v2
        with:
          name: jacoco-report
          path: reports/jacoco/
Error
File "/JacocoBadgeGenerator.py", line 88, in computeCoverage
with open(filename, newline='') as csvfile :
FileNotFoundError: [Errno 2] No such file or directory: 'target/site/jacoco/jacoco.csv'
Disclosure: I am the author of the cicirello/jacoco-badge-generator GitHub Action that this question concerns.
Although the default behavior assumes Maven's location for the jacoco.csv, there is an action input you can use to indicate the location of the JaCoCo report. So in your case, with Gradle, you can change the cicirello/jacoco-badge-generator step of your workflow to the following (if you use Gradle's default location and filename for the report):
- name: Generate JaCoCo Badge
  uses: cicirello/jacoco-badge-generator@v2
  with:
    jacoco-csv-file: build/reports/jacoco/test/jacocoTestReport.csv
It looks like your Gradle configuration changed the report location, though, so you will need the following to match your report location and filename:
- name: Generate JaCoCo Badge
  id: jacoco
  uses: cicirello/jacoco-badge-generator@v2
  with:
    jacoco-csv-file: build/reports/jacoco/jacoco.csv
The id: jacoco is there just because one of the later steps in your workflow refers to the action's outputs through that id.
If you can configure jacoco to generate a jacoco.csv file, then the GitHub Action jacoco-badge-generator can generate the requested badge.
See for instance "Use Jacoco And GitHub Actions to Improve Code Coverage" from Rodrigo Graciano for an example of pom.xml project configuration to generate the report during build.
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>${jacoco.plugin.version}</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>prepare-package</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>
For a Gradle example:
jacocoTestReport {
    dependsOn test
    sourceSets sourceSets.main
    executionData fileTree(project.rootDir.absolutePath).include("**/build/jacoco/*.exec")
    reports {
        xml.enabled false
        csv.enabled true
        html.destination file("${buildDir}/reports/jacoco/Html")
        csv.destination file("${buildDir}/reports/jacoco/jacoco.csv")
    }
}
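If the badge step still can't find the file, one debugging option (a hypothetical extra step, not part of the original workflow) is to list the JaCoCo output directory right after the Gradle build, either locally or as a run step:
# Shows where Gradle actually wrote the JaCoCo reports
ls -R build/reports/jacoco/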

request response data when using taurus junit-xml module

I am trying to clean up the JMeter Docker+CI pipeline of our functional tests. I see that Taurus has a clean way to run JMeter scripts in a container, and it does the heavy lifting of downloading the version of JMeter I want and installing the plugins my scripts use - excellent.
Now I need to generate the reports as junit.xml so I can keep the reporting consistent. Up until now I was using a modified fork of https://github.com/tguzik/m2u to convert JTL reports to junit.xml.
I'd appreciate any help with how I can get the request and response (code & body) for all samples into the junit.xml (at least for the failed samples).
I tried a few variations of the Taurus YAML ...
reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  - module: junit-xml
    filename: report/report.xml
    data-source: sample-labels

reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  - module: passfail
  - module: junit-xml
    filename: report/report.xml
    data-source: pass-fail
I also added certain pass/fail criteria variations on the passfail module; that did not help either.
After fiddling with this for a few hours, I believe there is no clean way to get anything meaningful into the JUnit XML report from the junit-xml module in Taurus. It appears barebones. I also noticed that it can mess up the default Jenkins JUnit plugin test result summary.
So I settled on the following YAML settings and continued to use m2u.jar to convert the JTL to junit.xml:
modules:
  jmeter:
    path: ~/.bzt/jmeter-taurus/bin/jmeter
    download-link: https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-{version}.zip
    version: 5.3
    force-ctg: true
    detect-plugins: true
    plugins:
      - jpgc-json=2.2
      - jmeter-ftp
      - jpgc-casutg
    xml-jtl-flags:
      xml: true
      fieldNames: true
      time: true
      timestamp: true
      latency: true
      connectTime: false
      success: true
      label: true
      code: true
      message: true
      threadName: true
      dataType: false
      encoding: false
      assertions: true
      subresults: true
      responseData: false
      samplerData: false
      responseHeaders: false
      requestHeaders: true
      responseDataOnError: true
      saveAssertionResultsFailureMessage: true
      bytes: true
      threadCounts: false
      url: true
execution:
  - write-xml-jtl: full
    scenario:
      script: v_jmxfilename
      properties:
        environment: v_env
reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  # - module: junit-xml
  #   filename: report/junit-report.xml
  #   data-source: sample-labels
As per the JUnit-XML-Reporter documentation, this is currently not possible:
This reporter provides test results in JUnit XML format parseable by Jenkins JUnit Plugin. Reporter has two options:
filename (full path to report file, optional. By default xunit.xml in artifacts dir)
data-source (which data source to use: sample-labels or pass-fail)
If sample-labels used as source data, report will contain urls with test errors. If pass-fail used as source data, report will contain Pass/Fail criteria information. Please note that you have to place pass-fail module in reporters list, before junit-xml module.
Taurus is not only for JMeter; it supports many more tools, and not all of them provide the possibility to store request and response data, so the options I can think of are:
Add a Listener to your Test Plan and choose which metrics you need to store into a separate file; the easiest one to use is the Flexible File Writer
Use the ShellExec Service to run your m2u.jar from the Taurus config YAML (a sketch follows below)
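For the ShellExec option, here is a minimal sketch of what that might look like in the Taurus YAML; the m2u.jar path, the kpi.jtl filename (Taurus's default JTL name), and the --input/--output flags are assumptions based on the m2u fork in question, so adjust them to your setup:
services:
  - module: shellexec
    post-process:
      # Hypothetical m2u invocation: convert the JTL produced by the run into JUnit XML
      - java -jar m2u.jar --input kpi.jtl --output report/junit-report.xml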

How to create a Kubernetes secret as a JSON object and load it in the Kubernetes environment as JSON

I need to pass a JWK as a Kubernetes environment variable to my app.
I created a file to store my key like so:
cat deploy/keys/access-signature-public-jwk
{
    algorithm = "RS256"
    jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
This file is then used to create a Kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved as a Kubernetes environment variable:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
  signatures = [
    ${ACCESS_SIGNATURE_PUBLIC_JWK}
  ]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
com.typesafe.config.ConfigException$WrongType: env variables: signatures has type list of STRING rather than list of OBJECT
    at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
    at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
    at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
    at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
    at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
POD Description
Name:           holiday-deployment-55b86f955d-9klk2
Namespace:      default
Priority:       0
Node:           minikube/192.168.99.103
Start Time:     Thu, 28 May 2020 12:42:50 +0530
Labels:         app=holiday
                pod-template-hash=55b86f955d
Annotations:    <none>
Status:         Running
IP:             172.17.0.5
IPs:
  IP:           172.17.0.5
Controlled By:  ReplicaSet/holiday-deployment-55b86f955d
Containers:
  holiday:
    Container ID:   docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
    Image:          codingkapoor/holiday-impl:latest
    Image ID:       docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
    Port:           8558/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:49:24 +0530
      Finished:     Thu, 28 May 2020 22:49:29 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Thu, 28 May 2020 22:44:15 +0530
      Finished:     Thu, 28 May 2020 22:44:21 +0530
    Ready:          False
    Restart Count:  55
    Liveness:       http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
    Readiness:      http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
    Environment:
      JAVA_OPTS:                     -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
      APPLICATION_SECRET:            <set to the key 'secret' in secret 'intimations-application-secret'>  Optional: false
      MYSQL_URL:                     jdbc:mysql://mysql/intimations_holiday_schema
      MYSQL_USERNAME:                <set to the key 'username' in secret 'intimations-mysql-secret'>  Optional: false
      MYSQL_PASSWORD:                <set to the key 'password' in secret 'intimations-mysql-secret'>  Optional: false
      ACCESS_SIGNATURE_PUBLIC_JWK:   <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REFRESH_SIGNATURE_PUBLIC_JWK:  <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'>  Optional: false
      REQUIRED_CONTACT_POINT_NR:     1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-kqmmv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kqmmv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From               Message
  ----     ------   ----                   ----               -------
  Normal   Pulled   5m21s (x23 over 100m)  kubelet, minikube  Container image "codingkapoor/holiday-impl:latest" already present on machine
  Warning  BackOff  27s (x466 over 100m)   kubelet, minikube  Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not a valid JSON file. You should update it to be valid JSON:
{
  "algorithm": "RS256",
  "jwk": {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate.
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
  - name: JWK
    valueFrom:
      secretKeyRef:
        name: token1
        key: jwk.json
Exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you provided and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
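As an illustration of that last point, if your consumer can accept plain JSON, you can parse the string-valued env var yourself, e.g. with jq (assuming jq is available in the container):
# $JWK holds a plain string; jq parses it as JSON and extracts fields
echo "$JWK" | jq -r '.algorithm'   # RS256
echo "$JWK" | jq '.jwk.kty'        # "RSA"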
Although not a direct answer, here is an alternate solution to this problem.
As @hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var read as a string into JSON.
However, in my case this was not a viable solution, because I was using a lib that expects a Config object and not a JSON object directly, which would have meant a lot of work: converting string -> json -> Config. Plus, this approach is inconsistent with how the Config object is built in development scenarios, i.e., json -> Config. See here.
The framework I am using to build this app is based on the Play Framework, which allows you to modularize application configs in separate files and then club the required pieces together in a desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of the Using Secrets as files from a Pod feature of Kubernetes.
Basically, I created a small config file that contains part of my main application configuration file, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
  signatures = [
    {
      algorithm = "RS256"
      jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
    }
  ]
}
Then I created a Kubernetes secret and mounted it as a volume in the deployment so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
# deployment yaml
spec:
  containers:
    - name: employee
      image: "codingkapoor/employee-impl:latest"
      volumeMounts:
        - name: signature-public-secret-conf
          mountPath: /opt/conf/signature-public-jwks.conf
          subPath: signature-public-jwks.conf
          readOnly: true
  volumes:
    - name: signature-public-secret-conf
      secret:
        secretName: signature-public-secret
Use this mounted file location in the application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in the application.conf are the same.
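To confirm the secret was projected correctly (an optional check; <pod-name> is a placeholder), you can read the mounted file directly from the running pod:
# Should print the HOCON snippet stored in the secret
kubectl exec <pod-name> -- cat /opt/conf/signature-public-jwks.conf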
Advantages of this approach:
The solution is consistent across the development, test, and production environments, as we can give the lib JSON instead of a string, as explained above
Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

Cloud SQL connection for Kubernetes using proxy

I'm currently running a Spring Boot pod in Kubernetes. There's a sidecar container in the pod for the Cloud SQL proxy.
Below is my Spring Boot application.properties configuration:
server.port=8081
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.continue-on-error=true
spring.datasource.url=jdbc:mysql://localhost:3306/<database_name>
spring.datasource.username=<user_name>
spring.datasource.password=<password>
Below is my pom.xml extract with plugins and dependencies:
<properties>
    <java.version>1.8</java.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.jayway.jsonpath</groupId>
        <artifactId>json-path</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-hateoas</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>ca.performance.common</groupId>
        <artifactId>common-http</artifactId>
        <version>1.1.1</version>
    </dependency>
    <dependency>
        <groupId>com.google.cloud.sql</groupId>
        <artifactId>mysql-socket-factory</artifactId>
        <version>1.0.10</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
        <version>1.1.0.RELEASE</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
And this is my deployment.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: app-dummy-name
spec:
  selector:
    app: app-dummy-name
  ports:
    - port: 81
      name: http-app-dummy-name
      targetPort: http-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-dummy-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-dummy-name
  template:
    metadata:
      labels:
        app: app-dummy-name
    spec:
      containers:
        - name: app-dummy-name
          image: <image url>
          ports:
            - containerPort: 8081
              name: http-api
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
I followed the instructions from this link, so I created the secrets and the service account. However, I constantly get connection-refused errors when I deploy the above YAML file in Kubernetes after creating the secrets:
org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data;
nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection;
nested exception is com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I even tested the Spring Boot application locally using the proxy with the same application.properties configuration, and it worked fine.
I'm adding my deployment YAML, which worked for me; check if adding the following helps.
Under volumes, add an emptyDir volume:
volumes:
  - name: cloudsql
    emptyDir: {}
In the proxy command, add --dir=/cloudsql:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy", "--dir=/cloudsql",
            "-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
Also make sure you have enabled the Cloud SQL Administration API; one way to do that from the command line is shown below.
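A quick way to enable that API (assuming the gcloud CLI is installed and configured for the right project):
# Enables the Cloud SQL Admin API for the currently configured project
gcloud services enable sqladmin.googleapis.com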
Here is my full deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-dummy-name
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: app-dummy-name
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: app-dummy-name
          image: <image url>
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: localhost
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=my-project-id:us-central1:postgres-instance-name=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
      # volumes
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
Here is my pre-deploy script:
#!/bin/bash
# https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
# 1. Go to the Cloud SQL Service accounts page of the Google Cloud Platform Console.
# GO TO THE SERVICE ACCOUNTS PAGE
# 2. If needed, select the project that contains your Cloud SQL instance.
# 3. Click Create service account.
# 4. In the Create service account dialog, provide a descriptive name for the service account.
# 5. For Role, select Cloud SQL > Cloud SQL Client.
# Alternatively, you can use the primitive Editor role by selecting Project > Editor, but the Editor role includes permissions across Google Cloud Platform.
#
# 6. If you do not see these roles, your Google Cloud Platform user might not have the resourcemanager.projects.setIamPolicy permission. You can check your permissions by going to the IAM page in the Google Cloud Platform Console and searching for your user id.
# Change the Service account ID to a unique value that you will recognize so you can easily find this service account later if needed.
# 7. Click Furnish a new private key.
# 8. The default key type is JSON, which is the correct value to use.
# 9. Click Create.
# 10. enable Cloud SQL Administration API [here](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
# make sure to choose your project
echo "create cloudsql secret"
kubectl create secret generic cloudsql-instance-credentials \
--from-file=credentials.json=postgres-sql-credential.json
echo "create cloudsql user and password"
kubectl create secret generic cloudsql-db-credentials \
--from-literal=username=postgres --from-literal=password=123456789
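To double-check that both secrets exist before deploying (an optional sanity check):
# Both secrets should be listed
kubectl get secrets cloudsql-instance-credentials cloudsql-db-credentials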
postgres-sql-credential.json file:
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "1234567890",
  "private_key": "-----BEGIN PRIVATE KEY-----\n123445556\n123445\n-----END PRIVATE KEY-----\n",
  "client_email": "postgres-sql@my-project.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/postgres-sql%40my-project.iam.gserviceaccount.com"
}

Choose Arquillian container during Gradle build

I am running Arquillian tests with JUnit and Gradle. How do I choose which container gets started?
At the moment I am defining the container qualifier in a file named arquillian.launch.
My arquillian.xml looks as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<arquillian ...>
    <container qualifier="glassfish3-embedded" default="true">
        <configuration>
            ...
        </configuration>
    </container>
    <container qualifier="wls">
        <configuration>
            ...
        </configuration>
    </container>
</arquillian>
My build.gradle looks as follows:
[...]
configurations {
    glassfishEmbeddedTestRuntime { extendsFrom testRuntime }
    weblogic10RemoteTestRuntime { extendsFrom testRuntime }
}
dependencies {
    glassfishEmbeddedTestRuntime group: 'org.jboss.arquillian.container', name: 'arquillian-glassfish-embedded-3.1', version: '1.0.0.CR4'
    glassfishEmbeddedTestRuntime group: 'org.glassfish.main.extras', name: 'glassfish-embedded-all', version: libraryVersions.glassfish
    weblogic10RemoteTestRuntime group: 'org.jboss.arquillian.container', name: 'arquillian-wls-remote-10.3', version: '1.0.0.Alpha2'
}
task glassfishEmbeddedTest(type: Test)
task weblogic10RemoteTest(type: Test)
tasks.withType(Test).matching({ t -> t.name.endsWith('Test') } as Spec).each { t ->
    t.classpath = project.configurations.getByName(t.name + 'Runtime') + project.sourceSets.main.output + project.sourceSets.test.output
}
How can I expand the definition of weblogic10RemoteTest so that I can choose the container, without having to edit the arquillian.launch file or change the XML in arquillian.xml before executing the tests?
I thought about doing it like here: https://github.com/seam/solder/blob/develop/testsuite/pom.xml#L123
But I don't know the equivalent of that statement in Gradle.
The POM you linked to sets system properties for the JVM running the tests. You can do the same in Gradle by configuring your Test task(s):
test { // or: tasks.withType(Test) {
    systemProperty "one", "foo"
    systemProperty "two", "bar"
}
(Note that Gradle always runs tests in a separate JVM.)
For further information, see Test in the Gradle Build Language Reference.
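Applied to the question's setup, here is a sketch of what this could look like; it assumes your Arquillian version honors the arquillian.launch system property for selecting the container qualifier, which is the usual way to override the arquillian.launch file:
// build.gradle: expand the bare task declarations from the question so each
// test task picks its Arquillian container qualifier via a system property
task glassfishEmbeddedTest(type: Test) {
    systemProperty "arquillian.launch", "glassfish3-embedded"
}
task weblogic10RemoteTest(type: Test) {
    systemProperty "arquillian.launch", "wls"
}
With this, running gradle weblogic10RemoteTest would start the wls container without editing arquillian.launch or arquillian.xml.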