Extract POM version for a job in GitHub Actions - github-actions

I am trying out GitHub Actions to build my Docker container automatically.
In my POM, the version of the Docker image that I create with Jib is taken from the version of my project.
<groupId>io.xxx.my-proyect</groupId>
<artifactId>my-proyect</artifactId>
<version>0.2.0-SNAPSHOT</version>
<name>my-proyect</name>
...
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  ....
  <to>
    <image>XXX/my-proyect:${version}</image>
  </to>
</plugin>
GitHub Actions:
- name: package
  run: ./mvnw package jib:dockerBuild
- name: push
  run: docker push xxx/my-proyect:VERSION (<-- extracted from the version property of my POM)
Does anyone have any idea how to do this?

You can ask Maven using maven-help-plugin:
mvn help:evaluate -Dexpression=project.version -q -DforceStdout
If the command prints nothing (because -q suppresses all output, and only -DforceStdout makes the evaluated version reach stdout), the maven-help-plugin being resolved is probably too old to support -DforceStdout. If so, you can pin the plugin version like this:
mvn org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate -Dexpression=project.version -q -DforceStdout
Or, you could pin the version 3.2.0 in your pom.xml or in a parent pom.xml (if applicable).
Note that running any Maven command takes some time, so this may add a few seconds to your build. Passing --offline may help, if it works in your setup (or it may not help much).
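In a GitHub Actions workflow, one way to wire this up is to capture the evaluated version as a step output and reference it later. This is only a minimal sketch: the step id project and the output name version are illustrative, and it assumes a runner recent enough to support the GITHUB_OUTPUT file.
- name: Determine project version
  id: project
  # store the POM version as a step output so later steps can reference it
  run: echo "version=$(./mvnw help:evaluate -Dexpression=project.version -q -DforceStdout)" >> "$GITHUB_OUTPUT"
- name: push
  run: docker push xxx/my-proyect:${{ steps.project.outputs.version }}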
Alternatively, you could run jib:build instead of jib:dockerBuild to have Jib directly push to xxx/my-project:VERSION. Looking at your YAML, there does not seem to be any good reason to run jib:dockerBuild followed by docker push. Using jib:build in this case will substantially cut your build time, as Jib loses a lot of performance optimizations when pushing to a local Docker daemon.
UPDATE: also, unless you are using Jib's <containerizingMode>packaged</containerizingMode> configuration, you don't need mvn package. mvn compile jib:... will suffice (and is marginally faster).
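For example, the two steps could collapse into a single one along these lines. This is a sketch only: the secret names DOCKERHUB_USERNAME and DOCKERHUB_TOKEN are placeholders, and the jib.to.auth.* properties could equally be replaced by a registry login step or Jib credential helpers.
- name: build and push
  # Jib builds and pushes the image straight to the registry, no local daemon or docker push needed
  run: >
    ./mvnw compile jib:build
    -Djib.to.auth.username=${{ secrets.DOCKERHUB_USERNAME }}
    -Djib.to.auth.password=${{ secrets.DOCKERHUB_TOKEN }}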

Related

Docker cache not working on repository dispatch

I have a workflow that builds a Docker image.
When the workflow runs with a manual trigger or a push trigger, the cache works fine and I get really good performance.
When I trigger the workflow through repository dispatch (another workflow triggers this workflow), the cache doesn't work.
I have tried everything: using the cache module with all the storage possibilities there are, running on a GitHub-hosted runner, running on a self-hosted runner, and using bash commands to build and push the image instead of using a module; nothing seems to work.
Did anyone come across a similar issue?
This is how the build and push steps look at the moment (on a self-hosted runner):
- name: Build Docker image
  id: image_id
  run: |
    docker build -f Dockerfile.test --build-arg LAMBDA_NAME=sharon-test --build-arg LAMBDA_HANDLER=dist/apps/test/main.handler --build-arg NPM_TOKEN=${{ secrets.NPM_TOKEN }} -t ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest .
- name: Push Docker image
  run: |
    docker push ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest
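For comparison, this is roughly what an explicit registry-backed BuildKit cache looks like when building from plain shell commands. It is only a sketch: the buildcache tag is an assumption, the build args are omitted for brevity, and whether repository_dispatch runs actually hit a previously populated cache still depends on how the chosen cache backend scopes its entries.
# requires a buildx builder; the cache lives in the registry, so any trigger type can reuse it
docker buildx build \
  -f Dockerfile.test \
  --cache-from type=registry,ref=****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:buildcache \
  --cache-to type=registry,ref=****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:buildcache,mode=max \
  -t ****.dkr.ecr.us-east-2.amazonaws.com/sharon-test:latest \
  --push .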

Errors Uploading to CircleCI with MySQL Database - Maven Clean Also Creates Issues When Run Before Maven Test

I'm using the Hibernate framework along with Maven in IntelliJ. I'm creating a MySQL database, I have some ORM classes that map the MySQL database, and then I'm running some JUnit tests to make sure everything works.
Where I'm having trouble is in two places, which are related to each other:
1. When I run mvn test, sometimes my JUnit tests work fine and are able to query the simulated database, establish a connection (even though it's just with the simulated database), execute a statement, etc. However, sometimes, if I run mvn clean before running mvn test, the JUnit tests still execute but finish with failures (not errors, just failures, though this is still bad, of course).
2. The problem outlined in #1 is essentially duplicated when I upload to GitHub and run CircleCI (which isn't surprising, since CircleCI runs mvn test when doing its integration testing). Most of my uploads failed, but one of them finally worked. However, I'm not exactly sure why the "final" upload was successful while the others weren't.
The error messages I'm getting, either from mvn test or from the CircleCI builds, are typically as follows. These errors are from my penultimate upload, the one I did just prior to the next upload which actually worked:
java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
com.mysql.cj.exceptions.CJException: Public Key Retrieval is not allowed
java.sql.SQLNonTransientConnectionException: Could not create connection to database server
I should also note that my intention is to run mvn clean first and then upload to CircleCI; however, running mvn clean seems to be somehow involved in perpetuating these errors.
As far as different resources I'm using go, here they are. If I'm forgetting something, please let me know and I should be able to include it.
In my hibernate.cfg.xml file, I have the following lines:
<property name="connection.url">jdbc:mysql://localhost:3306/stocks</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
At the end of the word "stocks" on the first line, I have sometimes appended any of the following (sometimes I only appended one of the following, other times I combined them, depending on the error(s) from either Maven or CircleCI). Appending some combination of these lines seemed to help get things to work, but running mvn clean seemed to halt any effect these additions were having:
autoReconnect=true
useSSL=false
allowPublicKeyRetrieval=true
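For example, combining all three gives a connection URL like the one below; the parameter set is just the combination described above, and note that & has to be escaped as &amp; inside the XML file:
<property name="connection.url">jdbc:mysql://localhost:3306/stocks?autoReconnect=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</property>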
Running the JUNIT tests from within IntelliJ usually works, but if I run mvn clean first, then IntelliJ usually won't work, unless I then go back into this file and append ?autoReconnect=true&useSSL=false. If I do that, then IntelliJ will run the JUNIT tests fine.
In my config.yml file for CircleCI, I have the following code. Certain statements were added in MAVEN_OPTS based on other research I did to try to counteract the errors I was getting, but I don't know if these statements are having any impact one way or the other:
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:8-jdk

      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4
      - image: circleci/mysql:latest-ram
        environment:
          - MYSQL_ROOT_PASSWORD: (my real password goes here)
          - MYSQL_DATABASE: stocks
          - MYSQL_USER: bob
          - MYSQL_PASSWORD: (the real password goes here)

    working_directory: ~/repo

    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true

    steps:
      - checkout

      - run: sudo apt install -y mysql-client

      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-

      - run: mvn dependency:go-offline

      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      # run tests!
      - run: mvn integration-test
If anyone has any idea what's going on, I appreciate the help. My goal is to be able to upload to CircleCI by first running mvn clean so only the src files, pom.xml file, and .circleci folder are included in the upload. Also, not to belabor the point, but my most recent upload to CircleCI did in fact work; I'm just not sure what made that build work while all the other ones did not.

Maven run docker image with java ee application

I have a running Java EE application which uses WildFly and MySQL. I heard that everyone is using Docker and that it is very productive, so I decided to dockerize my development environment. That sounds easier than it is.
What i have so far:
Maven for packaging my app into a .war file
Arquillian unit tests which run the tests on my locally installed WildFly instance
What i want:
Using predefined docker images (jboss/wildfly,...) for running my application.
Also running my tests in the docker container.
I started out by building a Docker image with the docker-maven-plugin:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.13</version>
  <configuration>
    <imageName>netbeans/sampleapplication</imageName>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.war</include>
      </resource>
    </resources>
  </configuration>
  <executions>
    <execution>
      <id>build-image</id>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Dockerfile:
FROM jboss/wildfly
COPY *.war /opt/jboss/wildfly/standalone/deployments/app.war
EXPOSE 8080 9990
Maven command: clean package docker:build.
I can reach the application server only with my docker-machine URL, not via localhost as before.
In the end i just want to use a single maven command for:
Building the application
Building the docker images (wildfly,mysql...)
Run arquillian junit tests
Deploy the application and expose it via localhost:8080
Stop container if a new deploy is made
I am really struggling with that. Anybody have an idea how to do this?
There is no straightforward way of doing this, since some of the Docker tasks cannot easily be mapped to a Maven phase. So you need to choose what the preferred way of working is for you.
So some thoughts that hopefully will lead to a solution:
The Spotify docker-maven-plugin has no mojos (Maven goals) to run an image. Its main tasks are about creating and publishing the Docker images.
So to run an image you can simply write some bash scripts (since they will be simple, they will run on Linux and even Windows using the Git Bash command line). You could execute those scripts using the exec-maven-plugin.
To properly map that to the maven lifecycle is a bit more tricky.
The phase that matches this best (my opinion only) is the integration-test phase. The lifecycle has a pre-integration-test phase, the integration-test phase itself, and a post-integration-test phase. The idea is to start up the containers in the pre- phase, then run the tests in the integration-test phase using the failsafe-plugin (not letting the build fail!), and clean up the containers in the post- phase. It is a good idea to clean up the project's containers in the pre- phase as well, just in case some zombie containers stick around.
These steps could be put into a profile. Since the integration-test phase is needed for regular integration tests as well, one would end up executing mvn verify with different profiles (mvn verify && mvn verify -P docker-tests && mvn verify -P docker-other-tests).
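A rough skeleton of such a profile might look like the following. This is illustrative only: the profile id is made up, the failsafe version is left to your plugin management, and the exec-maven-plugin executions that would call the start/stop scripts are only indicated as a comment.
<profiles>
  <profile>
    <id>docker-tests</id>
    <build>
      <plugins>
        <!-- run *IT tests in the integration-test phase and evaluate the results in verify -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <!-- exec-maven-plugin executions bound to pre-integration-test and
             post-integration-test would start and stop the containers via the scripts -->
      </plugins>
    </build>
  </profile>
</profiles>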
Another approach would be using the maven plugin created by fabric8.
This plugin is a bit more complicated than the one created by spotify (again: my opinion only). But it comes along with more goals.
Using the plugin's provided <packaging>docker</packaging>, the docker run and stop goals are already mapped to the lifecycle.
Both plugins end up with a similar complexity in the pom.xml; it's just more reading with the fabric8 plugin. But there are some nice examples and a good user manual.
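A rough sketch of the fabric8 approach, binding container start and stop to the integration-test phases (the image name, Dockerfile location and port mapping are taken from the question and are placeholders; the plugin version is omitted, and the exact configuration should be checked against the plugin's manual):
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>netbeans/sampleapplication</name>
        <build>
          <dockerFileDir>${project.basedir}/src/main/docker</dockerFileDir>
        </build>
        <run>
          <ports>
            <port>8080:8080</port>
          </ports>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <!-- build and start the containers before the integration tests, stop them afterwards -->
    <execution>
      <id>docker-start</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>build</goal>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>docker-stop</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>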
So those are the two options that came to my mind. Hope this will help :)
As an alternative to directly using a JBoss WildFly container, you might also check out WildFly Swarm. It's a separate distribution of WildFly with even more goodies regarding Docker.

How to add path variable to job shell

I am setting up Jenkins to replace our current TeamCity CI build.
I have created a free-style software project so that I can execute a shell script.
The Shell script runs the mvn command.
But the build fails complaining that the 'mvn' command cannot be found.
I have figured that this is because Jenkins is running the build in a different shell, which does not have Maven on its path.
My question is; how do I add the path so 'mvn' is found in my Shell script? I've looked around but can't spot where the right place might be.
Thanks for your time.
I solved this by exporting and setting the PATH in the Jenkins job configuration where you can enter shell commands. So I set the environment variable before I execute my shell script; works a treat.
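For example, at the top of the shell build step (assuming Maven is installed under /usr/local/apache-maven, as in the answer further down; adjust the path to your machine):
# make mvn available to this non-login shell
export M2_HOME=/usr/local/apache-maven/apache-maven-3.0.5
export PATH=${M2_HOME}/bin:${PATH}
mvn clean install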
Some possible solutions:
You can call maven with an absolute path
You configure a global environment variable in the Jenkins system settings with the absolute path to your Maven instance, and use this in your script call (if you use an inline shell script; I don't know whether those variables are substituted into a called script, you have to test)
You use a maven project and configure your maven instance in the jenkins system settings
P.S.: Usually /bin/sh is chosen by Jenkins; if you want to switch to e.g. bash, you can configure this in the Jenkins system settings, which is also where you would configure global environment variables.
You can use the EnvInject plugin; it's very powerful.
I use it to install rbenv, and it can inject environment variables into your current job.
Another option to Dag's suggestion is that, if you're only using a single version of Maven, on each slave server you could do either of the following (the Maven path below is just a placeholder for wherever Maven is installed):
* add the Maven bin directory to the PATH, e.g. PATH=${PATH}:/path/to/maven/bin
* symlink mvn into /usr/bin with: sudo ln -s /path/to/maven/bin/mvn /usr/bin/mvn
I'm not at a Jenkins box at the moment, but I can find some more detailed examples if you'd like.
Jenkins uses sh by default, not bash.
This is my first time defining a Jenkins Maven job. I also followed some regular Maven instructions (for running from the command line) and tried to update ~/.bashrc with M2_HOME, M2 and PATH, but it didn't work because Jenkins uses sh and not bash. Then I found out that there is a simpler and better way built into Jenkins.
After installing Maven, I was supposed to configure my Maven installation in Jenkins.
To configure your Maven installation in Jenkins:
1. Login to the Jenkins web console
2. Click Manage Jenkins --> Configure System
3. Under Maven, click the "Maven Installations..." button
4. a. Give it some name
   b. and under MVN_HOME set the path to where you installed Maven, for example "/usr/local/apache-maven/apache-maven-3.0.5"
5. Click the Save button
6. Define a job with a Maven target
7. Edit your job
8. Click "Add build step"
9. On Maven Version, enter the name you gave your Maven installation (step #4 above)
10. Set some goal like clean install

Hudson + Maven + Emma/Sonar = Build Cycle Runs 2x

I have a bunch of Maven projects building in Hudson, with Sonar sitting on the sidelines. Sonar gives me Sonar stats, FindBugs stats, and code coverage.
I've noticed that regardless of whether I use Sonar or EMMA via Maven directly, the entire build cycle runs twice. This includes init (which in my case reinitializes the database, which is expensive) and unit tests (a few hundred, also expensive).
How can I prevent this? I did a lot of reading, and it seems like this is due to the design of code-coverage plugins -- to keep uninstrumented classes separated from instrumented ones.
I've tried configurations like:
Maven runs: deploy, EMMA
Maven runs: deploy; deploy to Sonar on completion
The Sonar documentation recommends running the Sonar plugin in two stages:
mvn clean install -Dtest=false -DfailIfNoTests=false
mvn sonar:sonar
The tests are bypassed in the first stage and run implicitly in the second stage.
A one-line alternative is to run the following command:
mvn clean install sonar:sonar -Dmaven.test.failure.ignore=true
but this will run the tests twice - as you have found.
To add to @Strawberry's answer, you could reuse the unit test reports instead of running them again. Refer to the section Reuse existing unit test reports in the Sonar documentation.
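A sketch of what that can look like from the command line (the exact property names depend on the Sonar version in use; sonar.dynamicAnalysis=reuseReports and sonar.surefire.reportsPath are the legacy ones, so treat this as an assumption to verify against your Sonar docs):
# run the build and tests once, then let Sonar read the existing reports instead of re-running the tests
mvn clean install
mvn sonar:sonar -Dsonar.dynamicAnalysis=reuseReports -Dsonar.surefire.reportsPath=target/surefire-reports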
Once this is done, you can configure the following in Hudson
clean deploy sonar:sonar