Docker Setup with PKCS11 - HSM

Background
We are developing a Spring application that performs crypto operations. A project requirement is that the implementation be independent of HSM-specific libraries (because the client may use any HSM), so we use the SunPKCS11 provider. SunPKCS11 needs the path to an HSM library file that implements the common PKCS11 interface. This way, any HSM whose library implements the PKCS11 interface will work with the application.
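For illustration, the only HSM-specific input the application needs is a small SunPKCS11 configuration file pointing at the vendor's library - a hedged sketch, where the provider name and library path are hypothetical and depend on the HSM installation:

```
# pkcs11.cfg - hypothetical values; only the path to the vendor's
# PKCS11 implementation library is HSM-specific
name = AnyHSM
library = /opt/hsm/lib/libvendor-pkcs11.so
```

On Java 8 this file is loaded with `new sun.security.pkcs11.SunPKCS11("pkcs11.cfg")` (or via a provider entry in `java.security`), so the same application image can work with any HSM whose library implements PKCS11.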
Current State
We have a host machine for testing where the HSM is installed. The HSM installer also provides the library that implements the PKCS11 interface. We plan to deploy this application using Docker. Since the application should be independent of the HSM library, we created a Docker image that contains no HSM-specific information. However, we mount the HSM's complete lib folder (where the PKCS11 implementation library is present) via the docker-compose file. When we run docker-compose up, it reports an error for a library belonging to the HSM (even though it is mounted in the lib folder):
Error in loading shared library xxx.so
Question
Should I use Docker in this case? I have seen discussions on the internet about accessing devices from Docker, and the answer was mostly to use some other, device-specific Docker image. However, I don't know whether the HSM to be used with the application (at the client side) would have such an image.
If so, is it a good idea to mount the HSM's lib folder? During HSM installation, I installed 3 RPM files. These 3 installations might have provided additional libraries that are required for interacting with the HSM.
If what I am doing is the correct approach, what is the cause of the error?
Dockerfile
FROM some/url/xxxbuild:openjdk8u151-alpine3.7-1.0.0
LABEL maintainer "Team"
ENV APP_USER myapp
ENV APP_HOME /opt/my/app
USER $APP_USER
RUN mkdir -p $APP_HOME/config
docker-compose file
my-microservice:
  image: my-microservice:1.1.0-SNAPSHOT
  container_name: my-microservice-container
  restart: on-failure
  environment:
    SERVER_PORT: 9999
    JAVA_OPTS: -Dlog4j.configurationFile=/opt/gd/app/config/log4j2.xml
  ports:
    - 8888:9999
  volumes:
    - ./applicationSpecificFile:/opt/gd/app/config
    - /opt/hsm/lib:/opt/hsm/lib  # <-- HSM-specific lib files
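As a diagnostic sketch (the library name xxx.so and the paths are placeholders): the usual cause of this error is not the mounted library itself but one of its transitive dependencies, installed by the other RPMs outside the mounted lib folder. Note also that the base image is Alpine, whose musl libc generally cannot load libraries linked against glibc, which most vendor HSM libraries are. You can check from inside the container:

```
# Start a shell in the service container
docker-compose run --entrypoint /bin/sh my-microservice

# Inside the container: list the dependencies of the PKCS11 library;
# every line ending in "not found" is a missing .so that must also be
# mounted or installed in the image
ldd /opt/hsm/lib/xxx.so

# If dependencies live in a non-standard mounted directory, tell the
# dynamic linker where to look (can also go in the compose "environment:")
export LD_LIBRARY_PATH=/opt/hsm/lib:$LD_LIBRARY_PATH
```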
I am new to Docker and Linux, so let me know if I have missed mentioning something.

Related

How to scan docker image using JFrog XRay from Openshift pipeline

I have a Docker image pushed to an Artifactory Docker registry, and JFrog Xray is up and running.
I understand that to use Xray, some build info (like buildName and buildNumber) must be passed to it, which the Artifactory Docker registry doesn't contain.
According to https://www.jfrog.com/confluence/display/JFROG/Scripted+Pipeline+Syntax#ScriptedPipelineSyntax-DockerBuildswithArtifactory I must have access to a Docker daemon (on the Jenkins agent itself or in some other container). As far as I know, running Docker requires privileged access, which is unsafe as it could compromise cluster security.
Is there any way to push docker build to XRay without docker daemon?
To scan a Docker image with Xray you don't have to add the build-info.
It is enough to define a Watch on the relevant Docker repository with the needed policies.
If you want to scan a Docker build as part of the build process, I suggest you contact JFrog Support and they will assist you with any relevant questions.
Thanks,
Ofir - trying to help with Xray :-)

How can I specify Dockerfile build in buildConfig for OpenShift Online?

OpenShift details:
Paid Professional version.
Version Information:
I've been trying to create a build from a Dockerfile in OpenShift.
It's tough going.
So I tried to use the existing templates in the Cluster Console, one of which is the Docker one.
When I press "Try it", it generates a sample BuildConfig; when I then try to Create it, it gives me this error:
(I have now raised the above in the Origin upstream issue tracker.)
Anyhoo... anyone know how to specify a BuildConfig that builds an image from a Dockerfile in a Git repo? I would be grateful to know.
You can see the build strategies allowed for OpenShift Online on the product website: https://www.openshift.com/products/online. Dockerfile build isn't deprecated, it's just explicitly disallowed in OpenShift Online. You can build your Dockerfile locally and push it directly to the OpenShift internal registry (commands for docker login and docker push are on your cluster's About page).
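A sketch of that local-build-and-push flow (the registry host and project name are placeholders; the exact values are on your cluster's About page):

```
# Log in to the OpenShift internal registry with your session token
docker login -u unused -p "$(oc whoami -t)" registry.<cluster-host>

# Build from the local Dockerfile and tag for the internal registry
docker build -t registry.<cluster-host>/<project>/myapp:latest .

# Push; the image becomes available to deployments in <project>
docker push registry.<cluster-host>/<project>/myapp:latest
```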
However, in other environments (not OpenShift Online), you can specify a Dockerfile build as follows, providing a Git repo with a Dockerfile contained within (located at BuildConfig.spec.source.contextDir):
strategy:
  type: Docker
There are additional options that can be configured for a Dockerfile build as well, outlined in https://docs.okd.io/latest/dev_guide/builds/build_strategies.html#docker-strategy-options.
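Put together, a minimal BuildConfig using the Docker strategy might look like the following sketch (the metadata name, Git URI, and output tag are placeholders; on older clusters the apiVersion may simply be v1):

```
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-docker-build
spec:
  source:
    type: Git
    git:
      uri: https://example.com/me/myrepo.git
    contextDir: .            # directory containing the Dockerfile
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```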

Configuration files BROKER_DATABASE_HOST Docker

I'm trying to access the ContextBroker configuration file at /etc/sysconfig/contextBroker, but it's empty. What is the problem?
https://fiware-orion.readthedocs.io/en/master/admin/running/index.html
I'm using Docker.
I am also testing installation via yum on CentOS, and it tells me that the repository is wrong. Is it copied from the web?
https://github.com/telefonicaid/fiware-orion/blob/master/doc/manuals/admin/yum.md
The /etc/sysconfig/contextBroker file is used in RPM-based deployments. The Docker deployment is based on compiling Context Broker directly from sources, as you can see in the Dockerfile.
So, in this case, you have to use CLI-based configuration. Note that the Docker image is built with some of these options:
ENTRYPOINT ["/usr/bin/contextBroker","-fg", "-multiservice", "-ngsiv1Autocast" ]
But you can add additional ones. For example, in the reference docker-compose.yml we set -dbhost, and more could be added in the same way:
command: -dbhost mongo
I guess that using docker run you could also add options, the same way command works in docker-compose.yml, although I don't know the details. Maybe some Docker expert could add more info :)
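For what it's worth, docker run does behave the same way: anything after the image name is appended as arguments to the ENTRYPOINT, just like command: in docker-compose.yml. A sketch (the container names and mongo host depend on your setup):

```
# Arguments after the image name are passed to the ENTRYPOINT, so this
# effectively runs: contextBroker -fg -multiservice -ngsiv1Autocast -dbhost mongo
docker run --name orion --link mongo:mongo fiware/orion -dbhost mongo
```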

Installing Bosun component in on VM or in several VMs

I would like to use the Bosun GE on my own, but it is not clear whether I can install the two components of this GE (fiware-facts and fiware-cloto) on different VMs, or whether they must be installed on the same VM.
Yes, you can install FIWARE Bosun both ways.
By default, the configuration files are set up to work on a single VM, so if you install all the required software on the same VM, everything will work perfectly.
If you want to distribute fiware-facts and fiware-cloto across two different VMs, you must configure the IP addresses in both components:
Fiware-cloto config file:
cloto: {{Fiware-Cloto-Public-IP}} (example: 83.53.21.33)
rabbitMQ: RabbitIP
Fiware-facts config file:
NOTIFICATION_URL: http://{{Fiware-Facts-Public-IP}}:5000/v1.0
RABBITMQ_URL: RabbitIP
In addition, note that MySQL could also be installed on a different VM, so you should also edit the mysql host in both configuration files (fiware-cloto and fiware-facts) to point at the IP address where the database is installed.

Configuring a Jetty application with env variables

I'm trying to deploy a Java WAR file in Jetty 9 with Docker. I would like to configure things such as the database URI, log level, and so on via environment variables - so that I could also use the link features of Docker.
But, if I start the application via java -jar start.jar, the environment variables I've set are not available to the application.
What is the simplest way to pass environment variables to my application?
I managed to find a solution for Jetty.
Just set JAVA_OPTIONS in the Dockerfile and you should be good to go.
The full Dockerfile for my case looks like this:
FROM jetty:9.2.10
MAINTAINER Me "me#me.com"
ENV spring.profiles.active=dev
ENV JAVA_OPTIONS="-Dkey=value"
ADD myWar.war /var/lib/jetty/webapps/ROOT.war
Using system environment variables (i.e. System.getenv(String)) is not supported by Jetty's start.jar.
Feel free to file a feature request with Jetty for that support.
Know, however, that the Jetty start.jar process does support properties, either as system properties or as start properties, set either on the command line or in ${jetty.base}/start.ini.
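For example (the db.uri property name is illustrative; jetty.port is the HTTP port property in Jetty 9.2):

```
# Pass properties on the start.jar command line ...
java -Ddb.uri=jdbc:postgresql://db/mydb -jar start.jar jetty.port=8080

# ... or persist them in ${jetty.base}/start.ini, one entry per line:
#   jetty.port=8080
#   -Ddb.uri=jdbc:postgresql://db/mydb
```

The application then reads them with System.getProperty("db.uri") rather than System.getenv.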