openshift-liberty : jdbc drivers not getting loaded - openshift

A WebSphere Liberty server running on OpenShift gets the errors below on startup. Since the Liberty workarea is created at start time, how can we grant read/write access to it?
JVMSHRC155E Error copying username into cache name
JVMSHRC686I Failed to startup shared class cache. Continue without using it as -Xshareclasses:nonfatal is specified
Launching defaultServer (WebSphere Application Server 18.0.0.2/wlp-1.0.21.cl180220180619-0403) on IBM J9 VM, version 8.0.5.21 - pxa6480sr5fp21-20180830_01(SR5 FP21) (en_US)
[AUDIT ] CWWKE0001I: The server defaultServer has been launched.
[AUDIT ] CWWKG0028A: Processing included configuration resource: /config/project/server.xml
[ERROR ] CWWKG0074E: Unable to update the configuration for webApplication with the unique identifier null because of the exception: /opt/ibm/wlp/output/defaultServer/workarea/org.eclipse.osgi/10/data/configs/webApplication!-908507812 (Permission denied).
[WARNING ] CWWKG0076W: The previous configuration for webApplication with id null is still in use.
[ERROR ] CWWKG0074E: Unable to update the configuration for applicationMonitor with the unique identifier null because of the exception: /opt/ibm/wlp/output/defaultServer/workarea/org.eclipse.osgi/10/data/configs/applicationMonitor!1812182506 (Permission denied).
[WARNING ] CWWKG0076W: The previous configuration for applicationMonitor with id null is still in use.
[ERROR ] CWWKG0074E: Unable to update the configuration for dataSource with the unique identifier null because of the exception: /opt/ibm/wlp/output/defaultServer/workarea/org.eclipse.osgi/10/data/configs/dataSource!1272470629 (Permission denied).
[WARNING ] CWWKG0076W: The previous configuration for dataSource with id null is still in use.
[ERROR ] CWWKG0074E: Unable to update the configuration for jdbcDriver with the unique identifier DB2 because of the exception: /opt/ibm/wlp/output/defaultServer/workarea/org.eclipse.osgi/10/data/configs/jdbcDriver_2!329917942 (Permission denied).
Server.xml
<jdbcDriver id="DB2">
    <library name="DB2JCC4Lib">
        <fileset dir="${wlp.user.dir}/shared/resources" includes="db2jcc4.jar db2jcc_license_cisuz.jar" />
    </library>
</jdbcDriver>
<dataSource jdbcDriverRef="DB2" jndiName="jdbc/abcdDB2DataSource">
    <connectionManager maxPoolSize="25" minPoolSize="0" agedTimeout="21600s" connectionTimeout="180s" reapTime="180s" maxIdleTime="1800s"/>
    <properties.db2.jcc databaseName="${env.DB_MF_DATABASENAME}" password="${env.DB_MF_PWD}" portNumber="${env.DB_MF_PORT}" serverName="${env.DB_MF_URL}" user="${env.DB_MF_USERID}" sslConnection="true"/>
</dataSource>
<applicationMonitor updateTrigger="mbean" />
<webApplication contextRoot="${env.APP_CONTEXT_ROOT}" location="abcd.war" />
Dockerfile content
RUN mkdir -p /config/project && \
chown -R 1001:0 /config && \
chmod -R g+rw /config && \
mkdir -p /opt/ibm/wlp/installTmp && \
chown -R 1001:0 /opt/ibm/wlp && \
chmod -R g+rw /opt/ibm/wlp && \
chown -R 1001:0 /logs && \
chmod -R g+rw /logs && \
mkdir -p $HOME && \
chown -R 1001:0 $HOME && \
chmod -R g+rw $HOME && \
chown -R 1001:0 $JAVA_HOME/lib/security && \
chmod -R g+rw $JAVA_HOME/lib/security && \
mkdir -p /opt/ibm/wlp/output/defaultServer/workarea && \
chown -R 1001:0 /opt/ibm/wlp/output/defaultServer/workarea && \
chmod -R g+rw /opt/ibm/wlp/output/defaultServer/workarea

When using RHEL as the base OS we noticed that a sticky bit was being set on /output, which was causing some trouble. Check out the places where we had to manipulate /output in this PR.
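For reference, a minimal sketch of what that kind of Dockerfile workaround can look like; the path comes from the error messages above, and whether your base image actually sets the sticky bit there is an assumption:
# Hypothetical fix: clear the sticky bit and open group permissions on the Liberty
# output directory so an arbitrary OpenShift UID (in group 0) can write to it.
RUN chmod -R g+rw /opt/ibm/wlp/output && \
    chmod a-t /opt/ibm/wlp/output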

@ArthurDM, thanks, but I was not able to get it working with the sticky-bit solution. Instead I mounted the workarea folder in the container as an emptyDir volume and got it to work:
volumeMounts:
  - name: lib-workarea-volume
    mountPath: /opt/ibm/wlp/output/defaultServer/workarea
volumes:
  - name: lib-workarea-volume
    emptyDir: {}
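For context, a minimal sketch of where those entries sit in the pod spec of a Deployment; the container name and image are placeholders, only the volume wiring is the point:
spec:
  template:
    spec:
      containers:
        - name: liberty-app            # placeholder name
          image: my-liberty-image      # placeholder image
          volumeMounts:
            - name: lib-workarea-volume
              mountPath: /opt/ibm/wlp/output/defaultServer/workarea
      volumes:
        - name: lib-workarea-volume
          emptyDir: {}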

Related

NiFi image not running on OpenShift

While trying to run the Apache NiFi Docker image from Docker Hub on OpenShift, I get a permission issue because the image runs as the nifi user, which is not allowed by OpenShift. So I built the image using the Dockerfile below, but now I am not even able to run the built image in my local Docker.
FROM openjdk:8-jre
ARG NIFI_VERSION=1.12.1
ARG BASE_URL=https://archive.apache.org/dist
ARG MIRROR_BASE_URL=${MIRROR_BASE_URL:-${BASE_URL}}
ARG NIFI_BINARY_PATH=${NIFI_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.zip}
ARG NIFI_TOOLKIT_BINARY_PATH=${NIFI_TOOLKIT_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-toolkit-${NIFI_VERSION}-bin.zip}
ENV NIFI_BASE_DIR=/opt/nifi
ENV NIFI_HOME ${NIFI_BASE_DIR}/nifi-current
ENV NIFI_TOOLKIT_HOME ${NIFI_BASE_DIR}/nifi-toolkit-current
ENV NIFI_PID_DIR=${NIFI_HOME}/run
ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
USER root
ADD sh/ ${NIFI_BASE_DIR}/scripts/
# Setup NiFi user and create necessary directories
RUN mkdir -p ${NIFI_BASE_DIR} \
&& apt-get update \
&& apt-get install -y jq xmlstarlet procps
# Download, validate, and expand Apache NiFi Toolkit binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} ${NIFI_TOOLKIT_HOME} \
&& ln -s ${NIFI_TOOLKIT_HOME} ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} \
&& chmod -R g+rwX ${NIFI_TOOLKIT_HOME}
# Download, validate, and expand Apache NiFi binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} ${NIFI_HOME} \
&& mkdir -p ${NIFI_HOME}/conf \
&& mkdir -p ${NIFI_HOME}/database_repository \
&& mkdir -p ${NIFI_HOME}/flowfile_repository \
&& mkdir -p ${NIFI_HOME}/content_repository \
&& mkdir -p ${NIFI_HOME}/provenance_repository \
&& mkdir -p ${NIFI_HOME}/state \
&& mkdir -p ${NIFI_LOG_DIR} \
&& ln -s ${NIFI_HOME} ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
&& chgrp -R 0 ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR} \
&& chmod -R g=u ${NIFI_BASE_DIR}/ \
&& chmod -R g=u /etc/passwd
#ADD bootstrap.conf ${NIFI_HOME}/conf/bootstrap.conf
# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
RUN echo "#!/bin/sh\n" > ${NIFI_HOME}/bin/nifi-env.sh
# Web HTTP(s) & Socket Site-to-Site Ports
EXPOSE 8080 8443 10000
WORKDIR ${NIFI_HOME}
USER 1001
# Apply configuration and start NiFi
#
# We need to use the exec form to avoid running our command in a subshell and omitting signals,
# thus being unable to shut down gracefully:
# https://docs.docker.com/engine/reference/builder/#entrypoint
#
# Also we need to use relative path, because the exec form does not invoke a command shell,
# thus normal shell processing does not happen:
# https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
ENTRYPOINT ["../scripts/start.sh"]
I am getting this error while running it in a local Docker container:
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
/opt/nifi/scripts/toolkit.sh: 18: /opt/nifi/scripts/toolkit.sh: cannot create //.nifi-cli.nifi.properties: Permission denied
This build is intended for OpenShift, since the default Apache NiFi user does not work on OpenShift because of permission issues, but now it also fails with a permission issue when starting locally in Docker.
I went through the same issue trying to run NiFi on OpenShift; I hope this helps you.
The steps that worked for me were:
As @JuanD shows, I added this config on OpenShift:
securityContext:
  runAsUser: 1000
On top of that, I also added:
RUN chmod -R g+rw ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR}/scripts \
&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi
Another change I made was to move the COPY of the script files so it runs before this command.
And to avoid any unnecessary issues I also added a uid-entrypoint.sh:
#!/bin/bash
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-nifi}:x:$(id -u):0:${USER_NAME:-nifi} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"
The entire Dockerfile:
ARG IMAGE_NAME=openjdk
ARG IMAGE_TAG=8-jre
FROM ${IMAGE_NAME}:${IMAGE_TAG}
ARG MAINTAINER="Apache NiFi <dev@nifi.apache.org>"
LABEL maintainer="${MAINTAINER}"
LABEL site="https://nifi.apache.org"
ARG UID=1000
ARG GID=0
ARG NIFI_VERSION=1.14.0
ARG BASE_URL=https://archive.apache.org/dist
ARG MIRROR_BASE_URL=${MIRROR_BASE_URL:-${BASE_URL}}
ARG NIFI_BINARY_PATH=${NIFI_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.zip}
ARG NIFI_TOOLKIT_BINARY_PATH=${NIFI_TOOLKIT_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-toolkit-${NIFI_VERSION}-bin.zip}
ENV NIFI_BASE_DIR=/opt/nifi
ENV NIFI_HOME ${NIFI_BASE_DIR}/nifi-current
ENV NIFI_TOOLKIT_HOME ${NIFI_BASE_DIR}/nifi-toolkit-current
ENV NIFI_PID_DIR=${NIFI_HOME}/run
ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
# Download, validate, and expand Apache NiFi Toolkit binary.
RUN mkdir -p ${NIFI_BASE_DIR} \
&& apt-get update \
&& apt-get install -y jq xmlstarlet procps \
&& curl -fSL ${MIRROR_BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} ${NIFI_TOOLKIT_HOME} \
&& ln -s ${NIFI_TOOLKIT_HOME} ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}
# Download, validate, and expand Apache NiFi binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} ${NIFI_HOME} \
&& mkdir -p ${NIFI_HOME}/conf \
&& mkdir -p ${NIFI_HOME}/database_repository \
&& mkdir -p ${NIFI_HOME}/flowfile_repository \
&& mkdir -p ${NIFI_HOME}/content_repository \
&& mkdir -p ${NIFI_HOME}/provenance_repository \
&& mkdir -p ${NIFI_HOME}/state \
&& mkdir -p ${NIFI_LOG_DIR} \
&& ln -s ${NIFI_HOME} ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}
COPY scripts/ ${NIFI_BASE_DIR}/scripts/
RUN chmod -R g+rw ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR}/scripts \
&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi
# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh
# Web HTTP(s) & Socket Site-to-Site Ports
EXPOSE 8080 8443 10000 8000
WORKDIR ${NIFI_HOME}
USER ${UID}
ENTRYPOINT [ "../scripts/uid-entrypoint.sh" ]
CMD [ "../scripts/start.sh" ]
I hope that helps.
I had a similar issue when deploying a custom NiFi container to OpenShift. Adding this to the deployment.yaml under spec helped:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
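For clarity, a minimal sketch of where that block sits in a Deployment, i.e. at the pod level under spec.template.spec; the container name and image are placeholders:
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: nifi          # placeholder name
          image: my-nifi      # placeholder image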

Connection refused when trying to connect to local mysql database from docker container

I have a problem connecting to a local MySQL database from a Docker container. I'm using docker-compose with two services in containers; the database is not in a container.
I have this docker-compose file:
version: '2'
services:
  web:
    build:
      context: ./
      dockerfile: web-dev.dockerfile
    volumes:
      - ./:/var/www
    ports:
      - "8080:80"
    links:
      - app
    network_mode: "bridge"
    dns:
      - 10.0.50.6
  app:
    build:
      context: ./
      dockerfile: app-dev.dockerfile
    volumes:
      - ./:/var/www
    network_mode: "bridge"
    dns:
      - 10.0.50.6
The web container is an nginx service with this Dockerfile:
FROM nginx:1.10
ADD ./vhost.dev.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
and this configuration file:
server {
    listen 80;
    index index.php index.html;
    root /var/www/formapp/public;
    location / {
        try_files $uri /index.php?$args;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
The app container is a PHP-FPM service with this Dockerfile:
FROM php:7-fpm
ENV USER=pasquale
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client \
openssl zip unzip git nano wget libaio-dev iputils-ping
RUN mkdir -p /opt/oracle/instantclient_10_2
# Download files from in oracle folder:
# https://agora.zanichelli.it/downloads/basic-10.2.0.5.0-linux-x64.zip
# https://agora.zanichelli.it/downloads/sdk-10.2.0.5.0-linux-x64.zip
ADD oracle/basic-10.2.0.5.0-linux-x64.zip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip
ADD oracle/sdk-10.2.0.5.0-linux-x64.zip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip
RUN unzip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& unzip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& ln -s /opt/oracle/instantclient_10_2/libclntsh.so.10.1 /opt/oracle/instantclient_10_2/libclntsh.so \
&& ln -s /opt/oracle/instantclient_10_2/libclntshcore.so.10.1 /opt/oracle/instantclient_10_2/libclntshcore.so \
&& ln -s /opt/oracle/instantclient_10_2/libocci.so.10.1 /opt/oracle/instantclient_10_2/libocci.so
ADD oracle/tns-admin/tnsnames.ora /opt/oracle/instantclient_10_2/network/admin/tnsnames.ora
ENV LD_LIBRARY_PATH /opt/oracle/instantclient_10_2/
RUN docker-php-ext-install pdo_mysql \
&& pecl install mcrypt-1.0.1 \
&& docker-php-ext-enable mcrypt \
&& pecl install mongodb \
&& docker-php-ext-enable mongodb \
&& docker-php-ext-configure pdo_oci --with-pdo-oci=instantclient,/opt/oracle/instantclient_10_2,10.2 \
&& echo 'instantclient,/opt/oracle/instantclient_10_2' | pecl install oci8 \
&& docker-php-ext-install pdo_oci \
&& docker-php-ext-enable oci8
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.default_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_connect_back=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_autostart=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_handler="dbgp"' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_port=9000' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_host=0.0.0.0' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_log=/var/www/xdebug.log' >> /usr/local/etc/php/conf.d/xdebug.ini
RUN mkdir -p /home/$USER
RUN groupadd -g 1000 $USER
RUN useradd -u 1000 -g $USER $USER -d /home/$USER
RUN chown $USER:$USER /home/$USER
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www
USER $USER
Launching the containers with docker-compose -f docker-compose.dev.yml up --build -d works fine, and the docker ps command gives me this output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7202e2862ef5 190todb_web "nginx -g 'daemon of…" 9 hours ago Up About an hour 443/tcp, 0.0.0.0:8080->80/tcp 190todb_web_1_3c628ae1c69b
d45c04d353d5 190todb_app "docker-php-entrypoi…" 9 hours ago Up About an hour 9000/tcp 190todb_app_1_dd2ac7028b87
I installed a Lumen app in my project folder formapp and then created a seeder to insert fake data into my database. From bash, if I run php artisan db:seed in /projectfolder/formapp, the seeder works and I get this output:
Seeding: UsersTableSeeder
Database seeding completed successfully.
Then I created a route to access my users table from the Lumen app:
$router->get('users', function () use ($router) {
    return User::all();
});
My Lumen .env file is this:
APP_NAME=Lumen
APP_ENV=local
APP_KEY=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
APP_DEBUG=true
APP_URL=http://localhost
APP_TIMEZONE=UTC
LOG_CHANNEL=stack
LOG_SLACK_WEBHOOK_URL=
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=form_db
DB_USERNAME=root
DB_PASSWORD=radiohead
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
JWT_SECRET=JhbGciOiJIUzI1N0eXAiOiJKV1QiLC
But if I try to connect from http://localhost:8080/users I get this Lumen error:
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`)
I tried changing DB_HOST, but I could not solve the problem:
0.0.0.0 (SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`));
172.17.0.1 (SQLSTATE[HY000] [1130] Host '172.17.0.2' is not allowed to connect to this MySQL server (SQL: select * from `users`));
172.17.0.1 is my docker0 inet address.
How can I configure my project to make this work?
PS: the Lumen app is up and running; it's only the DB connection that is not working.
You cannot use localhost when the database and the app are not on the same host: inside the app container, 127.0.0.1 refers to the container itself, not to the machine running MySQL.
You need to allow access by something like:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.17.0.2' IDENTIFIED BY '<password>';
You can replace 172.17.0.2 with the wildcard %.
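Putting the two pieces together, a minimal sketch under the assumption that the containers reach the host's MySQL through the docker0 gateway (172.17.0.1, as mentioned in the question); the GRANT uses the pre-8.0 MySQL syntax from the answer above, and the database/credentials are the ones from the question's .env:
# On the host, allow connections from the Docker bridge (here via the '%' wildcard):
mysql -u root -p -e "GRANT ALL PRIVILEGES ON form_db.* TO 'root'@'%' IDENTIFIED BY 'radiohead'; FLUSH PRIVILEGES;"
# Also make sure mysqld's bind-address is not restricted to 127.0.0.1.
# In the Lumen .env, point DB_HOST at the docker0 gateway instead of 127.0.0.1:
#   DB_HOST=172.17.0.1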

How to clean up temporary files after exec-ing inside entrypoint script?

I am trying to write my own MariaDB Docker image. I want to execute some SQL statements just after the container starts (after exec mysqld). I found the mysqld --init-file option useful for my case, so my entrypoint script looks something like the one below.
Dockerfile
FROM alpine:edge
RUN set -ex \
&& apk add mariadb mariadb-client \
&& mkdir -p /run/mysqld \
&& chown -R mysql:mysql /run/mysqld \
&& ln -snf /usr/lib/mariadb /usr/lib/mysql \
&& mysql_install_db --user=mysql --skip-name-resolve --auth-root-authentication-method=socket --auth-root-socket-user=root --force --rpm --skip-test-db
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
entrypoint.sh
#!/bin/sh
set -ex
{
  echo "CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';"
  echo "CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};"
  echo "GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%';"
} > /tmp/mysqld-init.sql
exec "$@" --init-file="/tmp/mysqld-init.sql"
As you can see, the temporary init file contains some sensitive information. I want to clean it up after exec "$@" --init-file="/tmp/mysqld-init.sql" runs.
Two ideas came to mind. One is to create a named pipe (FIFO) for the temporary SQL; the other is to use the trap command.
Idea-1
The problem here is that an unnecessary child background process keeps running in the container because I used the process control operator &, and I can't figure out how to make that process exit.
if [ ! -p "/tmp/mysqld.init" ]; then
  mkfifo /tmp/mysqld.init
fi
{
  echo "CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';"
  echo "CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};"
  echo "GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%';"
} > /tmp/mysqld.init &
exec "$@" --init-file="/tmp/mysqld.init"
Idea-2
Use the trap command and clean up the temporary file when the exec command runs. But I don't know how to catch the "exec signal".
trap cleanup "the exec signal"
cleanup()
{
  echo "Caught Signal ... cleaning up."
  rm -rf /tmp/mysqld-init.sql
  echo "Done cleanup ... quitting."
  exit 1
}
set -ex
{
  echo "CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';"
  echo "CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};"
  echo "GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%';"
} > /tmp/mysqld-init.sql
exec "$@" --init-file="/tmp/mysqld-init.sql"
Use tini to solve the signal and zombie-process issue.
FROM alpine:edge
RUN set -ex \
&& apk add --no-cache mariadb mariadb-client tini \
&& mkdir -p /run/mysqld \
&& chown -R mysql:mysql /run/mysqld \
&& ln -snf /usr/lib/mariadb /usr/lib/mysql \
&& mysql_install_db --user=mysql --skip-name-resolve --auth-root-authentication-method=socket --auth-root-socket-user=root --force --rpm --skip-test-db
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
entrypoint.sh
#!/bin/sh
if [ ! -p "/tmp/mysqld.init" ]; then
  mkfifo /tmp/mysqld.init
fi
{
  echo "CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';"
  echo "CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};"
  echo "GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%';"
} > /tmp/mysqld.init &
exec tini -g -- "$@" --init-file="/tmp/mysqld.init"
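For completeness, a rough sketch of how this image might be built and run; the image name and credential values are placeholders, not part of the original answer:
# Hypothetical build/run of the image above; the entrypoint reads these variables at startup.
docker build -t my-mariadb .
docker run -d \
  -e MYSQL_USER=appuser \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DATABASE=appdb \
  -p 3306:3306 \
  my-mariadb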
I think trap is the best solution for this:
#!/bin/bash
function interrupt(){
  local dir=$1
  [ -e ${dir} ] && rm -rf ${dir}
  exit 128
}
TMP_DIR=$(mktemp -d /tmp/entrypoint.XXXX)
trap "interrupt ${TMP_DIR}" SIGINT SIGTERM
trap "rm -rf ${TMP_DIR}" EXIT
set -ex
{
  echo "CREATE USER IF NOT EXISTS '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';"
  echo "CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};"
  echo "GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%';"
} > ${TMP_DIR}/mysqld-init.sql
exec "$@" --init-file="${TMP_DIR}/mysqld-init.sql"

How to add `--registry-mirror` when starting docker from "Docker quickstart terminal"?

From the docker distribution document: https://github.com/docker/distribution
It says that to configure Docker to use the mirror, we should:
Configuring the Docker daemon
You will need to pass the --registry-mirror option to your Docker daemon on startup:
docker --registry-mirror=https://<my-docker-mirror-host> daemon
I'm a newbie to Docker, and I normally start Docker on my Mac with the provided "Docker Quickstart Terminal" app, which actually invokes a start.sh shell script:
#!/bin/bash
VM=default
DOCKER_MACHINE=/usr/local/bin/docker-machine
VBOXMANAGE=/Applications/VirtualBox.app/Contents/MacOS/VBoxManage
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m'
unset DYLD_LIBRARY_PATH
unset LD_LIBRARY_PATH
clear
if [ ! -f $DOCKER_MACHINE ] || [ ! -f $VBOXMANAGE ]; then
  echo "Either VirtualBox or Docker Machine are not installed. Please re-run the Toolbox Installer and try again."
  exit 1
fi
$VBOXMANAGE showvminfo $VM &> /dev/null
VM_EXISTS_CODE=$?
if [ $VM_EXISTS_CODE -eq 1 ]; then
  echo "Creating Machine $VM..."
  $DOCKER_MACHINE rm -f $VM &> /dev/null
  rm -rf ~/.docker/machine/machines/$VM
  $DOCKER_MACHINE create -d virtualbox --virtualbox-memory 2048 --virtualbox-disk-size 204800 $VM
else
  echo "Machine $VM already exists in VirtualBox."
fi
VM_STATUS=$($DOCKER_MACHINE status $VM)
if [ "$VM_STATUS" != "Running" ]; then
  echo "Starting machine $VM..."
  $DOCKER_MACHINE start $VM
  yes | $DOCKER_MACHINE regenerate-certs $VM
fi
echo "Setting environment variables for machine $VM..."
clear
cat << EOF
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
EOF
echo -e "${BLUE}docker${NC} is configured to use the ${GREEN}$VM${NC} machine with IP ${GREEN}$($DOCKER_MACHINE ip $VM)${NC}"
echo "For help getting started, check out the docs at https://docs.docker.com"
echo
eval $($DOCKER_MACHINE env $VM --shell=bash)
USER_SHELL=$(dscl /Search -read /Users/$USER UserShell | awk '{print $2}' | head -n 1)
if [[ $USER_SHELL == *"/bash"* ]] || [[ $USER_SHELL == *"/zsh"* ]] || [[ $USER_SHELL == *"/sh"* ]]; then
  $USER_SHELL --login
else
  $USER_SHELL
fi
Is this the correct file to put my --registry-mirror config into? What should I do?
If you do a docker-machine create --help:
docker-machine create --help
Usage: docker-machine create [OPTIONS] [arg...]
Create a machine.
Run 'docker-machine create --driver name' to include the create flags for that driver in the help text.
Options:
...
--engine-insecure-registry [--engine-insecure-registry option --engine-insecure-registry option] Specify insecure registries to allow with the created engine
--engine-registry-mirror [--engine-registry-mirror option --engine-registry-mirror option] Specify registry mirrors to use
So you can modify your script to add one more parameter:
--engine-registry-mirror=...
However, since your 'default' docker-machine probably already exists (check with docker-machine ls), you might need to remove it first (docker-machine rm default): make sure you can easily recreate your images from your local Dockerfiles, and/or that you don't have a data container whose data would need to be saved first.
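A minimal sketch of that recreate step, with the mirror URL left as a placeholder:
# Hypothetical: recreate the 'default' machine with a registry mirror baked into its engine.
docker-machine rm default
docker-machine create -d virtualbox \
  --virtualbox-memory 2048 \
  --engine-registry-mirror=https://<my-docker-mirror-host> \
  default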
Open C:\Users\<YourName>\.docker\daemon.json, edit the "registry-mirrors" entry in that file.
{"registry-mirrors":["https://registry.docker-cn.com"],"insecure-registries":[], "debug":true, "experimental": true}

Issues installing IDAS on CentOS 7 VM through provided RPMs

I've been trying to install the IDAS GE in a CentOS 7 VM on my machine through the UL2.0 RPMs (download link) available on its catalogue page.
I followed the instructions on GitHub, but I get stuck starting the IoT Agent as per section 3 of the Deployment section of the instructions. If I execute init_iotagent.sh, where I inserted the local IP of the VM, I get the error:
log4cplus:ERROR No appenders could be found for logger (main).
log4cplus:ERROR Please initialize the log4cplus system properly.
HTTPFilter DESTRUCTOR 0
HTTPFilter DESTRUCTOR 0
Also, in the instructions for Starting IoTAgent as a Service, it's said that:
After installing iot-agent-base RPM an init.d script can be found in
this folder /usr/local/iot/init.d .
But this file is not there, leading me to believe that the IoTAgent wasn't installed properly from the RPMs provided.
Also, I can't find log files regarding IoTAgent, only the MongoDB has its log file at /usr/local/iot/mongodb-linux-x86_64-2.6.9/log/mongoc.log.
If anyone could help, it would be appreciated. Also, if more info is needed, please let me know.
Thank you
I recommend cloning the GitHub repository, building the RPMs from source, and then installing them on your CentOS, as explained in the documentation:
NOTE: I changed the BUILD_TYPE to Release, so I created the Release dir.
GIT_VERSION and GIT_COMMIT are not the latest ones.
git clone https://github.com/telefonicaid/fiware-IoTAgent-Cplusplus.git
cd fiware....
mkdir -p build/Release
cd build/Release
cmake -DGIT_VERSION=20527 -DGIT_COMMIT=217023407f25ed258043cfc00a46b6c05fb0b52c -DMQTT=ON -DCMAKE_BUILD_TYPE=Release ../../
make install
make package
The packages will be in pack/Linux/RPM/
rpm -i iot-agent-base-xxxxxxx (xxxxxxx will be the numbers of the build)
rpm -i iot-agent-ul-xxxxxx (xxxxxxx will be the numbers of the build)
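To double-check that the packages really landed, generic RPM queries (not part of the original instructions) can be used:
rpm -qa | grep iot-agent
ls /usr/local/iot/init.d/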
Once installed from the RPMs, the init.d file is at /usr/local/iot/init.d/iotagent.
This is the content of the file:
#!/bin/bash
# Copyright 2015 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-IoTagent-Cplusplus (FI-WARE project).
#
# iotagent Start/Stop iotagent
#
# chkconfig: 2345 99 60
# description: iotagent
. /etc/rc.d/init.d/functions
PARAM=$1
INSTANCE=$2
USERNAME=iotagent
EXECUTABLE=/usr/local/iot/bin/iotagent
CONFIG_PATH=/usr/local/iot/config
iotagent_start()
{
    local result=0
    local instance=${1}
    if [[ ! -x ${EXECUTABLE} ]]; then
        printf "%s\n" "Fail - missing ${EXECUTABLE} executable"
        exit 1
    fi
    if [[ -z ${instance} ]]; then
        list_instances="${CONFIG_PATH}/iotagent_*.conf"
    else
        list_instances="${CONFIG_PATH}/iotagent_${instance}.conf"
    fi
    for instance_config in ${list_instances}
    do
        local NAME
        NAME=${instance_config%.conf}
        NAME=${NAME#*iotagent_}
        source ${instance_config}
        local IOTAGENT_PID_FILE="/var/run/iot/iotagent_${NAME}.pid"
        printf "Starting iotagent ${NAME}..."
        status -p ${IOTAGENT_PID_FILE} ${EXECUTABLE} &> /dev/null
        if [[ ${?} -eq 0 ]]; then
            printf "%s\n" " Already running, skipping $(success)"
            continue
        fi
        # Load the environment
        set -a
        source ${instance_config}
        # Mandatory parameters
        IOTAGENT_OPTS=" ${IS_MANAGER} \
            -n ${IOTAGENT_SERVER_NAME} \
            -v ${IOTAGENT_LOG_LEVEL} \
            -i ${IOTAGENT_SERVER_ADDRESS} \
            -p ${IOTAGENT_SERVER_PORT} \
            -d ${IOTAGENT_LIBRARY_DIR} \
            -c ${IOTAGENT_CONFIG_FILE}"
        su ${USERNAME} -c "LD_LIBRARY_PATH=\"${IOTAGENT_LIBRARY_DIR}\" \
            ${EXECUTABLE} ${IOTAGENT_OPTS} & echo \$! > ${IOTAGENT_PID_FILE}" &> /dev/null
        sleep 2 # wait some time to leave iotagent start
        local PID=$(cat ${IOTAGENT_PID_FILE})
        local var_pid=$(ps -ef | grep ${PID} | grep -v grep)
        if [[ -z "${var_pid}" ]]; then
            printf "%s" "pidfile not found"
            printf "%s\n" "$(failure)"
            exit 1
        else
            printf "%s\n" "$(success)"
        fi
    done
    return ${result}
}
iotagent_stop()
{
    local result=0
    local iotagent_instance=${1}
    if [[ -z ${iotagent_instance} ]]; then
        list_run_instances="/var/run/iot/iotagent_*.pid"
    else
        list_run_instances="/var/run/iot/iotagent_${iotagent_instance}.pid"
    fi
    if [[ $(ls -l ${list_run_instances} 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of IoTAgent ${iotagent_instance} running $(success)"
        return 0
    fi
    for run_instance in ${list_run_instances}
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*iotagent_}
        printf "%s" "Stopping IoTAgent ${NAME}..."
        local RUN_PID=$(cat ${run_instance})
        kill ${RUN_PID} &> /dev/null
        local KILLED_PID=$(ps -ef | grep ${RUN_PID} | grep -v grep | awk '{print $2}')
        if [[ -z ${KILLED_PID} ]]; then
            printf "%s\n" "$(success)"
        else
            printf "%s\n" "$(failure)"
            result=$((${result}+1))
        fi
        rm -f ${run_instance} &> /dev/null
    done
    return ${result}
}
iotagent_status()
{
    local result=0
    local iotagent_instance=${1}
    if [[ -z ${iotagent_instance} ]]; then
        list_run_instances="/var/run/iot/iotagent_*.pid"
    else
        list_run_instances="/var/run/iot/iotagent_${iotagent_instance}.pid"
    fi
    if [[ $(ls -l ${list_run_instances} 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of IoTAgent ${iotagent_instance} running."
        return 1
    fi
    for run_instance in ${list_run_instances}
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*iotagent_}
        printf "%s\n" "IoTAgent ${NAME} status..."
        status -p ${run_instance} ${NODE_EXEC}
        result=$((${result}+${?}))
    done
    return ${result}
}
case ${PARAM} in
    'start')
        iotagent_start ${INSTANCE}
        ;;
    'stop')
        iotagent_stop ${INSTANCE}
        ;;
    'restart')
        iotagent_stop ${INSTANCE}
        iotagent_start ${INSTANCE}
        ;;
    'status')
        iotagent_status ${INSTANCE}
        ;;
esac
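Given the case block at the end, the script behaves like a standard SysV init script and takes an optional instance name as a second argument, for example:
/usr/local/iot/init.d/iotagent start
/usr/local/iot/init.d/iotagent status
/usr/local/iot/init.d/iotagent stop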
And the log files are in /tmp/:
IoTAgent-IoTPlatform.log
IoTAgent.log
IoTAgent-Manager.log
Hope this helps you.