Sawtooth transaction not committed - hyperledger-sawtooth

I want to set up sawtooth with multiple validators and PoET engines.
Initially I am trying to set it up with just one validator, one PoET engine, one PoET validator registry, and the intkey transaction processor (set up using the NodeJS SDK). When I launch the network and connect the TP, it registers. But when I submit a transaction to the REST API, the status URL in the response reports the batch as PENDING; the validator logs say block validation passed, but the block is never created. Here is my docker-compose file:
version: "2.1"

volumes:
  poet-shared:

services:

  shell:
    image: hyperledger/sawtooth-all:1.1
    container_name: sawtooth-shell-default
    entrypoint: "bash -c \"\
        sawtooth keygen && \
        tail -f /dev/null \
        \""

  validator-0:
    image: hyperledger/sawtooth-validator:1.1
    container_name: sawtooth-validator-default-0
    expose:
      - 4004
      - 5050
      - 8800
    ports:
      - "4004:4004"
    volumes:
      - poet-shared:/poet-shared
    command: "bash -c \"\
        sawadm keygen --force && \
        mkdir -p /poet-shared/validator-0 || true && \
        cp -a /etc/sawtooth/keys /poet-shared/validator-0/ && \
        while [ ! -f /poet-shared/poet-enclave-measurement ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet-enclave-basename ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet.batch ]; do sleep 1; done && \
        cp /poet-shared/poet.batch / && \
        sawset genesis \
            -k /etc/sawtooth/keys/validator.priv \
            -o config-genesis.batch && \
        sawset proposal create \
            -k /etc/sawtooth/keys/validator.priv \
            sawtooth.consensus.algorithm=poet \
            sawtooth.poet.report_public_key_pem=\
            \\\"$$(cat /poet-shared/simulator_rk_pub.pem)\\\" \
            sawtooth.poet.valid_enclave_measurements=$$(cat /poet-shared/poet-enclave-measurement) \
            sawtooth.poet.valid_enclave_basenames=$$(cat /poet-shared/poet-enclave-basename) \
            -o config.batch && \
        sawset proposal create \
            -k /etc/sawtooth/keys/validator.priv \
            sawtooth.poet.target_wait_time=5 \
            sawtooth.poet.initial_wait_time=25 \
            sawtooth.publisher.max_batches_per_block=100 \
            -o poet-settings.batch && \
        sawadm genesis \
            config-genesis.batch config.batch poet.batch poet-settings.batch && \
        sawtooth-validator -v \
            --bind network:tcp://eth0:8800 \
            --bind component:tcp://eth0:4004 \
            --bind consensus:tcp://eth0:5050 \
            --peering dynamic \
            --endpoint tcp://validator-0:8800 \
            --scheduler serial \
            --network-auth trust
        \""
    environment:
      PYTHONPATH: "/project/sawtooth-core/consensus/poet/common:\
        /project/sawtooth-core/consensus/poet/simulator:\
        /project/sawtooth-core/consensus/poet/core"
    stop_signal: SIGKILL

  rest-api-0:
    image: hyperledger/sawtooth-rest-api:1.1
    container_name: sawtooth-rest-api-default-0
    expose:
      - 8008
    ports:
      - "8008:8008"
    command: |
      bash -c "
        sawtooth-rest-api \
          --connect tcp://validator-0:4004 \
          --bind rest-api-0:8008
      "
    stop_signal: SIGKILL

  settings-tp-0:
    image: hyperledger/sawtooth-settings-tp:1.1
    container_name: sawtooth-settings-tp-default-0
    expose:
      - 4004
    command: settings-tp -C tcp://validator-0:4004
    stop_signal: SIGKILL

  poet-engine-0:
    image: hyperledger/sawtooth-poet-engine:1.1
    container_name: sawtooth-poet-engine-0
    volumes:
      - poet-shared:/poet-shared
    command: "bash -c \"\
        if [ ! -f /poet-shared/poet-enclave-measurement ]; then \
            poet enclave measurement >> /poet-shared/poet-enclave-measurement; \
        fi && \
        if [ ! -f /poet-shared/poet-enclave-basename ]; then \
            poet enclave basename >> /poet-shared/poet-enclave-basename; \
        fi && \
        if [ ! -f /poet-shared/simulator_rk_pub.pem ]; then \
            cp /etc/sawtooth/simulator_rk_pub.pem /poet-shared; \
        fi && \
        while [ ! -f /poet-shared/validator-0/keys/validator.priv ]; do sleep 1; done && \
        cp -a /poet-shared/validator-0/keys /etc/sawtooth && \
        poet registration create -k /etc/sawtooth/keys/validator.priv -o /poet-shared/poet.batch && \
        poet-engine -C tcp://validator-0:5050 --component tcp://validator-0:4004 \
        \""

  poet-validator-registry-tp-0:
    image: hyperledger/sawtooth-poet-validator-registry-tp:1.1
    container_name: sawtooth-poet-validator-registry-tp-0
    expose:
      - 4004
    command: poet-validator-registry-tp -C tcp://validator-0:4004
    environment:
      PYTHONPATH: /project/sawtooth-core/consensus/poet/common
    stop_signal: SIGKILL
I found some errors here, but how do I fix them?
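To confirm that nothing is actually being committed, the batch status and block list can be checked from the host; a quick sketch (assuming the REST API published on localhost:8008 as in the compose file above, and a hypothetical BATCH_ID taken from the submit response):
# Poll the status of the submitted batch (PENDING/COMMITTED/INVALID).
curl "http://localhost:8008/batch_statuses?id=${BATCH_ID}"
# List the blocks the validator has actually published.
sawtooth block list --url http://localhost:8008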

You will need a minimum of two transaction processors per node to make PoET work.
I have the same type of setup, but I am running the TPs in Kubernetes and have three replica sets per node in the network.
Also, look at validator.toml and make sure the network bind is 0.0.0.0. I have tried binding it to a specific NIC IP and it does not work; only 0.0.0.0 (all IPs) seems to work fine. A corrected invocation is sketched below.
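A minimal sketch of the validator invocation, reusing the flags from the compose file above but binding to all interfaces instead of eth0:
sawtooth-validator -v \
  --bind network:tcp://0.0.0.0:8800 \
  --bind component:tcp://0.0.0.0:4004 \
  --bind consensus:tcp://0.0.0.0:5050 \
  --peering dynamic \
  --endpoint tcp://validator-0:8800 \
  --scheduler serial \
  --network-auth trust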

Related

Error "unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory" when deploying to elastic beanstalk through bitbucket

I am trying to upload to elastic beanstalk with bitbucket and I am using the following yml file:
image: atlassian/default-image:2
pipelines:
  branches:
    development:
      - step:
          name: "Install Server"
          image: node:10.19.0
          caches:
            - node
          script:
            - npm install
      - step:
          name: "Install and Build Client"
          image: node:14.17.3
          caches:
            - node
          script:
            - cd ./client && npm install
            - npm run build
      - step:
          name: "Build zip"
          script:
            - cd ./client
            - shopt -s extglob
            - rm -rf !(build)
            - ls
            - cd ..
            - apt-get update && apt-get install -y zip
            - zip -r application.zip . -x "node_modules/**"
      - step:
          name: "Deployment to Development"
          deployment: staging
          script:
            - ls
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
                APPLICATION_NAME: $APPLICATION_NAME
                ENVIRONMENT_NAME: $ENVIRONMENT_NAME
                ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment and I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.
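One thing that may be worth checking (an assumption, not something the pipe documents): the docker CLI only resolves ~/.docker/ca.pem when TLS options are active, so printing the endpoint- and TLS-related variables visible to the failing step can narrow this down:
# Show which docker endpoint/TLS settings the step inherits.
env | grep -E 'DOCKER_(HOST|TLS_VERIFY|CERT_PATH)'
docker version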

Docker cannot execute mysqld

I have a Laravel app that was running fine up until a day ago.
Now, any time I run a "sail up" command, MySQL will start and then immediately shut down with the following error:
[Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server
/entrypoint.sh: line 57: /usr/sbin/mysqld: cannot execute binary file: Exec format error
[Entrypoint] ERROR: Unable to start MySQL. Please check your configuration.
I'm not sure if this has anything to do with it, or if it's just a coincidence, but the issue seems to have started when I ran another "sail up" command on a different Laravel app.
I've tried the following (see also the architecture check sketched after this list):
Tried to replicate in a fresh Laravel app. The same result occurs.
Tried to start the entrypoint with the -l and -c options. The same result occurs.
Tried to start the application with mysql server 5.7. The same result occurs.
Tried forwarding to another port 3307, instead of 3306. The same result occurs.
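"Exec format error" for mysqld usually means the image architecture does not match the host CPU (for example, an amd64-only mysql/mysql-server image on an Apple Silicon machine). A sketch of how to check, not taken from the question:
# Compare the image's platform with the host's.
docker image inspect --format '{{.Os}}/{{.Architecture}}' mysql/mysql-server:8.0
uname -m
# If they differ, pulling an explicit (emulated) platform is one workaround:
docker pull --platform linux/amd64 mysql/mysql-server:8.0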
Both my docker-compose.yml and Dockerfile are pretty standard, with no changes from what Laravel provides:
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
    laravel.test:
        build:
            context: ./docker/8.1
            dockerfile: Dockerfile
            args:
                WWWGROUP: '${WWWGROUP}'
        image: sail-8.1/app
        extra_hosts:
            - 'host.docker.internal:host-gateway'
        ports:
            - '${APP_PORT:-80}:80'
            - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
        environment:
            WWWUSER: '${WWWUSER}'
            LARAVEL_SAIL: 1
            XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
            XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
        volumes:
            - '.:/var/www/html'
        networks:
            - sail
        depends_on:
            - mysql
            - redis
            - meilisearch
            - mailhog
            - selenium
    mysql:
        image: 'mysql/mysql-server:8.0'
        ports:
            - '${FORWARD_DB_PORT:-3306}:3306'
        environment:
            MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ROOT_HOST: "%"
            MYSQL_DATABASE: '${DB_DATABASE}'
            MYSQL_USER: '${DB_USERNAME}'
            MYSQL_PASSWORD: '${DB_PASSWORD}'
            MYSQL_ALLOW_EMPTY_PASSWORD: 1
        volumes:
            - 'sail-mysql:/var/lib/mysql'
            - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
        networks:
            - sail
        healthcheck:
            test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
            retries: 3
            timeout: 5s
    redis:
        image: 'redis:alpine'
        ports:
            - '${FORWARD_REDIS_PORT:-6379}:6379'
        volumes:
            - 'sail-redis:/data'
        networks:
            - sail
        healthcheck:
            test: ["CMD", "redis-cli", "ping"]
            retries: 3
            timeout: 5s
    meilisearch:
        image: 'getmeili/meilisearch:latest'
        ports:
            - '${FORWARD_MEILISEARCH_PORT:-7700}:7700'
        volumes:
            - 'sail-meilisearch:/meili_data'
        networks:
            - sail
        healthcheck:
            test: ["CMD", "wget", "--no-verbose", "--spider", "http://localhost:7700/health"]
            retries: 3
            timeout: 5s
    mailhog:
        image: 'mailhog/mailhog:latest'
        ports:
            - '${FORWARD_MAILHOG_PORT:-1025}:1025'
            - '${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025'
        networks:
            - sail
    selenium:
        image: 'selenium/standalone-chrome'
        volumes:
            - '/dev/shm:/dev/shm'
        networks:
            - sail
networks:
    sail:
        driver: bridge
volumes:
    sail-mysql:
        driver: local
    sail-redis:
        driver: local
    sail-meilisearch:
        driver: local
Dockerfile
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]
I'm not really sure what the issue is. Anyone have any suggestions?
Thanks.

NiFi image not running on OpenShift

While trying to run the Apache NiFi image from Docker Hub on OpenShift, I get a permission issue: the image runs as the nifi user, which is not allowed by OpenShift. So I built an image using the Dockerfile below, but now I am not even able to run the built image in my local Docker container.
FROM openjdk:8-jre
ARG NIFI_VERSION=1.12.1
ARG BASE_URL=https://archive.apache.org/dist
ARG MIRROR_BASE_URL=${MIRROR_BASE_URL:-${BASE_URL}}
ARG NIFI_BINARY_PATH=${NIFI_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.zip}
ARG NIFI_TOOLKIT_BINARY_PATH=${NIFI_TOOLKIT_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-toolkit-${NIFI_VERSION}-bin.zip}
ENV NIFI_BASE_DIR=/opt/nifi
ENV NIFI_HOME ${NIFI_BASE_DIR}/nifi-current
ENV NIFI_TOOLKIT_HOME ${NIFI_BASE_DIR}/nifi-toolkit-current
ENV NIFI_PID_DIR=${NIFI_HOME}/run
ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
USER root
ADD sh/ ${NIFI_BASE_DIR}/scripts/
# Setup NiFi user and create necessary directories
RUN mkdir -p ${NIFI_BASE_DIR} \
&& apt-get update \
&& apt-get install -y jq xmlstarlet procps
# Download, validate, and expand Apache NiFi Toolkit binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} ${NIFI_TOOLKIT_HOME} \
&& ln -s ${NIFI_TOOLKIT_HOME} ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} \
&& chmod -R g+rwX ${NIFI_TOOLKIT_HOME}
# Download, validate, and expand Apache NiFi binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} ${NIFI_HOME} \
&& mkdir -p ${NIFI_HOME}/conf \
&& mkdir -p ${NIFI_HOME}/database_repository \
&& mkdir -p ${NIFI_HOME}/flowfile_repository \
&& mkdir -p ${NIFI_HOME}/content_repository \
&& mkdir -p ${NIFI_HOME}/provenance_repository \
&& mkdir -p ${NIFI_HOME}/state \
&& mkdir -p ${NIFI_LOG_DIR} \
&& ln -s ${NIFI_HOME} ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \
&& chgrp -R 0 ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR} \
&& chmod -R g=u ${NIFI_BASE_DIR}/ \
&& chmod -R g=u /etc/passwd
#ADD bootstrap.conf ${NIFI_HOME}/conf/bootstrap.conf
# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
RUN echo "#!/bin/sh\n" > ${NIFI_HOME}/bin/nifi-env.sh
# Web HTTP(s) & Socket Site-to-Site Ports
EXPOSE 8080 8443 10000
WORKDIR ${NIFI_HOME}
USER 1001
# Apply configuration and start NiFi
#
# We need to use the exec form to avoid running our command in a subshell and omitting signals,
# thus being unable to shut down gracefully:
# https://docs.docker.com/engine/reference/builder/#entrypoint
#
# Also we need to use relative path, because the exec form does not invoke a command shell,
# thus normal shell processing does not happen:
# https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
ENTRYPOINT ["../scripts/start.sh"]
I'm getting this error while running the image in a local Docker container:
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
replacing target file /opt/nifi/nifi-current/conf/nifi.properties
/opt/nifi/scripts/toolkit.sh: 18: /opt/nifi/scripts/toolkit.sh: cannot create //.nifi-cli.nifi.properties: Permission denied
This build is for OpenShift, as the stock Apache NiFi user does not work on OpenShift; and the rebuilt image is now giving the permission issue above when started in local Docker.
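The leading "//" in the failing path suggests $HOME resolves to "/" for a UID that has no /etc/passwd entry, which is exactly what happens under OpenShift's arbitrary UIDs. A local reproduction sketch (custom-nifi:1.12.1 is a hypothetical tag for the image built above):
# Run as an arbitrary high UID with GID 0, as OpenShift's restricted SCC does.
docker run --rm -u 1000530000:0 custom-nifi:1.12.1 \
  sh -c 'echo "HOME=$HOME"; touch "$HOME/.nifi-cli.nifi.properties"'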
I ran into the same issue trying to run NiFi on OpenShift; I hope this helps you.
These are the steps that worked for me:
As @JuanD shows, I added this config on OpenShift:
securityContext:
  runAsUser: 1000
On top of that, I also added this to the Dockerfile:
RUN chmod -R g+rw ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR}/scripts \
&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi
I also rearranged the Dockerfile so that the script files are copied in before this command runs.
And in order to avoid any unnecessary issues, I also added a uid-entrypoint.sh:
#!/bin/bash
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${USER_NAME:-nifi}:x:$(id -u):0:${USER_NAME:-nifi} user:${HOME}:/sbin/nologin" >> /etc/passwd
  fi
fi
exec "$@"
The entire Dockerfile:
ARG IMAGE_NAME=openjdk
ARG IMAGE_TAG=8-jre
FROM ${IMAGE_NAME}:${IMAGE_TAG}
ARG MAINTAINER="Apache NiFi <dev@nifi.apache.org>"
LABEL maintainer="${MAINTAINER}"
LABEL site="https://nifi.apache.org"
ARG UID=1000
ARG GID=0
ARG NIFI_VERSION=1.14.0
ARG BASE_URL=https://archive.apache.org/dist
ARG MIRROR_BASE_URL=${MIRROR_BASE_URL:-${BASE_URL}}
ARG NIFI_BINARY_PATH=${NIFI_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.zip}
ARG NIFI_TOOLKIT_BINARY_PATH=${NIFI_TOOLKIT_BINARY_PATH:-/nifi/${NIFI_VERSION}/nifi-toolkit-${NIFI_VERSION}-bin.zip}
ENV NIFI_BASE_DIR=/opt/nifi
ENV NIFI_HOME ${NIFI_BASE_DIR}/nifi-current
ENV NIFI_TOOLKIT_HOME ${NIFI_BASE_DIR}/nifi-toolkit-current
ENV NIFI_PID_DIR=${NIFI_HOME}/run
ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
# Download, validate, and expand Apache NiFi Toolkit binary.
RUN mkdir -p ${NIFI_BASE_DIR} \
&& apt-get update \
&& apt-get install -y jq xmlstarlet procps \
&& curl -fSL ${MIRROR_BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_TOOLKIT_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} ${NIFI_TOOLKIT_HOME} \
&& ln -s ${NIFI_TOOLKIT_HOME} ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}
# Download, validate, and expand Apache NiFi binary.
RUN curl -fSL ${MIRROR_BASE_URL}/${NIFI_BINARY_PATH} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& echo "$(curl ${BASE_URL}/${NIFI_BINARY_PATH}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip" | sha256sum -c - \
&& unzip ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
&& rm ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
&& mv ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} ${NIFI_HOME} \
&& mkdir -p ${NIFI_HOME}/conf \
&& mkdir -p ${NIFI_HOME}/database_repository \
&& mkdir -p ${NIFI_HOME}/flowfile_repository \
&& mkdir -p ${NIFI_HOME}/content_repository \
&& mkdir -p ${NIFI_HOME}/provenance_repository \
&& mkdir -p ${NIFI_HOME}/state \
&& mkdir -p ${NIFI_LOG_DIR} \
&& ln -s ${NIFI_HOME} ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}
COPY scripts/ ${NIFI_BASE_DIR}/scripts/
RUN chmod -R g+rw ${NIFI_BASE_DIR} \
&& chmod -R g+rwX ${NIFI_BASE_DIR}/scripts \
&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi
# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh
# Web HTTP(s) & Socket Site-to-Site Ports
EXPOSE 8080 8443 10000 8000
WORKDIR ${NIFI_HOME}
USER ${UID}
ENTRYPOINT [ "../scripts/uid-entrypoint.sh" ]
CMD [ "../scripts/start.sh" ]
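A quick smoke test of the fixed image under the same conditions (the tag is an assumption):
# Build and verify NiFi starts when the runtime UID is not in /etc/passwd.
docker build -t custom-nifi:1.14.0 .
docker run --rm -u 1000530000:0 -p 8080:8080 custom-nifi:1.14.0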
I hope that helps.
I had a similar issue when deploying a custom NiFi container to OpenShift. Adding this to the deployment.yaml under spec: helped:
securityContext:
  runAsUser: 1000
  fsGroup: 1000
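For reference, the same securityContext can also be applied to an existing deployment with a strategic-merge patch (the deployment name nifi is an assumption):
# Patch the pod template's securityContext in place.
kubectl patch deployment nifi -p \
  '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1000,"fsGroup":1000}}}}}'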

gcloud beta lifesciences, JSON pipeline-file rather than options

I am trying to run gcloud beta lifesciences because the Genomics API is deprecated.
There have been many changes between the Genomics API and the Life Sciences API.
I ran one of my analysis steps in Google Cloud using beta lifesciences.
Here is what I found:
(1) wildcards do not work in the command-line options; (2) it is not easy to set the target directory through the command-line options, so I used environment variables for the copy.
I am now trying to convert the command-line options into a JSON-format pipeline file, but the help pages in Google Cloud are not easy to follow. Do you have an idea how to convert the following options into a JSON file, so I could run the pipeline with simpler options?
I used a YAML-formatted pipeline file with the Genomics API, but beta lifesciences is totally different.
$ more step03_bwa_mem_genome1.run
#SMALL=
SMALL=chr21.
LIFESCIENCESPATH=/gcloud-shared
#LIFESCIENCESPATH=/mnt
SCRIPTFILENAME=step03_bwa_mem_genome.sh
COHORTID=2_C_222
gcloud beta lifesciences pipelines run \
--logging gs://${BUCKETID}/ExomeSeq/hResults/step03_bwa_mem_genome.${COHORTID}.log \
--regions=asia-northeast1,asia-northeast2,asia-northeast3,asia-east1,asia-east2,asia-south1 \
--boot-disk-size 20 \
--preemptible \
--machine-type n1-standard-1 \
--disk-size "gcloud-shared:10" \
--docker-image asia.gcr.io/thermal-shuttle-199104/centos8-essential-software-genomics-custom-python3:0.4 \
--inputs REFERENCE1=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.amb \
--inputs REFERENCE2=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.ann \
--inputs REFERENCE3=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.bwt \
--inputs REFERENCE4=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.fai \
--inputs REFERENCE5=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.intervals \
--inputs REFERENCE6=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.pac \
--inputs REFERENCE7=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.sa \
--inputs SCRIPTFILE=gs://${BUCKETID}/ExomeSeq/${SCRIPTFILENAME} \
--inputs COHORTID=${COHORTID} \
--inputs SAMPLELIST=gs://${BUCKETID}/ExomeSeq/SAMPLELIST.${COHORTID}.lst \
--inputs INPUTFILE1=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_01_1.chr21.fastq.gz \
--inputs INPUTFILE2=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_01_2.chr21.fastq.gz \
--inputs INPUTFILE3=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_02_1.chr21.fastq.gz \
--inputs INPUTFILE4=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_02_2.chr21.fastq.gz \
--inputs INPUTFILE5=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_03_1.chr21.fastq.gz \
--inputs INPUTFILE6=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_03_2.chr21.fastq.gz \
--outputs OUTPUTFILE1=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_01.bam \
--outputs OUTPUTFILE2=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_02.bam \
--outputs OUTPUTFILE3=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_03.bam \
--env-vars REFERENCE1=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.amb,REFERENCE2=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.ann,REFERENCE3=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.bwt,REFERENCE4=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.fai,REFERENCE5=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.intervals,REFERENCE6=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.pac,REFERENCE7=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.sa,SCRIPTFILE=${LIFESCIENCESPATH}/ExomeSeq/${SCRIPTFILENAME},SAMPLELIST=${LIFESCIENCESPATH}/ExomeSeq/SAMPLELIST.${COHORTID}.lst,INPUTFILE1=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_01_1.chr21.fastq.gz,INPUTFILE2=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_01_2.chr21.fastq.gz,INPUTFILE3=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_02_1.chr21.fastq.gz,INPUTFILE4=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_02_2.chr21.fastq.gz,INPUTFILE5=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_03_1.chr21.fastq.gz,INPUTFILE6=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_03_2.chr21.fastq.gz,OUTPUTFILE1=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_01.bam,OUTPUTFILE2=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_02.bam,OUTPUTFILE3=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_03.bam \
--command-line="find ${LIFESCIENCESPATH}; /bin/bash ${LIFESCIENCESPATH}/ExomeSeq/${SCRIPTFILENAME} ${COHORTID} 4"
Thank you for your answer!
Finally, I made an input YAML file for gcloud lifesciences from the operation description. I needed to understand the basic functionality of gcloud lifesciences because I want to build a full genome-analysis pipeline in Google Cloud, from FASTQ to snpEff/third-party annotation/text extraction (whole-exome in a day, whole-genome in 5 days for a single set). I already built it with the Genomics API and am now trying to upgrade it to gcloud lifesciences.
I also tried gcsfuse, but setting up the Google Cloud authentication is a little bit tricky.
Thanks,
$ more step03_bwa_mem_genome2.run
#SMALL=
SMALL=chr21.
LIFESCIENCESPATH=/gcloud-shared
#LIFESCIENCESPATH=/mnt
SCRIPTFILENAME=step03_bwa_mem_genome.sh
COHORTID=2_C_222
gcloud beta lifesciences pipelines run \
--logging gs://${BUCKETID}/ExomeSeq/hResults/step03_bwa_mem_genome.${COHORTID}.log \
--pipeline-file step03_bwa_mem_genome1.yml
$ cat step03_bwa_mem_genome1.yml
actions:
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_03_2.chr21.fastq.gz ${INPUTFILE6}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_03_1.chr21.fastq.gz ${INPUTFILE5}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_02_2.chr21.fastq.gz ${INPUTFILE4}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_02_1.chr21.fastq.gz ${INPUTFILE3}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_01_2.chr21.fastq.gz ${INPUTFILE2}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_01_1.chr21.fastq.gz ${INPUTFILE1}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/SAMPLELIST.2_C_222.lst ${SAMPLELIST}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/step03_bwa_mem_genome.sh ${SCRIPTFILE}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.sa ${REFERENCE7}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.pac ${REFERENCE6}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.intervals ${REFERENCE5}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.fai ${REFERENCE4}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.bwt ${REFERENCE3}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.ann ${REFERENCE2}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.amb ${REFERENCE1}
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - -c
      - find /gcloud-shared; /bin/bash /gcloud-shared/ExomeSeq/step03_bwa_mem_genome.sh 2_C_222 4
    entrypoint: bash
    imageUri: asia.gcr.io/thermal-shuttle-199104/centos8-essential-software-genomics-custom-python3:0.4
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp ${OUTPUTFILE1} gs://genconv1/ExomeSeq/hResults/2_C_222_01.bam
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp ${OUTPUTFILE2} gs://genconv1/ExomeSeq/hResults/2_C_222_02.bam
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp ${OUTPUTFILE3} gs://genconv1/ExomeSeq/hResults/2_C_222_03.bam
    imageUri: google/cloud-sdk:slim
    mounts:
      - disk: gcloud-shared
        path: /gcloud-shared
  - alwaysRun: true
    commands:
      - /bin/sh
      - -c
      - gsutil -m -q cp /google/logs/output gs://genconv1/ExomeSeq/hResults/step03_bwa_mem_genome.2_C_222.log
    imageUri: google/cloud-sdk:slim
environment:
  COHORTID: 2_C_222
  INPUTFILE1: /gcloud-shared/ExomeSeq/hReads/2_C_222_01_1.chr21.fastq.gz
  INPUTFILE2: /gcloud-shared/ExomeSeq/hReads/2_C_222_01_2.chr21.fastq.gz
  INPUTFILE3: /gcloud-shared/ExomeSeq/hReads/2_C_222_02_1.chr21.fastq.gz
  INPUTFILE4: /gcloud-shared/ExomeSeq/hReads/2_C_222_02_2.chr21.fastq.gz
  INPUTFILE5: /gcloud-shared/ExomeSeq/hReads/2_C_222_03_1.chr21.fastq.gz
  INPUTFILE6: /gcloud-shared/ExomeSeq/hReads/2_C_222_03_2.chr21.fastq.gz
  OUTPUTFILE1: /gcloud-shared/ExomeSeq/hResults/2_C_222_01.bam
  OUTPUTFILE2: /gcloud-shared/ExomeSeq/hResults/2_C_222_02.bam
  OUTPUTFILE3: /gcloud-shared/ExomeSeq/hResults/2_C_222_03.bam
  REFERENCE1: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.amb
  REFERENCE2: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.ann
  REFERENCE3: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.bwt
  REFERENCE4: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.fai
  REFERENCE5: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.intervals
  REFERENCE6: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.pac
  REFERENCE7: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.sa
  SAMPLELIST: /gcloud-shared/ExomeSeq/SAMPLELIST.2_C_222.lst
  SCRIPTFILE: /gcloud-shared/ExomeSeq/step03_bwa_mem_genome.sh
resources:
  regions:
    - asia-northeast1
    - asia-northeast2
    - asia-northeast3
    - asia-east1
    - asia-east2
    - asia-south1
  virtualMachine:
    bootDiskSizeGb: 20
    disks:
      - name: gcloud-shared
        sizeGb: 10
    machineType: n1-standard-1
    preemptible: true
I would generally recommend using something like Cromwell, Nextflow, or Snakemake rather than using either API directly; they have more built-in functionality for these types of tasks.
However, the output from gcloud beta lifesciences operations describe <operation name> will include the pipeline definition that gcloud created, which could be used as a starting point. One thing you'll notice there is that --inputs and --outputs automatically create environment variables, so the LIFESCIENCESPATH variable and the --env-vars parameter are unnecessary, which will simplify the command line significantly.
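For example, something like this (the operation name is hypothetical) prints just the pipeline section that gcloud generated, which can then be trimmed down into a --pipeline-file:
# Print only the generated pipeline from the operation's metadata.
gcloud beta lifesciences operations describe \
  projects/my-project/locations/asia-east1/operations/1234567890 \
  --format="yaml(metadata.pipeline)"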

Connection refused when trying to connect to local mysql database from docker container

I have a problem connecting to a local MySQL database from a Docker container. I'm using docker-compose with two services in containers; the database itself is not in a container.
I have this docker-compose file:
version: '2'
services:
  web:
    build:
      context: ./
      dockerfile: web-dev.dockerfile
    volumes:
      - ./:/var/www
    ports:
      - "8080:80"
    links:
      - app
    network_mode: "bridge"
    dns:
      - 10.0.50.6
  app:
    build:
      context: ./
      dockerfile: app-dev.dockerfile
    volumes:
      - ./:/var/www
    network_mode: "bridge"
    dns:
      - 10.0.50.6
The web container is an nginx service with this Dockerfile:
FROM nginx:1.10
ADD ./vhost.dev.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www
and this configuration file:
server {
    listen 80;
    index index.php index.html;
    root /var/www/formapp/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
The app container is a PHP-FPM app service with this Dockerfile:
FROM php:7-fpm
ENV USER=pasquale
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client \
openssl zip unzip git nano wget libaio-dev iputils-ping
RUN mkdir -p /opt/oracle/instantclient_10_2
# Download these files and place them in the oracle/ folder:
# https://agora.zanichelli.it/downloads/basic-10.2.0.5.0-linux-x64.zip
# https://agora.zanichelli.it/downloads/sdk-10.2.0.5.0-linux-x64.zip
ADD oracle/basic-10.2.0.5.0-linux-x64.zip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip
ADD oracle/sdk-10.2.0.5.0-linux-x64.zip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip
RUN unzip /opt/oracle/basic-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& unzip /opt/oracle/sdk-10.2.0.5.0-linux-x64.zip -d /opt/oracle \
&& ln -s /opt/oracle/instantclient_10_2/libclntsh.so.10.1 /opt/oracle/instantclient_10_2/libclntsh.so \
&& ln -s /opt/oracle/instantclient_10_2/libclntshcore.so.10.1 /opt/oracle/instantclient_10_2/libclntshcore.so \
&& ln -s /opt/oracle/instantclient_10_2/libocci.so.10.1 /opt/oracle/instantclient_10_2/libocci.so
ADD oracle/tns-admin/tnsnames.ora /opt/oracle/instantclient_10_2/network/admin/tnsnames.ora
ENV LD_LIBRARY_PATH /opt/oracle/instantclient_10_2/
RUN docker-php-ext-install pdo_mysql \
&& pecl install mcrypt-1.0.1 \
&& docker-php-ext-enable mcrypt \
&& pecl install mongodb \
&& docker-php-ext-enable mongodb \
&& docker-php-ext-configure pdo_oci --with-pdo-oci=instantclient,/opt/oracle/instantclient_10_2,10.2 \
&& echo 'instantclient,/opt/oracle/instantclient_10_2' | pecl install oci8 \
&& docker-php-ext-install pdo_oci \
&& docker-php-ext-enable oci8
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.default_enable=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_connect_back=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_autostart=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_handler="dbgp"' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_port=9000' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_host=0.0.0.0' >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo 'xdebug.remote_log=/var/www/xdebug.log' >> /usr/local/etc/php/conf.d/xdebug.ini
RUN mkdir -p /home/$USER
RUN groupadd -g 1000 $USER
RUN useradd -u 1000 -g $USER $USER -d /home/$USER
RUN chown $USER:$USER /home/$USER
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
WORKDIR /var/www
USER $USER
Launching the containers with docker-compose -f docker-compose.dev.yml up --build -d works fine, and the docker ps command gives me this output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7202e2862ef5 190todb_web "nginx -g 'daemon of…" 9 hours ago Up About an hour 443/tcp, 0.0.0.0:8080->80/tcp 190todb_web_1_3c628ae1c69b
d45c04d353d5 190todb_app "docker-php-entrypoi…" 9 hours ago Up About an hour 9000/tcp 190todb_app_1_dd2ac7028b87
I installed a Lumen app in my project folder formapp and then created a seeder to insert fake data into the database. From bash, if I run /projectfolder/formapp$ php artisan db:seed, the seeder works and I get this output:
Seeding: UsersTableSeeder
Database seeding completed successfully.
Then I created a route to access my users table from the Lumen app:
$router->get('users', function () use ($router) {
    return User::all();
});
My Lumen .env file is this:
APP_NAME=Lumen
APP_ENV=local
APP_KEY=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
APP_DEBUG=true
APP_URL=http://localhost
APP_TIMEZONE=UTC
LOG_CHANNEL=stack
LOG_SLACK_WEBHOOK_URL=
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=form_db
DB_USERNAME=root
DB_PASSWORD=radiohead
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
JWT_SECRET=JhbGciOiJIUzI1N0eXAiOiJKV1QiLC
But if I try to connect from http://localhost:8080/users, I get this Lumen error:
SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`)
I tried changing the DB_HOST, but I could not solve the problem:
0.0.0.0 (SQLSTATE[HY000] [2002] Connection refused (SQL: select * from `users`));
172.17.0.1 (SQLSTATE[HY000] [1130] Host '172.17.0.2' is not allowed to connect to this MySQL server (SQL: select * from `users`));
172.17.0.1 is my docker0 inet address.
How can I configure my project to make this work?
PS: the Lumen app is up and running; it is only the DB connection that is not working.
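Since mysql-client is installed in the app image, raw connectivity can be tested from inside the container toward the host through the docker0 gateway (container name taken from the docker ps output above; a sketch):
# Try reaching the host's MySQL at the docker0 gateway address.
docker exec -it 190todb_app_1_dd2ac7028b87 mysql -h 172.17.0.1 -P 3306 -u root -p form_db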
You cannot use localhost when the database and the app are not on the same host; inside the container, 127.0.0.1 refers to the container itself, not to your machine.
You need to allow access with something like:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.17.0.2' IDENTIFIED BY '<password>';
You can replace 172.17.0.2 with the wildcard %.
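Note that if the local server is MySQL 8, IDENTIFIED BY is no longer accepted inside GRANT; a sketch of the full sequence, run on the host's MySQL (password taken from the .env above):
# Create the remote user first, then grant and flush.
mysql -u root -p <<'SQL'
CREATE USER 'root'@'172.17.0.2' IDENTIFIED BY 'radiohead';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.17.0.2';
FLUSH PRIVILEGES;
SQL
After granting, DB_HOST should point at the docker0 gateway (172.17.0.1) rather than 127.0.0.1.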