I am trying to run gcloud beta lifesciences because the Genomics API is deprecated.
There have been many changes between the Genomics API and the Life Sciences API.
I ran one of my analysis steps in Google Cloud using beta lifesciences.
Here is what I found.
(1) Wildcards do not work in the command-line options. (2) It is not easy to set the target directory in the command-line options, so I used environment variables for the copy.
I am now trying to convert the command-line options into a JSON-format pipeline file, but the help pages in Google Cloud are not easy to understand. Do you have an idea how to convert the following options into a JSON file, so that I could run the pipeline with simpler options?
I used a YAML-formatted pipeline file with the Genomics API, but beta lifesciences is totally different.
$ more step03_bwa_mem_genome1.run
#SMALL=
SMALL=chr21.
LIFESCIENCESPATH=/gcloud-shared
#LIFESCIENCESPATH=/mnt
SCRIPTFILENAME=step03_bwa_mem_genome.sh
COHORTID=2_C_222
gcloud beta lifesciences pipelines run \
--logging gs://${BUCKETID}/ExomeSeq/hResults/step03_bwa_mem_genome.${COHORTID}.log \
--regions=asia-northeast1,asia-northeast2,asia-northeast3,asia-east1,asia-east2,asia-south1 \
--boot-disk-size 20 \
--preemptible \
--machine-type n1-standard-1 \
--disk-size "gcloud-shared:10" \
--docker-image asia.gcr.io/thermal-shuttle-199104/centos8-essential-software-genomics-custom-python3:0.4 \
--inputs REFERENCE1=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.amb \
--inputs REFERENCE2=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.ann \
--inputs REFERENCE3=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.bwt \
--inputs REFERENCE4=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.fai \
--inputs REFERENCE5=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.intervals \
--inputs REFERENCE6=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.pac \
--inputs REFERENCE7=gs://${BUCKETID}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.sa \
--inputs SCRIPTFILE=gs://${BUCKETID}/ExomeSeq/${SCRIPTFILENAME} \
--inputs COHORTID=${COHORTID} \
--inputs SAMPLELIST=gs://${BUCKETID}/ExomeSeq/SAMPLELIST.${COHORTID}.lst \
--inputs INPUTFILE1=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_01_1.chr21.fastq.gz \
--inputs INPUTFILE2=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_01_2.chr21.fastq.gz \
--inputs INPUTFILE3=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_02_1.chr21.fastq.gz \
--inputs INPUTFILE4=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_02_2.chr21.fastq.gz \
--inputs INPUTFILE5=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_03_1.chr21.fastq.gz \
--inputs INPUTFILE6=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_03_2.chr21.fastq.gz \
--outputs OUTPUTFILE1=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_01.bam \
--outputs OUTPUTFILE2=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_02.bam \
--outputs OUTPUTFILE3=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_03.bam \
--env-vars REFERENCE1=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.amb,REFERENCE2=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.ann,REFERENCE3=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.bwt,REFERENCE4=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.fai,REFERENCE5=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.intervals,REFERENCE6=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.pac,REFERENCE7=${LIFESCIENCESPATH}/ExomeSeq/hReference/GRCh38.primary_assembly.genome.${SMALL}fa.sa,SCRIPTFILE=${LIFESCIENCESPATH}/ExomeSeq/${SCRIPTFILENAME},SAMPLELIST=${LIFESCIENCESPATH}/ExomeSeq/SAMPLELIST.${COHORTID}.lst,INPUTFILE1=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_01_1.chr21.fastq.gz,INPUTFILE2=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_01_2.chr21.fastq.gz,INPUTFILE3=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_02_1.chr21.fastq.gz,INPUTFILE4=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_02_2.chr21.fastq.gz,INPUTFILE5=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_03_1.chr21.fastq.gz,INPUTFILE6=${LIFESCIENCESPATH}/ExomeSeq/hReads/${COHORTID}_03_2.chr21.fastq.gz,OUTPUTFILE1=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_01.bam,OUTPUTFILE2=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_02.bam,OUTPUTFILE3=${LIFESCIENCESPATH}/ExomeSeq/hResults/${COHORTID}_03.bam \
--command-line="find ${LIFESCIENCESPATH}; /bin/bash ${LIFESCIENCESPATH}/ExomeSeq/${SCRIPTFILENAME} ${COHORTID} 4"
Thank you for your answer!
Finally, I made an input YAML file for gcloud lifesciences from the operation descriptions. I needed to understand the basic functionality of gcloud lifesciences because I want to build the full version of my genome analysis pipeline, from FASTQ to snpEff/third-party annotation/text extraction, in Google Cloud (whole exome in a day, whole genome in 5 days for a single sample set). I already built it using the Genomics API and am now upgrading it to use gcloud lifesciences.
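In case it helps, this is roughly how I pulled the generated pipeline definition out of an earlier run to use as a template (the operation ID is a placeholder):

# Dump an earlier run; the "pipeline:" block in the YAML output can be
# trimmed down into a --pipeline-file template.
gcloud beta lifesciences operations describe OPERATION_ID --format=yaml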
I also tried gcsfuse, but setting up Google Cloud authentication for it is a little bit tricky.
Thanks,
$ more step03_bwa_mem_genome2.run
#SMALL=
SMALL=chr21.
LIFESCIENCESPATH=/gcloud-shared
#LIFESCIENCESPATH=/mnt
SCRIPTFILENAME=step03_bwa_mem_genome.sh
COHORTID=2_C_222
gcloud beta lifesciences pipelines run \
--logging gs://${BUCKETID}/ExomeSeq/hResults/step03_bwa_mem_genome.${COHORTID}.log \
--pipeline-file step03_bwa_mem_genome1.yml
$ cat step03_bwa_mem_genome1.yml
actions:
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_03_2.chr21.fastq.gz ${INPUTFILE6}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_03_1.chr21.fastq.gz ${INPUTFILE5}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_02_2.chr21.fastq.gz ${INPUTFILE4}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_02_1.chr21.fastq.gz ${INPUTFILE3}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_01_2.chr21.fastq.gz ${INPUTFILE2}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReads/2_C_222_01_1.chr21.fastq.gz ${INPUTFILE1}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/SAMPLELIST.2_C_222.lst ${SAMPLELIST}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/step03_bwa_mem_genome.sh ${SCRIPTFILE}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.sa ${REFERENCE7}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.pac ${REFERENCE6}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.intervals ${REFERENCE5}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.fai ${REFERENCE4}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.bwt ${REFERENCE3}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.ann ${REFERENCE2}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp gs://genconv1/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.amb ${REFERENCE1}
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- -c
- find /gcloud-shared; /bin/bash /gcloud-shared/ExomeSeq/step03_bwa_mem_genome.sh 2_C_222 4
entrypoint: bash
imageUri: asia.gcr.io/thermal-shuttle-199104/centos8-essential-software-genomics-custom-python3:0.4
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp ${OUTPUTFILE1} gs://genconv1/ExomeSeq/hResults/2_C_222_01.bam
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp ${OUTPUTFILE2} gs://genconv1/ExomeSeq/hResults/2_C_222_02.bam
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- commands:
- /bin/sh
- -c
- gsutil -m -q cp ${OUTPUTFILE3} gs://genconv1/ExomeSeq/hResults/2_C_222_03.bam
imageUri: google/cloud-sdk:slim
mounts:
- disk: gcloud-shared
path: /gcloud-shared
- alwaysRun: true
commands:
- /bin/sh
- -c
- gsutil -m -q cp /google/logs/output gs://genconv1/ExomeSeq/hResults/step03_bwa_mem_genome.2_C_222.log
imageUri: google/cloud-sdk:slim
environment:
COHORTID: 2_C_222
INPUTFILE1: /gcloud-shared/ExomeSeq/hReads/2_C_222_01_1.chr21.fastq.gz
INPUTFILE2: /gcloud-shared/ExomeSeq/hReads/2_C_222_01_2.chr21.fastq.gz
INPUTFILE3: /gcloud-shared/ExomeSeq/hReads/2_C_222_02_1.chr21.fastq.gz
INPUTFILE4: /gcloud-shared/ExomeSeq/hReads/2_C_222_02_2.chr21.fastq.gz
INPUTFILE5: /gcloud-shared/ExomeSeq/hReads/2_C_222_03_1.chr21.fastq.gz
INPUTFILE6: /gcloud-shared/ExomeSeq/hReads/2_C_222_03_2.chr21.fastq.gz
OUTPUTFILE1: /gcloud-shared/ExomeSeq/hResults/2_C_222_01.bam
OUTPUTFILE2: /gcloud-shared/ExomeSeq/hResults/2_C_222_02.bam
OUTPUTFILE3: /gcloud-shared/ExomeSeq/hResults/2_C_222_03.bam
REFERENCE1: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.amb
REFERENCE2: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.ann
REFERENCE3: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.bwt
REFERENCE4: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.fai
REFERENCE5: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.intervals
REFERENCE6: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.pac
REFERENCE7: /gcloud-shared/ExomeSeq/hReference/GRCh38.primary_assembly.genome.chr21.fa.sa
SAMPLELIST: /gcloud-shared/ExomeSeq/SAMPLELIST.2_C_222.lst
SCRIPTFILE: /gcloud-shared/ExomeSeq/step03_bwa_mem_genome.sh
resources:
regions:
- asia-northeast1
- asia-northeast2
- asia-northeast3
- asia-east1
- asia-east2
- asia-south1
virtualMachine:
bootDiskSizeGb: 20
disks:
- name: gcloud-shared
sizeGb: 10
machineType: n1-standard-1
preemptible: true
I would generally recommend using something like Cromwell, Nextflow, or Snakemake rather than using either API directly. They have more built-in functionality for these types of tasks.
However, the output from gcloud beta lifesciences operations describe <operation name> will include the pipeline definition that gcloud created, which could be used as a starting point. One thing you'll notice there is that --inputs and --outputs automatically create environment variables, so the LIFESCIENCESPATH variable and the --env-vars parameter are unnecessary, which simplifies the command line significantly.
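As a rough sketch (untested here), the command above could then shrink to something like the following; only one input/output pair is shown, the rest follow the same pattern, and the script reads the container-local paths from the auto-created variables:

# Sketch: each --inputs/--outputs flag localizes the object and exposes its
# local path through an environment variable of the same name.
gcloud beta lifesciences pipelines run \
  --logging gs://${BUCKETID}/ExomeSeq/hResults/step03_bwa_mem_genome.${COHORTID}.log \
  --regions=asia-northeast1 \
  --machine-type n1-standard-1 --preemptible --boot-disk-size 20 \
  --disk-size "gcloud-shared:10" \
  --docker-image asia.gcr.io/thermal-shuttle-199104/centos8-essential-software-genomics-custom-python3:0.4 \
  --inputs SCRIPTFILE=gs://${BUCKETID}/ExomeSeq/${SCRIPTFILENAME} \
  --inputs INPUTFILE1=gs://${BUCKETID}/ExomeSeq/hReads/${COHORTID}_01_1.chr21.fastq.gz \
  --outputs OUTPUTFILE1=gs://${BUCKETID}/ExomeSeq/hResults/${COHORTID}_01.bam \
  --command-line="/bin/bash \${SCRIPTFILE} ${COHORTID} 4"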
Related
I am trying to deploy to Elastic Beanstalk from Bitbucket, and I am using the following yml file:
image: atlassian/default-image:2
pipelines:
branches:
development:
- step:
name: "Install Server"
image: node:10.19.0
caches:
- node
script:
- npm install
- step:
name: "Install and Build Client"
image: node:14.17.3
caches:
- node
script:
- cd ./client && npm install
- npm run build
- step:
name: "Build zip"
script:
- cd ./client
- shopt -s extglob
- rm -rf !(build)
- ls
- cd ..
- apt-get update && apt-get install -y zip
- zip -r application.zip . -x "node_modules/**"
- step:
name: "Deployment to Development"
deployment: staging
script:
- ls
- pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_REGION
APPLICATION_NAME: $APPLICATION_NAME
ENVIRONMENT_NAME: $ENVIRONMENT_NAME
ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment and I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.
I have a Laravel app that was running fine up until a day ago.
Now, anytime I run a "sail up" command, MySQL will start and then immediately shut down with the following error:
[Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server
/entrypoint.sh: line 57: /usr/sbin/mysqld: cannot execute binary file: Exec format error
[Entrypoint] ERROR: Unable to start MySQL. Please check your configuration.
I'm not sure if this has anything to do with it or if it's just a coincidence, but this issue seems to have started when I tried to run another "sail up" command on a different Laravel app.
I've tried the following:
Tried to replicate in a fresh Laravel app. The same result occurs.
Tried to start the entrypoint with the -l and -c options. The same result occurs.
Tried to start the application with mysql server 5.7. The same result occurs.
Tried forwarding to another port 3307, instead of 3306. The same result occurs.
Both my docker-compose and Dockerfile are pretty standard...no changes from what Laravel provides:
docker-compose.yml
# For more information: https://laravel.com/docs/sail
version: '3'
services:
laravel.test:
build:
context: ./docker/8.1
dockerfile: Dockerfile
args:
WWWGROUP: '${WWWGROUP}'
image: sail-8.1/app
extra_hosts:
- 'host.docker.internal:host-gateway'
ports:
- '${APP_PORT:-80}:80'
- '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
environment:
WWWUSER: '${WWWUSER}'
LARAVEL_SAIL: 1
XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
volumes:
- '.:/var/www/html'
networks:
- sail
depends_on:
- mysql
- redis
- meilisearch
- mailhog
- selenium
mysql:
image: 'mysql/mysql-server:8.0'
ports:
- '${FORWARD_DB_PORT:-3306}:3306'
environment:
MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
MYSQL_ROOT_HOST: "%"
MYSQL_DATABASE: '${DB_DATABASE}'
MYSQL_USER: '${DB_USERNAME}'
MYSQL_PASSWORD: '${DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
volumes:
- 'sail-mysql:/var/lib/mysql'
- './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
networks:
- sail
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
retries: 3
timeout: 5s
redis:
image: 'redis:alpine'
ports:
- '${FORWARD_REDIS_PORT:-6379}:6379'
volumes:
- 'sail-redis:/data'
networks:
- sail
healthcheck:
test: ["CMD", "redis-cli", "ping"]
retries: 3
timeout: 5s
meilisearch:
image: 'getmeili/meilisearch:latest'
ports:
- '${FORWARD_MEILISEARCH_PORT:-7700}:7700'
volumes:
- 'sail-meilisearch:/meili_data'
networks:
- sail
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--spider", "http://localhost:7700/health"]
retries: 3
timeout: 5s
mailhog:
image: 'mailhog/mailhog:latest'
ports:
- '${FORWARD_MAILHOG_PORT:-1025}:1025'
- '${FORWARD_MAILHOG_DASHBOARD_PORT:-8025}:8025'
networks:
- sail
selenium:
image: 'selenium/standalone-chrome'
volumes:
- '/dev/shm:/dev/shm'
networks:
- sail
networks:
sail:
driver: bridge
volumes:
sail-mysql:
driver: local
sail-redis:
driver: local
sail-meilisearch:
driver: local
Dockerfile
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]
I'm not really sure what the issue is. Anyone have any suggestions?
Thanks.
I'm trying to create a StatefulSet for my MySQL database and have it communicate with my pods. I had created a Deployment for this, but over its lifecycle it was erasing my database. My problem is that my database uses a private image and a password for the MySQL root user. I will post my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
app.kubernetes.io/name: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
app.kubernetes.io/name: mysql
spec:
initContainers:
- name: init-mysql
image: rafaelribeirosouza86/shopping:myql
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on primary (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
imagePullSecrets:
- name: regcred
containers:
- name: mysql
image: rafaelribeirosouza86/shopping:myql
imagePullPolicy: Always
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "-uroot", "-p$MYSQL_ROOT_PASSWORD", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
#command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
command: ["mysql", "-h", "127.0.0.1","-uroot","-p$MYSQL_ROOT_PASSWORD", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing replica. (Need to remove the tailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from primary. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='$MYSQL_ROOT_PASSWORD', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root --password=$MYSQL_ROOT_PASSWORD"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
imagePullSecrets:
- name: regcred
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
This is an example with some changes from that link
It creates normally, but when I run kubectl get pods it gives me the following error: Init:CrashLoopBackOff
So I check with kubectl describe pod mysql-0 on the machine:
Normal Created 13s (x3 over 35s) kubelet Created container init-mysql
Normal Started 11s (x3 over 33s) kubelet Started container init-mysql
Warning BackOff 9s (x4 over 27s) kubelet Back-off restarting failed container
in kubectl describe sts mysql:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 3m4s statefulset-controller create Pod mysql-0 in StatefulSet mysql successful
Does anyone have any idea what is causing this error?
[Resolved]
I created a MySQL container with docker run --name some-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag and added an IPv4 entry for the mysql host to C:\Windows\System32\drivers\etc\hosts: 192.168.18.x mysql
I want to cache the buildah images using GitHub actions/cache so that they are not pulled every time. Can you please help me understand which layers should be cached? I started with ~/var/lib/containers and /var/lib/containers, but that did not help (see the note after the workflow below).
jobs:
# This workflow contains a single job called "greet"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Set up JDK 1.8
uses: actions/setup-java@v1
with:
java-version: 1.8
- name: Install oc tool
run: mkdir /tmp/s2i/ && cd /tmp/s2i/ && curl -s https://api.github.com/repos/openshift/source-to-image/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi - && tar xvf source-to-image*.gz && sudo mv s2i /usr/local/bin && rm -rf /tmp/s2i/
- name: Login to quay.io using Buildah
run: buildah login -u ${{ secrets.QUAY_USERNAME }} -p ${{ secrets.QUAY_PASSWORD }} quay.io
- name: Cache local Maven repository & buildah containers
uses: actions/cache@v2
with:
path: |
~/.m2/repository
key: ${{ runner.os }}-maven-buildah-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ runner.os }}-maven-buildah-
- name: Build with Maven
run: pwd && mvn -B clean install -DskipTests -f ./vertx-starter/pom.xml
- name: Create the Dockerfile
run: printf "FROM openjdk:8-slim\nARG JAR_FILE=target/*.jar\nRUN mkdir -p /deployments\nCOPY \${JAR_FILE} /deployments/app.jar\nRUN chown -R 1001:0 /deployments && chmod -R 0755 /deployments\nEXPOSE 8080 8443\n USER 1001\nENTRYPOINT [\"java\",\"-jar\",\"/deployments/app.jar\"]" > Dockerfile && cat Dockerfile
- name: Create image using buildah
run: buildah bud --layers --log-level debug --build-arg JAR_FILE="vertx-starter/target/app.jar" -t quay.io/himanshumps/vertx_demo .
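One thing I still need to confirm is where buildah actually keeps its layer storage on the runner, since that is presumably the directory that has to go into the cache path above; as a rough check:

# Print buildah's storage configuration; the graph root reported here is the
# directory that would have to be cached and restored between runs.
buildah info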
I want to set up sawtooth with multiple validators and PoET engines.
Initially I am trying to set up just 1 validator, 1 PoET engine, 1 PoET registry, and the intkey transaction processor (set up using the Node.js SDK). When I launch the network and connect the TP, it registers. But when I submit a transaction to the REST API, the response URL says PENDING. The validator logs say block validation passed, but the block is not created.
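For reference, the PENDING state comes from polling the status link the REST API returns after the batch is submitted, along these lines (the batch ID is a placeholder):

# Ask the REST API for the status of a submitted batch
curl "http://localhost:8008/batch_statuses?id=BATCH_ID"

The docker-compose file for the network is below.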
version: "2.1"
volumes:
poet-shared:
services:
shell:
image: hyperledger/sawtooth-all:1.1
container_name: sawtooth-shell-default
entrypoint: "bash -c \"\
sawtooth keygen && \
tail -f /dev/null \
\""
validator-0:
image: hyperledger/sawtooth-validator:1.1
container_name: sawtooth-validator-default-0
expose:
- 4004
- 5050
- 8800
ports:
- "4004:4004"
volumes:
- poet-shared:/poet-shared
command: "bash -c \"\
sawadm keygen --force && \
mkdir -p /poet-shared/validator-0 || true && \
cp -a /etc/sawtooth/keys /poet-shared/validator-0/ && \
while [ ! -f /poet-shared/poet-enclave-measurement ]; do sleep 1; done && \
while [ ! -f /poet-shared/poet-enclave-basename ]; do sleep 1; done && \
while [ ! -f /poet-shared/poet.batch ]; do sleep 1; done && \
cp /poet-shared/poet.batch / && \
sawset genesis \
-k /etc/sawtooth/keys/validator.priv \
-o config-genesis.batch && \
sawset proposal create \
-k /etc/sawtooth/keys/validator.priv \
sawtooth.consensus.algorithm=poet \
sawtooth.poet.report_public_key_pem=\
\\\"$$(cat /poet-shared/simulator_rk_pub.pem)\\\" \
sawtooth.poet.valid_enclave_measurements=$$(cat /poet-shared/poet-enclave-measurement) \
sawtooth.poet.valid_enclave_basenames=$$(cat /poet-shared/poet-enclave-basename) \
-o config.batch && \
sawset proposal create \
-k /etc/sawtooth/keys/validator.priv \
sawtooth.poet.target_wait_time=5 \
sawtooth.poet.initial_wait_time=25 \
sawtooth.publisher.max_batches_per_block=100 \
-o poet-settings.batch && \
sawadm genesis \
config-genesis.batch config.batch poet.batch poet-settings.batch && \
sawtooth-validator -v \
--bind network:tcp://eth0:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--peering dynamic \
--endpoint tcp://validator-0:8800 \
--scheduler serial \
--network-auth trust
\""
environment:
PYTHONPATH: "/project/sawtooth-core/consensus/poet/common:\
/project/sawtooth-core/consensus/poet/simulator:\
/project/sawtooth-core/consensus/poet/core"
stop_signal: SIGKILL
rest-api-0:
image: hyperledger/sawtooth-rest-api:1.1
container_name: sawtooth-rest-api-default-0
expose:
- 8008
ports:
- "8008:8008"
command: |
bash -c "
sawtooth-rest-api \
--connect tcp://validator-0:4004 \
--bind rest-api-0:8008
"
stop_signal: SIGKILL
settings-tp-0:
image: hyperledger/sawtooth-settings-tp:1.1
container_name: sawtooth-settings-tp-default-0
expose:
- 4004
command: settings-tp -C tcp://validator-0:4004
stop_signal: SIGKILL
poet-engine-0:
image: hyperledger/sawtooth-poet-engine:1.1
container_name: sawtooth-poet-engine-0
volumes:
- poet-shared:/poet-shared
command: "bash -c \"\
if [ ! -f /poet-shared/poet-enclave-measurement ]; then \
poet enclave measurement >> /poet-shared/poet-enclave-measurement; \
fi && \
if [ ! -f /poet-shared/poet-enclave-basename ]; then \
poet enclave basename >> /poet-shared/poet-enclave-basename; \
fi && \
if [ ! -f /poet-shared/simulator_rk_pub.pem ]; then \
cp /etc/sawtooth/simulator_rk_pub.pem /poet-shared; \
fi && \
while [ ! -f /poet-shared/validator-0/keys/validator.priv ]; do sleep 1; done && \
cp -a /poet-shared/validator-0/keys /etc/sawtooth && \
poet registration create -k /etc/sawtooth/keys/validator.priv -o /poet-shared/poet.batch && \
poet-engine -C tcp://validator-0:5050 --component tcp://validator-0:4004 \
\""
poet-validator-registry-tp-0:
image: hyperledger/sawtooth-poet-validator-registry-tp:1.1
container_name: sawtooth-poet-validator-registry-tp-0
expose:
- 4004
command: poet-validator-registry-tp -C tcp://validator-0:4004
environment:
PYTHONPATH: /project/sawtooth-core/consensus/poet/common
stop_signal: SIGKILL
I found some errors here, but how do I fix them?
You will need a minimum of two Transaction Processors per node to make PoET work.
I have the same type of setup, but I am running the TPs in Kubernetes and have three replica sets per node in the network.
Besides that, look at validator.toml and make sure that the network bind is 0.0.0.0. I have tried binding it to a specific NIC IP and it does not work; only 0.0.0.0 (all IPs) seems to work.
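As a sketch, applying that to the docker-compose file above would mean changing the network bind in the validator command along these lines (the other flags stay as they are):

# only the network bind changes; component/consensus binds are unchanged
sawtooth-validator -v \
  --bind network:tcp://0.0.0.0:8800 \
  --bind component:tcp://eth0:4004 \
  --bind consensus:tcp://eth0:5050 \
  --peering dynamic \
  --endpoint tcp://validator-0:8800 \
  --scheduler serial \
  --network-auth trust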