Localhost port not available within a GitHub Actions step - github-actions

I am trying to create a job with a test app needed for integration tests. The test app is a simple application that is built and run first; then the final build runs, which uses port 8080 on localhost for its integration tests. However, port 8080 is not available when the second docker build runs.
jobs:
  my-job:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Build and run testapp
        run: |
          docker build \
            -t testapp \
            -f testapp/Dockerfile .
          docker run \
            -p 8080:80 \
            -d -t testapp
          docker build \
            -t app \
            -f Dockerfile .
Is this a limitation of GitHub Actions? I have started looking at service containers as a workaround, but is there any way the port can be made available to the subsequent commands in the step?

Related

Use GitHub Actions to make the "latest" version an old version

I'm using GitHub Actions and Docker Watchtower to update my images on the fly when I check in my software (no, it is not crucial software; it is more important to have a lean CI/CD).
name: Docker Image CI

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the Docker image
        run: docker build . -t me/myrepo:${{ github.run_number }}
        #:$(date +%s)
        #docker build --rm -t me/myrepo .
      - name: Login to Docker Hub
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USER }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
        run: |
          echo $DOCKER_USER
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Push the new Tag to Docker Hub
        run: |
          docker push me/myrepo:${{ github.run_number }}
This works very nicely.
But Watchtower can only download the most recent version of a tag, e.g. latest.
The best solution would be if I could keep the incrementing versions on GitHub Actions and Watchtower would take the highest one. I guess it cannot do that.
Or: I tag the newest version (e.g. 49) also as latest. How would you do that with GitHub Actions? This should be nothing other than giving multiple tags to a build, no?
Well, I actually almost answered it myself when phrasing the question.
Simply create two builds and two images: one incrementing (so that you can always roll back to an older version) and one updating latest.
Prioritize latest so that it is available faster.
- name: Build the Docker image (latest)
  run: docker build . -t me/myrepo:latest

- name: Build the Docker image (versioned)
  run: docker build . -t me/myrepo:${{ github.run_number }}
and then push it twice.
- name: Push also the latest Tag to Docker Hub
  run: |
    docker push me/myrepo:latest

- name: Push the new Tag to Docker Hub
  run: |
    docker push me/myrepo:${{ github.run_number }}
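As a variation on the two-build approach above, `docker build` accepts the `-t` flag more than once, so a single build step can apply both tags to the same image and avoid building twice. A minimal sketch (reusing the step names and image name from the workflow above):

```yaml
- name: Build the Docker image
  run: docker build . -t me/myrepo:latest -t me/myrepo:${{ github.run_number }}

- name: Push both Tags to Docker Hub
  run: |
    docker push me/myrepo:latest
    docker push me/myrepo:${{ github.run_number }}
```

Both tags point at the same image ID, so the versioned tag remains available for rollbacks while latest always tracks the newest build.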

GitHub Actions with a MySQL test database

I have a CodeIgniter web app connected to MySQL, developed in Docker. I would like to run some unit tests in GitHub Actions for a CI/CD pipeline. The problem is that some of the functions require querying data from the MySQL database. Is there a way to set up a MySQL instance on GitHub Actions and run a .sql file so that my test data is in the database?
You can use GitHub Actions service containers to connect to databases such as MySQL. You can find details at https://docs.github.com/en/actions/guides/about-service-containers
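A minimal sketch of such a service container, following the pattern from the documentation linked above (the image tag, credentials, database name, and .sql file name here are assumptions; adjust them to your project):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: root
          MYSQL_DATABASE: db_1
        ports:
          - 3306:3306
        # wait until mysqld actually accepts connections before the steps run
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5
    steps:
      - uses: actions/checkout@v3
      - name: Load schema and test data
        run: mysql -h 127.0.0.1 -uroot -proot db_1 < ./testDatabase.sql
```

The service is published on the runner's localhost via the `ports` mapping, so test steps can connect to 127.0.0.1:3306.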
I think this script can help people run unit tests with a MySQL database and a .sql script file that loads the table schema and data on GitHub Actions. I am using CodeIgniter 4 with MySQL, but the process of setting up the database should be similar as long as you are using MySQL.
name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image
        run: docker-compose build
      - name: Up MySQL and Apache containers
        run: docker-compose up -d
      # I am using CodeIgniter 4
      - name: Install dependencies
        run: docker exec CI4_1 php composer.phar install
      - name: Buffering time for db container
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
      # db_1 is the name of the database
      - name: Load database
        run: docker exec -i mysql_1 mysql -uroot -proot db_1 < ./testDatabase.sql
      - name: Unit test
        run: docker exec CI4_1 ./vendor/bin/phpunit

How to add Chrome to a container to overcome the error 'Failed to launch chrome' in CircleCI

I'm trying to run CodeceptJS on CircleCI, but I keep running into the same issue where it says Failed to launch chrome.
I believe it is a problem with Puppeteer, but I cannot find the issue online.
I've tried adding the following to my codecept.conf.js file.
helpers: {
  Puppeteer: {
    url: process.env.CODECEPT_URL || 'http://localhost:3030'
  },
  chrome: {
    args: ["--headless", "--no-sandbox"]
  }
},
I've tried to install Chrome in the container that I'm running:
docker-compose exec aubisque npx codeceptjs run --steps
as I thought the problem might be that Chrome didn't exist. I couldn't figure out how to do this, though. I have also read that Puppeteer ships its own bundled Chromium.
acceptance:
  working_directory: ~/aubisque-api
  docker:
    - image: circleci/node:latest-browsers
      environment:
        NODE_ENV: development
  steps:
    - checkout
    - setup_remote_docker
    - restore_cache:
        name: Restore NPM Cache
        keys:
          - package-lock-cache-{{ checksum "package-lock.json" }}
    - run:
        name: Install git-crypt
        command: |
          curl -L https://github.com/AGWA/git-crypt/archive/debian/0.6.0.tar.gz | tar zxv &&
          (cd git-crypt-debian && sudo make && sudo make install)
    - run:
        name: Decrypt files
        command: |
          echo $DECRYPT_KEY | base64 -d >> keyfile
          git-crypt unlock keyfile
          rm keyfile
    - run:
        name: Build and run acceptance tests
        command: |
          docker-compose -f docker-compose-ci.yml build --no-cache
          docker-compose -f docker-compose-ci.yml up -d
          docker-compose exec aubisque npx codeceptjs run --steps
This is my .circleci/config.yml file where I run my acceptance tests. I am running the code in workflows, and before this job I run a job that installs the npm modules.

Deploy MySQL during build from a Dockerfile

I am building an application with parent and child dependencies, and the build of my application, which is the final build stage, needs to connect to MySQL during the build stage itself.
With this, I am getting the error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
I have included the Dockerfile I am using below; for MySQL I pulled the image from Docker Hub following the instructions at this link:
https://dev.mysql.com/doc/mysql-installation-excerpt/5.5/en/docker-mysql-getting-started.html
I was planning to run this as a separate container, using a bridge to communicate with the container above via the command below:
docker run -d --name app-container-name --link mysql-container-name app-image-name
FROM maven:3.5.4-jdk-8 as maven
COPY ZP ZP
COPY CommonApp CommonApp
RUN cd ZP && mvn clean install
RUN cd CommonApp && mvn clean install package -U && mvn install:install-file -Dfile=/CommonApp/target/commonapp-0.0.1-SNAPSHOT.jar -DgroupId=com.z -DartifactId=commonapp -Dversion=0.0.1-SNAPSHOT -Dpackaging=jar;

FROM mysql:5.7
# ROOT PASSWORD
ENV MYSQL_ROOT_PASSWORD=root
ENV MYSQL_USER=root
ENV MYSQL_PASSWORD=root
ENV MYSQL_DATA_DIR=/var/lib/mysql \
    MYSQL_RUN_DIR=/run/mysqld \
    MYSQL_LOG_DIR=/var/log/mysql
RUN /etc/init.d/mysql start && \
    mysql -u root -p$MYSQL_ROOT_PASSWORD -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root';FLUSH PRIVILEGES;"
# PORT
EXPOSE 3306

FROM maven:3.5.4-jdk-8
COPY ZCApp ZCApp
RUN cd ZCApp && mvn clean package -U
How should I approach this problem? How can I build MySQL along with the application itself using a Dockerfile?
I had the same issue when building a Maven project. What makes this different from similar questions is that here you don't link two running containers; instead you link the Docker daemon, performing the build, to a running container.
For Docker to access the database during the build, you have to expose the database's ports. Using --link will have no effect because it links containers (and you don't have a second container at that moment), and it is in any case considered an obsolete technique.
You have to explicitly start the database container before the build process and expose its ports so that the Docker daemon can reach them.
Option 1 - use host networking.
First start the database:
docker run -d --network=host mysql
Then build:
docker build -t foo .
Docker will see the database on localhost during the build process because the database uses the host's network and doesn't need any published ports.
Option 2 - publish ports.
First start the database:
docker run -d -p 3306:3306 mysql
Then build:
docker build -t foo .
Docker will again see the database on localhost during the build process because the port is published on the host.
What you have to double-check is your connection string in the Maven build: it has to use localhost and the default TCP port 3306.
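As a sketch of that connection string, assuming the Maven build reads a standard JDBC URL from a properties file (the property names, database name, and credentials here are hypothetical placeholders, not from the question):

```properties
# hypothetical example; substitute your actual database name and credentials
jdbc.url=jdbc:mysql://localhost:3306/mydb
jdbc.user=root
jdbc.password=root
```

The important parts are the localhost host and port 3306, which match the database published by either option above.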

Connect .NET Core Web API to MySQL in a different Docker container

I have two Docker containers running on the same virtual machine (Ubuntu Server 18.04 on VMware Workstation 12 Player). The first is a MySQL container running on port 3306, and the second is an ASP.NET Core (v2.0) Web API (port 5000 on the VM, exposed outside through nginx on port 80). My VM's IP is 192.168.20.197.
project architecture image
My connection string in the Web API project is: optionsBuilder.UseMySQL("server=localhost;port=3306;database=mydatabase;user=root;CharSet=utf8");
My docker file content is
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "DemoMySql.dll"]
I have tried to make an HTTP request to the Web API on the VM, but the server responds with error 500 when I try to interact with the database (the Web API still works normally when I make it return a sample string, e.g. 192.168.20.197/api/values/samplestring). So how can I connect the Web API to MySQL in a different container?
P.S.: Sorry for my bad grammar.
Thanks to @Tao Zhou's and @Nathan Werry's advice, I solved the problem by replacing localhost in the connection string with the IP address of my virtual machine. Then I used Docker's --link flag (a legacy Docker feature) to link the MySQL container to the Web API container.
docker run \
  --name <webapi-container-name> \
  -p 8082:8081 \
  --link <mysql-container-name>:<mysql-image> \
  -d <webapi-image>
For connecting from the Web API to MySQL, you could add Compose, which will create a shared network for the Web API and MySQL; you can then reach MySQL by its service name.
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "8001:5432"
You could access db at postgres://db:5432; refer to Networking in Compose.
As another option, you could create your own bridge network to share between the two containers; refer to Bridge networks.
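Adapted to this question's MySQL setup, the same Compose pattern might look like the sketch below (the image tag, credentials, and published ports are assumptions; the database name matches the connection string from the question):

```yaml
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydatabase
    ports:
      - "3306:3306"
```

The connection string then uses the service name instead of localhost, e.g. "server=db;port=3306;database=mydatabase;user=root;CharSet=utf8", because on the Compose network each service is reachable by its name.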