I have configured Jenkins to run our build jobs and functional tests in a Docker container. For example, when I click the "Build Now" button, Jenkins builds the Dockerfile stored in Git and runs the container so that the build steps (from the Jenkinsfile) can be executed inside it.
My question now is: how can I start another container with a MySQL server installed and link it to my build-job container every time I build the job?
Thanks for any tips.
You can use service discovery such as Consul, since IPs in Docker networks are assigned dynamically. Or use a static IP:
docker run --net bridge --ip 172.17.0.254 -it ubuntu bash
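Note that user-specified IPs are generally only supported on user-defined networks, so a sketch along these lines may be closer to what you need (the network name, subnet, fixed IP and password below are placeholders):
# a user-defined network with a known subnet
docker network create --subnet 172.20.0.0/16 build-net
# MySQL container with a fixed address on that network
docker run -d --name build-mysql --net build-net --ip 172.20.0.10 -e MYSQL_ROOT_PASSWORD=secret mysql
# the build-job container joins the same network and can reach MySQL at 172.20.0.10 (or via the name build-mysql)
docker run --net build-net -it ubuntu bash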
Related
This might sound like a repeated question, but it is not; it feels like a crazy bug to me. Let me quickly explain my setup:
A simple Spring Boot application that runs perfectly well on my local machine. The JDBC connection string in the application.properties file is as follows:
spring.datasource.url=jdbc:mysql://minesql:3306/datamachine?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=****
The running Docker instances are:
I copied the war file (with the help of the docker cp command) to alpine (a Unix container) and ran it in interactive mode to test it, and it throws an exception because it is unable to ping the MySQL server. I am certain that the database configuration is fine and am clueless as to why the Spring Boot app fails to connect to the MySQL container instance. Note that the MySQL container does have the "datamachine" database, created manually.
This is the error reported:
Please help me understand what I am missing here or what is going wrong.
Just in case you wish to know how I started these containers:
For mysql:
docker run -d --name minesql -e MYSQL_ROOT_PASSWORD=**** -p 3306:3306 mysql
I am running the Java app from the alpine container, and this is how I started it:
docker run -it --name unix alpine
Interactive mode presents me with a shell prompt from which I run the Spring Boot war file (after installing Java 8 in alpine).
You have two Docker containers which are running and connected via the default bridge network. From the Docker bridge documentation:
Containers on the default bridge network can only access each other by
IP addresses, unless you use the --link option, which is considered
legacy. On a user-defined bridge network, containers can resolve each
other by name or alias.
If you need the second container to be able to resolve the name minesql from inside, you need to create a user-defined bridge network and connect both containers to it.
Create a new network using
docker network create my-net
And add your containers as specified here
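For example, assuming the containers keep the names minesql and unix from the question, connect both to the new network; the application container can then resolve minesql by name:
# attach the already-running containers to the user-defined network
docker network connect my-net minesql
docker network connect my-net unix
# verify name resolution from inside the app container
docker exec -it unix ping -c 1 minesql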
Another alternative is to use docker-compose and avoid the manual creation of bridge networks for name resolution. For a production environment, that would be ideal.
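A minimal docker-compose.yml sketch for this setup might look roughly like the following (the image for the app is an assumption; compose puts both services on its own network, so the application can keep using the hostname minesql):
version: '3'
services:
  minesql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=****   # use your real root password
  app:
    image: openjdk:8-jre           # placeholder: whatever image runs the Spring Boot war
    depends_on:
      - minesql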
I have an HTML application that runs on an Apache2 server, and I want to dockerize it so that it runs in a Docker container using the apache2 package. I tried, but the docker build failed. I don't want to use the nginx server; please help me with Apache.
Here is the Dockerfile in the HTML application:
FROM apache2:2.4.18
WORKDIR /var/www/html/startapp
COPY . /var/www/docker
Then I tried to build this with docker using
sudo docker build -t startapp .
It returns:
Sending build context to Docker daemon 335.6MB
Step 1/3 : FROM apache2:2.4.18
pull access denied for apache2, repository does not exist or may require 'docker login'
If it's not possible with apache2, is there a chance to build it with a LAMPP server on Ubuntu 16.04?
Try replacing the base image (the one you are using is not available on the default Docker registry):
FROM httpd:2.4
Take a look at https://hub.docker.com/_/httpd/ for more information.
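A corrected Dockerfile along those lines could look roughly like this (the official httpd image serves files from /usr/local/apache2/htdocs/ by default):
FROM httpd:2.4
# copy the static site into the directory httpd serves by default
COPY . /usr/local/apache2/htdocs/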
It seems like you are trying to use a non-official Docker image for Apache. You can either build the apache2 image from its Dockerfile, if you have it, or log in to the private repository that holds the apache2 image, if you have its credentials. Otherwise, you may use the official Apache Docker image.
I discovered Docker last week and have been playing around with it for a while now.
Now I want to deploy a website inside a container. The website is already finished and I have all the files on my host system. It needs PHP, Java, Tomcat and, here is the problem, a MySQL database.
So I created a Dockerfile, using alpine:latest as the base image and installing the applications named above one by one.
FROM alpine:latest
ENV http_proxy http://not_important/
RUN apk update
RUN apk --no-cache --quiet add openjdk8
RUN apk --no-cache --quiet add nano
RUN apk --no-cache --quiet add php7
RUN apk --no-cache --quiet add mysql
RUN apk --no-cache --quiet add phpmyadmin
RUN mkdir -p /usr/local/tomcat/
COPY apache-tomcat-9.0.4.tar.gz /usr/local/tomcat/
RUN cd /usr/local/tomcat/ && tar xzf /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
RUN mv /usr/local/tomcat/apache-tomcat-9.0.4/* /usr/local/tomcat
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4
RUN rm -r /usr/local/tomcat/apache-tomcat-9.0.4.tar.gz
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
But now I don't really know how to finish my work. How am I able to start the MySQL database and access it with phpMyAdmin?
I run the container with the following command:
docker run --name alpine_custom -dit -p 30000:8080 -p 31000:80 alpine:custom
Tomcat is running on port 30000 without a problem, and I want phpMyAdmin to be accessible on port 31000. I do have a working MySQL database on my host and manage it with phpMyAdmin (meaning there are two containers, and the phpMyAdmin container is linked with the database)...
Is it even possible to do it the way I want, or do I have to deploy a second container with a database that is linked with my alpine container (and a third one with phpMyAdmin...)?
I am thankful for every answer, thank you in advance
Sincerely
Telvanis :)
PS: I know the Dockerfile isn't very good, but I think it's enough for my needs ^^
Try to avoid having it "all-in-one".
This is the idea behind Docker: to go from something "monolithic" to something that is separated into components. This approach gives you an advantage when you want to scale your app up or down, update specific components without rebuilding the whole app, etc.
Try to avoid the installation & configuration of every technology on your own
I remember trying to do that myself with MySQL. I spent a lot of time and got no result, and ended up using the official image. Installing software inside Docker can have tricky parts and is not the same as the installation you would do in a VM.
So I would propose starting by searching for the official images of the technologies you are trying to use. Docker Hub has plenty, and most of them also provide guidelines on how to use and configure them. For example:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/
https://hub.docker.com/_/mysql/
https://hub.docker.com/_/openjdk/
...you get the idea.
Your running containers will have names. Docker offers a DNS mechanism so that your containers can connect to each other by using these names. For example, if you have a container for your MySQL database named my_app_db listening on port 5000, configure the phpmyadmin container to connect there. An important note here: don't try this on the default network, because it will not work. Define your own test-network.
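A rough sketch of that idea with plain docker commands (the network name follows the paragraph above; the password and phpMyAdmin port are only examples):
# a user-defined network, so containers can resolve each other by name
docker network create test-network
# official MySQL image, named my_app_db as in the example above
docker run -d --name my_app_db --network test-network -e MYSQL_ROOT_PASSWORD=test123 mysql
# point phpMyAdmin at the host name my_app_db and expose it on port 31000
docker run -d --name my_phpmyadmin --network test-network -e PMA_HOST=my_app_db -p 31000:80 phpmyadmin/phpmyadmin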
Dealing with 3, 4, 5 or maybe more containers will make you type commands to build them, run them, and start/stop them. This is where docker-compose comes in and proves to be very handy. Within a docker-compose.yml file, you can define a "composition" of inter-connecting containers and handle them with single commands like docker-compose up, docker-compose down, etc.
Working example:
comes from here, but is slightly modified...
docker-compose.yml file:
version: '2'
services:
  mysql:
    image: mysql:latest
    container_name: phpmyadmin_testing_mysql
    environment:
      - MYSQL_ROOT_PASSWORD=test123
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: phpmyadmin_testing
    volumes:
      - /sessions
    ports:
      - 8090:80
    environment:
      - PMA_ARBITRARY=1
      - TESTSUITE_PASSWORD=test123
    depends_on:
      - mysql
To run, simply use docker-compose up. To connect, use:
server: phpmyadmin_testing_mysql (the name of the MySQL container)
username: root
password: test123
I have written a service that runs in a Docker container.
Currently, when I exit my container, my service is no longer running, which is expected. But when I run
"sudo docker exec -it ps -ef", it shows that the MySQL installed in my container is up and running. If I want the same kind of behavior for my service, what should I do?
Thanks in advance.
You want to run your container detached with
docker run -d <image_name>
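A minimal example (my-service-image is a placeholder for your own image, whose main process should be the service itself):
docker run -d --name my-service my-service-image
# the container keeps running in the background; check on it with
docker ps
docker logs -f my-service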
Thanks
I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My Java project (which accesses MySQL) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard: it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a Percona Docker container with the command given in the link:
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can extend this image if you need specific changes (such as a custom configuration), for example:
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
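You could then build and run the customized image roughly like this (the image tag and password below are placeholders):
docker build -t my-percona:5.5 .
docker run -d --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD=secret my-percona:5.5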
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through the Docker container-to-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used in most situations. The --link option has a number of limitations:
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to credentials (for example) leaking to other containers.
Docker 1.9 introduced custom Docker networks, which allow containers on the same network to find and connect to each other by name.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name
as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (including how to dynamically attach and detach a container from a network) in the "Docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/
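Applied to your own containers, a rough sketch (container and image names taken from your question; the root password and database name are placeholders) could look like this:
docker network create mynet
# database container on the custom network; no ports published
docker run -d --name projectname-mysql-server --network mynet -e MYSQL_ROOT_PASSWORD=secret percona:5.5
# application container on the same network
docker run -d --name projectname-local --network mynet projectname-testing
# inside projectname-local, point the JDBC URL at the hostname projectname-mysql-server, e.g.
# jdbc:mysql://projectname-mysql-server:3306/<database>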