How to deploy an HTML application with apache2 in Docker

I have an HTML application that runs on an apache2 server, and I want to dockerize it so it runs in a Docker container using the apache2 package. I tried, but the docker build failed. I don't want to use the nginx server; please help me with Apache.
Here is the Dockerfile in the HTML application:
FROM apache2:2.4.18
WORKDIR /var/www/html/startapp
COPY . /var/www/docker
Then I tried to build it with Docker using:
sudo docker build -t startapp .
It returns:
Sending build context to Docker daemon 335.6MB
Step 1/3 : FROM apache2:2.4.18
pull access denied for apache2, repository does not exist or may require 'docker login'
If it's not possible with apache2, is there a chance to build it with a LAMPP server on Ubuntu 16.04?

Try replacing the base image; the one that you are using is not available on the default Docker registry:
FROM httpd:2.4
Take a look at https://hub.docker.com/_/httpd/ for more information.
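For example, a minimal Dockerfile for a static HTML site on the official image could look like this (a sketch, assuming the site's files sit next to the Dockerfile; /usr/local/apache2/htdocs/ is the image's default document root):
FROM httpd:2.4
# copy the site into the image's default document root
COPY . /usr/local/apache2/htdocs/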

It seems like you are trying to use a non-official Docker image for Apache. So either build the apache2 image from its Dockerfile, if you have it, or log in to the private repository that holds the apache2 image, if you have its credentials. Otherwise, use the official Apache Docker image.

Related

How to install the sudo and nano commands in a MySql docker image

I am trying to connect a Django application to a MySql docker container. I am using the latest version of MySql, i.e. MySql 8.0, to build the container. I was able to build the MySql container successfully, but I am not able to connect to it using Django's default MySql connector. When I run the docker-compose up command I get the error mentioned below.
django.db.utils.OperationalError: (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
I started looking for a solution and got to know that MySql has made a major change to its default authentication plugin, which is not supported by most MySql connectors.
To fix this issue I will have to set default-authentication-plugin to mysql_native_password in the my.cnf file of the MySql container.
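The setting in question goes in the [mysqld] section of my.cnf, like this:
[mysqld]
default-authentication-plugin=mysql_native_password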
I logged into the container using the command docker exec -it <container id> /bin/bash and was also able to locate the my.cnf file inside the container.
To edit the my.cnf file I will have to use the nano command, as shown below.
nano my.cnf
But unfortunately the nano command is not installed in the MySql container. To install nano I will need sudo installed in the container.
I tried installing sudo using the command mentioned below, but it did not work.
apt-get install sudo
error -
Reading package lists... Done
Building dependency tree
Reading state information... Done
What are the possible solutions to fix this issue?
In general you shouldn't try to directly edit files in containers. Those changes will get lost as soon as the container is stopped and deleted; this happens extremely routinely since many Docker options can only be set at startup time, and the standard way to update the software in a container is to recreate it with a newer image. In your case, you also can't live-edit a configuration file the main container process needs at startup time, because it will have already read the configuration file by the time you're able to edit it.
The right way to do this is to inject the modified configuration file at container startup time. If you haven't already, get the default configuration file out of the image
docker run --rm mysql:8 cat /etc/mysql/my.cnf > my.cnf
and edit it directly on your host, using your choice of text editor. When you launch the container, inject the modified file:
docker run -v $PWD/my.cnf:/etc/mysql/my.cnf ... mysql:8
or, in Docker Compose,
volumes:
- ./my.cnf:/etc/mysql/my.cnf
The Docker Hub mysql image documentation has some more suggestions on ways to set this; see "Using a custom MySQL configuration file" there.
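One of those suggestions is to skip the config file entirely and pass the setting as a server command-line flag. In Docker Compose that could look like this (a sketch; the db service name and password are illustrative):
services:
  db:
    image: mysql:8
    # pass the option straight to mysqld instead of editing my.cnf
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: secret   # illustrative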
While docker exec is an extremely useful debugging tool, it shouldn't be part of your core workflow, and I'd recommend trying to avoid it in cases like this. (With the bind-mount approach, you can add the modified config file to your source control system and blindly docker-compose up like normal without knowing this detail; with a docker exec approach you'd have to remember and repeat the edit by hand every time you started the container stack.)
Also note that you don't need sudo in Docker at all. Every context where you can run something (Dockerfiles, docker run, docker exec) has some way to explicitly specify the user ID, so you can docker exec -u root .... sudo generally depends on things like users having passwords and interactive prompting, which works well for administering a real Linux host but doesn't match a typical Docker environment.
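For example, to get a root shell in the running container, no sudo needed (the container name is illustrative):
docker exec -u root -it mysql-container bash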
The issue is not with sudo, because you already have permission to install packages.
You should instead update the package lists before installing new packages, so that the package repositories are refreshed:
apt-get update
apt-get install -y nano
The MySql image is built on Oracle Linux, so run these commands to install nano:
microdnf update
microdnf install nano sudo -y
Then edit my.cnf with nano.

How to link two containers for a Jenkins Job?

I have configured my Jenkins to run our build jobs and functional tests in a docker container. For example, when I click the "Build Now" button, Jenkins builds the Dockerfile, which is in Git, and runs the container so the build steps (Jenkinsfile) can be executed in this container.
My question is now: how can I start another container with a MySQL server installed and link it to my build-job container every time I build my job?
Thanks for any tips.
One can use service discovery, e.g. Consul, as IPs in docker networks are assigned dynamically. Or use a static IP; note that Docker only honors a user-specified IP on a user-defined network, so create one first:
docker network create --subnet 172.18.0.0/16 mynet
docker run --net mynet --ip 172.18.0.254 -it ubuntu bash
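A sketch of how the build job itself could wire this up with a throwaway network per build (the network name, image tag, and test command are hypothetical):
# create a per-build network
docker network create build-net
# start MySQL on it; other containers on build-net reach it at hostname "mysql"
docker run -d --name mysql --network build-net -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
# run the build/test container on the same network
docker run --rm --network build-net my-build-image ./run-tests.sh
# tear down
docker rm -f mysql
docker network rm build-net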

Connecting to percona docker from a java docker container

I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My java project (which accesses mysql) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard; it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script, run_dev_server.sh, that just launches the server with dev configurations rather than production ones.
I created a percona docker container using the command given in the link:
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can either extend this image if you need specific changes (such as a custom configuration), for example;
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through Docker's container-to-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used for most situations. The --link option has a number of limitations;
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking (e.g.) credentials to other containers.
Docker 1.9 introduced custom docker networks, which allow containers to discover each other by name and communicate over an isolated network, without the limitations of legacy links.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (also how to dynamically attach and detach a container from a network) in the "docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/
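Docker Compose can express the same wiring; it creates a shared network for its services automatically, so db resolves as a hostname from app (a sketch; the service names and the password are illustrative):
services:
  db:
    image: percona:5.5
    environment:
      MYSQL_ROOT_PASSWORD: secret   # illustrative
  app:
    image: projectname-testing
    depends_on:
      - db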

Docker container stops running only when volume is attached

I have written a Dockerfile that runs mysql on an ubuntu image. The Dockerfile is:
FROM ubuntu
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
RUN sed -i '43s/.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
EXPOSE 3306
ENTRYPOINT service mysql start && bash
If I run:
docker run -dit mysql-server
after building the image, everything works fine and my Apache/PHP container can communicate with it. However, if I run it with a volume attached (docker run -dit -v ~/vol/:/var/lib/mysql/ mysql-server), the container stops running after 30 seconds (I'm pretty sure it's the same amount of time every time).
Does anyone know a way I can keep the container up and mount a volume? I've never had this problem before and can't find anything else online (I've been looking for a while). Thanks.
This is because you are masking the contents of /var/lib/mysql with the contents of ~/vol, which I'm assuming is empty. As such, the MySQL server can't start, because it's missing its database files. I would personally use the official image over your custom implementation, as it handles what you're looking for; here is the link on Docker Hub: https://hub.docker.com/_/mysql/. It has options for mounting your custom my.cnf file if you need those changes. However, by default the image does bind to 0.0.0.0. See the Docker Hub link for config options.
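For example, with the official image an empty host directory works, because the image's entrypoint initializes the data directory on first start (a sketch; the container name, password, and tag are illustrative):
docker run -d \
  --name mysql-server \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v ~/vol:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.7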
Hope this helps
Dylan

Docker container to connect to the host system's mysql database, import the sql dump, and then host a web application

I am new to Docker, but I have read quite a bit about it. Now my requirement is:
I will give my client a shell script which he would run on a base ubuntu OS on a completely new system. The docker image should use the database of the host system. The shell script will do all the prerequisites of installing docker, mysql, etc., and will run a docker image. As the image is not available locally, it will be pulled from the docker repository.
Now my problem is that I don't want to give my client the sql dump file just like that. The dump is included in the image, and once the image is run I want the container to connect to the host database, load the dump, and then host the webapp.
My Dockerfile is:
FROM ubuntu:14.04
MAINTAINER test_manoj
# copy requirements first so pip can find the file
COPY requirements.txt /home/myapp/
RUN pip install -r /home/myapp/requirements.txt
ADD . /home/myapp/
RUN apt-get update && apt-get install -y supervisor
WORKDIR /home/myapp/
EXPOSE 8000
EXPOSE 80
cmd ["supervisord", "-c", "/home/myapp/supervisord.conf"]
There are some more apt-get installs, but I didn't find it useful to mention them here. So basically I am installing nginx, uwsgi, and supervisor.
I have exposed port 8000 for socket uwsgi connections and port 80 for nginx.
My docker run command is :
docker run --detach --net=host -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock manoj/mydocker
I am using -v to share the host's mysql socket with the container.
I have already found a workaround for my problem, which is running
docker run --rm --detach --net=host -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock manoj/mydocker mysql -uroot -proot db_name < dump.sql
before the main run command. I know this works, but is there any other way to do this? And is there any other way I can use the host's mysql without providing the -v flag?
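One way to fold that two-step workaround into a single docker run is a small wrapper script used as the image's command (a sketch; entrypoint.sh is a hypothetical file, and the paths and credentials are taken from the commands above):
#!/bin/sh
# entrypoint.sh: import the bundled dump into the host's mysql
# (reachable through the mounted socket), then hand off to supervisord
mysql -uroot -proot db_name < /home/myapp/dump.sql
exec supervisord -c /home/myapp/supervisord.conf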