I'm creating a custom Docker image based on the official MySQL image.
My Dockerfile looks like this:
# Derived from official mysql image (our base image)
FROM mysql
# Add a database
ENV MYSQL_DATABASE company
# Add the content of the sql-scripts/ directory to your image
# All scripts in docker-entrypoint-initdb.d/ are automatically
# executed during container startup
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
In the sql-scripts folder I have 2 files:
CreateTable.sql, with a CREATE TABLE statement, and InsertData.sql, with some INSERT statements.
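For illustration, think of content along these lines (the actual table and columns don't matter here):

CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  user_name VARCHAR(64) NOT NULL,
  password VARCHAR(255) NOT NULL
);

INSERT INTO users (user_name, password) VALUES ('alice', 'secret');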
Everything works fine when I create the container.
My question is:
How can I use external variables when inserting data into MySQL? For example, I have a front-end interface (built in PHP) where the user can choose a name and password, and I want to insert that info into the MySQL database on container creation.
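One pattern that fits this setup is a minimal sketch only: pass the values as environment variables at docker run time and read them from a .sh script in /docker-entrypoint-initdb.d, since the entrypoint executes shell scripts there as well. The names InsertUser.sh, APP_USER, APP_PASSWORD and the image tag are assumptions:

#!/bin/bash
# sql-scripts/InsertUser.sh (hypothetical) -- the entrypoint also runs .sh
# files from /docker-entrypoint-initdb.d, alphabetically after the .sql files.
mysql -uroot -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" <<SQL
INSERT INTO users (user_name, password) VALUES ('$APP_USER', '$APP_PASSWORD');
SQL

Then the values come from the docker run command (e.g. generated by the PHP front end):

docker run -d -e MYSQL_ROOT_PASSWORD=secret \
  -e APP_USER=alice -e APP_PASSWORD=s3cret custom-mysql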
Here's a sample for MySQL; I created a Dockerfile as shown below.
My database needs:
Tech: MySQL
Database table: user
Database fields: user_name | password | Type
However, I don't know why it doesn't appear.
from mysql:latest
copy script.mysql
cmd bash script_mysql
I build it but get "failed to solve with frontend dockerfile.v0: failed to create LLB definition: dockerfile parse error line 2: COPY requires at least two arguments, but only one was provided. Destination could not be determined."
Actually you have a syntax error in your Dockerfile: for the COPY command in docker build you need to specify the destination directory, i.e. where you want to copy the script.mysql file inside the container:
COPY script.mysql <destination_directory_in_container>
In addition, there are configurations missing from your Dockerfile; check the MySQL image docs for more info.
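Putting both fixes together, a minimal working Dockerfile could look like this. Renaming script.mysql to script.sql is an assumption (the entrypoint only picks up .sh, .sql and .sql.gz files), and the CMD line isn't needed at all, since the base image's entrypoint runs the scripts itself:

FROM mysql:latest
# Required for the server to initialize on first start
ENV MYSQL_ROOT_PASSWORD=changeme
ENV MYSQL_DATABASE=mydb
# Executed automatically on first startup
COPY script.sql /docker-entrypoint-initdb.d/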
I would like to create a Dockerfile in order to build a container that already has MySQL installed and my databases created.
I have an sql folder that contains my *.sql files and a scripts folder that contains my db_builder.sh script, which does all the work I need (creates the databases, imports the needed sql files, etc...).
The only thing I'm missing is to get the mysql server running before the db_builder.sh script runs. I also need to know what the default password of the root user would be.
FROM ubuntu:18.04
ADD sql src/sql
ADD scripts src/scripts
RUN apt-get update && apt-get install mysql-server -y
# somehow start mysql ???
RUN src/scripts/db_builder.sh
I solved my issue by:
1) creating the Dockerfile FROM the MySQL image instead of the Ubuntu image
2) splitting my db_builder.sh into two scripts:
- prepare_sql_files.sh -> which prepares the needed sql files to be imported
- db_import.sh -> which actually does the import
3) running (RUN) prepare_sql_files.sh in the Dockerfile, while just placing (ADD) db_import.sh in /docker-entrypoint-initdb.d, because of this feature of the mysql docker image:
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So my Dockerfile now looks like this:
FROM mysql:latest
ADD sql /src/sql
ADD scripts /src/scripts
RUN /src/scripts/prepare_sql_files.sh
# Only db_import.sh goes into the init dir (ADD sources are relative to the build context)
ADD scripts/db_import.sh /docker-entrypoint-initdb.d/
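For completeness, a sketch of what db_import.sh could then look like (the loop over /src/sql is an assumption about where prepare_sql_files.sh leaves its output):

#!/bin/bash
# Executed by the mysql entrypoint on first startup; at this point the
# temporary server is already running and the root password has been set.
for f in /src/sql/*.sql; do
  echo "Importing $f"
  mysql -uroot -p"$MYSQL_ROOT_PASSWORD" < "$f"
done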
I am trying to create a docker image from a mysql container.
The problem is that the db of the new image is clean, while the files/folders that I create manually in the origin container before the commit are copied.
The base mysql image is the official 5.6, docker is 1.11.
I checked that the folder /var/lib/mysql/d1 appears when a db is created, but the new image doesn't persist this folder, though folders in the / root are persisted.
Several things are happening here:
First, docker commit is a code smell. It tends to be used by those creating images with a manual process, rather than automating their builds with a Dockerfile that would allow for easy recreation. If at all possible, I recommend you transition to a Dockerfile for your image creation.
Next, a docker commit will not capture changes made to a volume. And this same issue occurs if you try to update a volume with a RUN step in a Dockerfile. Both of these capture changes to the container filesystem and store those changes as a layer in the docker image, and the volumes are not part of the container filesystem. This is also visible if you run docker diff against a container. In this case, the upstream image has defined the volume in their Dockerfile:
VOLUME /var/lib/mysql
And docker does not have a command to undo a created volume from the Dockerfile. You would need to either directly modify the image definition from outside of docker (not recommended) or build your own upstream image with that step removed (recommended).
What the mysql image does provide is the ability to inject your own database creation scripts in /docker-entrypoint-initdb.d, which you can add with your own image that extends mysql, or mount as a volume. This is where you would inject your schema, or initialize from a known backup for development.
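For example, to mount init scripts as a volume (the host directory name initdb is a placeholder):

docker run -d --name mysql-dev \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v "$PWD/initdb":/docker-entrypoint-initdb.d:ro \
  mysql:5.6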
Lastly, if the goal is to have persistence, you should store your data in a volume, not by committing containers:
docker run -v mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
The volume allows you to recreate the container and upgrade to a newer version of mysql when patches are released (e.g. security fixes), without losing your data.
To back up the volume, this will export it to a tgz:
docker run --rm -v mysql-data:/source busybox tar -cC /source . >backup.tgz
And to restore a volume, this creates one from a tgz:
docker run --rm -i -v mysql-data:/target busybox tar -xC /target <backup.tgz
You can make data persist by using the docker commit command like below.
docker commit CONTAINER_ID REPOSITORY:TAG
docker commit | Docker Documentation
But just as BMitch's answer said, a docker commit will not capture changes made to a volume.
And usually you should use a volume to store data permanently and let a container be ephemeral without data being stored in itself.
So I guess many people think that trying to persist data without using a volume is a bad practice.
But there are some cases you might consider committing and freeze data into an image.
For example, it's handy to have an image with all the tables and records in it if you use the image for automated tests in CI.
In the case of GitHub Actions, the only thing you need to do is pull the image, create the database container, and run tests against the database.
No need to think about migration of data.
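In shell terms, such a CI job boils down to something like this (the registry path and test command are placeholders):

# Pull the image that already contains schema and records
docker pull registry.example.com/ci/mysql-testdata:latest
docker run -d --name test-db -p 3306:3306 \
  registry.example.com/ci/mysql-testdata:latest
# Run the suite against localhost:3306, then throw the container away
./run_tests.sh
docker rm -f test-db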
I'm inheriting from the mysql Dockerfile and want to move a VOLUME (/var/lib/mysql) back inside the container so I can distribute it from a registry.
Is there a way in my downstream Dockerfile to (a) undo the VOLUME declaration or (b) replace /var/lib/mysql with a symlink?
I'm giving up on this -- seems simpler to distribute a zipped copy of the DB data directory. If you have a better option, please post.
I had the exact same problem, just with another database (arangodb).
However, I did not find a direct solution to this problem, but in my case (this should also work with mysql) I simply changed the data directory of my database to a non-volume directory in the Dockerfile.
For now, this seems like the best solution, as you can build a full image that contains your data.
As L0j1k has argued vividly, in general it is a very bad idea to have your data dir inside of the container. However, there are situations where it makes sense, like automated tests: run a container with test data, check that everything works as expected, and throw it away. Also, on OSX & Windows volumes aren't native mounts (because docker runs in a VM) and they can be painfully slow. So you might be better off copying your data from and to the container, depending on your situation.
While you can't undo the VOLUME directive, you can simply create a new data dir and tell MySQL to use that:
FROM mariadb:latest
# Create the new data dir in /var/lib/data and hand it to the mysql user
RUN mkdir /var/lib/data && chown mysql:mysql /var/lib/data
# Change the data dir from /var/lib/mysql to /var/lib/data
RUN sed -i 's|/var/lib/mysql|/var/lib/data|g' /etc/mysql/my.cnf
Use with caution.
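Since /var/lib/data is now part of the container filesystem, plain docker cp can move the data in and out (the container name mydb is a placeholder):

# Snapshot the data dir out of the container...
docker cp mydb:/var/lib/data ./data-snapshot
# ...and restore it into another container built from the same image
docker cp ./data-snapshot/. mydb:/var/lib/data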
DO NOT ship your database data in the same image as your database! This is an antipattern and will create bigger problems almost immediately. Ship the data separately as an archive which you then mount into your database container via bind-mount (-v /home/foo/db:/var/lib/mysql). Bind-mount volumes in your docker run statement will override any VOLUME Dockerfile directive. Alternatively, create some automation to dump the database and ship that to your containers, then restore using the dump.

Just as one example of why this is a bad idea: What happens when you need to move the data/database mutant which now has changes? You'll probably use docker export to dump the entire container's filesystem into a new image, and now you're passing around a big blob of crap which is hard to audit.

Docker containers (and microservices in general) are designed to be ephemeral and stateless, which means you can hose any one container and recreate it and it'll continue working. You can't do this if you ship your blob of data inside the database image.
With respect to the VOLUME directive in that Dockerfile: Remember that Dockerfiles are used during docker build and therefore do not (and cannot) contain host-dependent information or actions. So the VOLUME /var/lib/mysql isn't making your image impossible to distribute. What that directive does is create a generic (i.e. non-bind-mount) data volume that persists the data of that directory beyond the lifetime of the container. It is not the same thing as a bind-mount volume for example in docker run -v "/var/docker/app/data:/var/lib/mysql" .... This Dockerfile directive does not prevent you from distributing the image because it does not specify host-dependent information.
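A side-by-side sketch of the distinction (paths and names are illustrative):

# VOLUME directive alone: docker creates an anonymous volume,
# nothing host-dependent is baked into the image
docker run -d -e MYSQL_ROOT_PASSWORD=secret mysql

# Bind-mount at run time: host path chosen by the operator,
# overrides the anonymous volume from the VOLUME directive
docker run -d -e MYSQL_ROOT_PASSWORD=secret \
  -v "/var/docker/app/data:/var/lib/mysql" mysql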
I want to move containers from one host to another. The containers have updated data in their filesystem, so I do not want to move the original images (docker save) but containers (using docker export).
So I use
docker export l4bnode > l4bnode.tar
on the old host, copy the file to new host, and import image
cat l4bnode.tar | docker import - andi/l4bnode
on the new one. But it looks like all the configuration data I had in the Dockerfile (and that I also could specify / had specified on the command line when running the container) is lost. I tried
docker run andi/l4bnode
and get
docker: Error response from daemon: No command specified.
Using docker inspect, I see that all the configuration on the imported image is empty, though it is set on the exported running container. I am mainly missing the startup command, working directory, environment variables and exposed ports (some of which I have to change due to the migration and new environment).
How can I apply the original configuration on the new host, or preferably, migrate it properly?
You can commit the current container state as new image. Then use save/load on the new image.
That being said, this is something you should generally try to avoid. Runtime data should be kept in volumes; any configuration changes should happen via Dockerfile rebuilds.
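In commands, the suggested route looks like this (the :migrated tag is a placeholder; the container and image names follow the question):

# On the old host: commit keeps the image metadata (CMD, ENV, EXPOSE, ...)
docker commit l4bnode andi/l4bnode:migrated
docker save andi/l4bnode:migrated > l4bnode_image.tar
# On the new host:
docker load < l4bnode_image.tar
docker run -d andi/l4bnode:migrated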