How is MySQL in Docker used in production when schema updates are necessary?
For example, adding a column or table, etc.
Is there a way of using Liquibase?
Technically, you can run MySQL in a Docker container just like you'd run MySQL on a VM. Once deployed in a container, you can run any MySQL SQL via the mysql client (or any client, including JDBC) as long as the container is running at a resolvable address and you have the right credentials. The client doesn't know (or care) that your MySQL server is running in a container - all the client cares about is the host, port, database and user/password values.
That said, you need to make sure you mount a volume for your container so that MySQL data is "externalized" and you don't lose everything just because you ran a docker rm. With plain Docker, you can use the -v option to mount a volume from the Docker host VM or an external disk (such as EBS or EFS/NFS). With Kubernetes, you can use a StatefulSet with a PersistentVolumeClaim to make sure the storage is preserved no matter what happens to your container.
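With the official image, a named volume keeps the data directory outside the container's writable layer. A minimal sketch (the volume name, image tag and password below are just examples):
# the volume survives docker rm of the mysql container
docker volume create mysql-data
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -v mysql-data:/var/lib/mysql \
  -p 3306:3306 \
  mysql:8.0
Removing and recreating the container with the same -v mysql-data:/var/lib/mysql flag picks the existing data back up.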
MySQL on Docker acts mostly as a standalone MySQL installation. Be careful to configure a volume for the data or you will lose it upon container restart or termination.
That said, you can use any MySQL-consuming app (Liquibase included), as all you need to do is expose the port and configure credentials.
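For example, Liquibase itself can run from its own Docker image and talk to the MySQL container over JDBC. A rough sketch, where the host name, database, credentials and changelog path are all placeholders:
# sketch only: adjust host, credentials and changelog path for your setup
docker run --rm \
  -v /path/to/changelog:/liquibase/changelog \
  liquibase/liquibase \
  --changelog-file=changelog/db.changelog-master.xml \
  --url="jdbc:mysql://mysql-host:3306/mydb" \
  --username=myuser \
  --password=mypassword \
  update
Older Liquibase releases spell the option --changeLogFile, and depending on the image version you may need to add the MySQL JDBC driver yourself, so check the liquibase/liquibase image documentation.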
Related
Can I run a docker container with mysql, and save my database (data), outside the container?
Yes, you can. You can use bind mounts when creating the docker container to mount a path on the host to some path inside the container:
https://docs.docker.com/storage/bind-mounts/
You could, for example, mount the host OS's /home/<user>/mysqldata as /var/lib/mysql inside the container. When a process inside the Docker container tries to read/write files in /var/lib/mysql inside the container, it will actually be reading/writing data in the host OS's /home/<user>/mysqldata directory/folder. For example:
docker run -it --mount type=bind,source=/home/bob/mysqldata,target=/var/lib/mysql <some_image_name>
Do note that docker volumes can also be used for this although those work differently than bind mounts, so make sure you're using a bind mount (type=bind).
Also, I've seen at least one scenario where a bind mount won't work for MySQL data. In my case it was a bind mount for a Docker container running inside a Vagrant box, using a directory that was a VirtualBox shared folder. I was getting kernel/block-level errors that prevented MySQL from setting certain file modes or making low-level calls to some of the files in the data dir, which ultimately prevented MySQL from starting. I forget exactly what error it was throwing, but I had to switch to a volume instead of a bind mount. That was fine for my use case; just be aware of this possibility if you use a bind mount and MySQL fails to start due to some lower-level disk call.
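If you hit that situation, switching to a named volume looks roughly like this (the volume and image names are placeholders):
# create a named volume managed by Docker, then mount it instead of the host path
docker volume create mysqldata
docker run -it --mount type=volume,source=mysqldata,target=/var/lib/mysql <some_image_name>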
I should also add that it's not clear from your question why you want to do this, so I can't promise it will do what you want. Only one MySQL process should be writing to the MySQL data directory at a time, and the files are binary, so trying to read them with something other than MySQL seems odd. But if you have a use case where you want something outside of Docker to read the MySQL data files, the bind mount might do what you want.
I am attempting to connect to a MySQL database that is dockerized and running on EC2, from a Flask application. To get to the database manually, you have to SSH in with a .pem file and then exec into the Docker container. How would I go about connecting to this from the application itself? I have tried using both SQLAlchemy and mysql, but if I try to use the IP address of the EC2 instance it just times out. My guess is I need to do something with a Dockerfile within the Flask app, maybe? I am fairly new to Flask and Docker, so I am not sure what the best course of action is and could not find a lot of information online.
In short, if you are able to connect to the MySQL server with one of MySQL client tools, then you'll be able to connect to it with Flask.
The process is not that simple, and I can't give you exact commands to execute without additional info, but the overall logic is the following.
MySQL runs a server inside the Docker container, listening on port 3306. First, that port should be exposed from the container and, most likely, bound to the EC2 host's port 3306; this is done with the docker run command.
You need to run something like docker run -p 3306:3306 <image>. See the Docker documentation on publishing ports for more details.
Second, if you want to connect to the instance remotely, you need to make sure the port is reachable from wherever you are connecting. Docker will automatically add an iptables rule for you, but check that no firewall (and, on EC2, the instance's security group) is blocking the port.
Given you are receiving a time-out, most probably the issue is that Flask is not able to connect through that port.
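A quick way to rule Flask out is to test the published port with a plain MySQL client from the machine the Flask app runs on (the host, user and database names are placeholders):
# if this also times out, the problem is the port/firewall, not Flask
mysql -h <ec2-public-ip-or-dns> -P 3306 -u myuser -p mydb
If that connects, the same host, port and credentials in your SQLAlchemy URL should work from Flask.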
How can I persist data from my mysql container? I'd really like to mount /var/lib/mysql from the container to the host machine. This indeed creates the directory, but when I create data, stop my application, and start a new one with the mounted directory, nothing is there. I've messed around with giving the directory all permissions and changing the user and group to root, but nothing seems to work. I keep seeing people saying to use a data container, but I don't see how that can work with Amazon EC2 Container Service (ECS), considering each time I stop and start a task it would create a new data container rather than use an existing one. Please help.
Thank you
Simply run your containers with something like this:
docker run -v /var/lib/mysql:/var/lib/mysql -t -i <image> <command>
You can keep your host's /var/lib/mysql and mount it into each one of the containers. Now, this is not going to work with EC2 Container Service unless all of the servers that you use for containers map to a common /var/lib/mysql (perhaps NFS-mounted from a master EC2 instance).
AWS EFS is going to be great for this once it becomes widely available.
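For reference, the usual pattern once EFS is an option is to NFS-mount it on every container instance and bind-mount that path into the MySQL container. The file system ID and region below are placeholders, and keep in mind that putting a MySQL data directory on NFS comes with its own caveats:
# mount EFS (NFS) on the container instance, then reuse it as the MySQL data dir
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
docker run -v /mnt/efs/mysql:/var/lib/mysql -t -i <image> <command>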
I'm using OpenShift to build my apps, and I added MySQL to my gear.
But if I want to reach my database, I can't use Navicat, which is my usual way to manage my databases. I must SSH into my OpenShift server and then use the command-line mysql client to reach my database, which is a bad way compared to Navicat.
So, how can I reach my database in OpenShift with Navicat?
I've used env | grep MYSQL to get my MySQL configuration and used it in Navicat.
However, none of it worked.
If it's a scalable application, you should be able to connect to it externally via the connection information supplied by the environment variables. If it's not a scalable app, then you'll need to use the rhc port-forward command to forward the ports needed to connect.
Take a look at the following article for more information: https://www.openshift.com/blogs/getting-started-with-port-forwarding-on-openshift
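For a non-scalable app, the port-forwarding workflow looks something like this (the app name is a placeholder):
# forwards the gear's services, including MySQL, to local ports
rhc port-forward -a myapp
rhc prints the local address and port each service is forwarded to; point Navicat at that 127.0.0.1 address and port, using the credentials from the MYSQL environment variables.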
I have a server running 5 or 6 small Rails apps. All their attached files are on S3 and they all use MySQL as their database. Each app has its own user and runs a few Thin instances. There is an nginx server doing the load balancing and domain routing.
I plan to replace this server with a Docker installation: one server with one container per app, and nginx in front.
My question is : where would you put the database part ?
I mainly see 4 possibilities :
1) One MySQL server inside each app container. This seems not to be Docker's philosophy, I think, and it would require each container's data to be backed up individually.
2) A unique MySQL container for all apps.
3) A standard MySQL installation on the host Docker server.
4) A separate MySQL server for all apps.
What would you do ?
PS : I know Docker is not production ready yet, I plan to use it for staging at the moment and switch if I'm happy with it.
It depends on several factors. Here are some questions to help you to decide.
Are the 5-6 apps very similar (i.e., in Docker terms, you could base them on a common image), and are you thinking about deploying more of them, and/or migrating some of them to other servers?
YES: then it makes sense to embed the MySQL server in each app, because it will "stick around" with the app, with minimal configuration effort.
NO: then there is no compelling reason to embed the MySQL server.
Do you want to be able to scale those apps (i.e. load balance requests for a single app across multiple containers), or to scale the MySQL server (e.g. to a master/slave replicated setup)?
YES: then you cannot embed the MySQL server; otherwise, scaling one tier would scale the other, which will lead to tough headaches.
NO: then nothing prevents you from embedding the MySQL server.
Do you think that there will be a significant database load on at least one of those apps?
YES: then you might want to use separate MySQL servers, because a single app could impede the others.
NO: then you can use a single MySQL server.
Embedding the MySQL server is fine if you want a super-easy-to-deploy setup, where you don't need scalability, but you want to be able to spin up new instances super easily, and you want to be able to move instances around without difficulty.
The most flexible setup is the one where you deploy one app container + one MySQL container for each app. If you want to do that, I would suggest waiting for Docker 0.7, which will implement links, giving you a basic service discovery mechanism so that each app container can easily discover the host/port of its database container.
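Once links landed, that pattern ended up looking roughly like this (the image and container names are placeholders):
# start a database container per app, then link the app container to it
docker run -d --name app1-db <mysql-image>
docker run -d --name app1 --link app1-db:db <app1-image>
Inside app1, the link exposes the database's address and port as environment variables such as DB_PORT_3306_TCP_ADDR and DB_PORT_3306_TCP_PORT.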
I wouldn't deploy MySQL on the host; if you want a single MySQL install, you can achieve the same result by running a single MySQL container and running it with -p 3306:3306 (it will route the host's 3306/tcp port to the MySQL container's 3306/tcp port).
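With a current official image, that single-container setup would be something like this (the tag, host path and password are examples; mount the data directory so it survives container replacement):
# one MySQL container, published on the host's 3306 and backed by a host directory
docker run -d --name mysql \
  -p 3306:3306 \
  -v /srv/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  mysql:8.0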
Since the 5 or 6 apps are small, as you described, I would definitely exclude the option of installing a separate MySQL per app container, for two reasons:
It is a waste of server resources; it is almost equivalent to installing MySQL 5 or 6 times on the same server.
It is less flexible (you cannot scale the DB independently from the apps) and harder to back up.
Having a dedicated MySQL container or installing MySQL directly on the host (i.e. not dockerized) should give almost the same performance (in the end you will have a native mysqld process on the host regardless of whether it is in a container or not).
The only difference is that you have to mount a volume to persist the data outside the MySQL container, so having a dedicated MySQL container is the better option.
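With today's Docker, a minimal sketch of that dedicated-container setup (the names, tag and the DB_HOST variable are placeholders; how each app reads the database host name is up to the app):
# one shared MySQL container on a user-defined network, reachable by name
docker network create appsnet
docker run -d --name shared-mysql --network appsnet \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=change-me \
  mysql:8.0
# each app container joins the same network and connects to host "shared-mysql"
docker run -d --name app1 --network appsnet -e DB_HOST=shared-mysql <app1-image>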