I have a Docker container and need the ability to run mysqldump and import SQL files into an external MySQL server. So I don't need the server engine in my Docker container; I just need the ability to run mysql commands.
So how can I install just this capability with a minimal space requirement in my Docker container?
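I'm guessing something like this in the Dockerfile would do it, if mysqldump ships with the client package (a sketch assuming a Debian-based image; package names vary by distribution):
# client tools only, no server engine
RUN apt-get update && apt-get install -y --no-install-recommends default-mysql-client && rm -rf /var/lib/apt/lists/*
On an Alpine base I presume the equivalent would be RUN apk add --no-cache mysql-client.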
I just started using Docker (CE), so I am a relative novice.
I installed it on Ubuntu 18.04:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo docker run --name some-postgres -v "/home/parallels/Desktop/Orthanc Dropbox/OrthancConfigs/postgres:/var/lib/postgresql/data" -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres
I then installed DBeaver on Ubuntu, and I am able to connect to Postgres after running the image/container (I need to read more to understand the difference). I wanted to configure it to use a local filesystem folder to store the database rather than have the data be non-persistent when I run the image. There is probably an issue with having a space in the "Orthanc Dropbox" path, but I'm not sure I can change that because I'm running their client and it creates that folder automatically. You can use symlinks with Dropbox, but the actual files to sync have to be in the Dropbox folder, not the symlinks. It would be nice if they supported the opposite arrangement.
I see there are quite a few config options, especially when you use a .yaml config file. Ideally, I want the Postgres container to start when I boot the system and shut down when I shut down the system, so that the operation is relatively seamless, but with the database files stored on the system file system or on a mounted filesystem folder, probably within the Dropbox folder itself, because it automatically syncs everything in that folder with the Dropbox cloud.
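From what I've read, a restart policy might cover the boot/shutdown part; something like this, reusing my run command from above (untested guess on my part):
sudo docker run --name some-postgres --restart unless-stopped -v "/home/parallels/Desktop/Orthanc Dropbox/OrthancConfigs/postgres:/var/lib/postgresql/data" -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres
With unless-stopped, the Docker daemon restarts the container at boot as long as I didn't stop it manually.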
If I can get that to work, I'll also probably want to do the same for a LAMP stack (MySQL, Apache, PHP 7.4, also with the DB as above) and for an NGINX server. It would actually even be nice to package NGINX, PHP-FPM & Postgres in my own custom container.
So the goal is:
Custom Docker container with Postgres, NGINX, PHP-FPM 7.4, with DB, web directories and config files on file system.
Custom Docker container with Apache, MySQL, PHP 7.4, with DB, web directories and config files on file system.
I can read the documentation a little further, but I presume this really isn't that difficult, and it seems like there should be some already made Docker images that do something pretty similar.
Another option is to use Docker images/containers for Postgres and MySQL with the database files on my file system, and then just install Apache, NGINX, and PHP/PHP-FPM on my system natively. That way I can use Docker for the databases and my system for the rest.
So:
Custom Docker container with Postgres, with DB on file system.
Custom Docker container with MySQL, with DB on file system.
I have some .sh scripts to make rolling backups of some database files, so I presume there would be a way to use those with the Docker images if I wanted to, although that might not be necessary with the backup in the Cloud.
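I assume I could adapt them to dump from inside the container with docker exec, something like (mydb and the backup path are just placeholders):
# dump one database from the running container into my rolling-backup folder
sudo docker exec some-postgres pg_dump -U postgres mydb > "/path/to/backups/mydb-$(date +%F).sql"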
Thanks.
Databases, whether MySQL or Postgres, have rather strict requirements on filesystems.
It's unlikely that an archival/sharing-oriented Dropbox connector meets these requirements.
Multiple concurrent instances on the same storage certainly won't be possible.
I recommend running the databases locally and having the backup mechanism push its backups to your Dropbox-based storage. That way you'll actually have a backup, in addition to disaster recovery.
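As a sketch (container name, database, and paths are placeholders), a nightly cron job on the host could do something like:
# dump, compress, and drop the finished file into the Dropbox-synced folder
docker exec some-postgres pg_dump -U postgres mydb | gzip > "$HOME/Dropbox/backups/mydb-$(date +%F).sql.gz"
Dropbox handles syncing a finished dump file just fine; it's the live database files it can't safely handle.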
I've been plugging away with Docker for the last few days and am hoping to move my Python-MySQL webapp over to Docker soon.
The corollary is that I need to use Docker volumes, and I've been stumped lately. I can create a volume directly with
$ docker volume create my-vol
Or indirectly by referencing a nonexistent volume in a docker run call. But I cannot figure out how to populate these volumes with my .sql database file without copying the file over via a COPY instruction in the Dockerfile.
I've tried directly creating the volume within the directory containing the .sql file (the first method mentioned above) and mounting the directory containing the .sql file in my docker run call, which does move the .sql file into the container (I've seen it by navigating the bash shell inside the container). But when I run a mariadb container connecting to the database-containing mariadb container (as suggested in the mariadb Docker README), it only has the standard databases (information_schema, mysql, performance_schema).
How can I create a volume containing my pre-existing .sql database?
When working with mariadb in a Docker container, the image supports running .sql files as part of the first startup of the container. This lets you load data into the database before it is made accessible.
From the mariadb documentation:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mariadb services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
This means that if you want to inject data into the container when it starts up for the first time, you can COPY the .sql file in your Dockerfile to the path /docker-entrypoint-initdb.d/myscript.sql, and it will be run against the database specified in the MYSQL_DATABASE environment variable.
Like this:
FROM mariadb
COPY ./myscript.sql /docker-entrypoint-initdb.d/myscript.sql
Then:
docker run -e MYSQL_DATABASE=mydb mariadb
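Alternatively, if you don't want to build a custom image, you can bind-mount the dump straight into that directory (the host path here is just an example):
docker run -e MYSQL_DATABASE=mydb -v /path/to/myscript.sql:/docker-entrypoint-initdb.d/myscript.sql mariadb
Either way, keep in mind the init scripts only run when the data directory is empty, i.e. on the very first startup.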
There is then the question of how you want to manage the database storage. You basically have two options here:
Create a bind mount from a host directory to the location where mariadb stores the database. This will let you access the database storage files easily from the host machine.
An example with docker run:
docker run -v /my/own/datadir:/var/lib/mysql mariadb
Create a Docker volume and mount it at the storage location in the container. This will be a volume managed by Docker, and it will persist the data even if the container is removed and recreated.
docker volume create my_mariadb_volume
docker run -v my_mariadb_volume:/var/lib/mysql mariadb
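If you want to see where Docker keeps that managed volume on disk, docker volume inspect shows the mountpoint:
docker volume inspect my_mariadb_volume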
This is also covered in the docs for the mariadb Docker image. I can recommend reading them from top to bottom if you are going to use this image.
How is MySQL in Docker used in production when updates are necessary?
For example, adding a column or table, etc.
Is there a way of using Liquibase?
Technically, you can run MySQL in a Docker container just like you'd run MySQL on a VM. Once deployed in a container, you can run any SQL against it via the mysql client (or any client, including JDBC) as long as the container is running at a resolvable address and you have the right credentials. The client doesn't know (or care) that your MySQL server is running in a container; all the client cares about are the host, port, database and user/password values.
That said, you need to make sure you mount a volume for your container so that the MySQL data is "externalized" and you don't lose everything just because you ran a docker rm. With plain Docker, you can use the -v option to mount a volume from the Docker host VM or an external disk (such as EBS or EFS/NFS). With Kubernetes, you can use a StatefulSet with a persistentVolumeClaim to make sure you preserve the storage no matter what happens to your container.
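As a sketch (image tag, password, and host path are placeholders):
# server with externalized data and a published port
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -v /srv/mysql-data:/var/lib/mysql mysql:8.0
# any client connects as usual
mysql -h 127.0.0.1 -P 3306 -u root -p
A tool like Liquibase connects the same way, through a standard JDBC URL such as jdbc:mysql://127.0.0.1:3306/mydb, so schema changes (adding a column or table, etc.) work exactly as they would against a non-containerized server.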
MySQL on Docker acts mostly like a standalone MySQL installation. Be careful to configure a volume for the data, or you will lose it when the container is removed.
That said, you can use any MySQL-consuming app; all you need is to expose the port and configure credentials.
I found out about Vitess, which lets you shard a MySQL database.
I want to use the docker image of both MariaDB and Vitess but I'm not quite sure what to do next. I'm using CentOS 7.
I pulled the images:
docker pull mariadb
docker pull vitess/root
docker pull vitess/orchestrator
Then I started a shell inside the vitess image:
sudo docker run -ti vitess/root bash
As the website said, I ran make build:
make build
I set up the variables
export VTROOT=/vt
export VTDATAROOT=/vt/vtdataroot
The manual says these are under the home directory, but in the image they're at the root of the filesystem.
But after that I'm stuck. I launch ZooKeeper: ./zk-up.sh
Starting zk servers...
Waiting for zk servers to be ready...
Started zk servers.
ERROR: logging before flag.Parse: E0412 00:31:26.378586 132 syslogger.go:122] can't connect to syslog
W0412 00:31:26.382527 132 vtctl.go:80] cannot connect to syslog: Unix syslog delivery error
Configured zk servers.
Oops, okay, let's continue...
./vtctld-up.sh for the web interface
Starting vtctld...
Access vtctld web UI at http://88bdaff4e181:15000
Obviously I cannot access that link, since it's running in Docker on a headless server.
./vttablet-up.sh is supposed to bring up 3 vttablets, but MariaDB is in another container, not yet started, and when I open the file it is not apparent how to set it up.
Is there any MySQL or PostgreSQL sharding solution more easily installable?
Or how can I set this up?
(Docker noob here sorry)
Thanks!
If you need multiple containers orchestrated, your best bet is to use docker-compose. You can define all the application dependencies as separate containers and network them so that they are accessible from each other.
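A minimal sketch of what that could look like (service names, images, and credentials here are placeholders, not a working Vitess topology):
# docker-compose.yml
version: "3"
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: secret
  vitess:
    image: vitess/root
    depends_on:
      - db
Services on the same compose network can reach each other by service name, so the vitess container can talk to MariaDB at host db on port 3306.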
I want to connect my already-running Jenkins container to a MySQL database in another container. I have created a database named jenkins and a user named jenkins in MySQL.
Can it be done without using the run command? run creates a fresh container, and I want to use the existing one.
You can use docker network connect to attach both running containers to the same network so they can communicate. See the docker network connect documentation.
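For example (the network name is arbitrary; jenkins and mysql are assumed to be your container names):
docker network create jenkins-net
docker network connect jenkins-net jenkins
docker network connect jenkins-net mysql
On a user-defined network, containers can resolve each other by name, so Jenkins can reach the database at host mysql on port 3306, with no docker run needed.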