I'm trying to import a database schema into the mysql service using the following statement
mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
and it returns mysql: not found. I have also tried the following command
docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
but received the following error:
+ docker exec -i mysql mysql --user=$DB_USERNAME --password=$DB_PASSWORD 5i < DB_Schema.sql
Error: No such container: mysql
What would be the best way to use the mysql service so that I can import an instance of the database into it for testing purposes, and how?
Please find the .yml file below.
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# Specify a docker image from Docker Hub as your build environment.
# All of your pipeline scripts will be executed within this docker image.
image: php:8.0-fpm-alpine
# All of your Pipelines will be defined in the `pipelines` section.
# You can have any number of Pipelines, but they must all have unique
# names. The default Pipeline is simply named `default`.
pipelines:
  default:
    # Each Pipeline consists of one or more steps which each execute
    # sequentially in separate docker containers.
    # name: optional name for this step
    # script: the commands you wish to execute in this step, in order
    - parallel:
        - step:
            name: Installing Dependencies and Composer
            caches:
              - composer
            script:
              # Your Pipeline automatically contains a copy of your code in its working
              # directory; however, the docker image may not be preconfigured with all
              # of the PHP/Laravel extensions your project requires. You may need to install
              # them yourself, as shown below.
              - apt-get update && apt-get install -qy git curl libmcrypt-dev unzip libzip-dev libpng-dev zip git gnupg gnupg2 php-mysql
              - docker-php-ext-configure gd --enable-gd --with-freetype --with-jpeg --with-webp && \
              - docker-php-ext-install gd && \
              - docker-php-ext-install exif && \
              - docker-php-ext-install zip && \
              - docker-php-ext-install pdo pdo_mysql
              - rm -rf ./vendor
              - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
              - composer install --ignore-platform-reqs
              - composer dump-autoload
              # Here we create a link between the .env.pipelines file and the .env file
              # so that our database can retrieve all the variables inside .env.pipelines
              - ln -f -s .env.pipelines .env
            artifacts:
              - vendor/**
        - step:
            name: Installing and Running npm
            image: node:16
            caches:
              - node
            script:
              - npm install -g grunt-cli
              - npm install
              - npm run dev
            artifacts:
              - node_modules/**
    - step:
        name: Running Test
        deployment: local
        script:
          # Start up the php server so that we can test against it
          - php artisan serve &
          # Give the server some time to start
          - sleep 5
          # - php artisan migrate
          - docker ps
          - docker container ls
          - mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
          # - docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD -e "SHOW DATABASES"
          - php artisan optimize
          - php artisan test
        services:
          - mysql
          - docker

# You might want to create and access a service (like a database) as part
# of your Pipeline workflow. You can do so by defining it as a service here.
definitions:
  services:
    mysql:
      image: mysql:latest
      environment:
        MYSQL_DATABASE: $DB_DATABASE
        MYSQL_USER: $DB_USERNAME
        MYSQL_PASSWORD: $DB_PASSWORD
        MYSQL_ROOT_PASSWORD: $DB_PASSWORD
        SERVICE_TAGS: mysql
        SERVICE_NAME: mysql
You cannot install/update/change your main image in the first step and expect those changes to still be there in the last step. Build your own custom Docker image with all of those installations baked in; that will make the pipeline faster and let you use whatever other tools you need in your pipeline.
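For illustration, a rough sketch of such a custom image, assuming you also switch to the Debian-based php:8.0-fpm tag so that apt-get works (the package list is trimmed from the pipeline above; adjust it to your project):
# Hypothetical custom build image: PHP extensions, composer and the mysql client preinstalled.
FROM php:8.0-fpm
RUN apt-get update && apt-get install -qy git curl unzip zip libzip-dev default-mysql-client \
    && docker-php-ext-install zip pdo_mysql \
    && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
Push it to a registry and point the top-level image: key of bitbucket-pipelines.yml at it.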
I prefer to use the mysql client outside Docker and have it reach into the Docker container through the port mapping that was set up. Then, conceptually, it is like talking to a mysqld server on a separate "server".
LOAD DATA INFILE and INSERT, including the use of mysql ... < dump.sql, work fine.
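As a sketch of that approach (the container name, credentials and port mapping here are placeholders):
# Run the server in a container and publish port 3306 to the host.
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=testdb -p 3306:3306 -d mysql:8.0
# The host-side client then talks to it like any remote mysqld.
mysql -h 127.0.0.1 -P 3306 -u root -psecret testdb < dump.sql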
I am trying to create a container with a MySQL database and add a schema to this database.
My current Dockerfile is:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
COPY files/epcis_schema.sql /data/epcis_schema.sql
# Change the working directory
WORKDIR data
CMD mysql -u $MYSQL_USER -p $MYSQL_PASSWORD $MYSQL_DATABASE < epcis_schema.sql
In order to create the container I am following the documentation provided on Docker and executing this command:
docker run --name ${CONTAINER_NAME} -e MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD} -e MYSQL_USER=${DB_USER} -e MYSQL_PASSWORD=${DB_USER_PASSWORD} -e MYSQL_DATABASE=${DB_NAME} -d mvpgomes/epcisdb
But when I execute this command, the container is not created, and in the container status it is possible to see that the CMD was not executed successfully; in fact, only the mysql command is executed.
Anyway, is there a way to initialize the database with the schema or do I need to perform these operations manually?
I had this same issue where I wanted to initialize my MySQL Docker instance's schema, but I ran into difficulty getting this working after doing some Googling and following others' examples. Here's how I solved it.
1) Dump your MySQL schema to a file.
mysqldump -h <your_mysql_host> -u <user_name> -p --no-data <schema_name> > schema.sql
2) Use the ADD command to add your schema file to the /docker-entrypoint-initdb.d directory in the Docker container. The docker-entrypoint.sh file will run any files in this directory ending with ".sql" against the MySQL database.
Dockerfile:
FROM mysql:5.7.15
MAINTAINER me
ENV MYSQL_DATABASE=<schema_name> \
MYSQL_ROOT_PASSWORD=<password>
ADD schema.sql /docker-entrypoint-initdb.d
EXPOSE 3306
3) Start up the Docker MySQL instance.
docker-compose build
docker-compose up
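The docker-compose commands above assume a docker-compose.yml sitting next to the Dockerfile; a minimal sketch of one (the service name and port mapping are my own placeholders) could be:
version: "3"
services:
  db:
    build: .
    ports:
      - "3306:3306"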
Thanks to Setting up MySQL and importing dump within Dockerfile for clueing me in on the docker-entrypoint.sh and the fact that it runs both SQL and shell scripts!
I am sorry for this super long answer, but, you have a little way to go to get where you want. I will say that normally you wouldn't put the storage for the database in the same container as the database itself, you would either mount a host volume so that the data persists on the docker host, or, perhaps a container could be used to hold the data (/var/lib/mysql). Also, I am new to mysql, so, this might not be super efficient. That said...
I think there may be a few issues here. The Dockerfile is used to create an image. You need to execute the build step. At a minimum, from the directory that contains the Dockerfile you would do something like :
docker build .
The Dockerfile describes the image to create. I don't know much about mysql (I am a postgres fanboy), but I did a search around the interwebs for 'how do i initialize a mysql docker container'. First I created a new directory to work in; I called it mdir. Then I created a files directory into which I deposited an epcis_schema.sql file which creates a database and a single table:
create database test;
use test;
CREATE TABLE testtab
(
id INTEGER AUTO_INCREMENT,
name TEXT,
PRIMARY KEY (id)
) COMMENT='this is my test table';
Then I created a script called init_db in the files directory:
#!/bin/bash
# Initialize MySQL database.
# ADD this file into the container via Dockerfile.
# Assuming you specify a VOLUME ["/var/lib/mysql"] or `-v /var/lib/mysql` on the `docker run` command…
# Once built, do e.g. `docker run your_image /path/to/docker-mysql-initialize.sh`
# Again, make sure MySQL is persisting data outside the container for this to have any effect.
set -e
set -x
mysql_install_db
# Start the MySQL daemon in the background.
/usr/sbin/mysqld &
mysql_pid=$!
until mysqladmin ping >/dev/null 2>&1; do
echo -n "."; sleep 0.2
done
# Permit root login without password from outside container.
mysql -e "GRANT ALL ON *.* TO root#'%' IDENTIFIED BY '' WITH GRANT OPTION"
# create the default database from the ADDed file.
mysql < /tmp/epcis_schema.sql
# Tell the MySQL daemon to shutdown.
mysqladmin shutdown
# Wait for the MySQL daemon to exit.
wait $mysql_pid
# create a tar file with the database as it currently exists
tar czvf default_mysql.tar.gz /var/lib/mysql
# the tarfile contains the initialized state of the database.
# when the container is started, if the database is empty (/var/lib/mysql)
# then it is unpacked from default_mysql.tar.gz from
# the ENTRYPOINT /tmp/run_db script
(most of this script was lifted from here: https://gist.github.com/pda/9697520)
Here is the files/run_db script I created:
#!/bin/bash
# start db
set -e
set -x
# first, if the /var/lib/mysql directory is empty, unpack it from our predefined db
[ "$(ls -A /var/lib/mysql)" ] && echo "Running with existing database in /var/lib/mysql" || ( echo 'Populate initial db'; tar xpzvf default_mysql.tar.gz )
/usr/sbin/mysqld
Finally, the Dockerfile to bind them all:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
# init_db will create the default
# database from epcis_schema.sql, then
# stop mysqld, and finally copy the /var/lib/mysql directory
# to default_mysql_db.tar.gz
RUN /tmp/init_db
# run_db starts mysqld, but first it checks
# to see if the /var/lib/mysql directory is empty, if
# it is it is seeded with default_mysql_db.tar.gz before
# the mysql is fired up
ENTRYPOINT "/tmp/run_db"
So, I cd'ed to my mdir directory (which has the Dockerfile along with the files directory). I then run the command:
docker build --no-cache .
You should see output like this:
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM mysql
---> 461d07d927e6
Step 1 : MAINTAINER (me) <email>
---> Running in 963e8de55299
---> 2fd67c825c34
Removing intermediate container 963e8de55299
Step 2 : ADD files/run_db files/init_db files/epcis_schema.sql /tmp/
---> 81871189374b
Removing intermediate container 3221afd8695a
Step 3 : RUN /tmp/init_db
---> Running in 8dbdf74b2a79
+ mysql_install_db
2015-03-19 16:40:39 12 [Note] InnoDB: Using atomics to ref count buffer pool pages
...
/var/lib/mysql/ib_logfile0
---> 885ec2f1a7d5
Removing intermediate container 8dbdf74b2a79
Step 4 : ENTRYPOINT "/tmp/run_db"
---> Running in 717ed52ba665
---> 7f6d5215fe8d
Removing intermediate container 717ed52ba665
Successfully built 7f6d5215fe8d
You now have an image '7f6d5215fe8d'. I could run this image:
docker run -d 7f6d5215fe8d
and the container starts; I see its ID string:
4b377ac7397ff5880bc9218abe6d7eadd49505d50efb5063d6fab796ee157bd3
I could then 'stop' it, and restart it.
docker stop 4b377
docker start 4b377
If you look at the logs, the first line will contain:
docker logs 4b377
Populate initial db
var/lib/mysql/
...
Then, at the end of the logs:
Running with existing database in /var/lib/mysql
These are the messages from the /tmp/run_db script, the first one indicates that the database was unpacked from the saved (initial) version, the second one indicates that the database was already there, so the existing copy was used.
Here is a ls -lR of the directory structure I describe above. Note that the init_db and run_db are scripts with the execute bit set:
gregs-air:~ gfausak$ ls -Rl mdir
total 8
-rw-r--r-- 1 gfausak wheel 534 Mar 19 11:13 Dockerfile
drwxr-xr-x 5 gfausak staff 170 Mar 19 11:24 files
mdir/files:
total 24
-rw-r--r-- 1 gfausak staff 126 Mar 19 11:14 epcis_schema.sql
-rwxr-xr-x 1 gfausak staff 1226 Mar 19 11:16 init_db
-rwxr-xr-x 1 gfausak staff 284 Mar 19 11:23 run_db
Another way, based on a merge of several earlier responses here:
docker-compose file:
version: "3"
services:
db:
container_name: db
image: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=mysql
- MYSQL_DATABASE=db
volumes:
- /home/user/db/mysql/data:/var/lib/mysql
- /home/user/db/mysql/init:/docker-entrypoint-initdb.d/:ro
where /home/user.. is a shared folder on the host
And in the /home/user/db/mysql/init folder, just drop one SQL file with any name, for example init.sql, containing:
CREATE DATABASE mydb;
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' IDENTIFIED BY 'mysql';
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'localhost' IDENTIFIED BY 'mysql';
USE mydb;
CREATE TABLE CONTACTS (
[ ... ]
);
INSERT INTO CONTACTS VALUES ...
[ ... ]
According to the official mysql documentation, you can put more than one SQL file in /docker-entrypoint-initdb.d; they are executed in alphabetical order.
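So, for example, you can split the schema and the seed data into numbered files to control the order (the file names below are only an illustration):
/home/user/db/mysql/init/01-schema.sql   -- CREATE DATABASE / CREATE TABLE statements
/home/user/db/mysql/init/02-data.sql     -- INSERT statements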
Another simple way: use docker-compose with the following lines:
mysql:
  image: mysql:5.7
  volumes:
    - ./database:/tmp/database
  command: mysqld --init-file="/tmp/database/install_db.sql"
Put your database schema into ./database/install_db.sql. Every time you bring your container up, install_db.sql will be executed.
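For reference, ./database/install_db.sql could be as simple as the following (the names are placeholders, and each statement is kept on a single line to stay on the safe side with --init-file):
CREATE DATABASE IF NOT EXISTS mydb;
USE mydb;
CREATE TABLE IF NOT EXISTS contacts (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255));
INSERT INTO contacts (name) VALUES ('example');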
I tried Greg's answer with zero success; I must have done something wrong, since my database had no data after all the steps (I was using MariaDB's latest image, just in case that matters).
Then I decided to read the entrypoint for the official MariaDB image, and used that to generate a simple docker-compose file:
database:
  image: mariadb
  ports:
    - 3306:3306
  expose:
    - 3306
  volumes:
    - ./docker/mariadb/data:/var/lib/mysql:rw
    - ./database/schema.sql:/docker-entrypoint-initdb.d/schema.sql:ro
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
Now I'm able to persist my data AND generate a database with my own schema!
After Aug. 4, 2015, if you are using the official mysql Docker image, you can just ADD/COPY a file into the /docker-entrypoint-initdb.d/ directory and it will run when the container is initialized. See GitHub: https://github.com/docker-library/mysql/commit/14f165596ea8808dfeb2131f092aabe61c967225 if you want to implement it on other container images.
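For example, a minimal Dockerfile using that mechanism might look like this (the tag and file name are placeholders):
# Any *.sql, *.sql.gz or *.sh file copied here is executed the first time the data directory is initialized.
FROM mysql:8.0
COPY schema.sql /docker-entrypoint-initdb.d/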
The easiest solution is to use tutum/mysql
Step1
docker pull tutum/mysql:5.5
Step2
docker run -d -p 3306:3306 -v /tmp:/tmp -e STARTUP_SQL="/tmp/to_be_imported.mysql" tutum/mysql:5.5
Step3
Get the CONTAINER_ID from the command above and then run docker logs to see the generated password information.
docker logs <CONTAINER_ID>
Since I struggled with this problem recently, I'm adding a docker-compose file that really helped me:
version: '3.5'
services:
  db:
    image: mysql:5.7
    container_name: db-container
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: test
      MYSQL_USER: test-user
      MYSQL_PASSWORD: password
    ports:
      - '3306:3306'
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=password --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10
You just need to create a scripts folder in the same location as the docker-compose.yml file above.
The scripts folder will have 2 files:
schema.sql: DDL scripts (create table...etc)
data.sql: Insert statements that you want to be executed right after schema creation.
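For illustration, the two files could be as small as this (the table and values are placeholders; they are run against the test database created via MYSQL_DATABASE above):
-- scripts/schema.sql (mounted as 1.sql, runs first)
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);
-- scripts/data.sql (mounted as 2.sql, runs second)
INSERT INTO users (name) VALUES ('alice'), ('bob');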
After this, you can run the command below to erase any previous database info (for a fresh start):
docker-compose rm -v -f db && docker-compose up
For those who, like me, don't want to create an entrypoint script, you can actually start mysqld at build time and then execute the mysql commands in your Dockerfile, like so:
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
mysql -e "CREATE DATABASE somedb;" && \
mysql -e "CREATE USER 'someuser'#'localhost' IDENTIFIED BY 'somepassword';" && \
mysql -e "GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'#'localhost';"
or source a prepopulated sql dump:
COPY dump.sql /SQL
RUN mysqld_safe & until mysqladmin ping; do sleep 1; done && \
mysql -e "SOURCE /SQL;"
RUN mysqladmin shutdown
The key here is to send mysqld_safe to the background with a single & sign.
After struggling a little bit with this, take a look at the docker-compose file below, which uses a named volume (db-data).
It's important to declare the extra block at the end, where the volume is marked as external.
Everything worked great this way!
version: "3"
services:
database:
image: mysql:5.7
container_name: mysql
ports:
- "3306:3306"
volumes:
- db-data:/docker-entrypoint-initdb.d
environment:
- MYSQL_DATABASE=sample
- MYSQL_ROOT_PASSWORD=root
volumes:
db-data:
external: true
Below is the Dockerfile I used successfully to install XAMPP and create a MariaDB database with a schema, pre-populated with the data used on my local server (users, pics, orders, etc.).
FROM ubuntu:14.04
COPY Ecommerce.sql /root
RUN apt-get update \
&& apt-get install wget -yq \
&& apt-get install nano \
&& wget https://www.apachefriends.org/xampp-files/7.1.11/xampp-linux-x64-7.1.11-0-installer.run \
&& mv xampp-linux-x64-7.1.11-0-installer.run /opt/ \
&& cd /opt/ \
&& chmod +x xampp-linux-x64-7.1.11-0-installer.run \
&& printf 'y\n\y\n\r\n\y\n\r\n' | ./xampp-linux-x64-7.1.11-0-installer.run \
&& cd /opt/lampp/bin \
&& /opt/lampp/lampp start \
&& sleep 5s \
&& ./mysql -uroot -e "CREATE DATABASE Ecommerce" \
&& ./mysql -uroot -D Ecommerce < /root/Ecommerce.sql \
&& cd / \
&& /opt/lampp/lampp reload \
&& mkdir opt/lampp/htdocs/Ecommerce
COPY /Ecommerce /opt/lampp/htdocs/Ecommerce
EXPOSE 80
I'm trying to set up automated testing of a Django project using GitLab CI/CD. The problem is that I can't connect to the MySQL database in any way.
gitlab-ci.yml
services:
  - mysql:5.7
variables:
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"
stages:
  - test
test:
  stage: test
  before_script:
    - apt update -qy && apt-get install -qqy --no-install-recommends default-mysql-client
    - mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --host=$MYSQL_HOST --execute="SHOW DATABASES; ALTER USER '$MYSQL_USER'@'%' IDENTIFIED WITH mysql_native_password BY '$MYSQL_PASSWORD'"
  script:
    - apt update -qy
    - apt install python3 python3-pip virtualenvwrapper -qy
    - virtualenv --python=python3 venv/
    - source venv/bin/activate
    - pwd
    - pip install -r requirement.txt
    - python manage.py test apps
With this configuration, I get the error
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
What I have tried
Adding --protocol=TCP to the mysql command to use a TCP connection instead of the socket:
mysql --protocol=TCP --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --host=$MYSQL_HOST --execute="SHOW DATABASES; ALTER USER '$MYSQL_USER'@'%' IDENTIFIED WITH mysql_native_password BY '$MYSQL_PASSWORD'"
And in this case I got
ERROR 2002 (HY000): Can't connect to MySQL server on 'localhost' (99)
How do I set this up properly?
There can be multiple reasons for your issue:
Incorrect MySQL version.
Solution: Use mysql:5.7 instead of mysql:latest
MySQL host is missing.
Solution: add MYSQL_HOST to the variables with the hostname of the MySQL server (it should be mysql when using mysql:5.7 in the services key).
Django uses different DB credentials.
Solution: check the credentials in the variables section of your .gitlab-ci.yml and compare them against Django's settings.py; they should be the same.
MySQL client not installed.
Solution: install the mysql-client in the script section and check if it is able to connect.
Here is a sample script that installs the MySQL client and connects to the database in a Debian-based image (for example, the python:latest image):
script:
  - apt-get update && apt-get install -y git curl libmcrypt-dev default-mysql-client
  - mysql --version
  - sleep 20
  - echo "SHOW tables;" | mysql -u root -p"$MYSQL_ROOT_PASSWORD" -h "${MYSQL_HOST}" "${MYSQL_DATABASE}"
Here is a complete and valid example of using MySQL 5.7 as a service and a python image with mysql-client installed successfully connecting to the MySQL database:
stages:
  - test
variables:
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"
  MYSQL_HOST: mysql
test:
  image: python:latest
  stage: test
  services:
    - mysql:5.7
  script:
    - apt-get update && apt-get install -y git curl libmcrypt-dev default-mysql-client
    - mysql --version
    - sleep 20
    - echo "SHOW tables;" | mysql -u root -p"$MYSQL_ROOT_PASSWORD" -h "${MYSQL_HOST}" "${MYSQL_DATABASE}"
    - echo "Database host is '${MYSQL_HOST}'"
You need to use the service name as the database hostname. In this case, MYSQL_HOST should be mysql.
You can see an example on the GitLab docs page and read about how services are linked to the job.
I see there is an accepted answer, but with mysql 8.0 and python3:buster some things broke. The Python Debian images ship with MariaDB, and it is not easy to set up the standard mysql-client packages, resulting in the error:
"django.db.utils.OperationalError: 2059, “Authentication plugin..."
I got a working YAML below, using Ubuntu as the base image and mysql 8.0 as a service. You could either use the root user in both the .gitlab-ci and the test_settings or give the MYSQL user the privileges to create new databases and alter existing ones.
The initial MYSQL_DB _USER and _PASS variables can be set in Gitlab under Settings -> CI/CD -> Variables.
.gitlab-ci.yml:
variables:
  # "When using a service (e.g. mysql) in the GitLab CI that needs environment variables
  # to run, only variables defined in .gitlab-ci.yml are passed to the service and
  # variables defined in GitLab GUI are unavailable."
  # https://gitlab.com/gitlab-org/gitlab/-/issues/30178
  # DJANGO_CONFIG: "test"
  MYSQL_DATABASE: $MYSQL_DB
  MYSQL_ROOT_PASSWORD: $MYSQL_PASS
  MYSQL_USER: $MYSQL_USER
  MYSQL_PASSWORD: $MYSQL_PASS
  # -- In your django settings file for the test environment you could put:
  # DATABASES = {
  #     'default': {
  #         'ENGINE': 'django.db.backends.mysql',
  #         'NAME': os.environ.get('MYSQL_DATABASE'),
  #         'USER': os.environ.get('MYSQL_USER'),
  #         'PASSWORD': os.environ.get('MYSQL_PASSWORD'),
  #         'HOST': 'mysql',
  #         'PORT': '3306',
  #         'CONN_MAX_AGE':60,
  #     },
  # }
  # -- You could use '--settings' to specify a custom settings file on the command line
  # -- below or use an environment variable to trigger an include in your settings:
  # if os.environ.get('DJANGO_CONFIG')=='test':
  #     from .settings_test import * # or specific overrides
  #

default:
  image: ubuntu:20.04
  # -- Pick zero or more services to be used on all builds.
  # -- Only needed when using a docker container to run your tests in.
  # -- Check out: http://docs.gitlab.com/ee/ci/docker/using_docker_images.html#what-is-a-service
  services:
    - mysql:8.0
  # This folder is cached between builds
  # http://docs.gitlab.com/ee/ci/yaml/README.html#cache
  # cache:
  #   paths:
  #     - ~/.cache/pip/
  before_script:
    - echo -e "Using Database $MYSQL_DB with $MYSQL_USER"
    - apt --assume-yes update
    - apt --assume-yes install apt-utils
    - apt --assume-yes install net-tools python3.8 python3-pip mysql-client libmysqlclient-dev
    # - apt --assume-yes upgrade
    - pip3 install -r requirements.txt

djangotests:
  script:
    # -- The MYSQL user gets only permissions for MYSQL_DB and therefore can't create a test_db.
    - echo "GRANT ALL on *.* to '${MYSQL_USER}';"| mysql -u root --password="${MYSQL_ROOT_PASSWORD}" -h mysql
    # -- use python3 explicitly. see https://wiki.ubuntu.com/Python/3
    - python3 manage.py test

migrations:
  script:
    - python3 manage.py makemigrations
    - python3 manage.py makemigrations myapp
    - python3 manage.py migrate
    - python3 manage.py check
The Go app simply inserts a hardcoded value into a mysql table and spits it back out. This is done using this database driver. It works fine on Linux servers, but during GitLab CI it returns:
dial tcp 127.0.0.1:3306: connect: connection refused
This is the .gitlab-ci.yml
image: mysql
services:
  - mysql:latest
variables:
  MYSQL_DATABASE: storage
  MYSQL_ROOT_PASSWORD: root
  MYSQL_HOST: mysql
  MYSQL_USER: root
  MYSQL_TRANSAPORT: tcp
  MYSQL_ADDRESS: "127.0.0.1:3306"
job:
  script:
    - apt-get update -qq && apt-get install -qq curl && apt-get install -qq git
    - echo "SHOW GLOBAL VARIABLES LIKE 'PORT';" | mysql --user="$MYSQL_USER" --password="$MYSQL_ROOT_PASSWORD" --host="$MYSQL_HOST" "$MYSQL_DATABASE"
    - curl -O https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
    - tar -C /usr/local -xzf go1.10.1.linux-amd64.tar.gz
    - rm go1.10.1.linux-amd64.tar.gz
    - echo "export PATH=\$PATH:/usr/local/go/bin" >> ~/.bashrc
    - echo "export GOPATH=\$HOME/go" >> ~/.bashrc
    - echo "export PATH=\$PATH:\$GOPATH/bin" >> ~/.bashrc
    - source ~/.bashrc
    - go get github.com/go-sql-driver/mysql
    - go build main.go
    - ./main
Is there a standard way to use mysql from golang during CI?
I'm getting ERROR 2002 (HY000): Can't connect to local MySQL when trying to execute a mysql command during my CI process.
Here is my bitbucket-pipelines.yml file
image: theotherperson/php-ci:5.6
pipelines:
  default:
    - step:
        caches:
          - composer
        script:
          - apt-get update && apt-get install -y unzip mysql-client
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install --no-scripts --no-plugins
          - cp test-assets/vhosts/000-default.conf /etc/apache2/sites-enabled/000-default.conf
          - cp test-assets/hosts/hosts /etc/hosts
          - rm /var/www/html/index.html
          - cp -R $BITBUCKET_CLONE_DIR /var/www/html
          - service apache2 restart
          - mysql -u root -p$MYSQL_ROOT_PASSWORD -e "test < $BITBUCKET_CLONE_DIR/data/test/test.sql"
          - phantomjs --webdriver=4444 &
          - vendor/bin/behat -p test_behat
        services:
          - mysql
definitions:
  services:
    mysql:
      image: mysql
      environment:
        MYSQL_DATABASE: 'test'
        MYSQL_ROOT_PASSWORD: 'mypassword'
And here is the error:
+ mysql -u root -p$MYSQL_ROOT_PASSWORD -e "test < $BITBUCKET_CLONE_DIR/data/test/test.sql"
Enter password: ERROR 2002 (HY000): Can't connect to local MySQL
What do I need to do to be able to access mysql from this command line?
Look at their documentation:
Host name: 127.0.0.1 (avoid using localhost, as some clients will attempt to connect via a local "Unix socket", which will not work in Pipelines)
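Putting that together with the redirect syntax used elsewhere in this thread, the import line would become something like this (a sketch; the test database name comes from the service definition above, and $MYSQL_ROOT_PASSWORD is used as in the original script):
mysql -h 127.0.0.1 -u root -p$MYSQL_ROOT_PASSWORD test < $BITBUCKET_CLONE_DIR/data/test/test.sql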