Error connecting to a MySQL database when running docker-compose up

I have set up a Docker environment on my local machine to run our development code, pointing at the dev MySQL database URL. I completed all the configuration and was able to build the images with the docker-compose build command. Even though the build succeeds, running docker-compose up produces numerous errors.
A few of the errors are:
ERROR - TransactionManager Failed to start new registry transaction.
ERROR- AsyncIndexer. Error while indexing
I saw other answers on Stack Overflow and updated my configuration accordingly, such as limiting the maxActive connections field to 50 in the master-datasources.xml file and changing the connection URL to the format below.
jdbc:mysql://${db_url}:${db_port}/${carbon_db}?autoReconnect=true&relaxAutoCommit=true
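One thing worth checking: master-datasources.xml is an XML file, so a literal `&` in the JDBC URL must be escaped as `&amp;` or the file will not parse. A sketch of the relevant datasource fragment (element names abbreviated from WSO2's stock file; the `${...}` placeholders are the same ones used above):

```xml
<datasource>
    <definition type="RDBMS">
        <configuration>
            <!-- inside XML, '&' must be written as '&amp;' -->
            <url>jdbc:mysql://${db_url}:${db_port}/${carbon_db}?autoReconnect=true&amp;relaxAutoCommit=true</url>
            <username>${db_user}</username>
            <password>${db_pass}</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
```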
Docker version : 19.03.5
Docker compose version : 1.24.1
My docker-compose.yml file:
services:
  wso2am:
    build: ./apim
    image: wso2am:2.1.0
    env_file:
      - sample.env
    ports:
      - "9444:9444"
      - "8281:8281"
      - "8244:8244"
    depends_on:
      - wso2is-km
  wso2is-km:
    build: ./is-as-km
    image: wso2is-km:5.3.0
    env_file:
      - sample.env
    ports:
      - "9443:9443"
My sample.env file:
HOST_NAME=<hostname>
DB_URL=<db_connection_url>
DB_USER=admin
DB_PASS=adminpassword
DB_PORT=3306
CARBON_DB=carbondb
APIM_DB=apimdb
ADMIN_PASS=<wso2_password>
Can anyone provide a solution for this issue?

Related

'Unknown MySQL server host' when attaching to MySQL container

I am trying to attach to my MySQL container to verify that the data I wanted to transfer to the volume and container is being applied correctly. I was trying to do this by attaching via the CLI through the Docker Desktop software. However, when I try to run mysql I get an Unknown MySQL server host '127.0.0.1:3306' (-2) error. I have tried changing the MYSQL_HOST variable to 127.0.0.1, 0.0.0.0, and localhost.
Here is a copy of my docker-compose.yml file:
version: "3.7"
services:
  mysql:
    image: mysql
    env_file: compose.env
    volumes:
      - db-data:/var/lib/mysql
    ports:
      - 3306:3306
volumes:
  db-data:
UPDATE: I removed the MYSQL_HOST environment variable from compose.env and now I can attach and query data. I'm not sure why this was a conflict, though.
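The error text "Unknown MySQL server host '127.0.0.1:3306'" hints at what went wrong: the port was folded into the host value. The mysql client takes them separately (-h for the host, -P for the port), so a combined host:port value has to be split first. A minimal sketch of that split in shell (the host:port string here is just the value from the error message):

```shell
# mysql's -h option accepts only a host name; the port goes in -P.
# Split a combined "host:port" value before building the client command.
HOSTPORT="127.0.0.1:3306"
HOST="${HOSTPORT%:*}"    # strip the :port suffix -> 127.0.0.1
PORT="${HOSTPORT##*:}"   # strip everything up to the last colon -> 3306
echo "mysql -h $HOST -P $PORT -u root -p"
```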

Why is my TeamCity internal NuGet feed missing part of its URL?

My TeamCity server seems to be using a broken URL for its built-in NuGet feed.
I'm running it in a Docker container using the official JetBrains image. I'm not behind a reverse proxy. I have configured the "server URL" setting.
I can use the feed in Visual Studio using the full URL (unauthenticated guest access) and it all works great. It's adding packages from build artifacts, Visual Studio can pull them.
It's just that the TeamCity property that is supposed to contain the feed URL is broken, as shown in the screen shot. So my builds are failing like this:
/usr/share/dotnet/sdk/3.1.302/NuGet.targets(128,5): error : Unable to load the service index for source http://teamcity:8111/guestAuth/app/nuget/feed/TigraOss/TigraOSS/v3/index.json.
Those are internally generated and not something I've edited, so I'm a bit confused. Any ideas on how to fix this? (I've tried restarting the server, obviously.)
Update
I think this might be due to the fact that everything is running in docker containers. A bit later in the parameters screen (off the bottom of the screen shot above) is another line:
teamcity.serverUrl http://teamcity:8111
I think this is coming from my docker-compose.yml file:
agent:
  image: jetbrains/teamcity-agent
  container_name: teamcity-agent
  restart: unless-stopped
  privileged: true
  user: "root"
  environment:
    - SERVER_URL=http://teamcity:8111
    - AGENT_NAME=ubuntu-ovh-vps-tigra
    - DOCKER_IN_DOCKER=start
  volumes:
    - agentconfig:/data/teamcity_agent/conf
    - agentwork:/opt/buildagent/work
    - agentsystem:/opt/buildagent/system
    - agent1_volumes:/var/lib/docker
I tried changing the SERVER_URL value in my docker-compose.yml file and restarting the agent container, but it looks like once the agent config file is created, the value is sticky, and I need to go in and hand-edit that file.
Now I have the agent using the full FQDN of the server, so we'll see if that works.
I think this is caused by my complicated docker-in-docker build. I am running TeamCity server and the linux build agent in docker containers built with docker-compose. Here's my docker-compose.yml file with secrets removed:
version: '3'
services:
  db:
    image: mariadb
    container_name: teamcity-db
    restart: unless-stopped
    env_file: .env
    volumes:
      - mariadb:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
  teamcity:
    depends_on:
      - db
    image: jetbrains/teamcity-server
    container_name: teamcity
    restart: unless-stopped
    environment:
      - TEAMCITY_SERVER_MEM_OPTS="-Xmx750m"
    volumes:
      - datadir:/data/teamcity_server/datadir
      - logs:/opt/teamcity/logs
    ports:
      - "8111:8111"
  agent:
    image: jetbrains/teamcity-agent
    container_name: teamcity-agent
    restart: unless-stopped
    privileged: true
    user: "root"
    environment:
      SERVER_URL: http://fully.qualified.name:8111
      AGENT_NAME: my-agent-name
      DOCKER_IN_DOCKER: start
    volumes:
      - agentconfig:/data/teamcity_agent/conf
      - agentwork:/opt/buildagent/work
      - agentsystem:/opt/buildagent/system
      - agent1_volumes:/var/lib/docker
volumes:
  mariadb:
  datadir:
  logs:
  agentconfig:
  agentwork:
  agentsystem:
  agent1_volumes:
networks:
  default:
When I first created everything, I had the SERVER_URL variable set to `http://teamcity:8111`. This works because Docker makes each service name resolvable as a host name, and the server's service name is also 'teamcity', so that host is resolvable within the Docker composition.
The problem comes when doing a build step inside yet another container.
I am building .NET Core and the .NET SDK is not installed on the machine,
so I have to run the build using the .NET Core SDK container.
The agent passes in the URL of the NuGet feed, which points at the Docker service name, and the build container can't "see" that host name. I'm not sure why not. I tried passing --network teamcity_default as a command-line argument to docker run, but it says that network doesn't exist.
I found two ways to get things to work.
Edit the build step to use the FQDN of the NuGet feed, and don't use the TeamCity built-in parameter %teamcity.nuget.feed.guestAuth.feed-id.v3%. I don't like this solution much because it sets me up for a breakage in the future.
Find the Docker volume where the TeamCity agent config is stored. In my case, it was /var/lib/docker/volumes/teamcity_agentconfig/_data. Edit the buildAgent.properties file and set serverUrl=http\://fully.qualified.name\:8111. Then docker-compose restart agent. After that you can safely use %teamcity.nuget.feed.guestAuth.feed-id.v3% in containerized builds.
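That hand-edit of buildAgent.properties can be sketched as a one-liner. This is only an illustration: "fully.qualified.name" is a placeholder for your server's real FQDN, and the script creates a sample properties file in the current directory rather than touching the real one under /var/lib/docker/volumes:

```shell
# Point serverUrl at the FQDN instead of the compose service name.
# TeamCity's .properties format escapes ':' as '\:', hence the doubled
# backslashes in printf and in the sed replacement.
CONF=buildAgent.properties
printf 'serverUrl=http\\://teamcity\\:8111\n' > "$CONF"   # sample line for illustration
sed -i 's|^serverUrl=.*|serverUrl=http\\://fully.qualified.name\\:8111|' "$CONF"
cat "$CONF"
```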
I haven't tested this, but I think you may be able to avoid all of this in the first place by using a fully-qualified server name in the docker-compose.yml file. However, you have to do this right from the start, because the moment you run docker-compose up, the agent config filesystem is created and becomes permanent.

How to access the Redmine log folder inside a Docker container started with docker-compose?

I am a noob with Docker, and I am trying to set up a redmine+mysql container in a Windows environment and then load a production MySQL dump into it.
The issue is that after running the production dump SQL script, I only get an Internal error when browsing Redmine in Docker.
I don't know how to change the database name in the docker-compose file; if I replace 'redmine' with anything else my script breaks.
I also don't know how to access the Redmine error log folder inside my container to diagnose the issue.
Here is my docker-compose file :
version: '3.7'
services:
  db:
    image: mysql:5.5.47
    restart: always
    ports:
      - 3306:3306
    volumes:
      - .\mysql_files\data-mysql:/var/lib/mysql
      - .\mysql_files\backup-mysql:/var/lib/mysql/backup
      - .\mysql_files\dump-mysql:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: monmdp
      MYSQL_DATABASE: redmine
  redmine:
    image: redmine:4.0.3
    restart: always
    ports:
      - 8080:3000
    depends_on:
      - db
    volumes:
      - .\redmine_files\files:/usr/local/redmine/files
      - .\redmine_files\logs:/var/log/redmine
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: monmdp
As you can see, I tried to mount the Redmine log folder on this line:
- .\redmine_files\logs:/var/log/redmine
but the folder is still empty :(
Expected result: being able to browse Redmine with the production data dump.
Current result: can't browse Redmine and can't access the log folder to check what's wrong.
Thanks for your help
From what I understand, you are trying to access the logs of the Redmine container but found the .\redmine_files\logs directory empty. First, check whether there are any logs in the container's /var/log/redmine directory. You can do so from a shell inside the container itself: run docker exec -it redmine /bin/bash, then cd to /var/log/redmine and check whether the logs are present. If you don't find any there, it means there were no logs to replicate to .\redmine_files\logs.
If you do find logs in /var/log/redmine, then there must be some issue with your docker-compose file, though it looks fine to me. Also, as @Mihai suggested, you can check the Redmine container's logs with sudo docker-compose logs redmine to see whether it is running properly.
This Docker image hosts Redmine under /usr/src/redmine/, so you should use
- .\redmine_files\logs:/usr/src/redmine/log
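Putting that together, the redmine service's volume stanza would look like this. The /usr/src/redmine paths are where the official redmine image keeps its files and logs; it's worth verifying with a shell inside the container before relying on them:

```yaml
redmine:
  image: redmine:4.0.3
  volumes:
    - .\redmine_files\files:/usr/src/redmine/files
    - .\redmine_files\logs:/usr/src/redmine/log
```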

Docker-compose problem copying MySQL config in CircleCI

The following works fine on a local machine, but fails when checked into CircleCI:
mysql:
  image: mysql:5.7
  ports:
    - 3306:3306
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=true
    - MYSQL_ROOT_HOST=%
  restart: always
  volumes:
    - ./docker/mysql/mysqld.cnf:/etc/mysql/conf.d/mysql.cnf
There is a file at ./docker/mysql/mysqld.cnf in the checked-out project.
The error shown on CircleCI is:
ERROR: for proj-server_mysql_1 Cannot start service mysql: b'oci
runtime error: container_linux.go:265: starting container process
caused "process_linux.go:368: container init caused
\"rootfs_linux.go:57: mounting
\\\"/home/circleci/max/proj-server/docker/mysql/mysqld.cnf\\\"
to rootfs
\\\"/var/lib/docker/aufs/mnt/4a9af90d342b491ae92af5a88360d2e34fce0d21c15f8a648767e89fb51aa\\\"
at
\\\"/var/lib/docker/aufs/mnt/4a9af90d342b491ae92af5a88360d2e34fce0d21c15f8a648767e89fb51aa/etc/mysql/conf.d/mysql.cnf\\\"
caused \\\"not a directory\\\"\""\n: Are you trying to mount a
directory onto a file (or vice-versa)? Check if the specified host
path exists and is the expected type'
It's not possible to use volume mounting with the docker executor, but with the machine executor it is possible to mount local directories into your running Docker containers. You can learn more about the machine executor here on our docs page.
https://support.circleci.com/hc/en-us/articles/360007324514-How-can-I-mount-volumes-to-docker-containers-
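A minimal sketch of a machine-executor job that would allow the bind mount above to resolve (the machine image tag and job name are assumptions; pick whatever your project supports):

```yaml
version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-2004:current   # assumption: any current machine image works
    steps:
      - checkout
      - run: docker-compose up -d mysql   # host-path volumes now resolve on the VM
```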

In Jenkins, docker-compose MySQL: ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'

I have a docker-compose.yml that sets up an API service, a test MYSQL database, and a java task worker app. I then run API integration tests against the local stack. Each time I run it, I execute docker-compose rm -v to ensure that my DB is the same for each test run.
Locally, my docker-compose file sets up my images properly.
In Jenkins, using the same docker-compose file, I get the above error, copied here:
0mERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'#'%'
It appears something might be going on with the hostname of the image.
Here's the Dockerfile for my DB setup:
FROM mysql
ENV MYSQL_DATABASE="db_name"
ENV MYSQL_USER="user"
ENV MYSQL_PASSWORD="password1"
ENV MYSQL_ROOT_PASSWORD="password"
COPY *.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
Here's my docker-compose.yml:
web-api:
  image: registry/api-repo
  links:
    - test-db
  ports:
    - "8625:8625"
  environment:
    - MYSQL_DATABASE=db_name
    - MYSQL_HOST=test-db
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password1
custodial-java:
  image: registry/java-repo
  links:
    - test-db
  environment:
    - MYSQL_DATABASE=db_name
    - MYSQL_HOST=test-db
    - MYSQL_USER=user
    - MYSQL_PASSWORD=password1
test-db:
  image: registry/test-db
  ports:
    - "3306:3306"
I am likely missing something in my Jenkins config, but I'm not sure where to look.
As it turned out, the version of Docker on the Jenkins box was not the same as on my local machine. Locally I was using Docker 17.03, whereas the Jenkins box was running Docker Toolbox 1.5.
After updating the build machine to the latest Docker Toolbox (1.12, I believe), I no longer get this error and my automated tests pass!