My TeamCity server seems to be using a broken URL for its built-in NuGet feed.
I'm running it in a docker container using the official JetBrains image. I'm not behind a reverse proxy. I have configured the "server URL" setting.
I can use the feed in Visual Studio using the full URL (unauthenticated guest access) and it all works great. It's adding packages from build artifacts, and Visual Studio can pull them.
It's just that the TeamCity property that is supposed to contain the feed URL is broken, as shown in the screen shot. So my builds are failing like this:
/usr/share/dotnet/sdk/3.1.302/NuGet.targets(128,5): error : Unable to load the service index for source http://teamcity:8111/guestAuth/app/nuget/feed/TigraOss/TigraOSS/v3/index.json.
Those are internally generated and not something I've edited, so I'm a bit confuzzled. Any ideas on how to fix this? (I've tried restarting the server, obviously).
Update
I think this might be due to the fact that everything is running in docker containers. A bit later in the parameters screen (off the bottom of the screen shot above) is another line:
teamcity.serverUrl http://teamcity:8111
I think this is coming from my docker-compose.yml file:
agent:
  image: jetbrains/teamcity-agent
  container_name: teamcity-agent
  restart: unless-stopped
  privileged: true
  user: "root"
  environment:
    - SERVER_URL=http://teamcity:8111
    - AGENT_NAME=ubuntu-ovh-vps-tigra
    - DOCKER_IN_DOCKER=start
  volumes:
    - agentconfig:/data/teamcity_agent/conf
    - agentwork:/opt/buildagent/work
    - agentsystem:/opt/buildagent/system
    - agent1_volumes:/var/lib/docker
I tried changing the SERVER_URL value in my docker-compose.yml file and restarting the agent container, but it looks like once the agent config file is created, the value is sticky and I need to go in and hand-edit that.
Now I have the agent using the full FQDN of the server, so we'll see if that works.
I think this is caused by my complicated docker-in-docker build. I am running TeamCity server and the linux build agent in docker containers built with docker-compose. Here's my docker-compose.yml file with secrets removed:
version: '3'
services:
  db:
    image: mariadb
    container_name: teamcity-db
    restart: unless-stopped
    env_file: .env
    volumes:
      - mariadb:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
  teamcity:
    depends_on:
      - db
    image: jetbrains/teamcity-server
    container_name: teamcity
    restart: unless-stopped
    environment:
      - TEAMCITY_SERVER_MEM_OPTS="-Xmx750m"
    volumes:
      - datadir:/data/teamcity_server/datadir
      - logs:/opt/teamcity/logs
    ports:
      - "8111:8111"
  agent:
    image: jetbrains/teamcity-agent
    container_name: teamcity-agent
    restart: unless-stopped
    privileged: true
    user: "root"
    environment:
      SERVER_URL: http://fully.qualified.name:8111
      AGENT_NAME: my-agent-name
      DOCKER_IN_DOCKER: start
    volumes:
      - agentconfig:/data/teamcity_agent/conf
      - agentwork:/opt/buildagent/work
      - agentsystem:/opt/buildagent/system
      - agent1_volumes:/var/lib/docker
volumes:
  mariadb:
  datadir:
  logs:
  agentconfig:
  agentwork:
  agentsystem:
  agent1_volumes:
networks:
  default:
When I first created everything, I had the SERVER_URL variable set to http://teamcity:8111. This works because Docker maps the service name to a host name, and the service is also named 'teamcity', so that host is resolvable within the Docker composition.
The problem comes when doing a build step inside yet another container.
I am building .NET Core, and the .NET SDK is not installed on the machine, so I have to run the build using the .NET Core SDK container.
The agent passes in the URL of the NuGet feed, which points to the Docker service name, and the build container can't "see" that host name. I'm not sure why not. I tried passing --network teamcity_default as a command-line argument to docker run, but it says that network doesn't exist.
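One diagnostic that may help here: with docker-in-docker, the daemon that runs the build container is not necessarily the one that owns the compose network, so it's worth comparing what each daemon can see. A rough sketch (the container name matches my compose file):
# networks known to the host's Docker daemon
docker network ls
# networks known to the daemon inside the agent container (docker-in-docker)
docker exec -it teamcity-agent docker network ls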
I found two ways to get things to work.
1. Edit the build step to use the FQDN of the NuGet feed, and don't use the TeamCity built-in parameter %teamcity.nuget.feed.guestAuth.feed-id.v3%. I don't like this solution much because it sets me up for a breakage in the future.
2. Find the Docker volume where the TeamCity agent config is stored (sketched below). In my case, it was /var/lib/docker/volumes/teamcity_agentconfig/_data. Edit the buildAgent.properties file and set serverUrl=http\://fully.qualified.name\:8111. Then docker-compose restart agent. After that you can safely use %teamcity.nuget.feed.guestAuth.feed-id.v3% in containerized builds.
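A rough sketch of option 2; the volume path is Docker's default location for a compose project named teamcity, so treat it as an assumption for your setup:
# edit the agent config stored in the named volume (path is an assumption)
sudo nano /var/lib/docker/volumes/teamcity_agentconfig/_data/buildAgent.properties
# inside buildAgent.properties, set (note the escaped colons):
#   serverUrl=http\://fully.qualified.name\:8111
# then restart the agent so it picks up the change
docker-compose restart agent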
I haven't tested this, but I think you may be able to avoid all this in the first place by using a fully qualified server name in the docker-compose.yml file. However, you have to do this right from the start, because the moment you run docker-compose up, the agent config filesystem is created and becomes permanent.
Related
I'm trying to get TeamCity server running using docker-compose. Here's my compose file:
version: '3'
services:
  db:
    image: mysql
    container_name: teamcity-db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=teamcity
    volumes:
      - mysql:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
  teamcity:
    depends_on:
      - db
    image: jetbrains/teamcity-server
    container_name: teamcity
    restart: unless-stopped
    volumes:
      - datadir:/data/teamcity_server/datadir
      - logs:/opt/teamcity/logs
    ports:
      - "8111:8111"
volumes:
  mysql:
  datadir:
  logs:
I've been successful getting WordPress set up using a very similar technique, and I can run phpMyAdmin, link it to the MySQL container, and see the database, so it's there.
When I browse to the TeamCity web address, it shows me the initial setup screen as expected. I tell it to use MySQL, and I put in 'root' as the username and my MySQL root password. TeamCity then shows this:
I'm sure it's something simple but I just can't see what's wrong. Any ideas?
Solved! Here is my solution and some other learnings.
The problem was that I was telling TeamCity to use 'localhost' as the database server URL. This seems intuitive because all the services are on the same machine, but it is incorrect: each container behaves as its own host, so 'localhost' inside a container refers to that container itself, not the host machine or any other container. 'localhost' on the teamcity service therefore pointed at the TeamCity server, not the database server, and that's why it couldn't connect.
The correct address for the database server based on my docker-compose.yml file is db (the service name of the database container). The service name becomes the host name for that container and docker resolves these as DNS names correctly within the composed group.
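Concretely, the JDBC URL to enter in the TeamCity setup screen points at the db service. A sketch, assuming the default MySQL port and the teamcity database that the compose file creates:
jdbc:mysql://db:3306/teamcity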
Also note: the default virtual network is created implicitly by docker-compose and allows all of the containers in the composed group to communicate with each other. The name of this network derives from the folder where the docker-compose.yml file is located (in my case ~/projects/teamcity), so I get a network called teamcity_default. All servers on this private virtual network are visible to each other with no further configuration needed.
The teamcity server container explicitly exposes port 8111 on the host's network interface, so it is the only container visible to the outside world. You do not need to (and probably should not) expose ports if you only need the servers to talk to each other. For example, the database server does not need to have a ports entry because it is automatically exposed on the private inter-container network. This is great for security because all the back-end services are hidden from the physical LAN and therefore the Internet.
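For example, in the compose file above only the teamcity service has a ports entry; the db service needs none, because other containers reach it directly on MySQL's default port:
db:
  image: mysql
  # no ports: entry; the teamcity container connects to db:3306 over the private network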
I have set up a Docker environment on my local machine to run our development code, pointing at the dev MySQL database URL. I did all the configuration and was able to build the images with the docker-compose build command. But even though the build succeeds, running docker-compose up produces numerous errors.
A few of the errors are:
ERROR - TransactionManager Failed to start new registry transaction.
ERROR- AsyncIndexer. Error while indexing
I saw other answers on Stack Overflow and updated my configuration accordingly, such as limiting the max active connections field to 50 in the master-datasources.xml file and changing the configuration URL to the format below.
jdbc:mysql://${db_url}:${db_port}/${carbon_db}?autoReconnect=true&relaxAutoCommit=true
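One thing worth double-checking: master-datasources.xml is XML, so a literal & in the URL has to be escaped as &amp;. Roughly, with the placeholders kept as-is:
<url>jdbc:mysql://${db_url}:${db_port}/${carbon_db}?autoReconnect=true&amp;relaxAutoCommit=true</url>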
Docker version: 19.03.5
Docker Compose version: 1.24.1
My docker-compose.yml file:
services:
  wso2am:
    build: ./apim
    image: wso2am:2.1.0
    env_file:
      - sample.env
    ports:
      - "9444:9444"
      - "8281:8281"
      - "8244:8244"
    depends_on:
      - wso2is-km
  wso2is-km:
    build: ./is-as-km
    image: wso2is-km:5.3.0
    env_file:
      - sample.env
    ports:
      - "9443:9443"
My sample.env file:
HOST_NAME=<hostname>
DB_URL=<db_connection_url>
DB_USER=admin
DB_PASS=adminpassword
DB_PORT=3306
CARBON_DB=carbondb
APIM_DB=apimdb
ADMIN_PASS=<wso2_password>
Can anyone provide a solution for this issue?
I'm a Docker noob, and I'm trying to set up a redmine+mysql container in a Windows environment and then load a production MySQL dump into it.
The issue is that after I run the SQL script from the production dump, I get only an internal error when browsing Redmine in Docker.
I don't know how to change the database name in the docker-compose file; if I replace 'redmine' with anything else, my script breaks.
I also don't know how to access the Redmine error log folder inside Docker to fix the issue.
Here is my docker-compose file :
version: '3.7'
services:
  db:
    image: mysql:5.5.47
    restart: always
    ports:
      - 3306:3306
    volumes:
      - .\mysql_files\data-mysql:/var/lib/mysql
      - .\mysql_files\backup-mysql:/var/lib/mysql/backup
      - .\mysql_files\dump-mysql:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: monmdp
      MYSQL_DATABASE: redmine
  redmine:
    image: redmine:4.0.3
    restart: always
    ports:
      - 8080:3000
    depends_on:
      - db
    volumes:
      - .\redmine_files\files:/usr/local/redmine/files
      - .\redmine_files\logs:/var/log/redmine
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: monmdp
As you can see, I tried to mount the Redmine log folder with this line:
- .\redmine_files\logs:/var/log/redmine
but the folder is still empty :(
Expected result: I can browse Redmine with the production data dump.
Current result: I can't browse Redmine and can't access the log folder to check what's wrong.
Thanks for your help
From what I understand, you are trying to access the logs of the redmine container but found the .\redmine_files\logs directory empty. First, check whether there are any logs in the container's /var/log/redmine directory. You can do this from a shell inside the container: run docker exec -it redmine /bin/bash, then cd to /var/log/redmine and see whether the logs are present. If they aren't, there were simply no logs to replicate to .\redmine_files\logs.
If you do find logs in /var/log/redmine, then there must be some issue with your docker-compose file, though it looks fine to me. Also, as #Mihai suggested, you can run sudo docker-compose logs redmine to see whether Redmine is running properly.
This Docker image hosts Redmine under /usr/src/redmine/, so you should use:
- .\redmine_files\logs:/usr/src/redmine/log
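So the volumes section of the redmine service would look roughly like this (the files line follows the same convention and is an assumption):
volumes:
  - .\redmine_files\files:/usr/src/redmine/files
  - .\redmine_files\logs:/usr/src/redmine/log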
I'm very new to Docker and after reading about data volumes I'm still somewhat confused by the behaviour I'm seeing.
In my compose file I had an entry for mysql like this:
db:
  image: mysql
  restart: always
  volumes:
    - ./database:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
This mapped the ./database directory to /var/lib/mysql. The database files were created, and I could start WordPress, install it, and add a post. The problem was that it never persisted any created data. If I restarted Docker and executed:
docker-compose up -d
The database was empty.
Changing this to:
db:
  image: mysql
  restart: always
  volumes:
    - db_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: p4ssw0rd!
  networks:
    - back
And adding in a volume like this:
volumes:
  db_data:
Now persists the data in the Docker data volume and restarting works. Any data created during the last run is still present.
How would I get this to work using the host mapped directory?
Am I right in thinking the second example using volumes is the way to go?
Docker volumes on Windows work a bit differently than on Linux. Basically, on Windows, Docker runs a VM and Docker itself is set up inside that VM. So it looks like you are running Docker commands locally on Windows, but the actual work happens inside the VM.
docker run -v d:/data:/data alpine ls /data
First you need to share the D: drive in Docker settings. You can find a detailed article explaining the steps here:
https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c
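Once the drive is shared, the same idea carries over to docker-compose. A minimal sketch, with the d:/data path mirroring the docker run example above (adjust to your own shared folder):
db:
  image: mysql
  restart: always
  volumes:
    - d:/data/database:/var/lib/mysql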
I'm trying to adapt Docker's Wordpress secret example (link below) to work in my Docker Compose setup (for Drupal).
https://docs.docker.com/engine/swarm/secrets/#/advanced-example-use-secrets-with-a-wordpress-service
However, when the 'mysql' container is spun up, the following error is output:
"error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD"
I created the secrets using the 'docker secret create' command:
docker secret create mysql_root_pw tmp-file-holding-root-pw.txt
docker secret create mysql_pw tmp-file-holding-pw.txt
After running the above, the secrets 'mysql_root_pw' and 'mysql_pw' now exist in the swarm environment. Verified by doing:
docker secret ls
Here are the relevant parts from my docker-compose.yml file:
version: '3.1'
services:
  mysql:
    image: mysql/mysql-server:5.7.17
    environment:
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/mysql_root_pw"
      - MYSQL_PASSWORD_FILE="/run/secrets/mysql_pw"
    secrets:
      - mysql_pw
      - mysql_root_pw
secrets:
  mysql_pw:
    external: true
  mysql_root_pw:
    external: true
When I do "docker stack deploy MYSTACK", I get the error mentioned above when the 'mysql' container attempts to start.
It seems like "MYSQL_PASSWORD_FILE" and "MYSQL_ROOT_PASSWORD_FILE" are not standard environment variables recognized by MySQL, and it's still expecting "MYSQL_ROOT_PASSWORD" environment variable.
I'm using Docker 17.03.
Any suggestions?
Thanks.
You can get this error if your secret is an empty string, as well. That is what happened to me: the secret was mounted and the service was properly configured, but it still failed because there was no password.
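If you suspect the same cause, a quick check before creating the secret is to confirm the source file actually has content:
# a zero-byte file here means the secret will be an empty string
wc -c tmp-file-holding-root-pw.txt
docker secret create mysql_root_pw tmp-file-holding-root-pw.txt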