I pushed an image (mysql:5.5 to be exact) to my registry and am currently running the container under the name db and it does appear when I run cf ic ps.
As docker exec seems to be supported now, I tried to run cf ic exec -it db bash, but I get Error response from daemon: 404 error encountered while processing request!. Every exec command I try results in the same error... Does anyone know why this returns a 404 when my container does exist?
For reference I need to load a dump onto the container which is why I'm trying docker exec in the first place.
Edit: I can confirm this occurs for any container I create and try to exec -it into. cf ic logs for any container gives the same error as well.
For some reason the daemon could not reach your container. I've just tried the following command on different kinds of containers and it worked:
cf ic exec -it [containerId] [command]
I think you should retry. If the problem persists, I suggest you restart the container with:
cf ic restart [containerId]
If you still get a 404, you could try a new container instance using docker run again.
Moreover, be sure that you have installed the latest version of the IBM Containers CLI.
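If a fresh instance is needed, a minimal sketch (cf ic mirrors the docker run syntax; the registry host and namespace below are placeholders for your own):
cf ic run -d --name db2 registry.ng.bluemix.net/<namespace>/mysql:5.5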
Due to a platform issue, this command, even though recently added to the Docker commands supported on Bluemix, was not working correctly. This was a bug that was resolved a few days ago, so you should try again.
First off... yes... yes, I know there are a thousand questions and solutions to this. But unfortunately none of them helps me.
Let's get to the problem:
I have a Docker container running on which MySQL is configured. Now I would like to change the bind address from 127.0.0.1 to 0.0.0.0. Unfortunately I can't open my.cnf because neither nano nor vim is installed. With apk, yum, vim, apt-get and so on I get:
apt-get: command not found
apk: command not found
...
Could someone maybe help me out with my little problem?
Many thanks and best regards
The default base for the MySQL Docker image has been changed to an Oracle-based Linux distribution. In this distribution, the default package manager is yum. If for whatever reason you still want to use apt, pull a Debian-based image explicitly, something like mysql:8-debian.
See this issue for more detail.
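For example, to get an apt-based environment (the container name and password here are just illustrations):
docker run -d --name mysql-debian -e MYSQL_ROOT_PASSWORD=secret mysql:8-debian
docker exec -it mysql-debian bash -c "apt-get update && apt-get install -y vim"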
You could do a docker cp to copy the file out of the container, edit it, and then docker cp it back in again. This may be fine if you need it for troubleshooting, but you probably want to fix this in your deployment process: you should be able to destroy and re-create the Docker container without having to patch configuration by hand. That should be handled in your Dockerfile, or perhaps by copying the correct configuration file in via your Docker Compose file.
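A rough sketch of that round trip, assuming the container is named db and the config lives at /etc/mysql/my.cnf (adjust both to your setup):
# copy the config out, change the bind address, then copy it back
docker cp db:/etc/mysql/my.cnf ./my.cnf
sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' ./my.cnf
docker cp ./my.cnf db:/etc/mysql/my.cnf
docker restart db   # restart so mysqld picks up the new bind address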
I'm trying to set up the GitLab CI shell runner. I've used the Docker runner successfully before, but now I'd like to use another Docker container within my testing routine, and have therefore switched to the shell runner.
After registering I'm running into an exception:
ERROR: Job failed (system failure): prepare environment: exit status 1. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
So, I went through the linked material, but that didn't cure the problem. I verified that the gitlab-runner user exists and has access to Docker (needed to run the Docker test container); the gitlab-runner user is also part of the docker group. I can also --login and fire up /bin/bash without problems.
Still, all I get from the runner side is the enigmatic message above. What other checks do I need to do to track down this issue?
The careful reader will find the answer in the linked documentation:
"A common failure is when you have a .bash_logout that tries to clear the console."
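If that is what's happening here, a minimal fix is to neutralize that file for the gitlab-runner user (assuming the default home directory):
# rename it out of the way; removing the clear/clear_console lines also works
sudo mv /home/gitlab-runner/.bash_logout /home/gitlab-runner/.bash_logout.bak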
I am running Podman version 1.6.2 on Ubuntu 18.04. I am unable to start a container after stopping it.
I run the container with:
podman run -d -p 8081:8081 --name nexus -v /opt/nexus-data:/nexus-data sonatype/nexus3
And it starts up ok. If I run:
podman container stop nexus
podman container start nexus
I get an error:
Error: unable to start container "nexus": container create failed (no logs from conmon): EOF
When run with debug logging I see this in the output:
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] unmounted container "419f6576ff23328c6445526058c9988aa27a4b69605348230fa26246a522c726"
ERRO[0000] unable to start container "nexus": container create failed (no logs from conmon): EOF
The source image is:
docker.io/sonatype/nexus3
I'm not sure what the "invalid argument" in the logs means. Do I need to pass another argument?
There seems to be a problem with the latest version of the conmon package from the Project Atomic PPA (v2.0.3).
I had the same problem and installed a lower version of the conmon package (v2.0.0) from:
https://launchpad.net/ubuntu/+archive/primary/+files/conmon_2.0.0-1_amd64.deb
This is a package built for Eoan. However, it worked on my Bionic environment and I am able to start my containers again.
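For reference, the downgrade can be done along these lines (holding the package so apt doesn't immediately upgrade it back):
wget https://launchpad.net/ubuntu/+archive/primary/+files/conmon_2.0.0-1_amd64.deb
sudo dpkg -i conmon_2.0.0-1_amd64.deb
sudo apt-mark hold conmon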
As @Loki Arya noted, a bug in the conmon package was causing the issue. Since Podman for Ubuntu is no longer hosted on the projectatomic PPA, the updates after version 1.6.2 that fixed the bug were not available there.
After removing the projectatomic PPA and all associated packages, I reinstalled Podman for Ubuntu from its new repository location here.
I've tested Podman (1.7) and it is working great, including the start command.
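In case it helps, the cleanup looked roughly like this (the final install assumes the new repository has already been added per the Podman installation docs):
sudo add-apt-repository --remove ppa:projectatomic/ppa
sudo apt-get remove podman conmon
# add the new repository as described in the Podman installation docs, then:
sudo apt-get update && sudo apt-get install podman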
GitLab Runner throws ERROR: Preparation failed: Getwd: getwd: no such file or directory?
GitLab version: GitLab Community Edition 8.6.4
gitlab-runner version: 1.11.5
My CI throws ERROR: Preparation failed: Getwd: getwd frequently, but some commits work fine, so we don't know the root cause of this problem.
The one thing we do know is that the error started appearing after we moved the build directory.
In my case it was because residual gitlab-runner processes were still executing. I resolved it by identifying the guilty PIDs and killing them:
$ ps -ax | grep gitlab-runner
27034 ? Ssl 0:06 /usr/bin/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
$ sudo kill -9 27034
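Equivalently, as a one-liner (use with care; it kills every process whose command line matches):
sudo pkill -9 -f gitlab-runner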
I got the same error and solved it by restarting gitlab-runner:
gitlab-runner restart
The GitLab Runner checks out a copy of your repository into CI_PROJECT_DIR. You can check its value by adding the following to your .gitlab-ci.yml:
script:
- echo $CI_PROJECT_DIR
I received the "getwd: no such file or directory" error because:
I had changed my working directory to /var/www/mysite (I am using a Docker container with gitlab-runner installed inside it, but I think that's beside the point), and
one of my deploy script lines moves /var/www/mysite to /var/www/old-mysite.
I'm used to the GitLab Runner checking out its build inside /home/gitlab-runner/build. When I changed the Docker working directory, this caused the runner to check it out at /var/www/mysite/build instead.
After my script moved /var/www/mysite to /var/www/old-mysite, on the second and subsequent runs the runner still expected to find /var/www/mysite, but it no longer existed, hence the error.
Given the above, I can't explain why the runner works the very first time, when that directory also doesn't exist, but hopefully my answer might at least prompt something useful for someone! :)
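If you run into the same situation, one pragmatic workaround is to recreate the directory the runner expects right after moving it (paths as in my setup above):
mv /var/www/mysite /var/www/old-mysite
mkdir -p /var/www/mysite   # recreate it so the runner's getwd succeeds on the next job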
I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this would be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image based on Confluent's Kafka Connect image: download the connector manually, extract the contents into a folder, and copy those contents to a path in the container, something like below.
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide it (via plugin.path) during your Helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the Helm installation.
Here is the path to the values.yaml file. You can find the image and plugin.path values here.
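Putting it together, the build-and-override flow would look roughly like this (the image name is a placeholder, and the exact values keys should be double-checked against the chart's values.yaml):
docker build -t <your-registry>/cp-kafka-connect-custom:5.2.1 .
docker push <your-registry>/cp-kafka-connect-custom:5.2.1
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=<your-registry>/cp-kafka-connect-custom \
  --set cp-kafka-connect.imageTag=5.2.1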
Just an add-on to Jegan's answer above (https://stackoverflow.com/a/56049585/6002912): you can use the Dockerfile below. Recommended.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use Docker's multi-stage build instead:
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time getting the right JAR files for your plugins, like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed. It is that pod you should run the commands on.
The cp-kafka-connect pod has two containers, one of which is the cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
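To find the pod name in the first place, listing the pods in the namespace from the question and filtering for Connect works:
kubectl get pods --namespace cust360 | grep connect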
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See README.md.
The script can be passed as a secret and mounted as a volume.
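A rough sketch of that wiring, assuming the script is shipped in a secret (all names below are illustrative; check the chart's README for the exact keys):
kubectl create secret generic connect-script --from-file=install-connectors.sh
Then in the values passed to helm install:
customEnv:
  CUSTOM_SCRIPT_PATH: /etc/scripts/install-connectors.sh
plus a volume/volumeMount pairing the secret to /etc/scripts, per the chart's README.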