I have a container that appears to be stuck.
The status currently shows "Networking", but none of the ports work.
I am also unable to stop it; it just gives me an error...
Sometimes a container stays in the Networking state for too long. This state means that networking is being set up for your container so that its public and private IPs can be routed to your instance. When a container gets stuck in Networking, it is typically a problem with the infrastructure rather than anything you have done. You can try to create a new container from the same image with cf ic run or ice run. Note that if you have reached your maximum quota, you may need to delete the stuck container or release unbound IPs before you can create a new one. You can delete a container using:
cf ic rm -f [containerId]
To get the container id you can run:
cf ic ps -a
You can list all IPs (available or not) using:
cf ic ip list -a
Then you can release an IP using:
cf ic ip release [IPAddr]
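Putting those commands together, a typical recovery sequence looks like this (the container id and IP address are placeholders to fill in from the listing output):
$ cf ic ps -a
$ cf ic rm -f [containerId]
$ cf ic ip list -a
$ cf ic ip release [IPAddr]
$ cf ic run ...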
I had the same situation and I was able to remove the container via the command line:
cf ic rm your_container_id
After a couple of minutes it was removed. For some reason it didn't work via the web console, but it worked from the command line.
I tried to deploy a Cassandra container from the library/cassandra image in a Sandbox OpenShift cluster, but it threw this error in the pod logs:
"Running Cassandra as root user or group is not recommended - please start Cassandra using a different system user.
If you really want to force running Cassandra as root, use -R command line option."
When I checked the container description, I could see that the SCC is set to Restricted. So it looks like in Sandbox OpenShift, the "Restricted" SCC is applied to the "default" service account by default.
But on AWS, when I installed OpenShift with the installer option, I didn't face this error with the same library/cassandra image.
It looks like the default service account there is not associated with the "Restricted" SCC by default.
Could someone clarify what is different about the Sandbox environment that causes this error, and how I can set the same config in AWS OpenShift so that the default service account is associated with the restricted SCC?
I can't see your specific environment, but from the error message I suspect it's being triggered by the GROUP=0, not user=0.
To confirm:
$ oc get pods (whatever) -o yaml | grep openshift.io/scc
This will show you which SCC admitted the pod into the cluster. It should be "restricted" based on what you said. If so, then we've got some good evidence that it's just the group.
Next, you can look for something like this:
$ oc rsh (podname) id -a
uid=1000640000(1000640000) gid=0(root) groups=0(root),1000640000
UID (user) is in the expected billion+ range defined in the namespace annotation. GID (group) is zero.
With that in place, you can either ignore the error, knowing it's only group=0 that's in effect, or you can set a securityContext for your pod (or container) to specify a different gid, as sketched below.
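For example, a minimal sketch of the securityContext route, assuming a hypothetical Deployment named cassandra; pick a gid that the SCC range in your namespace allows:
$ oc patch deployment/cassandra --type merge -p \
    '{"spec":{"template":{"spec":{"securityContext":{"runAsGroup":1000640000}}}}}'
After the pods roll over, oc rsh (podname) id -a should report the new gid instead of 0.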
I came to know that the "default" project has a different set of permissions, so even a container with user id 0 can be deployed in the default namespace.
In the Sandbox cluster the project is dev or stage, so it works with the correct security level.
I have set up a network with 4 validators using Docker Compose, and it is using PBFT consensus.
If I try to submit a proposal to change a setting, for example the sawtooth.validator.transaction_families setting, nothing happens (I'm doing it from the validator container using "sawtooth proposal create"). Has anyone had similar problems?
Moreover, if I enter the settings TP docker container, I can't see the folder with logs. Does anyone know why the settings TP is not creating logs?
This is probably an issue with running the command without the --url option inside the shell container, so the proposal likely never got submitted.
If you run these commands, be sure to use the --url option with the right host.
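For instance, a sketch assuming the REST API container is reachable as rest-api on port 8008 inside your compose network (the host, port, and setting value are assumptions to adapt; the intkey entry is purely illustrative):
$ sawtooth proposal create --url http://rest-api:8008 \
    sawtooth.validator.transaction_families='[{"family": "intkey", "version": "1.0"}]'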
I have my Dockerfile, build the image with the Docker engine, and then run it using docker run -td --name <imagename>.
When I check it, it keeps running in the Docker engine.
But when I tag it for Bluemix and push it to the Bluemix containers registry (it becomes available in the catalog), and then run
cf ic run -td --name ifx2container registry.ng.bluemix.net/namespace_container/ifx2:informixinstall
This creates the container, but it stops automatically a few seconds after starting.
Run docker with:
docker run -itd
not with:
docker run -td
-i : Keep STDIN open even if not attached
Source: https://docs.docker.com/engine/reference/run/
Make sure that your container has a long-running command. Per docs: https://console.ng.bluemix.net/docs/containers/container_planning_container_ov.html#container_planning_images
To keep a container up and running at least one long-running process is required to be included in the container image. For example, echo "Hello world" is a short running process. If no other command is specified in the image, the container shuts down after the command is executed. To transform the echo "Hello world" command into a long running process, you can, for example, loop it multiple times, or include the echo command into another long running process inside your app.
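As an illustration of that doc passage, here is a minimal Dockerfile sketch (purely illustrative; the two CMD lines are alternatives, since only the last one in an image takes effect):
# short-running: the container exits as soon as echo finishes
CMD echo "Hello world"
# long-running: looping keeps the container up
CMD while true; do echo "Hello world"; sleep 30; done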
Also, by default containers in Bluemix run in detached mode. You can review supported run flags here: https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html#container_cli_reference_cfic__run
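If the image itself lacks a long-running command, you can also pass one at run time after the image name, as with docker run. For example, using the image from the question (the loop is just a placeholder for your real process):
cf ic run -d --name ifx2container registry.ng.bluemix.net/namespace_container/ifx2:informixinstall sh -c 'while true; do sleep 60; done'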
I am connecting to a mysql container from another container running the mysql client. When I exit the client, the container stops, as expected. But when I do docker ps -a, this container doesn't show up. I have not been able to find a reason for this. I am following these instructions to start the containers. Any ideas would be helpful.
The --rm option passed to docker run automatically removes the container after it's stopped.
See clean up flag:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
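For example (the image tag and hostname here are just illustrative):
# with --rm the client container is removed on exit, so it never appears in docker ps -a
$ docker run --rm -it mysql:5.7.7 mysql -h some-mysql-host -u root -p
# without --rm the exited container remains and shows up in docker ps -a
$ docker run -it mysql:5.7.7 mysql -h some-mysql-host -u root -p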
I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in Docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID   IMAGE         COMMAND                CREATED         STATUS         PORTS      NAMES
b98afde2fab7   mysql:5.7.7   "/entrypoint.sh mysq   6 seconds ago   Up 5 seconds   3306/tcp   mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers. Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume e.g:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
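For example, a minimal sketch of that source approach (the file and image names are hypothetical):
$ cat my-secret-file
export MYSQL_ROOT_PASSWORD=12345
$ docker run -v $(pwd)/my-secret-file:/secret-file my-app-image
# where the image's CMD is: sh -c 'source /secret-file && /run-my-app'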
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You ask, "Alternatively, is it possible to pass sensitive parameters by reference to a file?" From the docs at http://docs.docker.com/reference/commandline/run/:
--env-file=[]   Read in a file of environment variables
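For example (env.list is a hypothetical file with one VAR=value per line; note this keeps the value out of your shell history, but it is still visible in docker inspect, as discussed above):
$ cat env.list
MYSQL_ROOT_PASSWORD=12345
$ docker run --name mysql-5.7.7 --env-file ./env.list -d mysql:5.7.7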