Unable to see Container Processes with docker-java

I'm using the docker-java library to handle starting a Docker image:
DockerClient dockerClient = DockerClientBuilder.getInstance("unix:///var/run/docker.sock").build();
CreateContainerResponse container = dockerClient.createContainerCmd("postgres")
        .withCmd("--bind_ip_all")
        .withHostName("127.0.0.1")
        .withPortBindings(PortBinding.parse("5432:5432"))
        .exec();
dockerClient.startContainerCmd(container.getId()).exec();
I can see that I'm able to get the containerId from the above command:
String containerId = container.getId();
However, running 'docker ps' shows an empty list. Am I missing something in order to start the postgres container image?
Thanks

I've just realized that the cause is the line
.withCmd("--bind_ip_all")
It conflicts with the postgres image: withCmd replaces the image's default command, and postgres doesn't recognize '--bind_ip_all' (it's a mongod flag, not a postgres one), so the container exits immediately and never shows up in 'docker ps'. By removing that line I'm able to see the container with 'docker ps'.
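For reference, a minimal working sketch without the override (the POSTGRES_PASSWORD value is an assumption; recent postgres images refuse to start without one):
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.api.model.PortBinding;
import com.github.dockerjava.core.DockerClientBuilder;

// no withCmd(...) override, so the image's default command runs
DockerClient dockerClient = DockerClientBuilder
        .getInstance("unix:///var/run/docker.sock")
        .build();
CreateContainerResponse container = dockerClient.createContainerCmd("postgres")
        .withEnv("POSTGRES_PASSWORD=secret") // assumed value, required by the official image
        .withPortBindings(PortBinding.parse("5432:5432"))
        .exec();
dockerClient.startContainerCmd(container.getId()).exec();
A container that exits immediately is still visible with 'docker ps -a', which is a quick way to confirm this kind of failure.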

Related

When using docker-compose to launch MySQL, I get the problem "Unable to load '/usr/share/zoneinfo/****** as time zone"

Screenshot: https://i.stack.imgur.com/8E8fX.png
I want to use https://github.com/polkascan/polkascan-os.git to launch a block explorer. But as the picture shows, I get the warning "Unable to load '/usr/share/zoneinfo/****** as time zone", and in the end my block explorer can't show the time correctly, so I think it has something to do with the warning. I have searched for the warning, but the solutions all seem to be for docker rather than docker-compose. Could anyone tell me how to solve this problem in docker-compose?
Try defining the container's timezone. There are several ways, depending on the containerized system. For example, mount the host's /etc/localtime and /etc/timezone into the container:
explorer-api:
  volumes:
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"

How to deploy MySQL docker image on AWS ECS?

I'm having trouble deploying a MySQL image on AWS ECS FARGATE.
The CloudFormation script that I have is this (don't mind the syntax; I am using the Python library Troposphere to manage CloudFormation templates):
TaskDefinition(
    'WordpressDatabaseTaskDefinition',
    RequiresCompatibilities=['FARGATE'],
    Cpu='512',
    Memory='2048',
    NetworkMode='awsvpc',
    ContainerDefinitions=[
        ContainerDefinition(
            Name='WordpressDatabaseContainer',
            Image='mysql:5.7',
            Environment=[
                Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
                Environment(Name='MYSQL_DATABASE', Value='wpdb'),
                Environment(Name='MYSQL_USER', Value='root'),
                Environment(Name='MYSQL_PASSWORD', Value='root'),
            ],
            PortMappings=[
                PortMapping(
                    ContainerPort=3306
                )
            ]
        )
    ]
)
The deployment succeeds. I can even see that the task runs for a few seconds until its state changes to STOPPED.
The only thing that I can see is:
Stopped reason Essential container in task exited
Exit Code 1
On localhost it works like a charm. What am I doing wrong here? At the very least, are there ways to debug this?
With AWS ECS, if the task is stopping, it may be failing a health check, which causes the container to restart. Which port is the DB container mapped to, and can you check the container logs to see what happens when it starts and then stops? Also check the logs in ECS under the service or task. Post them here so I can take a look.
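If nothing useful shows up there, here is a sketch of shipping the container's stdout/stderr to CloudWatch Logs with Troposphere, so the exit reason becomes inspectable (the log group name and region are made up, and the group must already exist or be created in the same template):
from troposphere.ecs import ContainerDefinition, LogConfiguration

db_container = ContainerDefinition(
    Name='WordpressDatabaseContainer',
    Image='mysql:5.7',
    # send the container's stdout/stderr to CloudWatch Logs
    LogConfiguration=LogConfiguration(
        LogDriver='awslogs',
        Options={
            'awslogs-group': '/ecs/wordpress-db',  # made-up group name
            'awslogs-region': 'eu-west-1',         # made-up region
            'awslogs-stream-prefix': 'mysql',
        },
    ),
)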
So, I found the mistake.
THE VERY FIRST THING YOU DO is test that docker container on localhost and see if you can reproduce the issue. In my case the mysql docker container crashed on a local machine with the exact same environment too. I was able to inspect the logs and found out that it fails to create the "root" user: MYSQL_USER must not be "root", because that account already exists in the image. Simply changing the user and password made everything work, even on ECS.
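For example, a quick local reproduction with the exact same environment (the container name here is made up):
docker run --rm --name wpdb-test \
    -e MYSQL_ROOT_PASSWORD=root \
    -e MYSQL_DATABASE=wpdb \
    -e MYSQL_USER=root \
    -e MYSQL_PASSWORD=root \
    mysql:5.7
# fails during init: MYSQL_USER="root" clashes with the built-in root
# account, so the entrypoint exits, just as the ECS task did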
This is the complete stack to have a mysql docker image running on AWS ECS FARGATE:
self.wordpress_database_task = TaskDefinition(
    'WordpressDatabaseTaskDefinition',
    RequiresCompatibilities=['FARGATE'],
    Cpu='512',
    Memory='2048',
    NetworkMode='awsvpc',
    # If your tasks are using the Fargate launch type, the host and sourcePath parameters are not supported.
    Volumes=[
        Volume(
            Name='MySqlVolume',
            DockerVolumeConfiguration=DockerVolumeConfiguration(
                Scope='shared',
                Autoprovision=True
            )
        )
    ],
    ContainerDefinitions=[
        ContainerDefinition(
            Name='WordpressDatabaseContainer',
            Image='mysql:5.7',
            Environment=[
                Environment(Name='MYSQL_ROOT_PASSWORD', Value='root'),
                Environment(Name='MYSQL_DATABASE', Value='wpdb'),
                Environment(Name='MYSQL_USER', Value='wordpressuser'),
                Environment(Name='MYSQL_PASSWORD', Value='wordpressuserpassword'),
            ],
            PortMappings=[
                PortMapping(
                    ContainerPort=3306
                )
            ]
        )
    ]
)
self.wordpress_database_service = Service(
    'WordpressDatabaseService',
    Cluster=Ref(self.ecs_cluster),
    DesiredCount=1,
    TaskDefinition=Ref(self.wordpress_database_task),
    LaunchType='FARGATE',
    NetworkConfiguration=NetworkConfiguration(
        AwsvpcConfiguration=AwsvpcConfiguration(
            Subnets=[Ref(sub) for sub in VpcFormation().public_subnets],
            AssignPublicIp='ENABLED',
            SecurityGroups=[Ref(self.security_group)]
        )
    ),
)
Note the AssignPublicIp='ENABLED' option, which is what lets you connect to the database remotely.
After the stack completed, I was able to connect successfully with:
mysql -uwordpressuser -pwordpressuserpassword -h18.202.31.123
That's it :)

Cannot parse server name for external Xdebug connection

I have a Docker container with Xdebug in it. When I run the script I need from the Docker container, I receive the following message from PhpStorm:
Cannot parse server name for external Xdebug connection.
To fix it create environment variable PHP_IDE_CONFIG on the remote server.
Windows: set PHP_IDE_CONFIG="serverName=SomeName"
Linux / Mac OS X: export PHP_IDE_CONFIG="serverName=SomeName".
but I have already set those environment variables, as you can see in the attached screenshots (not reproduced here) of:
xdebug.log
the Xdebug section from my phpinfo()
my PhpStorm settings
the Environment section from phpinfo()
the PHP Variables section from phpinfo()
I also tried to export the env variables with and without quotes, but the result was the same...
With quotes:
XDEBUG_CONFIG="remote_host=192.168.1.110"
PHP_IDE_CONFIG="serverName=docker-server"
Without quotes:
XDEBUG_CONFIG=remote_host=192.168.1.110
PHP_IDE_CONFIG=serverName=docker-server
The result of the 'ifconfig en1 inet' command on the macOS machine where I'm running Docker and PhpStorm was attached as well.
You can also check the following files if needed:
Dockerfile.development
docker-compose.yml
environment.development
php.ini
Any help will be much appreciated!
Update:
It seems that if I add
environment:
  XDEBUG_CONFIG: "remote_host=192.168.1.110"
  PHP_IDE_CONFIG: "serverName=docker-server"
into my php service inside docker-compose.yml, it solves the issue, but it leaves me with a big question.
Since I have:
env_file:
  - ./etc/environment.yml
  - ./etc/environment.development.yml
and inside ./etc/environment.development.yml I have:
XDEBUG_CONFIG="remote_host=192.168.1.110"
PHP_IDE_CONFIG="serverName=docker-server"
And since it is not ignored, and I can see that those env variables are set even before I add the environment property to my php service, why is Xdebug only triggered when I set the environment property? It feels like duplication to have it in both places, and I'd prefer to keep it inside ./etc/environment.development.yml rather than docker-compose.yml.
After some more digging, I noticed the following difference:
When I used the env_file directive, I had the following in my environment.development file:
XDEBUG_CONFIG="remote_host=192.168.1.110"
PHP_IDE_CONFIG="serverName=docker-server"
which resulted in the values being stored with the double quotes included; phpinfo() showed the quotes as part of the value.
When I removed the env_file directive and put the following instead:
environment:
  XDEBUG_CONFIG: "remote_host=192.168.1.110"
  PHP_IDE_CONFIG: "serverName=docker-server"
then phpinfo() showed the values without the quotes.
So in the end, I removed the environment directive, put back the env_file directive, and removed the double quotes around the values inside the environment.development file, so now it looks like this:
XDEBUG_CONFIG=remote_host=192.168.1.110
PHP_IDE_CONFIG=serverName=docker-server
And now it works fine :)
I filed a bug report in PhpStorm's YouTrack.
I had the same issue with double quotes, but in docker-compose. The first version was wrong; removing the double quotes solved it:
environment:
  - PHP_IDE_CONFIG="serverName=local"   # wrong
  - PHP_IDE_CONFIG=serverName=local     # works

lxc container is not starting

I am using an LXC container and want to add some capabilities to it,
so I set
lxc.cap.keep = sys_ptrace
in /var/lib/lxc/container_name/config,
but after doing this the container does not start and gives the error:
container requests lxc.cap.drop and lxc.cap.keep: either use lxc.cap.drop or lxc.cap.keep, not both
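The error means the container's effective configuration (often a common config pulled in via lxc.include) already sets lxc.cap.drop, and the two keys are mutually exclusive. A minimal sketch of one way around it, assuming your LXC version clears a list key on an empty assignment:
# /var/lib/lxc/container_name/config
# clear the lxc.cap.drop list inherited from included common configs
lxc.cap.drop =
# lxc.cap.keep is a whitelist: capabilities not listed here are dropped
lxc.cap.keep = sys_ptrace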

No logging when starting a Bluemix container

I created a Bluemix container from a Dockerfile.
If I look at the IBM dashboard, the status of the container is stuck on Networking.
When I try to get the log file through cf ic in my command shell, I get a 404.
I use the following command to get the container ID: cf ic ps -a.
This is the response I get back:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57f7363b-710 <no image> "" 8 days ago Networking a day ago XXX.YYY.ZZZ.KKK:22->22/tcp, XXX.YYY.ZZZ.KKK:80->80/tcp, XXX.YYY.ZZZ.KKK:587->587/tcp, XXX.YYY.ZZZ.KKK:2812->2812/tcp containername
With the following command I try to get the logs: cf ic logs 57f7363b-710.
But then I see the following response:
FAILED
404 error encountered while processing request!
Is there another way to see why the container is hanging on the status "Networking"?
This issue reflects a networking problem that was fixed last week. When the container status is frozen you can use "cf ic rm -f" to force the removal of a running container, or "cf ic stop" to stop a running container by sending SIGTERM (and then SIGKILL after a grace period).
If you are unable to create a new container because the status is always frozen, please open a ticket with Bluemix support.
When a container is in the 'Networking' state it means that the networking phase has not finished yet. During that step of container creation, for example, the selected IP addresses (both public and private) are allocated. When this phase ends you will be able to route requests to those IPs. If a container stays in the 'Networking' state for too long, it usually means there was an infrastructure problem. You can try to create a new container from the same image with cf ic run. Keep in mind that if you have reached your maximum quota, you may need to delete the stuck container or release unbound IPs in order to create a new one.
You can delete a container using:
cf ic rm -f [containerId]
You can list all IPs (available or not) using:
cf ic ip list -a
Then you can release an IP using:
cf ic ip release [IPAddr]