Error while executing ansible ping module
$ ansible webservers -i inventory -m ping -k -u root -vvvv
SSH password:
<~> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO ~
<my-lnx> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO my-lnx
~ | FAILED => FAILED: [Errno 8] nodename nor servname provided, or not known
<my-lnx> REMOTE_MODULE ping
<my-lnx> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582 && echo $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582'
<my-lnx> PUT /var/folders/8n/fftvnbbs51q834y16vfvb1q00000gn/T/tmpP6zwZj TO /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping
<my-lnx> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping; rm -rf /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ >/dev/null 2>&1'
my-lnx | FAILED >> {
"failed": true,
"msg": "Error: ansible requires a json module, none found!",
"parsed": false
}
This is my inventory file:
$ cat inventory
[webservers]
my-lnx ansible_ssh_host=my-lnx ansible_ssh_port=22
I have also installed the simplejson module on the client as well as on the remote machine:
$ pip list | grep json
simple-json (1.1)
simplejson (3.6.5)
I think you need to install the python-simplejson module.
Try running this command first and then your desired commands:
ansible webservers -i inventory -m raw -a "sudo yum install -y python-simplejson" -k -u root -vvvv
I am assuming that it is an old Red Hat/CentOS system; RHEL/CentOS 5 ships Python 2.4, which lacks the built-in json module that Ansible needs.
If you can't or don't want to install the python-simplejson module on the remote servers, you can fall back to the raw module, which does not need Python on the remote side. Note that a second -m on the command line overrides the first, so pass only -m raw together with a command:
> ansible webservers -i inventory -m raw -a "uptime"
Or, like I did, add an alias to your ~/.bash_profile:
alias ansible="ansible -m raw"
# And then simply run (without -m ping, which would override the alias):
> ansible webservers -i inventory -a "uptime"
On CentOS 5.x there is no python-simplejson package available in the repositories to download and install, so you can use the method below instead.
Make sure both the source and the destination hosts can be accessed without a password, including from the source to the destination.
Use ssh-keygen -t rsa to generate a key, then copy it over:
ssh-copy-id user@host_ip
---
- hosts: (ansible host)
  become: yes
  remote_user: root
  gather_facts: false
  tasks:
    - name: copying temps
      shell: ssh (source) && rsync -parv /root/temp/* root@(Destination):/root/temp/
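Then run the playbook against the inventory (the playbook filename here is illustrative):
ansible-playbook -i inventory copy-temps.yml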
I'm using Terraform to create infrastructure in AWS, with a script in the EC2 user data that connects to RDS. But this script doesn't work.
#! /bin/bash
yum update -y
yum install -y httpd
service httpd start
usermod -a -G apache centos
chown -R centos:apache /var/www
yum install -y mysql php php-mysql
systemctl enable httpd.service
cd /var/www/html/
echo "[mysql]" > ~/.my.cnf
echo "user = myuser" >> ~/.my.cnf
echo "password = passworddata" >> ~/.my.cnf
chmod 600 ~/.my.cnf
cd database/
mysql -h db_server_address < script.sql
systemctl restart httpd.service
In /var/log/cloud-init-output.log I see:
ERROR 1045 (28000): Access denied for user 'root'@'10.0.1.91' (using password: NO)
The application cannot connect to the database because no schema has been created. But when I run the same steps manually on the instance, everything works fine. I understand the script is not perfect, but what is the problem? Why doesn't the script take the credentials from ~/.my.cnf?
Probably the environment variables are not loaded when the user data script runs, so mysql does not find the config file under /root. Point it at the file explicitly (note that --defaults-file must be the first option):
mysql --defaults-file=~/.my.cnf -h db_server_address < script.sql
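If HOME is not set when cloud-init runs the script, the tilde may not expand to /root either; writing and reading the file with an absolute path avoids the ambiguity (a sketch, reusing the credentials from the script above):
echo "[mysql]" > /root/.my.cnf
echo "user = myuser" >> /root/.my.cnf
echo "password = passworddata" >> /root/.my.cnf
chmod 600 /root/.my.cnf
mysql --defaults-file=/root/.my.cnf -h db_server_address < script.sql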
I downloaded the deb package from https://www.couchbase.com/downloads and installed it using:
sudo dpkg -i couchbaseXXX.deb
It installed successfully, but when I try to execute:
couchbase-cli bucket-create -c localhost:8091 -u Administrator ****
it returns:
couchbase-cli: command not found
What is the issue behind that, and how do I fix it?
First you have to set up the Couchbase cluster with the same couchbase-cli tool before creating the bucket. An example is below; --services can be data, index, or query:
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p Public123 --cluster-username=Administrator --cluster-password=Public123 --cluster-port=8091 --cluster-ramsize=49971 --cluster-index-ramsize=2000 --services=data
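Once the cluster is initialised, the bucket-create call from the question should succeed; note the full path, since /opt/couchbase/bin is usually not on PATH (bucket name and RAM size are illustrative):
/opt/couchbase/bin/couchbase-cli bucket-create -c localhost:8091 -u Administrator -p Public123 --bucket test --bucket-type couchbase --bucket-ramsize 100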
You have to go into the CLI directory and run the command from there.
Below are the steps I followed:
cd /opt/couchbase/bin
./couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100
Once I run the above command, I got the success message and the bucket has been created.
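The root cause is simply that the installer does not put /opt/couchbase/bin on your PATH. If you would rather run couchbase-cli from anywhere, you can append it (a sketch, assuming a bash shell and the default install location):
echo 'export PATH="$PATH:/opt/couchbase/bin"' >> ~/.bashrc
source ~/.bashrc
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket test-data --bucket-type couchbase --bucket-ramsize 100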
I'm setting up a Dockerfile where I can run my automated tests, and I'm having trouble connecting to the MySQL database.
The Dockerfile depends on a previously built image and looks like this:
# Stage 0, assign argument as multistage image alias
ARG PHP_IMAGE
FROM ${PHP_IMAGE} as image
# Stage 1, start tests
FROM php:7.2-fpm
RUN curl -sS https://getcomposer.org/installer | php \
&& chmod +x composer.phar && mv composer.phar /usr/local/bin/composer
RUN apt-get update && apt-get install -y gnupg
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
apt-get install -yq nodejs build-essential \
git unzip \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng-dev \
subversion \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash - \
&& pecl install mcrypt-1.0.1 \
&& docker-php-ext-enable mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install -j$(nproc) mysqli
RUN apt-get install -y mysql-server
RUN /etc/init.d/mysql start
RUN mysqladmin -u root -p status
RUN yes | pecl install xdebug \
&& echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_enable=on" >> /usr/local/etc/php/conf.d/xdebug.ini \
&& echo "xdebug.remote_autostart=off" >> /usr/local/etc/php/conf.d/xdebug.ini
RUN npm install -g npm
COPY --from=image /var/www/html/ /var/www/html/
WORKDIR /var/www/html/
COPY scripts/develop.sh develop.sh
COPY scripts/docker-test.sh docker-test.sh
RUN ["/bin/bash", "-c", "bash develop.sh && bash docker-test.sh"]
I've added RUN mysqladmin -u root -p status to try to debug why connecting to MySQL failed, and I got:
Enter password: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
To build this, I run:
docker build -t $TEST_DOCKER_NAME --build-arg PHP_IMAGE=$DOCKER_IMAGE_NAME_PHP -f Dockerfile.test .
TEST_DOCKER_NAME and DOCKER_IMAGE_NAME_PHP are stored in an env file and read from there. The PHP image was built successfully, and I'm copying the files from it into this image so that I can run PHPUnit.
When I remove that RUN line, my build fails when a script tries to create the database:
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to MySQL server on 'localhost' (99 "Cannot assign requested address")'
Check that mysqld is running on localhost and that the port is 3306.
You can check this by doing 'telnet localhost 3306'
What do I need to do in my Dockerfile to make it work?
Answer to your specific problem
This is a common mistake people make when using Docker. Each RUN directive runs its command to completion in a fresh container, captures the resulting filesystem changes, and then exits.
So when you have the lines
RUN /etc/init.d/mysql start
RUN mysqladmin -u root -p status
The first one starts mysql. But then the filesystem changes are captured, that intermediate container exits, and a new container is started to run the mysqladmin command. Running processes are not part of the captured filesystem state, so the mysql process is no longer running.
To avoid this you could combine them into a single line like
RUN /etc/init.d/mysql start && mysqladmin -u root -p status
However, you will need to do this every time you want to use mysql, such as in your develop.sh; see the sketch below.
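For instance, develop.sh could start the server itself before touching the database (a sketch; the database name is illustrative):
#!/bin/bash
# start mysqld inside this build step's container
/etc/init.d/mysql start
# wait until the server accepts connections
until mysqladmin status >/dev/null 2>&1; do sleep 1; done
# create the schema the tests expect (database name is illustrative)
mysql -e "CREATE DATABASE IF NOT EXISTS test_db;"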
Wider answer
It is not recommended to run multiple processes within your container and it is also not recommended to use init.d or other system startup frameworks within your container.
You seem to be treating your container like a virtual machine and are having issues because containers are not VMs.
I recommend you explore running mysql in a separate container and then using a tool like docker-compose to start and stop your containers; a sketch follows.
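A minimal compose file for that layout might look like this (service names, image tag, and credentials are all illustrative):
version: '3'
services:
  tests:
    build:
      context: .
      dockerfile: Dockerfile.test
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: test_db
The test container then reaches the database at host db on port 3306 instead of localhost, and no mysqld has to be installed or started in the PHP image at all.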
I have a simple CentOS 6 Docker image:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y update && yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:apache /var/log/httpd && \
chmod ug+w,a+rx /var/log/httpd && \
chown apache:apache /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
I can run this locally and push it up to hub.docker.com. If I then go into the web console of the Red Hat OpenShift Container Development Kit (CDK) running locally and deploy the image from Docker Hub, it works fine. If I go into the OpenShift 3 Pro web console, the pod goes into a crash loop, and there are no logs on the console or the command line to diagnose the problem. Any help much appreciated.
To check whether the problem was specific to CentOS 6, I changed the first line to centos:7; once again it works on the Minishift CDK but not on OpenShift 3 Pro. This time the pod's logs tab does show something:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.2.55. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
AH00059: Remove it before continuing if it is corrupted.
It is failing because your image expects to run as a specific user.
In Minishift this is allowed, as is running images as root.
On OpenShift Online your images run under an arbitrarily assigned UID; they can never pick their own UID and can never run as root.
If you are only after a way of hosting static files, see:
https://github.com/sclorg/httpd-container
This is an S2I builder that takes static files for Apache and serves them from a container.
You could use it as an S2I builder by running:
oc new-app centos/httpd-24-centos7~<repository-url> --name httpd
oc expose svc/httpd
Or you could create a derived image if you want to customise it.
Either way, look at how it is implemented if you want to build your own.
From the Red Hat docs at https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html#openshift-container-platform-specific-guidelines:
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node. For an image to support running as an arbitrary user, directories
and files that may be written to by processes in the image should be
owned by the root group and be read/writable by that group. Files to
be executed should also have group execute permissions.
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
So in this case, the modified Dockerfile that runs on OpenShift 3 Pro is:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:0 /etc/httpd/conf/httpd.conf && \
chmod g+r /etc/httpd/conf/httpd.conf && \
chown apache:0 /var/log/httpd && \
chmod g+rwX /var/log/httpd && \
chown apache:0 /var/run/httpd && \
chmod g+rwX /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html && \
chown -R apache:0 /var/www/html && \
chmod -R g+rwX /var/www/html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
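With those permission changes in place, the image can be rebuilt, pushed, and deployed the same way as before (image and app names are illustrative):
docker build -t myuser/centos-httpd .
docker push myuser/centos-httpd
oc new-app myuser/centos-httpd --name httpd-hello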
I am trying to access the MySQL databases in the container from my Docker host.
It's my own Dockerfile, which installs a database and exposes port 3306.
I launch the container with docker-compose, and my compose file maps host port 3308 to container port 3306.
I can access MySQL from the host like this:
mysql -h localhost -P 3308 -u root -pMyPassword
It works, but what I can't figure out is why I can't see any data from my container.
From inside the container, I have a test database which I can connect to without any problem. But when I connect from the host to the container's MySQL process, it seems to show me the MySQL data from the host machine, not from the container.
Any ideas?
Thanks :)
EDIT 1:
So here is the first way I connect to MySQL, from inside the container:
docker exec -it MyContainer mysql -uroot -pMyPassword
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test_db |
+--------------------+
It shows my database: test_db.
But if I access it from the host:
mysql -h localhost -P 3308 -u root -pMyPassword
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
My test_db isn't here.
And here is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0de6a691d72 MyContainer "docker-entrypoint.sh" 3 hours ago Up 3 hours 9000/tcp, 0.0.0.0:8085->80/tcp, 0.0.0.0:3308->3306/tcp, 0.0.0.0:8084->8000/tcp, 0.0.0.0:8086->8080/tcp MyContainer
EDIT 2:
I am developing a standard Docker container for a production web hosting environment. Each host is controlled by Ajenti. The hosts sit behind an nginx reverse proxy that routes each website to the correct container. Everything is working well. So here is my Dockerfile:
FROM php:5.6-fpm
RUN apt-get update && apt-get install -y --no-install-recommends \
git \
libxml2-dev \
python \
build-essential \
make \
gcc \
python-dev \
locales \
python-pip
RUN dpkg-reconfigure locales && \
locale-gen C.UTF-8 && \
/usr/sbin/update-locale LANG=C.UTF-8
ENV LC_ALL C.UTF-8
ARG MYSQL_ROOT_PASSWORD
RUN export DEBIAN_FRONTEND=noninteractive; \
echo mysql-server mysql-server/root_password password $MYSQL_ROOT_PASSWORD | debconf-set-selections; \
echo mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PASSWORD | debconf-set-selections;
RUN apt-get update && apt-get install -y -q mysql-server php5-mysql
RUN rm /etc/apt/apt.conf.d/docker-gzip-indexes
RUN apt-get update && apt-get install -y wget
RUN wget http://repo.ajenti.org/debian/key -O- | apt-key add -
RUN echo "deb http://repo.ajenti.org/debian main main debian" > /etc/apt/sources.list.d/ajenti.list
RUN apt-get update && apt-get install -y ajenti cron unzip ajenti-v ajenti-v-php-fpm ajenti-v-mysql ajenti-v-nginx
RUN apt-get install -y python-setuptools python-dev \
&& easy_install -U gevent==1.1b3 \
&& sed -i -e s/ssl_version=PROTOCOL_SSLv3/ssl_version=PROTOCOL_SSLv23/ /usr/local/lib/python2.7/dist-packages/gevent-1.1b3-py2.7-linux-x86_64.egg/gevent/ssl.py
EXPOSE 80 8000 8080 3306
RUN mkdir /tmp/tempfiles \
&& mv /srv /tmp/tempfiles \
&& mv /var/lib/mysql /tmp/tempfiles \
&& mv /var/log /tmp/tempfiles \
&& mv /etc/ajenti /tmp/tempfiles
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
As I said, I want to be able to deploy a new container easily, so I created a docker-entrypoint.sh that copies the needed files to my volumes when the container starts:
#!/bin/bash
DIR="/var/lib/mysql"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
cp -avr /tmp/tempfiles/mysql /var/lib/
fi
# rest of the logic
DIR="/srv"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
cp -avr /tmp/tempfiles/srv /
fi
# rest of the logic
DIR="/var/log"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
cp -avr /tmp/tempfiles/log /var/
fi
# rest of the logic
DIR="/etc/ajenti"
# look for empty dir
if [ ! "$(ls -A $DIR)" ]; then
cp -avr /tmp/tempfiles/ajenti /etc/
fi
# rest of the logic
Finally, my docker-compose.yml to launch everything and map the ports:
version: '2'
services:
ajenti:
build:
context: ./dockerfiles/
args:
MYSQL_ROOT_PASSWORD: MyPassword
volumes:
- ./logs:/var/log
- ./html:/srv
- ./ajenti:/etc/ajenti
- ./mysql-data:/var/lib/mysql
- ./apache2:/etc/apache2
ports:
- "8084:8000"
#NGINX
- "8085:80"
#APACHE
- "8086:8080"
- "3308:3306"
Hope this helps to find a solution!
I finally found a solution, and it was pretty simple...
First of all, I needed to let MySQL bind external addresses, so I changed the bind-address line to 0.0.0.0 inside the container.
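Inside the container that means editing the server config and restarting MySQL (a sketch; the config file path varies by distribution and MySQL version):
sed -i "s/^bind-address.*/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
service mysql restart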
Next I changed the client command to mysql -h 127.0.0.1 -P 3308 -u root -pMyPassword. With -h localhost the mysql client connects through the local Unix socket and ignores the port option, so I had actually been talking to the mysqld running on the host; -h 127.0.0.1 forces a TCP connection to the mapped port.
Now it's fine, I can access the container's MySQL data from the host.
Thanks all for your help :)
In my case I was confused because Docker used a different host and port. So you need to find them and then do this:
mysql -h <host IP> -P <port number> -u <username> -p
Most people put the DB-related variables into the environment of the Docker container, so do this:
sudo docker exec -it container_name env
See if there's a variable called DB_HOST or DB_PORT or something like that. If not, look through the source code: if it's a PHP project, find the config directory, look in main.php, and see whether the connection details are defined there.
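For example, to filter the output for database-related variables (the container name is illustrative):
sudo docker exec -it container_name env | grep -iE 'db|mysql'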
If you run the MySQL operation as the ENTRYPOINT in the Dockerfile, you will only see that operation when you connect to the container; try changing the entrypoint.
https://docs.docker.com/engine/reference/builder/#entrypoint