How can I use Ansible when I only have read-only access?

I am using Ansible to automate some network troubleshooting tasks, but when I try to ping all my devices as a sanity check I get the following error:
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\".
When I run the command in Ansible verbose mode, right before this error I get the following output:
<10.25.100.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" && echo ansible-tmp-1500330345.12-194265391907358="echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" ) && sleep 0'
I am an intern and thus only have read-only access to all devices; therefore, I believe the error is occurring because of the mkdir command. My two questions are thus:
1) Is there any way to configure Ansible to not create any temp files on the devices?
2) Is there some other factor that may be causing this error that I might have missed?
I have tried searching through the Ansible documentation for any relevant configurations, but I do not have much experience working with Ansible so I have been unable to find anything.

The question does not make sense in a broader context. Ansible is a tool for server configuration automation. Without write access you can't configure anything on the target machine, so there is no use case for Ansible.
In a narrower context, although you did not post any code, you seem to be trying to ping the target server. Ansible ping module is not an ICMP ping. Instead, it is a component which connects to the target server, transfers Python scripts and runs them. The scripts produce a response which means the target system meets minimal requirements to run Ansible modules.
However, you seem to want to run a regular ping command using the Ansible command module on your control machine and check the status:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping {{ target_host }}
You might want to play with the failed_when, ignore_errors, or changed_when parameters. See Error handling in playbooks.
Note that I suggested running the whole play on localhost, because in your situation it doesn't make sense to put target machines to which you have such limited access rights in the inventory.
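For example, a minimal sketch combining both ideas (the target address, task names, and the -c 4 packet count are just placeholders):
- hosts: localhost
  gather_facts: false
  vars:
    target_host: 192.168.1.1        # placeholder address
  tasks:
    - name: ICMP ping from the control machine
      command: ping -c 4 {{ target_host }}
      register: ping_result
      changed_when: false           # a pure check should never report "changed"
      ignore_errors: true           # keep going even if the host is unreachable

    - name: report reachability
      debug:
        msg: "{{ target_host }} is {{ 'reachable' if ping_result.rc == 0 else 'unreachable' }}"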
Additionally:
Is there any way to configure Ansible to not create any temp files on the devices?
Yes. Running commands through the raw module will not create temporary files.
As you seem to have SSH access, you can use it to run a command and check its result:
- hosts: 192.168.1.1
  tasks:
    - raw: echo Hello World
      register: echo
    - debug:
        var: echo.stdout

If you have multiple nodes and sudo permission, and you want to bypass the read-only restriction, try using the raw module to remount the disk on the remote node with the read/write option; it was helpful for me.
Playbook example:
---
- hosts: bs
  gather_facts: no
  pre_tasks:
    - name: read/write
      raw: ansible bs -m raw -a "mount -o remount,rw /" -b --vault-password-file=vault.txt
      delegate_to: localhost
  tasks:
    - name: dns
      raw: systemctl restart dnsmasq
    - name: read only
      raw: mount -o remount,ro /

Related

Use kubernetes with helm and provide my predefined user and password with bitnami correctly

I am using kubernetes with helm 3.
I need to create a Kubernetes pod with MySQL, creating:
database name: my_database
user: root
password: 12345
port: 3306
The steps:
creating chart by:
helm create test
After the chart is created, change the Chart.yaml file in the test folder by adding a dependencies section.
apiVersion: v2
name: test3
description: A Helm chart for Kubernetes
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: mysql
    version: 8.8.23
    repository: "https://charts.bitnami.com/bitnami"
run:
helm dependencies build test
After that there is a compressed .tgz file.
So I extracted it; inside there is a tar file, which I extracted too, leaving only the final extracted folder.
I presume this isn't the best approach for changing a parameter in the Bitnami chart's YAML,
and the same goes for using the security.yaml - I would like to know the better approach for that too.
I need to change the user + password, and the link to the database,
so I changed the values.yaml directly (any better approach?) for the values auth:rootPassword and auth:my_database.
The next steps are:
helm build dependencies test
helm install test --namespace test --create-namespace
after that there are two pods created.
I could check it by:
kubectl get pods -n test
and I see two pods running (maybe replication).
One of the pods is test-mysql-0 (the other has a random suffix).
run:
kubectl exec --stdin --tty test-mysql-0 --namespace test-mysql -- /bin/sh
did enter the pod.
run:
mysql -uroot -p12345;
and then:
show databases;
That successfully showed all the databases, including the created database my_database.
When I tried opening the MySQL database from MySQL Workbench and testing the connection (same user root, same password, port 3306, and localhost), the test failed (the "Test Connection" button in the database properties returns: 'failed to connect to database').
Why can't I connect properly from MySQL Workbench, while in the pod itself it works without any particular problem?
Is there any better approach than extracting the tgz file as I described above, and can I pass the user + password in a better way (some secured YAML)?
(Right now it is only the root password.)
Thanks.
It sounds like you're trying to set the parameters in the dependent chart (please correct me if I'm wrong)
If this is right, all you need to do is add another section in your chart's values.yaml
name-of-dependency:
  user-name: ABC
  password: abcdef
the "name-of-dependency" is specified in your Chart.yaml file when you declare your chart. For example, here's my redis dependency from one of my own charts
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami/
    version: x.x.x
Then when I install the chart, I can override the redis chart's settings by doing this in my own chart's values.yaml
redis:
  architecture: standalone
  auth:
    password: "secret-password-here"

How do I complete this SSH tunnel from local development docker to staging database

I can create a Docker container holding my project code. It has unit tests that fail because there is no database connection.
I can log in to a server with a central database that contains our test data using SSH key and credentials.
I cannot get the Docker container and the DB communicating.
I've tried several different suggestions, scratching and restarting this portion of the Dockerfile over the past two days. I've searched Youtube for tutorials, Stackexchange for answers and the docker forums for reference.
If there's a step by step tutorial, that is tucked away I would love to see that too!
The docker-compose has the following:
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      args:
        APP_PATH: ${APP_PATH}
    image: laravel-docker
    env_file: .env
    ports:
      - 8080:80
      # We need to expose 443 port for SSL certification.
      - "443:443"
    volumes:
      - .:/var/www/jumbledown
Inside the container, I can contact the host of the remote DB with the following:
ssh -4 -R 8888:localhost:8888 [devname]@NN.NN.NN.NN -i ~/ident -p [portnumber]
where:
- devname is my login.
- NN.NN.NN.NN is the IP address of the host of the DB.
- ident is a file containing the SSH key that is copied in by a copy command contained in the Dockerfile.
The Dockerfile is built off FROM php:7.1.8-apache and installs a LOT of extra stuff now, including Xdebug. It's too long to just copy and paste and I'm not sure what parts are relevant; I can share parts on request.
Ideally, I'd like to be able to use Dockerfile to set up an SSH tunnel serving the DB to the docker container. Right now, I'd settle for being able to manually set up the connection inside the container.
Update: As per the questions in the answer, the end result I need is for several developers to each have a local Docker setup with a tunnel to a central database that contains testing data, for our use while we code throughout the day.
If you want the PHP container to have a permanent SSH tunnel to your remote DB, you can change your Dockerfile's CMD statement (assuming the ENTRYPOINT is a shell) to use a script that creates the SSH tunnel in the background, similar to what you do manually, waits for the SSH tunnel, and then proceeds to run whatever it is you want to run.
Your question lacks the details of what you're trying to achieve (permanent tunnel? only while testing? etc.)
An example of such a script:
# run ssh in background (notice the "&" at the end)
ssh -4 -R 8888:localhost:8888 $DB_USERNAME@$DB_HOST -i ~/ident -p $DB_PORT &
# wait for the ssh tunnel if needed
# ...
# run the main command here
# ...
I'd suggest considering a different path:
Create a new service in the docker-compose file that is dedicated to opening the tunnel, and then connect to that service from your PHP service.
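A rough sketch of that idea in docker-compose terms (the image name, ports, and credentials below are illustrative placeholders, not a drop-in config):
services:
  db-tunnel:
    image: your-ssh-client-image      # hypothetical: any small image with an OpenSSH client
    command: >
      ssh -N -o StrictHostKeyChecking=no
      -L 0.0.0.0:3306:localhost:3306
      devname@NN.NN.NN.NN -p portnumber -i /ssh/ident
    volumes:
      - ./ident:/ssh/ident:ro         # the same key file the Dockerfile currently copies in

  app:
    # ... existing app service from the question ...
    environment:
      DB_HOST: db-tunnel              # the app reaches the DB through the tunnel service
      DB_PORT: "3306"
Note the -L (local) forward bound to 0.0.0.0 so that other compose services can reach the forwarded port via the db-tunnel service name; the -R forward in the question goes in the opposite direction.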

jdbc driver does not work in kubernetes, fails with timeout

I have a Java 11 app with a JDBC driver running together with MySQL 8.0. The app is able to connect to MySQL and execute one SQL statement, but it looks like it never gets a response back.
It looks like a connectivity issue.
At first it'd be good to look at the Java program output.
First simple checks are at the Kubernetes level to ensure that key components are alive:
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
Additional checks could be done from within the container where your Java app is running.
A possible approach is below.
List deployments of your app and their labels:
$ kubectl get deployments --show-labels
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
hello-node   2/2     2            2           1h    app=hello-node
Having got the label, you can list the relevant pods and their containers:
$ LABEL=hello-node; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
POD                           CONTAINER
hello-node-55b49fb9f8-7tbh4   hello-node
hello-node-55b49fb9f8-p7wt6   hello-node
Now it's possible to run basic diagnostic commands from within the Java app container.
Ping might not reach the target, but it is almost always available in a container and does a primitive check of DNS resolution.
Services from the same namespace should be available via short DNS name.
Services from other namespaces inside of the same Kubernetes cluster should be available via internal FQDN.
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node.default.svc.cluster.local
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- mysql -u [username] -p [dbname] -e [query]
From here on, the connectivity diagnostics are pretty similar to those on a bare-metal server, except that you are limited by the tools available inside the container. You might install missing packages into the container as needed.
As soon as you obtain more diagnostic information, you'll get a clue what to check next.
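On the application side, the JDBC URL would normally point at the MySQL Service's DNS name rather than at a pod IP. A sketch of how that might be wired into the Deployment (the service name mysql, the database name mydb, and the image are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app
  template:
    metadata:
      labels:
        app: java-app
    spec:
      containers:
        - name: java-app
          image: example/java-app:latest        # placeholder image
          env:
            - name: JDBC_URL
              # "mysql" is the assumed Service name in the same namespace;
              # cross-namespace it would be mysql.<namespace>.svc.cluster.local
              value: jdbc:mysql://mysql:3306/mydb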

Accessing environment variables in Docker containers linked with --link

I'm setting up the development environment for my application inside Docker containers; at the moment I have these containers:
myapp-data - Holds application source code and log files
myapp-phpfpm - Runs the php5-fpm process for Nginx
myapp-nginx - Runs the Nginx web server that serves the application
This setup works beautifully, I'm really happy with it. But my application needs a MySQL database to connect to, so I'm using the official MySQL image, and running it like so:
sudo docker run --name myapp-mysql -e "MYSQL_ROOT_PASSWORD=iamroot" -e "MYSQL_USER=redacted" -e "MYSQL_PASSWORD=redacted" -e "MYSQL_DATABASE=redacted" -d mysql
This also works great. But my myapp-phpfpm container needs to be linked to the myapp-mysql container in order to expose MySQL's connection details to my application. So I restart my myapp-phpfpm container:
sudo docker run --privileged=true --name myapp-phpfpm --volumes-from myapp-data --link myapp-mysql:mysql -d readr/phpfpm
So now my myapp-phpfpm container is linked to my myapp-mysql container so I should be able to access the database within my PHP application.
The problem is I can't. The environment variables don't exist inside the PHP application. If I do:
die(var_dump(`printenv`));
I don't get the MySQL environment variables. To try to debug I did a whoami to find out what user PHP is running as, which is www-data. I then created a bash process inside the container, used su www-data to become the www-data user and did printenv there. Sure enough, the MySQL environment variables do exist there:
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP=tcp://172.17.1.118:3306
MYSQL_ENV_MYSQL_ROOT_PASSWORD=iamroot
... etc ...
So, how can I access the environment variables that Docker exposes about my myapp-mysql container within PHP?
I solved this by creating a custom start.sh script that then gets called from my Dockerfile:
#!/bin/bash
# Function to update the fpm configuration to make the service environment variables available
function setEnvironmentVariable() {
if [ -z "$2" ]; then
echo "Environment variable '$1' not set."
return
fi
# Check whether variable already exists
if grep -q $1 /etc/php5/fpm/pool.d/www.conf; then
# Reset variable
sed -i "s/^env\[$1.*/env[$1] = $2/g" /etc/php5/fpm/pool.d/www.conf
else
# Add variable
echo "env[$1] = $2" >> /etc/php5/fpm/pool.d/www.conf
fi
}
# Grep for variables that look like MySQL (MYSQL)
for _curVar in `env | grep MYSQL | awk -F = '{print $1}'`;do
# awk has split them by the equals sign
# Pass the name and value to our function
setEnvironmentVariable ${_curVar} ${!_curVar}
done
# start php-fpm
exec /usr/sbin/php5-fpm
This then adds the environment variables to the PHP5-FPM config so they can be accessed from within PHP scripts.
php-fpm by default clears all environment variables; see /etc/php5/fpm/pool.d/www.conf:
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no
You can fix this by uncommenting that line in your Dockerfile:
RUN sed -i -e "s/;clear_env\s*=\s*no/clear_env = no/g" /etc/php5/fpm/pool.d/www.conf
I'd recommend using something like fig and just passing the env vars to both containers at startup. If you really want to, you could docker inspect any container from any other container if you bind-mount the Docker socket, then do something like this:
docker inspect -f {{.Config.Env}} myapp-mysql
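For example, with docker-compose (fig's successor) the credentials can be declared once per service, so PHP sees them without relying on --link at all; a sketch with made-up values:
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: iamroot
      MYSQL_USER: appuser          # made-up credentials; an env_file keeps them out of the compose file
      MYSQL_PASSWORD: apppass
      MYSQL_DATABASE: appdb

  phpfpm:
    image: readr/phpfpm
    environment:
      MYSQL_HOST: mysql            # compose networking resolves the service name
      MYSQL_USER: appuser
      MYSQL_PASSWORD: apppass
      MYSQL_DATABASE: appdb
The php-fpm clear_env caveat from the other answer still applies before the variables show up inside PHP scripts.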
The problem may not be the environment variables - it may be your PHP installation.
TL;DR environment variables that are accessible when you're running your application under Apache & PHP may not be available if you're using nginx or lighttpd and fastcgi.
The longer version
Here's the way I understand it (and it's probably wrong or incomplete because my experience with this is quite limited). Because PHP is not running as part of the web server under nginx with FastCGI, it does not have access to the shell in which the web server was started and therefore does not have access to the environment variables in that shell.
The solution is to declare the variables you're interested in as part of the configuration. This answer is kind of terse, but it contains the basic answer to this problem.

Google Compute Engine: how to set hostname permanently?

How do I set the hostname of an instance in GCE permanently? I can set it via hostname, but after a reboot it is gone again.
I tried to feed in metadata (hostname:f.q.d.n), but that did not do the job. But it should work via metadata (https://github.com/GoogleCloudPlatform/compute-image-packages/tree/master/google-startup-scripts).
Anybody an idea?
The simplest way to achieve this is to create a simple script, and that's what I have done.
I have stored the hostname in the instance metadata and then I retrieve it every time the system restarts in order to set the hostname using a cron job.
$ gcloud compute instances add-metadata <instance> --metadata hostname=<new_hostname>
$ sudo crontab -e
And this is the line that must be appended to the crontab:
@reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
After these steps, every time you restart your instance it will have the hostname <new_hostname>.
You can check it in the prompt or with the command: hostname
You need to remove the file /etc/dhcp/dhclient.d/google_hostname.sh
rm -rf /etc/dhcp/dhclient.d/google_hostname.sh
rm -rf /etc/dhcp/dhclient-exit-hooks.d/google_set_hostname
It's worth noting that this script is needed in order to run gcloud beta compute instances create with the --hostname flag. If this script is absent on a base image, new VM instances will preserve the source hostname/FQDN!
Edit rc.local
sudo nano /etc/rc.local
Add your line under the rest:
hostname *your.hostname.com*
Make sure to run the following afterwards so that the script gets executed:
chmod +x /etc/rc.d/rc.local
Reboot, and profit.
That isn't possible. Please take a look at this answer. The following article explains that the "hostname" is part of the default metadata entries and it is not possible to manually edit any of the default metadata pairs. As such, you would need to use a script or something else to change the hostname every time the system restarts, otherwise it will automatically get re-synced with the metadata server on every reboot.
You can find information on startup scripts for GCE in this article. You can visit this one for info on how to apply the script to an instance.
You can also create a simple startup-script to do the jobs:
$ gcloud compute instances add-metadata <instance-name> --zone <instance-zone> --metadata startup-script='#! /bin/bash
hostname <hostname>'
Notice that if you already have a startup-script, you need to add the command below to the existing startup-script; otherwise you will replace the whole startup-script:
$ hostname instance-name
I managed to set the hostname on a GCE instance running CentOS.
Source: desantolo.com
Click EDIT on your instance
Go to "Custom metadata" section
Add hostname + your.hostname.tld (change "your.hostname.tld" to your actual hostname)
run curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google"
run sudo env EDITOR=nano crontab -e to edit crontab
add the line @reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
On your keyboard Ctrl + X
On your keyboard hit Y
On your keyboard hit Enter
run reboot
after system rebooted, run hostname and see if your changes applied
Good luck!
If anyone finds this solution does not work for them on a GCE instance, then I suggest trying exit hooks as described by Google Support.
In fact, some distributions of Linux like CentOS and Debian use the dhclient-script script to configure the network parameters of the machine. This script is invoked from time to time by dhclient, which is a dynamic host configuration protocol client and provides a means for configuring one or more network interfaces using the DHCP protocol, BOOTP protocol, or, if these protocols fail, by statically assigning an address.
The following text is a quote from the man (manual) page of
dhclient-script:
After all processing has completed, /usr/sbin/dhclient-script checks for the presence of an executable /etc/dhcp/dhclient-exit-hooks script, which if present is invoked using the ´.´ command. The exit status of dhclient-script will be passed to dhclient-exit-hooks in the exit_status shell variable, and will always be zero if the script succeeded at the task for which it was invoked. The rest of the environment as described previously for dhclient-enter-hooks is also present. The /etc/dhcp/dhclient-exit-hooks script can modify the value of exit_status to change the exit status of dhclient-script.
That being said, by taking a look into the code snippet of
dhclient-script, we can see the script checks for the existence of an
executable /etc/dhcp/dhclient-up-hooks script and all scripts in
/etc/dhcp/dhclient-exit-hooks.d/ directory.
ETCDIR="/etc/dhcp"
193 exit_with_hooks() {
194 exit_status="${1}"
195
196 if [ -x ${ETCDIR}/dhclient-exit-hooks ]; then
197 . ${ETCDIR}/dhclient-exit-hooks
198 fi
199
200 if [ -d ${ETCDIR}/dhclient-exit-hooks.d ]; then
201 for f in ${ETCDIR}/dhclient-exit-hooks.d/*.sh ; do
202 if [ -x ${f} ]; then
203 . ${f}204 fi
205 done
206 fi
207
208 exit ${exit_status}209 }
Therefore, in order to modify the hostname of your Linux VM you can create a custom script with a .sh extension and place it in the /etc/dhcp/dhclient-exit-hooks.d/ directory. If this directory does not exist, you can create it. The content of the custom script will be:
hostname YourFQDN.sh
be sure to make this new .sh file executable:
chmod +x YourFQDN.sh
Source: (https://groups.google.com/d/msg/gce-discussion/olG_nXZ-Jaw/Y9HMl4mlBwAJ)
I'm not sure I understand Adrián's answer. It seems overly complex; since you have to run a script on each boot, why not just use hostname?
vi /etc/rc.local
add:
hostname your_hostname
That's it. Tested and working. No need to fiddle with metadata and such.
Non-cron/metadata/script solution.
Edit /etc/dhclient-(network-interface).conf or create one if it doesn't exist.
Example:
sudo nano /etc/dhclient-eth0.conf
Then add the following line, placing the desired FQDN between the double quotes:
supersede host-name "hostname.domain-name";
This persists between reboots, and hostname and hostname -f work as intended.
Tested on Debian.
The dhclient sets the hostname using DHCP
You can override this by creating a custom hook script in /etc/dhcp/dhclient-exit-hooks.d/custom_set_hostname that would read the hostname from /etc/hostname:
if [ -f "/etc/hostname" ]; then
    new_host_name=$(cat /etc/hostname)
fi
The script must have the execute permission.
It's important to set the new_host_name variable rather than calling the hostname command directly, as any call to the hostname command will be overridden by another hook or by dhclient-script, which uses this variable.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and prevent the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
In my CentOS VMs I found that the script /etc/dhcp/dhclient.d/google_hostname.sh, installed by the google-compute-engine RPM, actually changed the hostname. This happens when the instance gets its IP address during boot.
While it's not the long-term solution I really want, for now I simply deleted this script. The hostname I set with hostnamectl now persists after a reboot.
The script is likely to be in exactly the same place in Debian/Ubuntu VMs, but of course I don't run any of those.
There is a hack you can do to achieve this, as I did. Just do:
sudo chattr +i /etc/hosts
This command actually makes the file "(i)mmutable", which means even root can't change it (unless root does chattr -i /etc/hosts first, of course).
As above, you can undo this with sudo chattr -i /etc/hosts
Cheers!
An easy way to fix this is to set up a startup script with custom metadata.
Key :startup-script
Value:
#! /bin/bash
hostname <desired hostname>