mount cifs share using local credentials - samba

I have an Ubuntu server that is joined to our Windows domain. Users log in via SSH using AD credentials. I have a script that allows me to mount a Windows share using cifs:
sudo mount.cifs //server/$1 /home/DOMAIN/$1/D -o user=$1,uid=$1,gid=domain\ users
I then have this entered in my /etc/bash.bashrc
#If ~/D does not exist, create it
if [ ! -d ~/D ]; then
mkdir ~/D
fi
#Mount D drive to ~/D
sudo /usr/local/bin/mountsamba.sh $USER
What I am trying to do is make the mount skip the password prompt and reuse the credentials the user logged into the server with.

I actually got it figured out. I installed keyutils and created this bash script:
sudo mount -t cifs //tiberius/$1 /home/NTGROUP/$1/D -o user=$1,cruid=$1,sec=krb5,uid=$1,gid=domain\ users
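For the sec=krb5 option to work, the kernel needs a way to fetch the user's Kerberos ticket; the keyutils request-key mechanism handles this through cifs.upcall (from the cifs-utils package). On Ubuntu this is typically wired up by a config snippet like the following (path and contents may vary by distribution, so treat this as a sketch):

```
# /etc/request-key.d/cifs.spnego.conf
create cifs.spnego * * /usr/sbin/cifs.upcall %k
```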
Then I added this into /etc/bash.bashrc
#If ~/D does not exist, create it
if [ ! -d ~/D ]; then
mkdir ~/D
fi
#Mount D drive to ~/D
sudo /usr/local/bin/mountsamba.sh $USER
Then in /etc/sudoers, I have:
ALL ALL=(root) NOPASSWD: /usr/local/bin/mountsamba.sh
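Since mountsamba.sh runs under sudo with NOPASSWD, it is worth validating $1 before splicing it into a root-level mount command. A minimal sketch, where the username pattern is a hypothetical policy and the mount command is echoed rather than executed so the sketch is safe to run:

```shell
#!/bin/bash
# Validate the username before it reaches a root-level mount command.
mount_share() {
    local user=$1
    # hypothetical policy: letters, digits, dot and dash only
    if ! [[ $user =~ ^[A-Za-z0-9.-]+$ ]]; then
        echo "mountsamba.sh: invalid username: $user" >&2
        return 1
    fi
    # echo instead of running mount so the sketch is runnable anywhere;
    # drop the echo in the real script
    echo mount -t cifs "//tiberius/$user" "/home/NTGROUP/$user/D" \
        -o "user=$user,cruid=$user,sec=krb5,uid=$user,gid=domain users"
}
mount_share "alice.smith"
mount_share 'bad;name' 2>/dev/null || echo "rejected"
```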

Related

docker permission denied for shell script in mysql image placed at docker-entrypoint-initdb.d

FROM mysql:latest
RUN set -ex && apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y --no-install-recommends \
curl unzip
ADD my.sh /docker-entrypoint-initdb.d
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
RUN chmod -R 777 /docker-entrypoint-initdb.d
The content of the my.sh file is:
#!/bin/bash
mkdir try1
The my.sh file has a command to create a directory, and when the image's entrypoint.sh runs this file, it throws a permission denied error. I have changed the owner of the directory as well.
What could be the reason?
Scripts in /docker-entrypoint-initdb.d are executed by the mysql account, not by root; see the image's entrypoint source code.
You can verify this by changing my.sh as follows:
#!/bin/bash
echo "start"
pwd
id
mkdir try1
echo "done"
Then, the error log will be:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/my.sh
start
/
uid=999(mysql) gid=999(mysql) groups=999(mysql)
mkdir: cannot create directory 'try1': Permission denied
You can see that the user running my.sh is mysql, while you are trying to create the try1 folder in /, where that user certainly has no write permission.
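A minimal fix, assuming the goal is just a scratch directory, is to create it with an absolute path in a location the mysql user can write to, such as its own datadir (/var/lib/mysql in the official image) or /tmp. Sketch below, using a stand-in path so it can run outside the container too:

```shell
#!/bin/bash
# my.sh -- create the directory somewhere the mysql user can write.
# DATADIR stands in for /var/lib/mysql so the sketch runs anywhere.
DATADIR=${DATADIR:-/tmp/mysql-initdb-demo}
mkdir -p "$DATADIR/try1"
echo "created $DATADIR/try1"
```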

Switch user on Google Compute Engine Startup Script

I pass the following as my GCE startup script but it always logs in as the root user and never as the demo-user. How do I fix it?
let startupScript = `#!/bin/bash
su demo-user
WHO_AM_I=$(whoami)
echo WHO_AM_I: $WHO_AM_I &>> debug.txt
cd ..`
I think it should work like that:
#! /bin/bash
sudo -u demo-user bash -c 'WHO_AM_I=$(whoami);
echo "WHO_AM_I: $WHO_AM_I" &>> debug.txt;'
Use "sudo -u" to specify the user, then bash -c with all the commands between the single quotes and separated by ;
For example: bash -c 'command1; command2;'
You can try an easier test (it worked for me), for example:
#! /bin/bash
sudo -u demo-user bash -c 'touch test.txt'
And then check with ls -l /home/demo-user/test.txt that demo-user is the owner of the new file.
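The quoting pattern itself can be checked without sudo: the inner bash -c receives the whole single-quoted string as one script, so the assignment and the echo run in the same shell:

```shell
#!/bin/bash
# One quoted script for the inner shell: assignment, then echo.
bash -c 'WHO_AM_I=$(whoami); echo "WHO_AM_I: $WHO_AM_I"'
```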

Dockerfile passing wrong mysql host ip address

I created a build on my local machine and it was working great. Now I am
trying to do the same on the server. It is not using the host IP set in the
Dockerfile but 172.17.0.3, and it throws the error below.
Here is the error:
Step 22/25 : RUN cd /usr/html/bin && ./magento setup:config:set --db-host=172.17.0.2 --db-name=mydb --db-user=tara --db-password=password
---> Running in 7bbe53f5d054
SQLSTATE[HY000] [1045] Access denied for user 'tara'@'172.17.0.3' (using password: YES)
[InvalidArgumentException]
Parameter validation failed
(The host IP set in the Dockerfile, 172.17.0.2, is the IP address of another running container.)
Why is it using the wrong IP address during the connection?
HERE is the dockerfile:
FROM docker-php:0.2
#mysql setup
ENV DB_HOST 172.17.0.2
ENV DB_NAME mydb
ENV DB_USER admin
ENV DB_PASSWORD password
#magento admin user
ENV ADMIN_USER tara
ENV ADMIN_PASSWORD password123
ENV ADMIN_FIRSTNAME tara
ENV ADMIN_LASTNAME gurung
ENV ADMIN_EMAIL tara.email@somelink.net
ADD ./magento2 /usr/html/
WORKDIR /usr/html/
RUN find var vendor pub/static pub/media app/etc -type f -exec chmod g+w {} \;
RUN find var vendor pub/static pub/media app/etc -type d -exec chmod g+ws {} \;
RUN chown -R :docker .
RUN chmod u+x bin/magento
RUN composer install
CMD echo "Success installed Magento"
ENTRYPOINT ["/home/docker/run.sh"]
#get into the magento installation dir
#start setting the necessary file permissions first
RUN cd /usr/html && \
chmod 777 generated -R && \
chmod 777 var -R
RUN cd /usr && \
chown -R docker:docker html
#start running the magento commands to install
RUN cd /usr/html/bin && \
./magento setup:config:set --db-host=$DB_HOST --db-name=mydb --db-user=bhavi --db-password=password
#installing the magento with admin user created
RUN cd /usr/html/bin && \
./magento setup:install --admin-user=$ADMIN_USER --admin-password=$ADMIN_PASSWORD --admin-firstname=$ADMIN_FIRSTNAME --admin-lastname=$ADMIN_LASTNAME --admin-email=$ADMIN_EMAIL --use-rewrites=1
RUN chmod 777 vendor -R
RUN cd /usr && \
chown -R docker:docker html
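Container IPs on the default bridge are assigned dynamically, so an address baked into a Dockerfile (172.17.0.2 here) can belong to a different container on the next host. A common approach (sketch only; container and network names are hypothetical) is a user-defined network, where containers resolve each other by name, with the database-dependent setup run at container start rather than at build time:

```
docker network create magento-net
docker run -d --name mydb --network magento-net mysql
# the app container can then reach the database by name:
#   --db-host=mydb
docker run -d --name magento --network magento-net my-magento-image
```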

OpenShift3 Pro doesn't run a simple Centos image which runs locally on minishift

I have a simple Centos6 docker image:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y update && yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:apache /var/log/httpd && \
chmod ug+w,a+rx /var/log/httpd && \
chown apache:apache /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
I can run this locally and push it up to hub.docker.com. If I then go into the web console of the Redhat OpenShift Container Developer Kit (CDK) running locally and deploy the image from dockerhub it works fine. If I go into the OpenShift3 Pro web console the pod goes into a crash loop. There are no logs on the console or the command line to diagnose the problem. Any help much appreciated.
To see whether the problem was specific to CentOS 6, I changed the first line to centos:7; once again it works on the minishift CDK but does not work on OpenShift3 Pro. It does show something in the logs tab of the pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.2.55. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
AH00059: Remove it before continuing if it is corrupted.
It is failing because your image expects to run as a specific user.
In Minishift this is allowed, as is being able to run images as root.
On OpenShift Online your images will run under an arbitrarily assigned UID: they can never run as a UID of your choosing, and never as root.
If you are only after a way of hosting static files, see:
https://github.com/sclorg/httpd-container
This is a S2I builder for taking static files for Apache and running them up in a container.
You could use it as a S2I builder by running:
oc new-app centos/httpd-24-centos7~<repository-url> --name httpd
oc expose svc/httpd
Or you could create a derived image if you wanted to try and customise it.
Either way, look at how it is implemented if wanting to build your own.
From the redhat enterprise docs at https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html#openshift-container-platform-specific-guidelines:
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node. For an image to support running as an arbitrary user, directories
and files that may be written to by processes in the image should be
owned by the root group and be read/writable by that group. Files to
be executed should also have group execute permissions.
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
So in this case the modified Dockerfile which runs on OpenShift 3 Online Pro is:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:0 /etc/httpd/conf/httpd.conf && \
chmod g+r /etc/httpd/conf/httpd.conf && \
chown apache:0 /var/log/httpd && \
chmod g+rwX /var/log/httpd && \
chown apache:0 /var/run/httpd && \
chmod g+rwX /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html && \
chown -R apache:0 /var/www/html && \
chmod -R g+rwX /var/www/html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
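The effect of the g+rwX pattern from the docs can be checked locally; a scratch path under /tmp stands in for the container's /var/log/httpd. The capital X adds group execute only on directories (or files that already have an execute bit), so directories become traversable while plain files stay non-executable:

```shell
#!/bin/bash
# Demonstrate chmod g+rwX: group rwx on directories, group rw on files.
demo=/tmp/openshift-perms-demo
rm -rf "$demo"
mkdir -p "$demo/logs"
touch "$demo/logs/access_log"
chmod -R g+rwX "$demo"
stat -c '%A %n' "$demo/logs" "$demo/logs/access_log"
```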

Cannot delete files on docker host

I'm using the following shell script in the entrypoint to extract my databases and start up the container.
#!/bin/bash
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
tar -zxvf mysql.tar.gz
fi
exec /usr/bin/mysqld_safe
On startup I mount a local directory onto /var/lib/mysql with the -v parameter, and the script above then extracts the files.
But now I can't delete the extracted files on my host because of a permission denied error.
Can someone help me with this problem? Thanks.
You cannot delete them because, by default, the process in the container runs as root, so the extracted files belong to root. If you don't need these files in the mapped directory, use a different location for them, e.g. -v ...:/myassets, and in the script:
if [ ! -d "/var/lib/mysql/assetmanager" ]; then
tar -zxvf /myassets/mysql.tar.gz
fi
You could also map a single file instead of the whole directory if you only need that one file.
There are many other solutions, depending on what you need:
you could delete the files as root: sudo rm ...
you could delete them in the container before it exits
you could create a user in the container and create the files as that user
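The last option can be approximated without creating a user at all: extract the archive, then chown the result to the IDs the host user expects. A sketch under /tmp so it runs anywhere; in the real container you would target /var/lib/mysql and pass the host's IDs in yourself (e.g. docker run -e HOST_UID=$(id -u) ... — HOST_UID/HOST_GID are hypothetical names, not variables the mysql image defines):

```shell
#!/bin/bash
# Extract as the container user, then hand ownership to the host user.
set -e
demo=/tmp/chown-demo
rm -rf "$demo"
mkdir -p "$demo/src/assetmanager" "$demo/datadir"
echo data > "$demo/src/assetmanager/table.frm"
tar -czf "$demo/mysql.tar.gz" -C "$demo/src" assetmanager

HOST_UID=${HOST_UID:-$(id -u)}
HOST_GID=${HOST_GID:-$(id -g)}
if [ ! -d "$demo/datadir/assetmanager" ]; then
    tar -zxf "$demo/mysql.tar.gz" -C "$demo/datadir"
    # give the extracted files to the host user's UID/GID so they can
    # be deleted on the host without sudo
    chown -R "$HOST_UID:$HOST_GID" "$demo/datadir/assetmanager"
fi
echo "owner: $(stat -c %u "$demo/datadir/assetmanager/table.frm")"
```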