Basic setup
Using:
Fedora 30, fully upgraded (kernel 5.1.19)
Podman 1.4.4
I have this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
    && dnf install -y \
        openssh-clients \
        openvpn \
        slirp4netns \
    && dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Which I build with:
podman build -t peque/vpn .
Now, in order to be able to run it successfully, I have to take care of some SELinux issues (see Connect to VPN with Podman).
Fixing SELinux permission issues
sudo dnf install udica
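For reference, udica can generate a baseline policy from a running container, which is typically the starting point for a template like the one below. A sketch, assuming a container has already been started once under this name:
# generate a starting CIL policy from an existing container (sketch)
podman inspect container_name > container.json
udica -j container.json ovpn_container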
I define this ovpn_container.cil custom policy for the VPN container:
(block ovpn_container
    (blockinherit container)
    (blockinherit restricted_net_container)
    (allow process process (capability (chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin)))
    (allow process default_t (dir (open read getattr lock search ioctl add_name remove_name write)))
    (allow process default_t (file (getattr read write append ioctl lock map open create)))
    (allow process default_t (sock_file (getattr read write append open)))
    (allow process tun_tap_device_t (chr_file (ioctl open read write)))
    (allow process self (netlink_route_socket (nlmsg_write)))
    (allow process unreserved_port_t (tcp_socket (name_connect)))
)
I remove any previously loaded version of the module and apply the policy with:
sudo semodule -r ovpn_container
sudo semodule -i ovpn_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
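To confirm the module actually loaded, a quick check (this step is my addition, not part of the original instructions):
sudo semodule -l | grep ovpn_container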
Running the container
Now I can successfully run the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun --security-opt label=type:ovpn_container.process -it peque/vpn
Issues
Once the container is running, I open a terminal within the container, from which I want to SSH to remote servers:
podman exec -it container_name bash
From the container I am able to ssh to remote servers successfully, but only if they are not within the VPN.
When I try to ssh to servers in the VPN, it gets stuck for a while and then throws this error:
$ ssh server.domain.com
ssh: connect to host server.domain.com port 22: Connection refused
kex_exchange_identification: Connection closed by remote host
What could I be missing?
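For anyone debugging the same symptom, a few checks from inside the container can narrow things down. These commands are my suggestion, assuming the iproute package is available in the image:
# confirm the tun interface is up and the VPN pushed its routes
ip addr show tun0
ip route
# confirm the hostname resolves to a VPN-side address
getent hosts server.domain.com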
Related
Is it possible to run Google Chrome (not Chromium) with Puppeteer in AWS Lambda with a container?
The script gets stuck when I create a new page in the browser:
const page = await browser.newPage();
Logs from AWS lambda:
mkdir: cannot create directory ‘/.local’: Read-only file system
touch: cannot touch ‘/.local/share/applications/mimeapps.list’: No such file or directory
/usr/bin/google-chrome-stable: line 45: /dev/fd/62: No such file or directory
/usr/bin/google-chrome-stable: line 46: /dev/fd/62: No such file or directory
[0213/000419.523205:ERROR:bus.cc(397)] Failed to connect to the bus: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
[0213/000419.528197:ERROR:bus.cc(397)] Failed to connect to the bus: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
[0213/000419.648505:WARNING:audio_manager_linux.cc(60)] Falling back to ALSA for audio output. PulseAudio is not available or could not be initialized.
DevTools listening on ws://127.0.0.1:46195/devtools/browser/1d348770-1c99-48a5-934c-fae5254fc766
[0213/000419.769218:WARNING:bluez_dbus_manager.cc(248)] Floss manager not present, cannot set Floss enable/disable.
prctl(PR_SET_NO_NEW_PRIVS) failed
prctl(PR_SET_NO_NEW_PRIVS) failed
I do not use Puppeteer, but that doesn't matter much.
FROM public.ecr.aws/lambda/provided:al2
RUN yum install unzip atk at-spi2-atk gtk3 cups-libs pango libdrm \
libXcomposite libXcursor libXdamage libXext libXtst libXt \
libXrandr libXScrnSaver alsa-lib \
xorg-x11-server-Xvfb wget shadow-utils -y
COPY install-chrome.sh /tmp/
RUN /usr/bin/bash /tmp/install-chrome.sh
ENV DBUS_SESSION_BUS_ADDRESS="/dev/null"
I am not 100% sure DBUS_SESSION_BUS_ADDRESS is necessary. I am also not 100% sure whether explicitly naming all these packages is necessary; I stole everything from a dozen different places, and likely the Chrome RPM will pull in what it needs, but I have never used any RHEL-based system, so I am totally clueless. I know this works. Optimizations are welcome.
Here's the script:
#!/usr/bin/bash
# Download and install chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
# Without -y it doesn't run because it needs to add dependencies.
yum install -y google-chrome-stable_current_x86_64.rpm
rm google-chrome-stable_current_x86_64.rpm
CHROMEVERSION=$(wget -qO- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)
wget --no-verbose -O /tmp/chromedriver_linux64.zip "https://chromedriver.storage.googleapis.com/$CHROMEVERSION/chromedriver_linux64.zip"
unzip /tmp/chromedriver_linux64.zip -d /opt
rm /tmp/chromedriver_linux64.zip
mv /opt/chromedriver "/opt/chromedriver-$CHROMEVERSION"
chmod 755 "/opt/chromedriver-$CHROMEVERSION"
ln -fs "/opt/chromedriver-$CHROMEVERSION" /usr/local/bin/chromedriver
# Create a user. /usr/sbin is not on $PATH.
/usr/sbin/groupadd --system chrome
/usr/sbin/useradd --system --create-home --gid chrome --groups audio,video chrome
You can verify it is working by starting it locally with docker run --mount type=tmpfs,destination=/tmp --read-only, which simulates the environment of AWS Lambda well. Then you need to run su chrome -c 'xvfb-run chromedriver --allowed-ips=127.0.0.1'. I am using https://github.com/instaclick/php-webdriver/, which is a very thin PHP client for the W3C and Selenium 2 WebDriver protocols. I used this to test:
<?php
namespace WebDriver;
require 'vendor/autoload.php';
// create the profile directory if it does not exist yet
if (!is_dir('/tmp/chrome')) {
    mkdir('/tmp/chrome');
}
chmod('/tmp/chrome', 0777);
$wd_host = 'http://localhost:9515';
$web_driver = new WebDriver($wd_host);
$session = $web_driver->session('chrome', [['goog:chromeOptions' => ['args' => [
'--no-sandbox',
'--user-data-dir=/tmp/chrome'
]]]]);
$session->open('https://google.com');
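Putting the pieces together, the local verification loop looks roughly like this; the image tag and the --entrypoint override are mine, the run flags are the ones described above:
# build the image (tag name is an assumption)
docker build -t lambda-chrome .
# simulate Lambda's read-only filesystem with a writable tmpfs at /tmp
docker run --rm -it --mount type=tmpfs,destination=/tmp --read-only \
    --entrypoint /bin/bash lambda-chrome
# inside the container, start chromedriver as the unprivileged user:
su chrome -c 'xvfb-run chromedriver --allowed-ips=127.0.0.1'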
I am creating a development/testing container that contains a number of elements, including a MySQL server that must run internally for code to access. To demonstrate the issue, I run the following Dockerfile with docker run -i -t demo_mysql_server:
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
Unfortunately, after building the docker container I receive a common connection error (see 1, 2)
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
which can be fixed by running the container as root with docker run -i -u 0 -t demo_mysql_server
and executing:
echo "NETWORKING=yes" >/etc/sysconfig/network
service network restart
/etc/init.d/mysqld start
chkconfig mysqld on
which seems to turn everything on. However, incorporating these into RUN commands doesn't keep the service running, and logging in as root requires restarting the service as above. Adding a user, working as a non-root user, and trying to start the service results in errors of this flavor:
bash: /etc/sysconfig/network: Permission denied
[testUser@544a938c44c1 /]$ service network restart
[testUser@544a938c44c1 /]$ /etc/init.d/mysqld start
/etc/init.d/mysqld: line 16: /etc/sysconfig/network: No such file or directory
[testUser@544a938c44c1 /]$ chkconfig mysqld on
You do not have enough privileges to perform this operation.
Is this a normal error to see, and how do I get the MySQL server instance to stay running?
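For context, each RUN step only executes at build time, so a service started there is gone by the time the container actually runs; the daemon has to be launched by the container's CMD or ENTRYPOINT instead. A minimal sketch under that assumption (the CMD line is mine, not from the original):
FROM amazonlinux:2018.03
RUN yum -y update && yum -y install shadow-utils mysql-server
# Start the MySQL daemon in the foreground when the container starts,
# rather than in a build-time RUN step.
CMD ["/usr/bin/mysqld_safe"]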
I have set up an application on AWS ECS, with repository, task, and cluster management.
My Dockerfile is
FROM ruby:2.4.1
ENV LANG C.UTF-8
RUN apt-get update && \
apt-get install -y nodejs \
vim \
mysql-client --no-install-recommends && rm -rf /var/lib/apt/lists/*
WORKDIR /tmp
ADD ./Gemfile Gemfile
ADD ./Gemfile.lock Gemfile.lock
RUN bundle install
ENV APP_ROOT /workspace
RUN mkdir -p $APP_ROOT
WORKDIR $APP_ROOT
COPY . $APP_ROOT
EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0", "-e", "production"]
This is the repository I have set up on the AWS ECS repository.
In the task definition I have set up two containers, rails-app and mysql, which are linked to each other.
From rails-app I am trying to connect to an RDS MySQL instance, but since it throws the following error, I have added the mysql container to support the connection, which I think should not be required.
When I run the application with the task added to the cluster service, both containers run fine, but on the rails-app container I get this error:
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (13)
When I run rails c production inside docker exec -it containerid bash, it runs fine and connects to the DB properly. Here I can test Active Record queries as well.
Please provide me a solution in the task definition to share the mysql volume with the rails application.
I am answering my own question here.
I had assigned environment variables in the container section of the task definition, but they were not loaded properly into the environment. When I used Figaro with application.yml, the app started picking up the variables for each environment.
MySQL was not able to get the host from the environment.
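To illustrate, the database host has to reach the Rails config through the environment; a sketch of a config/database.yml reading it that way (the variable names are mine, not from the original):
# config/database.yml (sketch)
production:
  adapter: mysql2
  host: <%= ENV['DB_HOST'] %>        # the RDS endpoint, supplied via Figaro's application.yml
  username: <%= ENV['DB_USERNAME'] %>
  password: <%= ENV['DB_PASSWORD'] %>
  database: <%= ENV['DB_NAME'] %>
If host comes out blank, the mysql2 adapter falls back to the local socket, which is exactly the error above.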
I have a simple Centos6 docker image:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y update && yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:apache /var/log/httpd && \
chmod ug+w,a+rx /var/log/httpd && \
chown apache:apache /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
I can run this locally and push it up to hub.docker.com. If I then go into the web console of the Redhat OpenShift Container Developer Kit (CDK) running locally and deploy the image from dockerhub it works fine. If I go into the OpenShift3 Pro web console the pod goes into a crash loop. There are no logs on the console or the command line to diagnose the problem. Any help much appreciated.
To see whether the problem was specific to CentOS 6, I changed the first line to centos:7, and once again it works on the Minishift CDK but doesn't work on OpenShift3 Pro. It does show something in the logs tab of the pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.2.55. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
AH00059: Remove it before continuing if it is corrupted.
It is failing because your image expects to run as a specific user.
In Minishift this is allowed, as is running images as root.
On OpenShift Online your images will run under an arbitrarily assigned UID and can never run as a selected UID, and never as root.
If you are only after a way of hosting static files, see:
https://github.com/sclorg/httpd-container
This is an S2I builder that takes static files for Apache and serves them from a container.
You could use it as an S2I builder by running:
oc new-app centos/httpd-24-centos7~<repository-url> --name httpd
oc expose svc/httpd
Or you could create a derived image if you wanted to try and customise it.
Either way, look at how it is implemented if wanting to build your own.
From the redhat enterprise docs at https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html#openshift-container-platform-specific-guidelines:
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node. For an image to support running as an arbitrary user, directories
and files that may be written to by processes in the image should be
owned by the root group and be read/writable by that group. Files to
be executed should also have group execute permissions.
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
So in this case, the modified Dockerfile that runs on OpenShift 3 Online Pro is:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:0 /etc/httpd/conf/httpd.conf && \
chmod g+r /etc/httpd/conf/httpd.conf && \
chown apache:0 /var/log/httpd && \
chmod g+rwX /var/log/httpd && \
chown apache:0 /var/run/httpd && \
chmod g+rwX /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html && \
chown -R apache:0 /var/www/html && \
chmod -R g+rwX /var/www/html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
So here's what I have to do: I need to set up some containers automatically using Docker. One of them is like this: Debian Squeeze with limited CPU shares and limited memory (1 CPU share and 512 MB memory), preinstalled apache2, build-essential, php5, mysql-server-5.5, openssh-server, and with some ports opened (8000 for Apache and 1500 for MySQL). So I created the following Dockerfile:
FROM debian:squeeze
MAINTAINER Name < email : >
# Update the repository sources list
RUN apt-get update
# Install apache, PHP, and supplementary programs. curl and lynx-cur are for debugging the container.
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apache2 build-essential php5 mysql-server openssh-server libapache2-mod-php5 php5-mysql php5-gd php-pear php-apc php5-curl curl lynx-cur
# Enable apache mods.
RUN a2enmod php5
RUN a2enmod rewrite
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
EXPOSE 80
# Copy site into place.
ADD www /var/www/site
# Update the default apache site with the config we created.
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
# By default, simply start apache.
CMD /usr/sbin/apache2ctl -D FOREGROUND
#CMD [ "mysqladmin -u root password mysecretpasswordgoeshere"]
EXPOSE 3306
the content of apache-config.conf is this:
<VirtualHost *:80>
ServerAdmin me#mydomain.com
DocumentRoot /var/www/site
<Directory /var/www/site/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order deny,allow
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
and in the www folder I put a PHP file with this code:
<?php
$connect=mysql_connect("localhost:1500","root","") or die("Unable to Connect");
?>
to test the connection to the MySQL server.
Then I build all this into an image like this:
sudo docker build --rm --tag="tag_name" .
and then I run the image like this:
sudo docker run -c=1 -m="512m" --net=bridge -p 8000:80 -p 1500:3306 -d --name="container_name" tag_name
It seems to work: the Apache server responds when I access localhost:8000/site in my browser, but the page shows "Unable to Connect". What am I doing wrong?
Another problem is that the container is running but I can't attach to it. I run this command
sudo docker attach CONTAINER_ID
and then nothing happens; I can't do anything else from there. What am I doing wrong?
I have to build a few more Dockerfiles similar to this to create containers. All of those must be hosted on a ZFS file system, and I have to configure a container repository of 50 GB based on it. What does this mean, and how do I do that?
I'm sorry for my English, it's not my native language :(
Thank you in advance
MySQL issue
in the PHP code
$connect=mysql_connect("localhost:1500","root","") or die("Unable to Connect");
localhost refers to the container itself. Since there is no MySQL server running in that container, the connection will fail.
In this gist, I've changed your example a bit to have the container start both MySQL and Apache (I assume this was your first intent) using the following instruction: CMD bash -c '(mysqld &); /usr/sbin/apache2ctl -D FOREGROUND', and changed the PHP code to connect to the MySQL server on localhost:3306.
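The corresponding change on the PHP side would look roughly like this (a sketch of what the gist describes):
<?php
// connect to the MySQL server running inside the same container (default port 3306)
$connect = mysql_connect("localhost:3306", "root", "") or die("Unable to Connect");
?>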
Docker attach
The docker attach command is meant to allow you to interact with the process currently running in the foreground of a container. Unless that process is a shell, it won't provide you with a shell in that container.
Take this example:
Start a container running a shell process
docker run -it --rm base bash
You are now in interactive mode in your container and can play around with the shell running in the foreground in that container:
root#de8f16a13571:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
If you now exit the shell by typing exit, the shell process will end, and as that was the process running in the foreground in the container, that container will stop.
root#de8f16a13571:/# exit
exit
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Now start a new container named test running bash again:
docker run -it --name test base bash
Verify you can interact with it, then detach from it by hitting Ctrl+p Ctrl+q. You end up back in the Docker host shell.
Verify that the container named test is still running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81f0f1094f4a base:latest "bash" 6 seconds ago Up 5 seconds test
You can then use the docker attach command to attach to the bash program in the container:
docker attach test
root#81f0f1094f4a:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
ZFS
And regarding ZFS, I don't know what all that means either. Also note that having 3 questions at once makes it difficult for the community to come up with a single answer that would answer all 3; maybe consider posting a new question for those.
Please comment if my assumptions about how you run MySQL or what your intent is with docker attach are wrong.