Google Chrome on AWS Lambda

Is it possible to run Google Chrome (not Chromium) with Puppeteer in AWS Lambda using a container?
The script gets stuck when I create a new page in the browser:
const page = await browser.newPage();
Logs from AWS Lambda:
mkdir: cannot create directory ‘/.local’: Read-only file system
touch: cannot touch ‘/.local/share/applications/mimeapps.list’: No such file or directory
/usr/bin/google-chrome-stable: line 45: /dev/fd/62: No such file or directory
/usr/bin/google-chrome-stable: line 46: /dev/fd/62: No such file or directory
[0213/000419.523205:ERROR:bus.cc(397)] Failed to connect to the bus: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
[0213/000419.528197:ERROR:bus.cc(397)] Failed to connect to the bus: Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory
[0213/000419.648505:WARNING:audio_manager_linux.cc(60)] Falling back to ALSA for audio output. PulseAudio is not available or could not be initialized.
DevTools listening on ws://127.0.0.1:46195/devtools/browser/1d348770-1c99-48a5-934c-fae5254fc766
[0213/000419.769218:WARNING:bluez_dbus_manager.cc(248)] Floss manager not present, cannot set Floss enable/disable.
prctl(PR_SET_NO_NEW_PRIVS) failed
prctl(PR_SET_NO_NEW_PRIVS) failed
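Most of the warnings above (D-Bus, PulseAudio, Bluetooth) are typically harmless in a headless environment; the prctl(PR_SET_NO_NEW_PRIVS) failures suggest Chrome's sandbox cannot be set up inside the Lambda container. As a hedged first check (not from the original post), you can try launching the browser directly with the flags commonly used in restricted sandboxes:
# If this prints the page DOM, the browser itself starts fine and the
# hang is elsewhere; --no-sandbox works around the prctl failures.
google-chrome-stable --headless --no-sandbox --disable-gpu \
  --disable-dev-shm-usage --user-data-dir=/tmp/chrome \
  --dump-dom https://example.com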

I do not use Puppeteer, but that doesn't matter much.
FROM public.ecr.aws/lambda/provided:al2
RUN yum install unzip atk at-spi2-atk gtk3 cups-libs pango libdrm \
libXcomposite libXcursor libXdamage libXext libXtst libXt \
libXrandr libXScrnSaver alsa-lib \
xorg-x11-server-Xvfb wget shadow-utils -y
COPY install-chrome.sh /tmp/
RUN /usr/bin/bash /tmp/install-chrome.sh
ENV DBUS_SESSION_BUS_ADDRESS="/dev/null"
I am not 100% sure DBUS_SESSION_BUS_ADDRESS is necessary. I am also not 100% sure whether explicitly naming all these packages is necessary; I pieced this together from a dozen different places, and the Chrome RPM will likely pull in what it needs, but I have never used a RHEL-based system, so I cannot say. I know this works. Optimizations are welcome.
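One way to check whether the explicit package list is redundant (an assumption worth verifying, not something the original answer does) is to ask the RPM itself what it declares as dependencies:
# List the dependencies declared by the Chrome RPM; yum resolves these
# automatically when installing from a local file.
rpm -qpR google-chrome-stable_current_x86_64.rpm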
Here's the script:
#!/usr/bin/bash
# Download and install chrome
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
# Without -y the install fails because it needs to pull in dependencies.
yum install -y google-chrome-stable_current_x86_64.rpm
rm google-chrome-stable_current_x86_64.rpm
CHROMEVERSION=$(wget -qO- https://chromedriver.storage.googleapis.com/LATEST_RELEASE)
wget --no-verbose -O /tmp/chromedriver_linux64.zip https://chromedriver.storage.googleapis.com/$CHROMEVERSION/chromedriver_linux64.zip
unzip /tmp/chromedriver_linux64.zip -d /opt
rm /tmp/chromedriver_linux64.zip
mv /opt/chromedriver /opt/chromedriver-$CHROMEVERSION
chmod 755 /opt/chromedriver-$CHROMEVERSION
ln -fs /opt/chromedriver-$CHROMEVERSION /usr/local/bin/chromedriver
# Create a user. /usr/sbin is not on $PATH.
/usr/sbin/groupadd --system chrome
/usr/sbin/useradd --system --create-home --gid chrome --groups audio,video chrome
You can verify it is working by starting it locally with docker run --mount type=tmpfs,destination=/tmp --read-only, which simulates the AWS Lambda environment well. Then you need to run su chrome -c 'xvfb-run chromedriver --allowed-ips=127.0.0.1'. I am using https://github.com/instaclick/php-webdriver/ which is a very thin PHP client for the W3C and Selenium 2 WebDriver protocols. I used this to test:
<?php
namespace WebDriver;
require 'vendor/autoload.php';
if (!is_dir('/tmp/chrome')) {
    mkdir('/tmp/chrome');
}
chmod('/tmp/chrome', 0777);
$wd_host = 'http://localhost:9515';
$web_driver = new WebDriver($wd_host);
$session = $web_driver->session('chrome', [['goog:chromeOptions' => ['args' => [
'--no-sandbox',
'--user-data-dir=/tmp/chrome'
]]]]);
$session->open('https://google.com');
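Putting the pieces together, the local verification might look roughly like this (a sketch; the image tag chrome-lambda is hypothetical, and the PHP client must be available wherever the test script runs):
# Build and start the image read-only with a tmpfs /tmp, which
# approximates the AWS Lambda filesystem.
docker build -t chrome-lambda .
docker run --rm -it --mount type=tmpfs,destination=/tmp --read-only \
  --entrypoint /usr/bin/bash chrome-lambda
# Inside the container: start chromedriver as the unprivileged user,
# then point the PHP test above at http://localhost:9515.
su chrome -c 'xvfb-run chromedriver --allowed-ips=127.0.0.1' &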

Cannot `ssh` from container with `openvpn`

Basic setup
Using:
Fedora 30, fully upgraded (kernel 5.1.19)
Podman 1.4.4
I have this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
&& dnf install -y \
openssh-clients \
openvpn \
slirp4netns \
&& dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Which I build with:
podman build -t peque/vpn .
Now, in order to be able to run it successfully, I have to take care of some SELinux issues (see Connect to VPN with Podman).
Fixing SELinux permission issues
sudo dnf install udica
I define this ovpn_container.cil custom policy for the VPN container:
(block ovpn_container
(blockinherit container)
(blockinherit restricted_net_container)
(allow process process (capability (chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin)))
(allow process default_t (dir (open read getattr lock search ioctl add_name remove_name write)))
(allow process default_t (file (getattr read write append ioctl lock map open create)))
(allow process default_t (sock_file (getattr read write append open)))
(allow process tun_tap_device_t (chr_file (ioctl open read write)))
(allow process self (netlink_route_socket (nlmsg_write)))
(allow process unreserved_port_t (tcp_socket (name_connect)))
)
I apply the policy with:
sudo semodule -r ovpn_container
sudo semodule -i ovpn_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
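If the policy ever needs adjusting, udica can regenerate a baseline from the running container, and the audit log shows what is still being denied (a hedged sketch; container_name is a placeholder):
# Generate a baseline policy from the running container's configuration.
sudo podman inspect container_name | sudo udica ovpn_container
# Show recent SELinux denials to see what the policy still blocks.
sudo ausearch -m AVC -ts recent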
Running the container
Now I can successfully run the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun --security-opt label=type:ovpn_container.process -it peque/vpn
Issues
Once the container is running, I open a terminal within the container, from which I want to ssh to remote servers:
podman exec -it container_name bash
From the container I am able to ssh to remote servers successfully, but only if they are not within the VPN.
When I try to ssh to servers in the VPN, it gets stuck for a while and then throws this error:
$ ssh server.domain.com
ssh: connect to host server.domain.com port 22: Connection refused
kex_exchange_identification: Connection closed by remote host
What could I be missing?
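A few diagnostics inside the container may help narrow this down (my assumption, not from the original question, is that the VPN should provide a tun0 interface and a route to the target network):
# Confirm the tunnel interface exists and has an address.
ip addr show tun0
# Check that a route to the VPN subnet goes via the tunnel.
ip route
# Test reachability of the server independently of ssh.
ping -c 3 server.domain.com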

Installation Requirements for mysql with DBIish on rakudo-star docker image

I was creating my own Docker image based on the latest rakudo-star Docker image. I wanted to use DBIish to connect to a MySQL database. Unfortunately I am not able to get DBDish::mysql to work.
I've installed default-libmysqlclient-dev, as you can see here:
# find / -name 'libmysqlclient*.so'
/usr/lib/x86_64-linux-gnu/libmysqlclient_r.so
/usr/lib/x86_64-linux-gnu/libmysqlclient.so
The error I am facing is:
# perl6 -Ilib -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot locate native library 'mysqlclient': mysqlclient: cannot open shared object file: No such file or directory
in method setup at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 289
in method CALL-ME at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 539
in method connect at /root/DBIish/lib/DBDish/mysql.pm6 (DBDish::mysql) line 12
in block <unit> at -e line 1
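A quick hedged diagnostic (not part of the original question) is to ask the dynamic linker which client libraries it can actually see, since NativeCall fails exactly when no matching libmysqlclient is found:
# List the mysql/mariadb client libraries known to the dynamic linker.
ldconfig -p | grep -Ei 'mysqlclient|mariadb'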
Short answer: you need the package libmysqlclient20 (I added the documentation request to a similar DBIish issue). Debian 9 (stable at the moment) uses an older version than Ubuntu 18.04 (stable at the moment) and Debian Unstable, and it refers to mariadb instead of mysql. Pick libmariadbclient18 on images based on Debian Stable and create a link with the mysql name (see below).
On Debian Testing/Unstable and recent derivatives:
$ sudo apt-get install libmysqlclient20
$ dpkg -L libmysqlclient20
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.9
/usr/share
/usr/share/doc
/usr/share/doc/libmysqlclient20
/usr/share/doc/libmysqlclient20/NEWS.Debian.gz
/usr/share/doc/libmysqlclient20/changelog.Debian.gz
/usr/share/doc/libmysqlclient20/copyright
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
On Debian 9 and derivatives:
$ dpkg -L libmariadbclient18
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18.0.0
/usr/lib/x86_64-linux-gnu/mariadb18
/usr/lib/x86_64-linux-gnu/mariadb18/plugin
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/client_ed25519.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/dialog.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/mysql_clear_password.so
/usr/share
/usr/share/doc
/usr/share/doc/libmariadbclient18
/usr/share/doc/libmariadbclient18/changelog.Debian.gz
/usr/share/doc/libmariadbclient18/copyright
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18
Create the link:
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
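On a Debian 9 based image this can be folded into the image build; a minimal sketch, assuming the base image is Debian 9:
# Install the MariaDB client library and expose it under the name
# NativeCall expects.
apt-get update && apt-get install -y libmariadbclient18
ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 \
      /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18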
In order to illustrate this, I created an Ubuntu 18.04 container for the occasion*:
docker run -ti --rm --entrypoint=bash rakudo/ubuntu-amd64-18.04
And the abbreviated commands and output:
# apt-get install -y libmysqlclient20 build-essential
# zef install DBIish
# perl6 -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot look up attributes in a DBDish::mysql type object
[...]
The error is because I didn't pass the correct parameters to connect, as I didn't have a database running. The important thing is that no .so file is missing.
*: I uploaded it to Docker Hub; a normal run will put you right in the REPL:
$ docker run -ti --rm rakudo/ubuntu-amd64-18.04
To exit type 'exit' or '^D'
>
(I didn't use the Star image when debugging, but it does not matter because this is a more generic problem.)

Unable to run startup script when creating instance on Google Cloud Platform

I have a simple startup script which looks like so:
#!/usr/bin/env bash
sudo apt update
sudo apt install -y ruby-full ruby-bundler build-essential
And I create the VM instance on GCP like so:
$ gcloud compute instances create test-app --boot-disk-size=10GB --image-family ubuntu-1604-lts --image-project=ubuntu-os-cloud --machine-type=g1-small --zone europe-west1-b --tags test-server --restart-on-failure --metadata-from-file startup-script=startup.sh
My startup.sh is executable. I set its rights like so:
$ chmod +x startup.sh
However, when I enter the shell of my newly created instance and check Bundler:
test-app:~$ bundle -v
I get these messages:
The program 'bundle' is currently not installed...
So, what is wrong with that and how can I fix it? PS: if I run all my commands from inside the instance shell, everything works, so there is some problem with using a startup script on GCP.
I tested with your use case, but the bundle package was installed without making any changes.
Output:
bundle -v
Bundler version 1.11.2
You can check the VM serial console log output to verify whether the startup script ran. Check the VM instance to verify the package is installed using the commands below:
sudo apt list --installed | grep -i bundle
sudo egrep bundle /var/log/dpkg.log
In addition, check the output of gem list bundle.
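To inspect the serial console from your workstation, a sketch using the standard gcloud command (instance name and zone taken from the question):
# Dump the serial console log and look for startup-script entries; on
# Ubuntu images the script's output also lands in /var/log/syslog.
gcloud compute instances get-serial-port-output test-app \
  --zone europe-west1-b | grep -i startup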

OpenShift 3 Pro doesn't run a simple CentOS image that runs locally on Minishift

I have a simple CentOS 6 Docker image:
FROM centos:6
MAINTAINER Simon 1905 <simbo#x.com>
RUN yum -y update && yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:apache /var/log/httpd && \
chmod ug+w,a+rx /var/log/httpd && \
chown apache:apache /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
I can run this locally and push it up to hub.docker.com. If I then go into the web console of the Red Hat OpenShift Container Development Kit (CDK) running locally and deploy the image from Docker Hub, it works fine. If I go into the OpenShift 3 Pro web console, the pod goes into a crash loop. There are no logs on the console or the command line to diagnose the problem. Any help much appreciated.
To see whether the problem was specific to CentOS 6, I changed the first line to centos:7; once again it works on the Minishift CDK but doesn't work on OpenShift 3 Pro. It does show something on the logs tab of the pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.2.55. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
AH00059: Remove it before continuing if it is corrupted.
It is failing because your image expects to run as a specific user. In Minishift this is allowed, as is being able to run images as root. On OpenShift Online your images will run under an arbitrarily assigned UID; they can never run as a selected UID and never as root.
If you are only after a way of hosting static files, see:
https://github.com/sclorg/httpd-container
This is an S2I builder that takes static files for Apache and serves them from a container.
You could use it as an S2I builder by running:
oc new-app centos/httpd-24-centos7~<repository-url> --name httpd
oc expose svc/httpd
Or you could create a derived image if you wanted to try and customise it.
Either way, look at how it is implemented if you want to build your own.
From the Red Hat OpenShift Container Platform docs at https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html#openshift-container-platform-specific-guidelines:
By default, OpenShift Container Platform runs containers using an
arbitrarily assigned user ID. This provides additional security
against processes escaping the container due to a container engine
vulnerability and thereby achieving escalated permissions on the host
node. For an image to support running as an arbitrary user, directories
and files that may be written to by processes in the image should be
owned by the root group and be read/writable by that group. Files to
be executed should also have group execute permissions.
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
So in this case the modified Dockerfile, which runs on OpenShift 3 Online Pro, is:
FROM centos:6
MAINTAINER Simon 1905 <simbo#x.com>
RUN yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:0 /etc/httpd/conf/httpd.conf && \
chmod g+r /etc/httpd/conf/httpd.conf && \
chown apache:0 /var/log/httpd && \
chmod g+rwX /var/log/httpd && \
chown apache:0 /var/run/httpd && \
chmod g+rwX /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html && \
chown -R apache:0 /var/www/html && \
chmod -R g+rwX /var/www/html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
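To confirm the arbitrary-UID behaviour on the cluster (a hedged check; pod-name is a placeholder), look at the UID the pod actually received:
# The reported uid will be an arbitrary high number with gid 0, which is
# why group-0 ownership and group-write permissions are required.
oc rsh pod-name id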

Unable to install Couchbase Sync Gateway

I am trying to install Couchbase Sync Gateway on macOS using the steps from the following URL:
https://developer.couchbase.com/documentation/mobile/current/installation/sync-gateway/index.html
The issue is: I downloaded "couchbase-sync-gateway-enterprise_1.4.1-3_x86_64.tar.gz" and it is in the "Downloads" folder on my MacBook.
When I execute this command:
sudo tar -zxvf couchbase-sync-gateway-enterprise_1.4.1-3_x86_64.tar.gz --directory /opt
it throws this error:
MyMacbook:downloads administrator$ sudo tar -zxvf couchbase-sync-gateway-enterprise_1.4.1-3_x86_64.tar.gz --directory /opt
tar: could not chdir to '/opt'
I don't understand how to resolve this, and I couldn't find help anywhere.
Please help me install Couchbase Sync Gateway successfully on my MacBook.
It doesn't actually matter that /opt does not exist on your Mac. You can either create it before running the command:
sudo mkdir /opt
Or you can replace "opt" with locations like /Users/<your_user>, /usr/share, or /var.
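For example, extracting into your home directory avoids the /opt question entirely (a sketch; the paths assume the archive is still in ~/Downloads):
# Create a user-writable install location and extract there instead of /opt.
mkdir -p ~/couchbase
tar -zxvf ~/Downloads/couchbase-sync-gateway-enterprise_1.4.1-3_x86_64.tar.gz \
  --directory ~/couchbase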