I am trying to run a test suite on GitHub Actions for a package that wraps utilities for calling the clipboard on a variety of platforms. While I have managed to get headless testing set up for a Linux system using X11, based on running xvfb, I am struggling to find documentation for how to set up a headless Wayland-based system for testing the utility wl-clipboard.
The current action I'm running installs sway, creates the required XDG_RUNTIME_DIR, and then runs sway. I suspect I am not starting sway correctly, because I can't get it to start up and stay running in the background while the rest of the tests run.
- name: Install wayland
  if: ${{ matrix.config.clip_type == 'wayland' }}
  run: |
    mkdir $XDG_RUNTIME_DIR
    chown $USER $XDG_RUNTIME_DIR
    chmod 0700 $XDG_RUNTIME_DIR
    sudo apt-get update
    sudo apt-get purge x11-*
    sudo apt-get install sway meson libwayland-dev
    echo $XDG_RUNTIME_DIR
    ls -la $XDG_RUNTIME_DIR
    sway -d -V
    cd $GITHUB_WORKSPACE/..
    git clone https://github.com/bugaevc/wl-clipboard.git
    cd wl-clipboard
    meson build
    cd build/
    sudo ninja install
    wl-copy --primary
    wl-paste --primary
    cd $GITHUB_WORKSPACE
  env:
    XDG_RUNTIME_DIR: /home/runner/work/clipr/xdg
    WLR_BACKENDS: headless
    WLR_LIBINPUT_NO_DEVICES: 1
    WAYLAND_DISPLAY: wayland-1
    GTK_USE_PORTAL: 0
Running sway like this in the foreground (with -d -V for verbose logs), it just hangs indefinitely. Naively trying to run sway in the background with nohup sway & results in the later calls to the clipboard utilities failing with "Failed to connect to a Wayland server".
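For context, the direction I have been experimenting with looks roughly like this (a hedged sketch: the socket path $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY and the timeout are my own guesses, not something I have confirmed works on the runner):

# launch sway in the background with the headless wlroots backend (env as above)
sway -d -V > /tmp/sway.log 2>&1 &
# wait (up to ~10s) for the compositor socket before calling the clipboard utilities
timeout 10 bash -c 'until [ -S "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" ]; do sleep 0.5; done'
echo test | wl-copy --primary
wl-paste --primary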
Any suggestions for getting a headless Wayland server up and running?
Related
I am trying to configure the networking options for my lxd container but when I try the following command:
lxc network create testbr0
I get the following result:
root@Server02:/var/lib# lxc network create testbr0
Usage: lxc [options]
Checking the list of available commands, I don't see network as an option.
Here are the available commands I see:
config, copy, delete, exec, file, finger, image, info, init, launch, list, monitor, move, pause, profile, publish, remote, restart, restore, snapshot, start, stop, version
I'm using Ubuntu 14.04
Any insights?
Thanks
I assume you are using LXD version 2.0.11. There is no network command in this version.
If you want to use the network command, you have to install an LXD feature release (LXD 2.x).
On Ubuntu 14.04 there is no PPA containing the LXD feature releases, so you have to install snapd and use the snap package:
sudo apt update
# check if snap is installed, install it if not
if ! type snapctl >/dev/null; then sudo apt install -y snapd; fi
# install lxd
sudo snap install lxd
# wait for lxd startup
while ! echo -e "GET / HTTP/1.0\r\n" | sudo nc -U /var/snap/lxd/common/lxd/unix.socket > /dev/null; do sleep 1; done
# migrate from ppa to snap lxd
sudo /snap/bin/lxd.migrate
# 14.04 does not add the user to lxd group so we do it explicitly
sudo adduser $(id -un) lxd
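After logging out and back in (so the group change takes effect), the network subcommand should be available. A quick hedged check, assuming the migration succeeded and the snap's client ends up at /snap/bin/lxc:

# the feature-release client understands the network subcommand
/snap/bin/lxc network create testbr0
/snap/bin/lxc network list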
I have a simple startup script which looks like so:
#!/usr/bin/env bash
sudo apt update
sudo apt install -y ruby-full ruby-bundler build-essential
And I create the VM instance on GCP like so:
$ gcloud compute instances create test-app --boot-disk-size=10GB --image-family ubuntu-1604-lts --image-project=ubuntu-os-cloud --machine-type=g1-small --zone europe-west1-b --tags test-server --restart-on-failure --metadata-from-file startup-script=startup.sh
My startup.sh is executable. I set its rights like so:
$ chmod +x startup.sh
However, when I enter the shell of my newly created instance and check bundler:
test-app:~$ bundle -v
I get these messages:
The program 'bundle' is currently not installed...
So, what is wrong here and how can I fix it? P.S. If I run all the same commands from inside the instance shell, everything works fine, so the problem seems to be specific to using a startup script on GCP.
I tested your use case, and the bundler package was installed without my making any changes.
Output:
bundle -v
Bundler version 1.11.2
You can check the VM serial console log output to verify whether the startup script ran. On the VM instance, check whether the package is installed using the commands below:
sudo apt list --installed | grep -i bundle
sudo egrep bundle /var/log/dpkg.log
In addition, check the output of gem list bundler.
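If the package is missing, the serial console output usually shows why the startup script failed. A hedged example of how to check it (instance name and zone taken from your gcloud command):

# dump the serial console and look for startup-script lines
gcloud compute instances get-serial-port-output test-app --zone europe-west1-b | grep -i startup-script
# or, from inside the instance, the same lines are logged to syslog
sudo grep -i startup-script /var/log/syslog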
I have a simple Centos6 docker image:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y update && yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:apache /var/log/httpd && \
chmod ug+w,a+rx /var/log/httpd && \
chown apache:apache /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
I can run this locally and push it up to hub.docker.com. If I then go into the web console of the Red Hat OpenShift Container Development Kit (CDK) running locally and deploy the image from Docker Hub, it works fine. If I go into the OpenShift 3 Pro web console, the pod goes into a crash loop. There are no logs on the console or the command line to diagnose the problem. Any help much appreciated.
To see whether the problem was specific to CentOS 6, I changed the first line to centos:7, and once again it works on the Minishift CDK but doesn't work on OpenShift 3 Pro. It does show something on the logs tab of the pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.2.55. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
AH00059: Remove it before continuing if it is corrupted.
It is failing because your image expects to run as a specific user.
In Minishift this is allowed, as is running images as root.
On OpenShift Online your images will run as an arbitrarily assigned UID; they can never run as a UID you select, and never as root.
If you are only after a way of hosting static files, see:
https://github.com/sclorg/httpd-container
This is an S2I builder that takes static files for Apache and serves them from a container.
You could use it as an S2I builder by running:
oc new-app centos/httpd-24-centos7~<repository-url> --name httpd
oc expose svc/httpd
Or you could create a derived image if you wanted to try and customise it.
Either way, look at how it is implemented if wanting to build your own.
From the Red Hat OpenShift Container Platform docs at https://docs.openshift.com/container-platform/3.5/creating_images/guidelines.html#openshift-container-platform-specific-guidelines:
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
So in this case the modified Docker file which runs on OpenShift 3 Online Pro is:
FROM centos:6
MAINTAINER Simon 1905 <simbo@x.com>
RUN yum -y install httpd && yum clean all
RUN sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf && \
chown apache:0 /etc/httpd/conf/httpd.conf && \
chmod g+r /etc/httpd/conf/httpd.conf && \
chown apache:0 /var/log/httpd && \
chmod g+rwX /var/log/httpd && \
chown apache:0 /var/run/httpd && \
chmod g+rwX /var/run/httpd
RUN mkdir -p /var/www/html && echo "hello world!" >> /var/www/html/index.html && \
chown -R apache:0 /var/www/html && \
chmod -R g+rwX /var/www/html
EXPOSE 8080
USER apache
CMD /usr/sbin/httpd -D FOREGROUND
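To sanity-check the arbitrary-UID behaviour before pushing, you can run the image locally under a random non-root UID in the root group, which roughly mimics what OpenShift Online does (a hedged example; the UID value is arbitrary and the image name is a placeholder):

# run as an arbitrary UID with GID 0, as OpenShift Online would
docker run --rm -u 1000100000:0 -p 8080:8080 <your-image>
# in another shell, confirm the server responds
curl http://localhost:8080/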
EDIT: My goal is to be able to emit metrics from my spring-boot application and have them sent to a Graphite server. For that I am trying to set up statsd. If you can suggest a cleaner approach, that would be better.
I have a Beanstalk application which requires statsd to run as a background process. I was able to specify commands and packages through ebextensions config file as follows:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_run_statsd:
    command: node stats.js exampleConfig.js
    cwd: /home/ec2-user/statsd
When I try to deploy the application to a new environment, the EC2 node never comes up fully. I logged in to check what might be going on and noticed in /var/log/cfn-init.log that 01_nodejs_install, 02_mkdir_statsd and 03_fetch_statsd were executed successfully. So I guess the system was stuck on the fourth command (04_run_statsd).
2016-05-24 01:25:09,769 [INFO] Yum installed [u'git']
2016-05-24 01:25:37,751 [INFO] Command 01_nodejs_install succeeded
2016-05-24 01:25:37,755 [INFO] Command 02_mkdir_statsd succeeded
2016-05-24 01:25:38,700 [INFO] Command 03_fetch_statsd succeeded
cfn-init.log (END)
I need help with the following:
If there is a better way to install and run statsd while instantiating an environment, I would appreciate it if you could provide details on that approach. The current scheme seems hacky.
If this is the approach I need to stick with, how can I run the fourth command so that statsd can be run as a background process?
I tried a few things and found that the following ebextensions config works:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_change_config:
    command: cat exampleConfig.js | sed 's/2003/<graphite server port>/g' | sed 's/graphite.example.com/my.graphite.server.hostname/g' > config.js
    cwd: /home/ec2-user/statsd
  05_run_statsd:
    command: setsid node stats.js config.js >/dev/null 2>&1 < /dev/null &
    cwd: /home/ec2-user/statsd
Note that I added another command (04_change_config) so that I can point statsd at my own Graphite server host and port. This change is not needed to address the original question, though.
The actual run command (05_run_statsd) uses setsid so that statsd runs in its own session, detached from cfn-init, and keeps running as a background daemon after deployment finishes.
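To confirm statsd actually stays up after deployment, a hedged check from the instance could look like this (8125 is statsd's default UDP port; the metric name is just an example):

# verify the node process survived cfn-init
pgrep -f 'node stats.js'
# push a test counter to statsd's default UDP port and watch for it in Graphite
echo "deploy.smoke_test:1|c" | nc -u -w1 127.0.0.1 8125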
So I have this Dockerfile:
FROM debian:squeeze
MAINTAINER Name < email : >
# Update the repository sources list
RUN apt-get update
# Install Apache, PHP, and supplementary programs. curl and lynx-cur are for debugging the container.
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apache2 build-essential php5 mysql-server openssh-server libapache2-mod-php5 php5-mysql php5-gd php-pear php-apc php5-curl curl lynx-cur
# Enable apache mods.
RUN a2enmod php5
RUN a2enmod rewrite
# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
EXPOSE 80
# Copy site into place.
ADD www /var/www/site
# Update the default apache site with the config we created.
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
# start mysqld and apache
EXPOSE 3306
RUN mkdir /var/run/sshd
RUN echo 'root:123' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD bash -c ' (mysqld &); /usr/sbin/apache2ctl -D FOREGROUND;/usr/sbin/sshd -D'
It builds with no problem; MySQL and Apache start and work fine, but SSH won't work and I don't know why. openssh-server is installed.
I tried starting it up like this:
#startup.sh file
#!/bin/bash
sshd
+
ADD ./startup.sh /opt/startup.sh
ENTRYPOINT ["/opt/startup.sh"]
and many others; I'm stuck.
What am I doing wrong?
You are starting Apache in the foreground, so the apache2ctl process never gives control back to the shell that started it, and thus /usr/sbin/sshd -D is never called (unless you kill Apache).
The following instruction will start both MySQL and Apache in the background and then sshd in the foreground:
CMD bash -c ' (mysqld &); /usr/sbin/apache2ctl start;/usr/sbin/sshd -D'
While such a CMD statement is OK for tests, I would advise using a different approach for running multiple processes in a single Docker container:
supervisor
phusion/baseimage
Replace the following lines in the Dockerfile:
RUN mkdir /var/run/sshd
RUN echo 'root:123' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
with these:
RUN apt-get install -y openssh-server
RUN echo 'root:password' |chpasswd
RUN mkdir -p /var/run/sshd
This works for me.
Note: use SSH in a container only for debugging purposes; it is not good practice.
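If you do need SSH for debugging, a hedged way to test the setup (the host ports and image tag are arbitrary choices):

# publish the SSH and HTTP ports, then try the root login set via chpasswd
docker build -t web-ssh .
docker run -d --name web-ssh -p 2222:22 -p 8080:80 web-ssh
ssh -p 2222 root@localhost   # password as set in the Dockerfile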