I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts run, I get "Permission denied" errors whenever they try to access anything in the cartridge's usr dir. In the node's platform.log the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink I originally thought it was related to that, but now I think it's related to SELinux (which I don't know much about). If I do an "ls -Z" on my app's cartridge dir, the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004", but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", which doesn't match the context in the command above.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running "chcon -R -u system_u -t bin_t usr/" before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I suspect it's an oo-admin-cartridge bug: I would expect it to fix up the SELinux contexts itself instead of keeping whatever I provide.
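For reference, this is roughly the sequence (a sketch; the cartridge source path is hypothetical, and the context values are the ones shown above from my system):

# relabel the cartridge's usr dir before installing, so its SELinux type
# matches something the gear is allowed to read (bin_t instead of default_t)
chcon -R -u system_u -t bin_t ~/mycart-source/usr/
ls -Z ~/mycart-source/usr/    # verify the new labels
# then install the cartridge with oo-admin-cartridge as before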
I am working on a server running Ubuntu 18.04. This DigitalOcean tutorial on Django deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) is telling me to do the following:
"We’re now finished configuring our Django application. We can back out of our virtual environment by typing:
(env): deactivate" I am familiar with virtual environments, I did this. Now for the part I am not at all familiar with:
"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:
sudo nano /etc/systemd/system/gunicorn.socket
"
First, since I just deactivated my env, I am now at justin@ubuntu-s-1vcpu-1gb-nyc3-01:~$. If I ls I only see the project folder I created, which holds the virtualenv, the Python project, manage.py and the static directory. Nowhere can I find this
/etc/systemd/system/
directory, and the command they are telling me to use can only create files, not directories. So I am very confused; any help would be greatly appreciated.
/etc doesn't live inside ~. Try ls /etc to see what's already in that directory. If you need to create that directory, you can do so with sudo mkdir -p /etc/systemd/system/ (the -p flag makes sure that, in case systemd is also not present under /etc, it gets created).
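On a standard Ubuntu 18.04 server /etc/systemd/system already exists, so normally you only create the file itself. For reference, the gunicorn.socket the tutorial has you write looks roughly like this (a minimal sketch following the tutorial; adjust the socket path if yours differs):

[Unit]
Description=gunicorn socket

[Socket]
# systemd owns this socket and passes it to the gunicorn service on demand
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target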
Just got my MediaWiki running on a local domain (running as a container on a Synology NAS). Now I want to configure it so only domain users can access the wiki and are automatically logged in.
This is for the sole purpose of tracking user name with page edits.
My local domain is abc.local and my domain controller is Windows Server 2008 R2.
I've done the following:
Installed extensions LDAPProvider, LDAPAuthentication2, and PluggableAuth.
Added the following to the bottom of my LocalSettings.php.
wfLoadExtension( 'PluggableAuth' );
$wgPluggableAuth_EnabledAutoLogin = true;
wfLoadExtension( 'LDAPAuthentication2' );
wfLoadExtension( 'LDAPProvider' );
$LDAPProviderDomainConfigProvider = function () {
    $config = [
        'LDAP' => [
            'connection' => [
                "server" => "abc.local",
                "user" => "cn=Administrator,dc=abc,dc=local",
                "pass" => 'passwordhere',
                "options" => [
                    "LDAP_OPT_DEREF" => 1
                ],
                "basedn" => "dc=abc,dc=local",
                "groupbasedn" => "dc=abc,dc=local",
                "userbasedn" => "dc=abc,dc=local",
                "searchstring" => "uid=USER-NAME,dc=abc,dc=local",
                "emailattribute" => "mail",
                "usernameattribute" => "uid",
                "realnameattribute" => "cn",
                "searchattribute" => "uid",
            ]
        ]
    ];
    return new \MediaWiki\Extension\LDAPProvider\DomainConfigProvider\InlinePHPArray( $config );
};
The plugins are running:
When I go to the main page I'm not automatically logged in, so I try to log in with domain creds and get the following:
I'm pretty green here and not sure how to configure things. Any ideas?
thanks,
russ
EDIT: After adding $wgShowExceptionDetails = true; I'm getting the following error message:
EDIT2: Snip from phpinfo()
EDIT3: Started over with new containers in an attempt to get the php-ldap extension working and get around the ldap_connect() error.
Here are the steps I took with my last attempt:
REFERENCE: https://wiki.chairat.me/books/docker/page/how-to-setup-mediawiki-with-docker
Enable the SSH service from Control Panel > Terminal & SNMP and then open an SSH connection to the Synology box (using PuTTY). Log in as the box admin.
Run the following command to create a new docker container named mediawiki based on the latest mediawiki image:
sudo docker container run -d --name mediawiki -p 8080:80 mediawiki
Run the following command to create a new docker container named mediawiki-mysql based on the latest MySQL image.
Replace <root_pwd> with your desired MySQL root password:
sudo docker container run -d --name mediawiki-mysql -v mediawiki-mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=<root_pwd> mysql
Run the following 3 commands to create a docker network and then connect both containers to it:
sudo docker network create mediawiki
sudo docker network connect mediawiki mediawiki
sudo docker network connect mediawiki mediawiki-mysql
REFERENCE: https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-ubuntu-18-04#step-2-%E2%80%94-installing-mysql
Next, open a bash terminal in the mediawiki-mysql container and set the root authentication plugin to mysql_native_password if necessary:
mysql -uroot -p<root_pwd> (this opens a MySQL prompt, where <root_pwd> is what you set up in step 3, without the <>)
SELECT user,authentication_string,plugin,host FROM mysql.user; (this lists user attributes)
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; (password is the <root_pwd> set above too)
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
Add a volume mapping in the mediawiki-mysql container so you can copy files between the container and a share you can access with File Station on the Synology.
Stop the container if it is running.
Right-click and select Edit, then click on Volume.
Click "Add Folder" and select the shared volume you will use.
For "Mount path" put /var/lib/mysql
Start the container.
REFERENCE: https://computingforgeeks.com/how-to-install-php-7-3-on-debian-9-debian-8/
Add the php-ldap extension to the mediawiki container if you want to enable LDAP authentication (e.g. if you have a domain with Active Directory etc.). Open a bash terminal in the mediawiki container:
php -m (this will list all of the active PHP modules - ldap is not listed if not installed yet)
php -v (this will show you what version of PHP you are running)
apt-get update
apt-get upgrade -y
apt-get install libldb-dev libldap2-dev
cd /usr/local/bin
docker-php-ext-install ldap (this takes a while)
php -m (this shows ldap in the list)
Set up MediaWiki before going on to the LDAP extension stuff.
Open "http://XXX.XXX.XXX.XXX:8080/" in a browser and configure.
Use "mediawiki-mysql" in place of "localhost" for the MySQL host.
Put LocalSettings.php into the /var/www/html folder.
REFERENCE: https://www.mediawiki.org/wiki/Special:ExtensionDistributor?extdistname=LDAPProvider&extdistversion=master
Install the LDAPProvider MediaWiki extension needed to support LDAPAuthentication2:
wget "https://extdist.wmflabs.org/dist/extensions/LDAPProvider-master-04dc101.tar.gz"
tar -xzf LDAPProvider-master-04dc101.tar.gz -C /var/www/html/extensions
rm LDAPProvider-master-04dc101.tar.gz
add "wfLoadExtension( 'LDAPProvider' );" to the LocalSettings.php file.
run "php maintenance/update.php" to create the required databases (takes a few seconds).
wget "https://extdist.wmflabs.org/dist/extensions/PluggableAuth-REL1_34-17fb1ea.tar.gz"
tar -xzf PluggableAuth-REL1_34-17fb1ea.tar.gz -C /var/www/html/extensions
rm PluggableAuth-REL1_34-17fb1ea.tar.gz
add "wfLoadExtension( 'PluggableAuth' );" to the LocalSettings.php file.
wget "https://extdist.wmflabs.org/dist/extensions/LDAPAuthentication2-master-cb07184.tar.gz"
tar -xzf LDAPAuthentication2-master-cb07184.tar.gz -C /var/www/html/extensions
rm LDAPAuthentication2-master-cb07184.tar.gz
add "wfLoadExtension( 'LDAPAuthentication2' );" to the LocalSettings.php file.
copy in the LocalSettings.php file that has the LDAP configuration (item 2 in my original question above).
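For reference, the same maintenance step can also be run from the Synology host with docker exec instead of from inside the container (a sketch; the container name mediawiki and the web root /var/www/html match the steps above):

sudo docker exec -it mediawiki php /var/www/html/maintenance/update.php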
Based on the comments conversation and the additional step-by-step list above, here are some thoughts:
Add the php-ldap extension to the mediawiki container if you want to enable LDAP authentication (e.g. if you have a domain with Active Directory etc.). Open a bash terminal in the mediawiki container:
php -m (this will list all of the active PHP modules - ldap is not listed if not installed yet)
php -v (this will show you what version of PHP you are running)
apt-get update
apt-get upgrade -y
apt-get install libldb-dev libldap2-dev
cd /usr/local/bin
docker-php-ext-install ldap (this takes a while)
php -m (this shows ldap in the list)
I strongly doubt that this works at all, and even if it did, I doubt it would work in a sustainable way. The problems with this "solution" are:
You're just changing the container state, not the image. Whenever the container is deleted, you have no easy way to reproduce the setup except by doing all these manual steps again. That's not really what Docker containers are about.
You're "just" changing the PHP installation, which requires a restart of the PHP daemon, or of the Apache daemon if you're using Apache. As you're not doing that, the PHP process handling your requests does not know about the new extension, whereas the PHP CLI is perfectly fine showing you the ldap extension.
The solution that will work for your problem is to create your own image, based on the mediawiki:latest Docker image. In it you can add all the required libraries and then use this image instead of the base one. Here are the steps you need to do to achieve that:
Create a new directory on the host where you're running docker.
Create a Dockerfile in this directory on your host: this file is a set of instructions that tells docker how to build the image.
Fill it with this content:
# inherit from the official mediawiki image
FROM mediawiki:latest

# Install the required libraries for adding the ldap extension for php
RUN apt-get update && \
    apt-get install -y libldb-dev libldap2-dev && \
    rm -rf /var/lib/apt/lists/*

RUN docker-php-ext-install ldap
Build the image with docker by navigating into the directory and run this command:
docker build -t mediawiki:local . The -t creates a tag for the resulting image so that you can use this meaningful name instead of the checksum of the image. You can, however, choose whatever name and tag you want.
Run the container with this new image:
docker run -v /path/to/LocalSettings.php:/var/www/html/LocalSettings.php -p 8080:80 --rm=true -d mediawiki:local. The command may be different from what you use, the important bit is the new image name, which is mediawiki:local or whatever tag you used in the build step before.
The resulting container has the ldap extension installed, and it can also be used by the PHP process that handles incoming requests.
Some remarks on your subsequent setup: if I understand it correctly, you're also installing extensions in the container itself, by using a shell in the container and downloading the extension. This is also not the best way of doing it because, as I said already, when you recreate the container (which should always be possible, without you having to think about it), the extensions are deleted as well. You should inject the extensions directory as a volume into the container and keep the extensions on your host's disk. Or, as an alternative, you can install the MediaWiki extensions in the Dockerfile where you install the ldap php extension as well.
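As an illustration of the volume approach, the run command could look roughly like this (a sketch; the host paths under /volume1/docker/mediawiki are hypothetical and need to point at wherever you keep LocalSettings.php and the unpacked extensions on the Synology):

docker run -d --name mediawiki \
  -v /volume1/docker/mediawiki/LocalSettings.php:/var/www/html/LocalSettings.php \
  -v /volume1/docker/mediawiki/extensions:/var/www/html/extensions \
  -p 8080:80 mediawiki:local
# the extensions now live on the host, so recreating the container keeps them

One caveat with mounting over /var/www/html/extensions: it hides the extensions bundled with the image, so copy any bundled extensions you rely on into the host directory as well.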
I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't work out what keys and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user. I checked that the IP matches the entry in the ~/.ssh/config file. However I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided.
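For example, an entry along these lines in ~/.ssh/config (a sketch; the host alias, address, user and key path are placeholders for your own values):

Host gce-instance
    HostName 203.0.113.10
    User your_user
    IdentityFile ~/.ssh/google_compute_engine

With that in place, both ssh gce-instance and sshfs gce-instance: /mnt/gce pick up the same user, key and address.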
If you get this from sshfs:
read: Connection reset by peer
it may help to make the key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.
I am learning docker these days, and I want to install MySQL inside a docker container.
Here is my Dockerfile:
FROM ubuntu:14.04
ADD ./setup_mysql.sh /setup_mysql.sh
RUN chmod 755 /setup_mysql.sh
RUN /setup_mysql.sh
EXPOSE 3306
CMD ["/usr/sbin/mysqld"]
and here is the shell script setup_mysql.sh:
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
service mysql start &
sleep 5
echo "UPDATE mysql.user SET password=PASSWORD('rootpass') WHERE user='root'" | mysql
echo "CREATE DATABASE devdb" | mysql
echo "GRANT ALL ON devdb.* TO devuser #'%' IDENTIFIED BY 'devpass'" | mysql
sleep 5
service mysql stop
Something went wrong when running sudo docker build -t test/devenv .
Setting up mysql-server-5.5 (5.5.38-0ubuntu0.14.04.1) ...
invoke-rc.d: policy-rc.d denied execution of stop.
invoke-rc.d: policy-rc.d denied execution of start.
And if I remove the second sleep 5, the command service mysql stop throws
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Why does this happen?
Thank you!
I highly recommend leveraging the work of others. For example, check out the MySQL image from the Docker registry:
https://registry.hub.docker.com/_/mysql/
Here are the associated git repository files:
https://github.com/docker-library/mysql/blob/master/5.7
If you look into the Dockerfile you'll notice the software is being installed as expected:
.. apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}"* ..
The trick is to realize that a database instance is not the same thing as the database software; only the latter is shipped with the image. Creating DBs and loading them with data is something that is done at run time. So that work is done by an extra script, pulled into the image and set up to be executed when you run the container:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
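At run time the entrypoint reads environment variables and does the instance setup for you, so creating a dev database becomes part of the run command, roughly like this (a sketch using the environment variables documented for the official image; names and passwords are placeholders):

docker run -d --name devdb-mysql \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=devdb \
  -e MYSQL_USER=devuser \
  -e MYSQL_PASSWORD=devpass \
  -p 3306:3306 mysql:5.7
# on first start the entrypoint creates the devdb database and the devuser account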
Hope this helps.
Add this to your Dockerfile:
RUN su
RUN echo exit 0 > /usr/sbin/policy-rc.d
I was facing the same issue. This code fixed it.
Here is a good post which tries to root cause the issue you are facing.
Shorter way:
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d should resolve your issue
OR
If that doesn't resolve the issue, try running your docker container with the privileged option, like this: docker run --privileged -d -ti DOCKER_IMAGE:TAG
Ideally, I would not recommend running a container with the privileged option unless it's a test-bed container. The reason is that running a docker container with --privileged gives all capabilities to the container, and it also lifts all the limitations normally enforced. In other words, the container can then do almost everything that the host can do. This is not good practice; it defeats the docker purpose of isolating the container from the host machine.
The ideal way to do this is to set the capabilities of your docker container based on what you want to achieve, as sketched below. Googling should help you find the appropriate capabilities for your docker container.
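For example, instead of --privileged you can grant only what the workload needs, roughly like this (a sketch; the capability names are standard Linux capabilities, pick the ones your container actually requires):

docker run -d -ti \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN \
  DOCKER_IMAGE:TAG
# start from no extra capabilities and add back only the ones required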
I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with SSH. I did what the tutorial said.
I am sure the problem is with SSH. I installed openssh-server and did this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh to my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed to the Hadoop directory and ran:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So, what's the problem?
You have set up password-less ssh only for your current account. Since you can already use ssh localhost without any problem, the next thing you need to do is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execution permission to all the scripts
bin/start-all.sh ---> executes the start script
Note: Hadoop can also be run without a password-less ssh setup by using the hadoop-daemon.sh script, as sketched below. The only advantage of password-less ssh is that the start-all.sh script will take the trouble of doing that on each of the nodes on your behalf.
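A rough illustration of that alternative on a single pseudo-distributed node (a sketch; the daemon names follow the classic Hadoop 1.x scripts):

# start each daemon locally, one at a time, without ssh-ing anywhere
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker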
You need to change permissions for your Hadoop folder to be owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo you're running the scripts as root, which has different environment variables etc., which is why you see different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh