Script that uses google-drive-ocamlfuse fails when run through Rundeck

I have a script that runs fine when run directly from the shell of the server hosting Rundeck. It uses google-drive-ocamlfuse to mount my Google Drive at a local directory, creates a folder in that directory, and then unmounts:
#!/bin/bash
name=New-Folder-Name
google-drive-ocamlfuse /home/user/mygoogledrive/   # mount the drive
mkdir /home/user/mygoogledrive/$name               # create the folder
fusermount -u /home/user/mygoogledrive/            # unmount
If I try to run this as an ad hoc command in Rundeck:
sudo ./var/lib/rundeck/scripts/create-folder.sh
... it errors out with:
Error: no DISPLAY environment variable specified
/bin/sh: 1: google-chrome: not found
/bin/sh: 1: chromium-browser: not found
/bin/sh: 1: open: not found
Cannot retrieve auth tokens.
Failure("Error opening URL:https://accounts.google.com/o/oauth2/auth?client_id=REDACTING-PERSONAL-INFO")
mkdir: cannot create directory ‘/home/user/mygoogledrive/New-Folder-Name’: No such file or directory
fusermount: failed to unmount /home/home-db/mygoogledrive: Invalid argument
I am new to Rundeck, am not yet comfortable with its permissions, and don't have a good sense of how Rundeck runs a command on the server. It must be accessing and executing the file, given the error output, but maybe there are limitations in the environment, due to permissions, that prevent the use of certain libraries needed by google-drive-ocamlfuse? Any ideas?

To use sudo on a remote target node, you need to set the node's sudo parameters. Otherwise, if you need to use sudo locally, the easier way is to use a sudo plugin in your Rundeck instance.
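For the remote case, a minimal sketch of what those parameters can look like on a node entry in resources.xml, assuming the built-in SSH node executor (the node name, hostname, and key-storage path here are placeholders):
<node name="drive-host"
      hostname="drive-host.example.com"
      username="rundeck"
      sudo-command-enabled="true"
      sudo-password-storage-path="keys/nodes/drive-host/sudo.password"/>
With sudo-command-enabled set, Rundeck answers the sudo password prompt from key storage instead of letting the command hang or fail.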

Related

Trying to connect google drive to paperspace gradient notebook

I'm trying to mount Google Drive in a Paperspace notebook using google-drive-ocamlfuse, installed with the following commands:
sudo add-apt-repository ppa:alessandro-strada/ppa
sudo apt update && sudo apt install google-drive-ocamlfuse
but when launching with
google-drive-ocamlfuse
there's an error:
/bin/sh: 1: firefox: not found
/bin/sh: 1: google-chrome: not found
/bin/sh: 1: chromium-browser: not found
/bin/sh: 1: open: not found
Cannot retrieve auth tokens.
Failure("Error opening URL:https://accounts.google.com/o/oauth2/auth?client_id=..........
google-drive-ocamlfuse's GitHub page has instructions on "Headless Usage & Authorization", but they are written for a local machine, not for something like Paperspace.
Is there any way I can use google-drive-ocamlfuse to mount the drive?
Is there a better or simpler method to mount Google Drive on Paperspace Gradient?
Short answer:
There is no way to mount Google Drive as a filesystem on Paperspace Gradient.
Long answer:
Your error message says it cannot open a browser. You are correct: you should use headless mode (https://github.com/astrada/google-drive-ocamlfuse/wiki/Headless-Usage-&-Authorization). Basically, create an OAuth app, note down the client ID and client secret, then authenticate using:
google-drive-ocamlfuse -headless -id client-id -secret client-secret
But even if the authentication step succeeds, you will still hit an error like fuse: device not found, try 'modprobe fuse' first. This is because a Paperspace Gradient notebook runs as a container, and a container cannot perform FUSE operations unless it has the SYS_ADMIN capability (see "FUSE inside Docker"). Since we have no control over how Paperspace runs its containers, we are unable to mount a filesystem on Paperspace Gradient.
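For contrast, in a container you do control, FUSE only works when the device and capability are granted explicitly; a minimal sketch with plain Docker (the image choice is arbitrary):
docker run -it --device /dev/fuse --cap-add SYS_ADMIN ubuntu:22.04 bash
# inside this container, FUSE mounts such as google-drive-ocamlfuse are permitted
Gradient notebooks expose no way to pass these flags, which is why the mount fails there.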
However, you can use something like https://github.com/iterative/PyDrive2 to access Google Drive files.

Openshift 'oc rsync' fails because of vanished files

In Openshift 3.9, when I use 'oc rsync' to export jenkins data from my jenkins pod to the host's file system, I get the following error:
rsync warning: some files vanished before they could be transferred (code 24) at main.c(1650) [generator=3.1.2]
error: exit status 24
This seems to be a known issue with the underlying linux rsync utility and has a workaround. However, because the rsync utility is called by 'oc' in my case, I cannot figure out how to deal with this issue.
Suggestions? Thanks.
Like rsync, oc rsync has a --exclude option, which is also described in your workaround.
So using --exclude should allow you to exclude the folder with the ephemeral files:
oc rsync --exclude='/path/to/*/tmp/' POD:/remote/dir/ ./local/dir

How can I change the directory that packer runs the AMI provisioning script from /tmp to /opt

I need to change the directory where Packer runs the AMI provisioning script from /tmp/packer-shell975270284, because our instances don't allow scripts to be run from /tmp.
The script needs to run from /opt or /home/ec2-user, where it will have permission to execute.
Below is the error I am getting after the Ansible playbook ran.
==> amazon-ebs: Provisioning with shell script: /tmp/packer-shell975270284
Build 'amazon-ebs' errored: Error uploading script: scp: /tmp/script_5412.sh: Permission denied.
==> Some builds didn't complete successfully and had errors.
==> Builds finished but no artifacts were created.
You need to set remote_folder to something other than /tmp. See the shell provisioner documentation.
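A minimal sketch of the relevant block in a JSON template, assuming the shell provisioner (the script name is a placeholder):
"provisioners": [
  {
    "type": "shell",
    "script": "provision.sh",
    "remote_folder": "/home/ec2-user"
  }
]
Packer will then upload and execute the generated script from /home/ec2-user instead of a /tmp/packer-shellNNN path.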

EB: Trigger container commands / deploy scripts on configuration change

I am running my web server on Elastic Beanstalk and using Papertrail for logging. I use the official .ebextensions script to set up Papertrail during deployment, but I have a problem. The hostname used as the sender when remote_syslog uploads logs to Papertrail includes an environment variable, and while this works fine during deployment, when the 01_set_logger_hostname container command is triggered, I run into problems whenever I change environment variables by modifying the environment's configuration: an eb config call only restarts the application server and does not re-run any of the deployment scripts, including the .ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on AWS Linux 2 based environments with Configuration deployment platform hooks.
For example, you can create a shell script at .platform/confighooks/predeploy/predeploy.sh that will run on all configuration changes. Make sure the file is marked executable in git, or Elastic Beanstalk will give you a permission-denied error.
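A minimal sketch, assuming the .ebextensions file from the question still writes the helper to /tmp/set-logger-hostname.sh:
mkdir -p .platform/confighooks/predeploy
cat > .platform/confighooks/predeploy/predeploy.sh <<'EOF'
#!/bin/bash
# re-run the hostname substitution on every configuration change
/tmp/set-logger-hostname.sh
EOF
chmod +x .platform/confighooks/predeploy/predeploy.sh  # git records the executable bit on commit
After that, eb config changes run the hook, so the updated ${SOME_VARIABLE} is substituted into /etc/log_files.yml again (a remote_syslog restart can be added to the same hook).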

Zabbix external checks cannot be executed due to SELinux

I am trying to implement external checks in Zabbix 2.2. I've created a simple bash script for SSL verification, which should be executed by the Zabbix service. The script is located in the /var/lib/zabbixsrv/externalchecks directory. Even though the .sh script has 777 permissions, I still receive a message telling me:
unable to execute /var/lib/zabbixsrv/externalscripts/test.sh: Permission denied
I get the same message when I try to run the command even as root. The ls -Z /var/lib/zabbixsrv/externalscripts/test.sh output is:
-rwxrwxrwx. zabbixsrv zabbixsrv unconfined_u:object_r:default_t:s0 /var/lib/zabbixsrv/externalscripts/test.sh
There is no related message in /var/log/messages. Does anybody know how to force SELinux to allow the zabbixsrv user to execute the script, without disabling SELinux?
Which zabbix service (zabbix-server, zabbix-agent, ...) should execute the external checks script?
Did you try to set AllowRoot=1 in /etc/zabbix/zabbix_agentd.conf?
The main issue was in the /etc/fstab configuration file. Zabbix uses the /var/lib/zabbixsrv/externalscripts directory as its default scripts location, and my server has /var mounted with the rw and noexec options.
I've moved the script to a different location and changed the configuration file accordingly. Checks are working fine now.
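For anyone hitting the same symptom, a quick way to check for a noexec mount before suspecting SELinux (paths as in the question):
findmnt -no OPTIONS /var     # a "noexec" in the output explains the Permission denied
mount -o remount,exec /var   # or relocate the scripts to an exec-mounted filesystem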
Thanks everybody for contributing to this topic.