/usr/share/google/safe_format_and_mount does not exist on new google cloud instance - google-compute-engine

I'm trying to run my first-ever Google VM, following a tutorial which says to run the command
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/google-minecraft-disk /home/minecraft
But, there is no /usr/share/google directory on my VM, so no safe_format_and_mount.
This is what I see in /usr/share:
How do I format the volume I created? Or, where is this safe_format_and_mount executable?

To format the drive I used:
sudo mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-minecraft-disk

log warn "====================================================================="
log warn "WARNING: safe_format_and_mount is deprecated."
log warn "See https://cloud.google.com/compute/docs/disks/persistent-disks"
log warn "for additional instructions."
log warn "====================================================================="
Source: https://gist.github.com/dcgoss/254262db0dfd151ecf51
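After formatting, the disk still needs to be mounted. A minimal sketch following the current persistent-disk docs, assuming the same disk name and mount point as the tutorial:
# Create the mount point and mount the freshly formatted disk
sudo mkdir -p /home/minecraft
sudo mount -o discard,defaults /dev/disk/by-id/google-minecraft-disk /home/minecraft
# Optionally persist the mount across reboots by appending an fstab entry
echo "UUID=$(sudo blkid -s UUID -o value /dev/disk/by-id/google-minecraft-disk) /home/minecraft ext4 discard,defaults,nofail 0 2" | sudo tee -a /etc/fstab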

Related

Error in running the shutdown-script for instance of google compute engine: gsutil failed to copy file to google cloud storage

I use a shutdown-script to back up the files on an instance before it is shut down.
In this shutdown-script, the gsutil tool is used to send files to a bucket in Google Cloud Storage:
/snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
It worked well for a long time, but recently the error below started occurring.
If I run the command manually, it works well. It seems that there is something wrong with systemd's job management.
Could anyone give me some hint?
INFO shutdown-script: /snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
Apr 25 03:00:41 instance-XXXXXXXXXXX systemd[1]: Requested transaction contradicts existing jobs: Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).
Apr 25 03:00:41 instance-XXXXXXXXXXX shutdown-script: INFO shutdown-script: internal error, please report: running "google-cloud-sdk.gsutil" failed: cannot create transient scope: DBus error "org.freedesktop.systemd1.TransactionIsDestructive": [Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).]
Update gsutil with the -f option:
gsutil update -f
If the above command doesn't work, try the command below:
sudo apt-get update && sudo apt-get --only-upgrade install google-cloud-sdk
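Once the Cloud SDK is installed from apt, one workaround (an assumption, not part of the original answer) is to point the shutdown script at the apt-installed /usr/bin/gsutil instead of the snap-confined /snap/bin/gsutil, since it is the snap wrapper that needs to create a transient systemd scope while poweroff.target is queued:
#!/bin/bash
# Hypothetical backup script: call the apt-installed gsutil directly
# so no snap transient scope has to be created during shutdown.
/usr/bin/gsutil -m rsync -d -r /home/ganjin/notebook \
    gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook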
Update the guest environment and try to shut down the instance. Use the link below as a reference to update the guest environment.
https://cloud.google.com/compute/docs/images/install-guest-environment#update-guest
If you are still facing issues, do a forceful shutdown:
sudo poweroff -f

How do you add chmod +x permission to an AWS Elastic Beanstalk platform hook?

Context
I am using Elastic Beanstalk to deploy a very simple test application. I have several packages I want to install using apt. I have included a 01_installations.sh script with the installations in the .platform/hooks/prebuild directory. When I zip my application and deploy to Elastic Beanstalk, the logs show that the prebuild hook is detected, but it fails to run because it does not have execute permission.
2020/08/12 21:03:46.674234 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2020/08/12 21:03:46.674256 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2020/08/12 21:03:46.674296 [INFO] Following platform hooks will be executed in order: [01_installations.sh]
2020/08/12 21:03:46.674302 [INFO] Running platform hook: .platform/hooks/prebuild/01_installations.sh
2020/08/12 21:03:46.674482 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPreBuildHooks]. Stop running the command. Error: Command .platform/hooks/prebuild/01_installations.sh failed with error fork/exec .platform/hooks/prebuild/01_installations.sh: permission denied
Question
My understanding is that permissions were denied because I did not add chmod +x to make the .sh file executable. As the AWS documentation on platform hooks states: "Use chmod +x to set execute permission on your hook files." (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html). My question is: how do I do this?
I simply have the .sh file in a directory. I do not call it from anywhere else. Is there a simple step I'm missing? The AWS documentation makes it seem like it should be straightforward.
Previous Attempts
Things I have tried:
Adding .ebextensions
Attempt: Create a .config file in the .ebextensions directory with the command below, which should make the .sh file executable.
Result: The same error occurs. The Elastic Beanstalk logs do not indicate that the .config was processed at all.
container_commands:
  01_chmod1:
    command: "chmod +x .platform/hooks/prebuild/01_installations.sh"
Changing the name of the .sh file
Attempt: Rename the .sh file to "chmod +x 01_installations.sh", as suggested by an AWS user (forums link below), and remove the .ebextensions config.
Result: The same error occurs.
[RunAppDeployPreBuildHooks]. Stop running the command. Error: Command .platform/hooks/prebuild/chmod +x 01_installations.sh failed with error fork/exec .platform/hooks/prebuild/chmod +x 01_installations.sh: permission denied
I have reviewed the ideas here, but none of them actually include complete enough examples to follow:
https://forums.aws.amazon.com/thread.jspa?messageID=942515
https://github.com/aws/elastic-beanstalk-roadmap/issues/15
Make sure to make your file executable in git
chmod +x path/to/file
git update-index --chmod=+x path/to/file
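Applied to this project, that might look like the following (the commit message is just an example; redeploy afterwards with your usual workflow):
git add .platform/hooks/prebuild/01_installations.sh
git update-index --chmod=+x .platform/hooks/prebuild/01_installations.sh
git commit -m "Make prebuild hook executable"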
Reference: How to add chmod permissions to file in GIT?
Normally you set the permissions on your local workstation, before you zip your deployment package.
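For example, a minimal sketch of that local workflow (the bundle name is an assumption):
chmod +x .platform/hooks/prebuild/01_installations.sh
# zip from inside the project root so .platform/ ends up at the root of the bundle
zip -r app-bundle.zip . -x '*.git*'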
However, if you want to do this on the EB instance, you can't use container_commands, because container_commands execute after the prebuild hooks. Instead you should try using commands:
The prebuild files run after running commands found in the commands section of any configuration file and before running Buildfile commands.
This worked for me; after updating the permission, commit the changes.
Also, the commands in the .config file will not work, as those commands run pre-deployment and hence cannot find the path to the .sh file.

Pm2 startup issue with CENTOS 8 / SELinux

Please, do you know how to resolve this issue?
I have searched everywhere without finding a solution.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I have executed this command: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 version: 4.4.0
Node version: 12.18.0
CentOS version: 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, this start attempt would be systemctl start pm2-root):
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself; to get them, I had to run the command journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PIDFile. You'll have to make two changes, one to tell pm2 where to place the PIDFile, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines:
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
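After editing the unit file (for example /etc/systemd/system/pm2-root.service; the exact path depends on how pm2 startup generated it), reload systemd and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart pm2-root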
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than go through audit2allow, the following should work, although you then have to compile it to a usable .pp file yourself.
module pm2 1.0;
require {
    type user_home_t;
    type init_t;
    class file read;
}
#============= init_t ==============
allow init_t user_home_t:file read;
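To compile and load that module, something like this should work (note that the AVC above names admin_home_t and the open permission, so the types and permissions in the .te file may need to be adjusted accordingly):
checkmodule -M -m -o pm2.mod pm2.te
semodule_package -o pm2.pp -m pm2.mod
sudo semodule -i pm2.pp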

Orion Context Broker functional test failure

I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then ran the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint on what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log I understood the mistake I made: I was running the functional tests with sudo, so the secure_path was being used instead of my PATH variable.
In the end, running the functional tests with the command below solved the issue for me.
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file by:
$ sudo visudo
and modifying the secure_path value.
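For example, a hypothetical secure_path line that appends the directory where the scripts were installed (assuming make install_scripts INSTALL_DIR=~ placed accumulator-server.py in $HOME/bin; adjust the path to your setup):
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/<user>/bin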

Permission denied errors when creating app with custom OpenShift cartridge

I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts are running I've noticed "Permission denied" errors when they try to access anything in the cartridge usr dir. In the node platform.log I see the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink I originally thought it was related to that, but now I think it's related to selinux (which I don't know much about). If I do a "ls -Z" on my app's cartridge dir the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004" but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", so it doesn't match what's in the above command.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running "chcon -R -u system_u -t bin_t usr/" before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I feel like it might be an oo-admin-cartridge bug. I would expect it to massage the SELinux permissions instead of using whatever I provide.
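In other words, run something like this from the cartridge directory before installing it with oo-admin-cartridge (paths assumed to match the layout described above):
# Inspect the current SELinux labels on the cartridge's usr dir
ls -Z usr/
# Relabel so the contents are readable/executable under the gear's runcon context
chcon -R -u system_u -t bin_t usr/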