I want to make a timestamped backup of my bookmarks with rsync every time Chrome exits. How can I trigger a script right after Chrome closes?
Edit:
This is the default wrapper script that starts Chrome on Linux Mint, with the solution I'm trying to implement added near the top:
#!/bin/bash
#
# Copyright (c) 2011 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Let the wrapped binary know that it has been run through the wrapper.
export CHROME_WRAPPER="`readlink -f "$0"`"
HERE="`dirname "$CHROME_WRAPPER"`"
"/opt/google/chrome/chrome" & pid=$!
wait $pid
if ! pgrep chrome > /dev/null; then
  echo "It exited successfully"
fi
# Check if the CPU supports SSE2. If not, try to pop up a dialog to explain the
# problem and exit. Otherwise the browser will just crash with a SIGILL.
# http://crbug.com/348761
grep ^flags /proc/cpuinfo|grep -qs sse2
if [ $? != 0 ]; then
  SSE2_DEPRECATION_MSG="This computer can no longer run Google Chrome because \
its hardware is no longer supported."
  if which zenity &> /dev/null; then
    zenity --warning --text="$SSE2_DEPRECATION_MSG"
  elif which gmessage &> /dev/null; then
    gmessage "$SSE2_DEPRECATION_MSG"
  elif which xmessage &> /dev/null; then
    xmessage "$SSE2_DEPRECATION_MSG"
  else
    echo "$SSE2_DEPRECATION_MSG" 1>&2
  fi
  exit 1
fi
# We include some xdg utilities next to the binary, and we want to prefer them
# over the system versions when we know the system versions are very old. We
# detect whether the system xdg utilities are sufficiently new to be likely to
# work for us by looking for xdg-settings. If we find it, we leave $PATH alone,
# so that the system xdg utilities (including any distro patches) will be used.
if ! which xdg-settings &> /dev/null; then
  # Old xdg utilities. Prepend $HERE to $PATH to use ours instead.
  export PATH="$HERE:$PATH"
else
  # Use system xdg utilities. But first create mimeapps.list if it doesn't
  # exist; some systems have bugs in xdg-mime that make it fail without it.
  xdg_app_dir="${XDG_DATA_HOME:-$HOME/.local/share/applications}"
  mkdir -p "$xdg_app_dir"
  [ -f "$xdg_app_dir/mimeapps.list" ] || touch "$xdg_app_dir/mimeapps.list"
fi
# Always use our versions of ffmpeg libs.
# This also makes RPMs find the compatibly-named library symlinks.
if [[ -n "$LD_LIBRARY_PATH" ]]; then
LD_LIBRARY_PATH="$HERE:$HERE/lib:$LD_LIBRARY_PATH"
else
LD_LIBRARY_PATH="$HERE:$HERE/lib"
fi
export LD_LIBRARY_PATH
export CHROME_VERSION_EXTRA="stable"
# We don't want bug-buddy intercepting our crashes. http://crbug.com/24120
export GNOME_DISABLE_CRASH_DIALOG=SET_BY_GOOGLE_CHROME
# Automagically migrate user data directory.
# TODO(phajdan.jr): Remove along with migration code in the browser for M33.
if [[ -n "" ]]; then
if [[ ! -d "" ]]; then
"$HERE/chrome" "--migrate-data-dir-for-sxs=" \
--enable-logging=stderr --log-level=0
fi
fi
# Sanitize std{in,out,err} because they'll be shared with untrusted child
# processes (http://crbug.com/376567).
exec < /dev/null
exec > >(exec cat)
exec 2> >(exec cat >&2)
# Make sure that the profile directory specified in the environment, if any,
# overrides the default.
if [[ -n "$CHROME_USER_DATA_DIR" ]]; then
# Note: exec -a below is a bashism.
exec -a "$0" "$HERE/chrome" \
--user-data-dir="$CHROME_USER_DATA_DIR" "$#"
else
exec -a "$0" "$HERE/chrome" "$#"
fi
What OS are you using? In OS X, this shell script will start Chrome and then do stuff when Chrome quits - it should be easy to adapt it for your needs in any Unix-like OS.
#! /usr/bin/env sh
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" & pid=$!
wait $pid
if ! pgrep Chrome > /dev/null; then # no instances of Chrome are running
  # do stuff
fi
(based on this answer)
Edit: And this works for me in Ubuntu:
#! /usr/bin/env sh
/opt/google/chrome/chrome & pid=$!
wait $pid
if ! pgrep chrome > /dev/null; then
  # do stuff
fi
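For the timestamped rsync backup from the original question, the "# do stuff" branch could look something like the sketch below. It assumes the default profile location ~/.config/google-chrome/Default/Bookmarks and a backup directory of your choosing; adjust both paths to your setup.
#! /usr/bin/env sh
# Sketch: run Chrome, wait for it to exit, then back up the bookmarks file
# with a timestamp in the file name. Both paths below are assumptions.
BOOKMARKS="$HOME/.config/google-chrome/Default/Bookmarks"
BACKUP_DIR="$HOME/chrome-bookmark-backups"
/opt/google/chrome/chrome & pid=$!
wait $pid
if ! pgrep chrome > /dev/null; then
    mkdir -p "$BACKUP_DIR"
    rsync -a "$BOOKMARKS" "$BACKUP_DIR/Bookmarks-$(date '+%Y%m%d-%H%M%S')"
fi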
Due to the way Google Chrome spawns processes, monitoring the PID of a specific Google Chrome window (or instance) is usually a headache, if not (almost) impossible.
But there is a workaround that makes this possible: use the --user-data-dir parameter pointed at a dedicated folder, for example /tmp (--user-data-dir=/tmp).
To illustrate, below is a bash script that starts the Gitea web service over HTTP, opens it in a Google Chrome app window, and then terminates Gitea when that Chrome window is closed.
#!/bin/bash
# Start Gitea in the background and remember its PID.
gitea & GITEA_PID=$!
sleep 3
# Give Chrome its own user data dir so $CHROME_PID tracks exactly this window.
google-chrome-stable --user-data-dir=/tmp --app=http://0.0.0.0:3000/ & CHROME_PID=$!
# Block until that Chrome window is closed, then stop Gitea.
wait $CHROME_PID
kill -9 $GITEA_PID
Basically I'm using the same idea as yours; just adapt it. 🥰
[Ref(s).: https://stackoverflow.com/a/75013043/3223785 ,
https://stackoverflow.com/a/35294908/3223785 ,
https://www.ghacks.net/2013/10/06/list-useful-google-chrome-command-line-switches/ ,
https://www.reddit.com/r/firefox/comments/w61dwi/how_do_i_start_firefox_in_a_single_window_with/?utm_source=share&utm_medium=web2x&context=3 , ]
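Tying this back to the bookmarks question: the same wait-then-act pattern works for running a backup the moment the tracked Chrome instance exits. A sketch, assuming a dedicated data dir and an illustrative backup location:
#!/bin/bash
DATA_DIR="$HOME/.config/google-chrome-test"   # dedicated profile dir (assumption)
google-chrome-stable --user-data-dir="$DATA_DIR" & CHROME_PID=$!
wait $CHROME_PID
# This instance has exited; snapshot its bookmarks with a timestamped name.
mkdir -p "$HOME/bookmark-backups"
rsync -a "$DATA_DIR/Default/Bookmarks" "$HOME/bookmark-backups/Bookmarks-$(date '+%F-%H%M%S')"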
I wrote a .sh script that first downloads the source code of a page and then executes an Rscript only if the downloaded source differs from the previous one. The page is updated once a day and the URL ends with the current date. This all runs on a server, and a cron job runs the .sh every 15 minutes. So I do this:
#!/bin/bash
lwp-download "https://geodes.santepubliquefrance.fr/GC_indic.php?lang=fr&prodhash=de1751e6&indic=type_hospit&dataset=covid_hosp_type&view=map2&filters=sexe=0,jour="$(date '+%Y-%M-%d') download.html
md5 page.html > last_md5
diff previous_md5 last_md5
if[ "$?" = "!" ] ; then
Rscript myscript.R
fi
mv last_md5 previous_md5
rm page.html
The first problem is that it carries on running the R script even though download.html is downloaded and unchanged.
Plus, after the R script has run I hit the error: Syntax error: "fi" unexpected
Some issues:
You need to put a space between if and [ - or you could just do if command; then.
You calculate the MD5 sum on the wrong file.
You remove the wrong file.
Since you're probably not interested in seeing the actual diff in the MD5 sums, I suggest that you use cmp -s instead of diff.
Also note that I quoted the $(date ...) command too. It's not necessary in this particular case, but it makes linters happy.
#!/bin/bash
lwp-download "https://geodes.santepubliquefrance.fr/GC_indic.php?lang=fr&prodhash=de1751e6&indic=type_hospit&dataset=covid_hosp_type&view=map2&filters=sexe=0,jour=$(date '+%Y-%M-%d')" download.html
md5 download.html > last_md5
if ! cmp -s previous_md5 last_md5; then
    Rscript myscript.R
    mv last_md5 previous_md5
else
    rm last_md5
fi
rm download.html
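If the cron entry isn't in place yet, something along these lines runs the script every 15 minutes (the script path and log file below are assumptions):
*/15 * * * * /home/user/check_page.sh >> /home/user/check_page.log 2>&1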
You should leave a space between if and [.
#!/bin/bash
lwp-download "https://geodes.santepubliquefrance.fr/GC_indic.php?lang=fr&prodhash=de1751e6&indic=type_hospit&dataset=covid_hosp_type&view=map2&filters=sexe=0,jour="$(date '+%Y-%M-%d') download.html
md5 page.html > last_md5
diff previous_md5 last_md5
if [[ "$?" = "!" ]] ; then
Rscript myscript.R
fi
mv last_md5 previous_md5
rm page.html
Also, if you don't see any error, I'd recommend using an online linter to guide you to what's wrong:
https://www.shellcheck.net/
I have the shell script:
#!/bin/bash
export LD=$(lsb_release -sd | sed 's/"//g')
export ARCH=$(uname -m)
export VER=$(lsb_release -sr)
# Load the test function
/bin/bash -c "lib/test.sh"
echo $VER
DISTROS=('Arch'
         'CentOS'
         'Debian'
         'Fedora'
         'Gentoo')
for I in "${DISTROS[@]}"
do
    i=$(echo $I | tr '[:upper:]' '[:lower:]') # convert distro string to lowercase
    if [[ $LD == "$I"* ]]; then
        ./$ARCH/${i}.sh
    fi
done
As you can see, it should run a shell script depending on which architecture and OS it is run on. It should first run lib/test.sh before it runs the architecture- and OS-specific script. This is lib/test.sh:
#!/bin/bash
function comex {
    which $1 >/dev/null 2>&1
}
and when I run it on x86_64 Arch Linux with this x86_64/arch.sh script:
#!/bin/bash
if comex atom; then
    printf "Atom is already installed!"
elif comex git; then
    printf "Git is installed!"
fi
it returned the output:
rolling
./x86_64/arch.sh: line 3: comex: command not found
./x86_64/arch.sh: line 5: comex: command not found
so clearly the comex shell function is not loaded by the time the x86_64/arch.sh script runs. Hence I am confused about what I need to do to define the comex function so that it is correctly loaded in this architecture- and OS-dependent final script.
I have already tried using . "lib/test.sh" instead of /bin/bash -c "lib/test.sh" and I received the exact same error. I have also tried adding . "lib/test.sh" to the loop, just before the ./$ARCH/${i}.sh line. This too failed, returning the same error.
Brief answer: you need to import your functions using . or source instead of bash -c:
# Load the test function
source "lib/test.sh"
Longer answer: when you call a script with bash -c, a child process is created. This child process sees all exported variables (including exported functions) from the parent process, but not vice versa, so your script will never see the comex function. Instead, you need to include the script's code directly in the current script, and you do that with the . or source commands.
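A minimal way to see this, purely illustrative and not from the original scripts:
# A throwaway file that defines a function:
echo 'myfunc() { echo hi; }' > /tmp/child.sh
chmod +x /tmp/child.sh
/bin/bash -c /tmp/child.sh                      # runs in a child process
type myfunc 2>/dev/null || echo "myfunc not defined after bash -c"
. /tmp/child.sh                                 # runs in the current shell
type myfunc >/dev/null 2>&1 && echo "myfunc defined after sourcing"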
Part 2. After you have "sourced" lib/test.sh, your main script is able to use the comex function. But the arch scripts won't see this function because it is not exported to them. You need to export -f comex:
#!/bin/bash
function comex {
    which $1 >/dev/null 2>&1
}
export -f comex
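With both changes in place, the top of the main script would look something like this (a sketch of the fix, not a full rewrite):
#!/bin/bash
export LD=$(lsb_release -sd | sed 's/"//g')
export ARCH=$(uname -m)
export VER=$(lsb_release -sr)
# Load the test function into this shell; because lib/test.sh now does
# "export -f comex", the arch/OS-specific scripts started later also see it.
source "lib/test.sh"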
I have installed Google Chrome on Ubuntu 10.10. When I use it as a normal user, it works fine.
But when I try to use it as root, it gives the following error:
Google Chrome does not run as root
Also, when I tried the following command in a terminal, it opened Google Chrome:
google-chrome --user-data-dir
I need a permanent solution for this. Can anybody give me an idea how to do this?
Run from terminal
# google-chrome --no-sandbox --user-data-dir
or
Open the file /opt/google/chrome/google-chrome and replace
exec -a "$0" "$HERE/chrome" "$@"
with
exec -a "$0" "$HERE/chrome" "$@" --user-data-dir --no-sandbox
This works for Chrome version 49 on CentOS 6. Chrome will also show a warning.
First solution:
1. Switch off Xorg access control: xhost +
2. Now start Google Chrome as the normal user "anonymous":
sudo -i -u anonymous /opt/google/chrome/chrome
3. When done browsing, re-enable Xorg access control:
xhost -
More info: Howto run google-chrome as root
Second solution:
1. Edit the file /opt/google/chrome/google-chrome
2. Find exec -a "$0" "$HERE/chrome" "$@"
or exec -a "$0" "$HERE/chrome" "$PROFILE_DIRECTORY_FLAG" "$@"
3. Change it to
exec -a "$0" "$HERE/chrome" "$@" --user-data-dir="/root/.config/google-chrome"
Third solution:
Run Google Chrome Browser as Root on Ubuntu Linux systems
Go to /opt/google/chrome.
Open google-chrome.
Append the current home as the data directory. Replace this:
exec -a "$0" "$HERE/chrome" "$@"
With this:
exec -a "$0" "$HERE/chrome" "$@" --user-data-dir=$HOME
For reference, visit this site: "How to run chrome as root user in Ubuntu."
I followed these steps:
Step 1. Open the /etc/chromium/default file with an editor.
Step 2. Replace or add this line:
CHROMIUM_FLAGS="--password-store=detect --user-data-dir=/root/chrome-profile/"
Step 3. Save it.
That's it... start the browser.
I tried this with Kali Linux, Debian, CentOS 7, and Ubuntu.
(Permanent Method)
Edit the file with any text editor (I used Leafpad). Run this in your terminal: leafpad /opt/google/chrome/google-chrome
Find exec -a "$0" "$HERE/chrome" "$@" (normally it's near the end of the file)
or exec -a "$0" "$HERE/chrome" "$PROFILE_DIRECTORY_FLAG" "$@"
and change it to exec -a "$0" "$HERE/chrome" "$@" --no-sandbox --user-data-dir
(Just Simple Method)
Run This command in your terminal
$ google-chrome --no-sandbox --user-data-dir
Or
$ google-chrome-stable --no-sandbox --user-data-dir
Just replace the following line
exec -a "$0" "$HERE/chrome" "$@"
with
exec -a "$0" "$HERE/chrome" "$@" --user-data-dir
and everything will work.
It no longer suffices to start Chrome with --user-data-dir=/root/.config/google-chrome. It simply prints Aborted and ends (Chrome 48 on Ubuntu 12.04).
You need actually to run it as a non-root user. This you can do with
gksu -wu chrome-user google-chrome
where chrome-user is some user you've decided should be the one to run Chrome. Your Chrome user profile will be found at ~chrome-user/.config/google-chrome.
BTW, the old hack of changing all occurrences of geteuid to getppid in the chrome binary no longer works.
STEP 1: cd /opt/google/chrome
STEP 2: edit the google-chrome file: gedit google-chrome
STEP 3: find this line: exec -a "$0" "$HERE/chrome" "$@".
Usually this line is near the end of the google-chrome file.
Comment it out like this: #exec -a "$0" "$HERE/chrome" "$@"
STEP 4: add a new line in the same place:
exec -a "$0" "$HERE/chrome" "$@" --user-data-dir
STEP 5: save the google-chrome file and quit. Then you can use Chrome as the root user. Enjoy!
Chrome can run as root (remember to use gksu when doing so) so long as you provide it with a profile directory.
Rather than typing in the profile directory every time you want to run it, create a new bash file (I'd name it something like start-chrome.sh):
#!/bin/bash
google-chrome --user-data-dir="/root/chrome-profile/"
Remember to call that script with root privileges!
$ gksu /root/start-chrome.sh
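The script also has to be executable before gksu can run it (same assumed path as above; do this as root):
chmod +x /root/start-chrome.sh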
I have a Rails app set up using JRuby with Puma as the web server. Puma doesn't daemonize on its own, so I wrapped it in a bash script to handle generating a pid file (as described in the Monit FAQ). The script is below:
#!/bin/bash
APP_ROOT="/home/user/public_html/app"
export RAILS_ENV=production
export JRUBY_OPTS="--1.9"
export PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH
case $1 in
  start)
    echo $$ > $APP_ROOT/puma.pid;
    cd $APP_ROOT;
    exec 2>&1 puma -b tcp://127.0.0.1:5000 1>/tmp/puma.out
    ;;
  stop)
    kill `cat $APP_ROOT/puma.pid` ;;
  *)
    echo "usage: puma {start|stop}" ;;
esac
exit 0
This works from the command line, and it even works if I execute it after running the following to simulate the monit shell:
env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh
The relevant monitrc lines are below:
check process puma with pidfile /home/user/public_html/app/puma.pid
start program = "/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh start"
stop program = "/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh stop"
The monit log shows monit constantly trying to start Puma; it even gets as far as regenerating a new PID, but it is never able to actually start Puma. Every time I run this script from any other context I can think of, it works, except from monit.
I managed to get this to work after reading this post: running delayed_job under monit with ubuntu
For some reason, changing my monitrc to use the following syntax made this work. I have no idea why:
start program = "/bin/su - user -c '/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh start'"
stop program = "/bin/su - user -c '/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh stop'"
I am using GeForce 8400M GS on Ubuntu 10.04 and I am learning CUDA programming. I am writing and running few basic programs. I was using cudaMalloc, and it kept giving me an error until I ran the code as root. However, I had to run the code as root only once. After that, even if I run the code as normal user, I do not get an error on malloc. What's going on?
This is probably due to your GPU not being properly initialized at boot. I've come across this problem when using Ubuntu Server and other installations where an X server isn't being started automatically. Try the following to fix it:
Create a directory for a script to initialize your GPUs. I usually use /root/bin. In this directory, create a file called cudainit.sh with the following code in it (this script came from the Nvidia forums).
#!/bin/bash
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
# Count the number of NVIDIA controllers found.
N3D=`/usr/bin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
NVGA=`/usr/bin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
N=`expr $N3D + $NVGA - 1`
for i in `seq 0 $N`; do
mknod -m 666 /dev/nvidia$i c 195 $i;
done
mknod -m 666 /dev/nvidiactl c 195 255
else
exit 1
fi
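The script also needs to be executable so that rc.local can run it (assuming the path chosen above):
chmod +x /root/bin/cudainit.sh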
Now we need to make this script run automatically at boot. Edit /etc/rc.local to look like the following.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
# Init CUDA for all users
#
/root/bin/cudainit.sh
exit 0
Reboot your computer and try to run your CUDA program as a regular user. If I'm right about what the problem is, then it should be fixed.
To get this to work with Ubuntu 14.04, I followed https://devtalk.nvidia.com/default/topic/699610/linux/334-21-driver-returns-999-on-cuinit-cuda-/ to add nvidia-uvm to /etc/modules and to add a custom udev rule. Create /etc/udev/rules.d/70-nvidia-uvm.rules with this line:
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/bin/mknod -m 666 /dev/nvidia-uvm c $(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0;'"
I don't understand why sudo modprobe nvidia-uvm works to create a proper /dev/nvidia-uvm (as does sudo cuda_program) but the /etc/modules listing requires the udev rule.
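For the /etc/modules part mentioned above, appending the module name is enough; a one-liner sketch, run as root:
echo nvidia-uvm >> /etc/modules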