I'm writing a small script to open mailto links from web pages in a Google Chrome app window:
so far I have this:
#!/bin/sh
notify-send "Opening Gmail" "`echo $1`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
which works nicely. However, I'd like to add the email recipient to the notification, something like this, but I need a regex to extract the address from the mailto link, which might also contain a subject and other parameters:
#!/bin/sh
$str = preg_replace('#<a.+?href="mailto:(.*?)".+?</a>#', "$1", $str);
notify-send "Opening Gmail" "`echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
This does not work. Any ideas?
UPDATE: here's the working code:
#!/bin/sh
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
notify-send "Opening Gmail" "to: `echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
google-chrome -app="https://mail.google.com/mail/?extsrc=mailto&url=`echo $1`"
If you write it like this, it's not shell :) That preg_replace call is PHP.
Can you provide a sample string to run the regex on? Basically it will be a sed invocation that cuts away everything but the address. A mail address can, per the RFC, be quite complicated, so the simple approach will work in most cases, but not every time.
Try to start from something like
sed 's/.*mailto:\([^?]*\)?.*/\1/'
So you might want to use it like this:
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
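For example, with a typical mailto link (the address and subject below are just made-up sample values):
link='mailto:someone@example.com?subject=Hello&body=Hi%20there'
str=$(echo "$link" | sed 's/.*mailto:\([^?]*\)?.*/\1/')
echo "$str"    # prints: someone@example.com
Note that if the link has no '?' part at all, the pattern does not match and sed passes the whole string through unchanged, which is one of the cases where the simple approach falls short.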
Great! I took your script and made some changes so it works better for me; look:
#!/bin/sh
str=$(echo $1|sed 's/.*mailto:\([^?]*\)?.*/\1/')
notify-send "Abrindo Gmail" "to: `echo $str`" -i /usr/local/share/icons/hicolor/48x48/apps/google-chrome.png -t 5000
chromium-browser "https://mail.google.com/mail/?view=cm&fs=1&tf=1&source=mailto&to=$str"
Just a quick question to solve an issue I've been facing for days now: how do I get a wget JSON response into a shell variable?
I have so far a wget command like this:
wget "http://IP:PORT/webapi/auth.cgi?account=USER&passwd=PASSWD"
The server response is normally something like:
{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}
What I'd like to do is grab the sid value into a variable (as it is used as a login ticket), and also the success value, in order to ensure that the command has been executed correctly...
I think it is a very easy command to build, but I've never worked with wget/HTTP responses in shell commands...
Thanks a lot for your help!
EDIT: Thanks for your help. I gave both answers a try, but I am getting the same error message (whatever I do):
--2022-07-16 14:21:38-- http://xxxxxxxx:port/webapi/auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PWD&session=SurveillanceStation&format=sid
Connecting to 192.168.1.100:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PASSWD&session=SurveillanceStation&format=sid: Permission denied
Cannot write to `auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PASSWD&session=SurveillanceStation&format=sid' (Permission denied).
The annoying thing: executing the URL from a web browser works just fine... :/
You can first store the result of the wget command in a variable and then use it (the -qO- is needed so wget writes the response to stdout instead of a file):
VAR=$(wget -qO- "http://IP:PORT/webapi/auth.cgi?account=USER&passwd=PASSWD")
and then use jq to extract the values from the JSON:
sid=$(echo "$VAR" | jq .data.sid)
success=$(echo "$VAR" | jq .success)
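Note that jq keeps the JSON string quotes by default; if you want the bare values, jq's -r (raw output) flag strips them:
sid=$(echo "$VAR" | jq -r .data.sid)
success=$(echo "$VAR" | jq -r .success)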
If you have a problem with the execution of wget, you can try something like:
wget -O output_file 'http://xxxxxxxx:port/webapi/auth.cgi?api=SYNO.API.Auth&method=Login&version=3&account=USER&passwd=PWD&session=SurveillanceStation&format=sid'
and then set variables:
sid=$(jq .data.sid output_file)
success=$(jq .success output_file)
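Putting it together, you could check success before using the sid; a minimal sketch (the URL and file name are the same placeholders as above, and the messages are just examples):
wget -qO output_file 'http://IP:PORT/webapi/auth.cgi?account=USER&passwd=PASSWD'
success=$(jq -r .success output_file)
if [ "$success" = "true" ]; then
    sid=$(jq -r .data.sid output_file)
    echo "Logged in, sid=$sid"
else
    echo "Login failed" >&2
fi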
I do not know why I am facing this Permission denied error, so I tried saving the cookie to a dedicated folder... and it works just fine :)
The final command looks like:
VAR=$(wget -q --keep-session-cookies --save-cookies "/var/tmp/cookie_tmp" -O- "http://IP:PORT/webapi/auth.cgi?api=SYNO.API.Auth&method=login&version=1&account=USER&passwd=PWD&session=SurveillanceStation");
Thanks for your help (I learned a lot about sed ;) )
So this can be done using the stream editor, sed. There is a lot to learn, but for this post here is an idea of the code (again with -qO- so wget prints the response to stdout):
sid=$(wget -qO- <your url> | sed 's/.*sid":"\(.*\)"},.*/\1/')
success=$(wget -qO- <your url> | sed 's/.*success":\(.*\)}/\1/')
This will create two variables, $sid and $success.
You can learn more about sed in depth here.
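To see what those expressions do, you can run them against the sample response from the question:
echo '{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}' | sed 's/.*sid":"\(.*\)"},.*/\1/'
# prints: 9O4leaoASc0wgB3J4N01003
echo '{"data":{"sid":"9O4leaoASc0wgB3J4N01003"},"success":true}' | sed 's/.*success":\(.*\)}/\1/'
# prints: true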
Hope this helped!
I simply can't find an answer to why this is happening when I attempt to connect to Gmail from the command line.
I am using sendEmail.exe in a bat file
-f xxxxxxxxxxxxxxxxxx@gmail.com -t xxxxxxxxxxxxx@gmail.com -s smtp.gmail.com:587 -xu xxxxxxxxxxxxxx@gmail.com -xp xxxxxxxxxxxxxxxxxx -u "Test Email" -m "Testing Windows Task Scheduler" -o tls=yes
It is the 'bad protocol tcp' part I cannot find an answer for anywhere.
This is happening only on one system, but it works on another.
I've had similar problems with other tools used for the same purpose. Please check that you have enabled the 'less secure apps' setting on your Google account.
I hope this is useful!
I have a situation where only root can mailx, and only ops can restart the process. I want to make an automated script that both restarts the process and sends an email about doing so.
When I try to do this using a function, the function is "not found".
I had something like:
#!/usr/bin/bash
function restartprocess {
/usr/bin/processcontrol.sh start
}
export -f restartprocess
su - ops -c "restartprocess"
mailx -s "process restarted" myemail.mydomain.com < emailmessage.txt
exit 0
It told me that the function was not found. After some troubleshooting, it turned out that the ops user's default shell is ksh.
I tried changing the script to run in ksh, and changing "export -f" to "typeset -xf", and still the function was not found. Like:
ksh: exportfunction not found
I finally gave up and just called the script (that was inside the function) directly, and that worked. It was like:
su - ops -c "/usr/bin/processcontrol.sh start"
(This is all of course a simplification of the real script).
Given that the ops user's default shell is ksh, and I can't change that or modify sudoers, is there a way to export a function such that I can su as ops (and I need to run ops's profile) and execute that function?
I made sure ops user had permission to the directory of the script I wanted it to execute, and permission to run that script.
Any education about this would be appreciated!
There are many restrictions on exporting functions, especially combined with su - ... across different accounts and different shells.
Instead, turn your script inside out and put all of the commands that are to be run inside a function in the calling shell.
Something like this (both bash and ksh):
#!/usr/bin/bash
function restartprocess {
/bin/su - ops -c "/usr/bin/processcontrol.sh start"
}
if restartprocess; then
mailx -s "process restarted" \
myemail@mydomain.com < emailmessage.txt
fi
exit 0
This will hide all of the /bin/su processing inside the restartprocess function, and can be expanded at will.
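For example, a sketch of one possible expansion that also captures the output for the mail body (the log file path here is just an example):
#!/usr/bin/bash
function restartprocess {
    # capture stdout and stderr so the mail can include them
    /bin/su - ops -c "/usr/bin/processcontrol.sh start" > /tmp/restart.log 2>&1
}

if restartprocess; then
    mailx -s "process restarted" myemail@mydomain.com < /tmp/restart.log
else
    mailx -s "process restart FAILED" myemail@mydomain.com < /tmp/restart.log
fi
exit 0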
I am trying to write an expect script that will do the following..
open up 13 terminal windows (gnome-terminal, xterm etc)
each window connects to a terminal server via ssh (ssh InReach@10.1.6.254)
and is provided the password via expect.
I can get this to work fine in a single window. The problem I am having, though, is getting the input passed over to each window.
For instance, I can do
set timeout -1
spawn gnome-terminal -x ssh InReach@10.1.6.254
inside of a while loop and get my 13 windows, but I would like each one to be logged in automatically via expect.
You can try a slightly different approach. Instead of opening the terminal windows in the expect script, open them in a basic shell script, and have each terminal run an expect script to start a single SSH session.
So the expect script could be as simple as this:
#!/usr/bin/expect -f
spawn ssh InReach@10.1.6.254
# ... provide password ...
interact
And the shell script:
#!/bin/sh
for a in `seq 1 13`; do
gnome-terminal -x ./expect_script
done
When you spawn, you need to cache the $spawn_id value which is set by the attempt.
e.g.
if [catch "spawn ssh -l mtc $ub1_ip_address" ub1_pid] {
Log $ERROR "Unable to spawn ssh to Xubuntu.\n$ub1_pid\n"
return 0
}
set stored_id $spawn_id
To send a command to one terminal session in particular, do
send -i $stored_id "command"
Then, before you expect output from each session, you must first do
expect {
-i $stored_id
[ ... your regexes, globs, etc. ... ]
}
You can find some additional info at http://wiki.tcl.tk/11583
I would also suggest making use of gnome-terminal's ability to specify multiple tabs, including an indication of which is the currently-active one, and a command to be executed. gnome-terminal --help-all is helpful (no pun intended).
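For example, something along these lines (a sketch that assumes an older gnome-terminal where -e can be given per tab; check gnome-terminal --help-all for the syntax your version supports):
# open several tabs in one window, each running the expect script from the other answer
gnome-terminal \
  --tab -e "./expect_script" \
  --tab -e "./expect_script" \
  --tab -e "./expect_script"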
I want to be able to display on my web page whether or not a process is running. Both run on the same system (Ubuntu server).
Basically, if something like the command ps -u game | grep java returns something, I want the site to display something like "Game Server Online", else "Offline."
I figure I could redirect the grep output to a file every 5 minutes and have a script on the main page read the file content as a string to determine what to print. I feel as though there must be a much better way to do this, however. What else could I do, and which scripting language would be best for this task?
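Something like this shell script run from cron is roughly what I have in mind so far (the status file path and the 5-minute interval are just placeholders):
#!/bin/sh
# crontab entry (every 5 minutes): */5 * * * * /path/to/check_game.sh
# write a status string to a file that the web page can read
if ps -u game | grep -q java; then
    echo "Game Server Online" > /var/www/game_status.txt
else
    echo "Offline" > /var/www/game_status.txt
fi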
If php is available, you could do something like this inline in your page:
<?php
// if grep finds a matching java process, shell_exec returns its ps line(s); otherwise it returns nothing
$output = shell_exec('ps -u game | grep java');
if (!empty($output)) {
    echo "Server running";
} else {
    echo "Server not running";
}
?>
What about a simple web service call?
Calling a web service would ensure both that the server is up and that the process is running.