Export a shell function to su as a user whose default shell is ksh

I have a situation where only root can run mailx, and only ops can restart the process. I want to make an automated script that both restarts the process and sends an email about doing so.
When I try this using a function, the function is "not found".
I had something like:
#!/usr/bin/bash
function restartprocess {
    /usr/bin/processcontrol.sh start
}
export -f restartprocess
su - ops -c "restartprocess"
mailx -s "process restarted" myemail@mydomain.com < emailmessage.txt
exit 0
It told me that the function was not found. After some troubleshooting, it turned out that the ops user's default shell is ksh.
I tried changing the script to run in ksh, and changing "export -f" to "typeset -xf", and the function was still not found. The error was something like:
ksh: exportfunction not found
I finally gave up and just called the script (the one the function wrapped) directly, and that worked. It was like:
su - ops -c "/usr/bin/processcontrol.sh start"
(This is all of course a simplification of the real script).
Given that the ops user's default shell is ksh, and that I can't change that or modify sudoers, is there a way to export a function such that I can su as ops (I need ops's profile to run) and execute that function?
I made sure the ops user had permission on the directory of the script I wanted it to execute, and permission to run that script.
Any education about this would be appreciated!

There are many restrictions on exporting functions, especially
combined with su - ... across different accounts and different shells:
export -f is a bash-only mechanism, and only a child bash shell will
import the exported function, so the ksh login shell started by
su - ops never sees it.
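You can see the restriction with a minimal sketch run from any bash shell (the hello function is made up for illustration):
hello() { echo hi; }
export -f hello
bash -c hello    # works: a child bash imports the exported function
ksh -c hello     # fails: ksh does not import bash-exported functions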
Instead, turn your script inside out and put all of the commands
that are to be run inside a function in the calling shell.
Something like this (works whether the calling script is bash or ksh):
#!/usr/bin/bash
function restartprocess {
    /bin/su - ops -c "/usr/bin/processcontrol.sh start"
}

if restartprocess; then
    mailx -s "process restarted" \
        myemail@mydomain.com < emailmessage.txt
fi
exit 0
This will hide all of the /bin/su processing inside the restartprocess function, and can be expanded at will.
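For instance, a lightly expanded version might capture the exit status and log it (a sketch; the log file path is an assumption):
function restartprocess {
    /bin/su - ops -c "/usr/bin/processcontrol.sh start"
    rc=$?
    echo "$(date): processcontrol.sh start exited with status $rc" >> /var/log/restartprocess.log
    return $rc
}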

Related

How to deploy multiple functions using gcloud command line?

I want to deploy multiple cloud functions. Here is my index.js:
const { batchMultipleMessage } = require('./gcf-1');
const { batchMultipleMessage2 } = require('./gcf-2');
module.exports = {
  batchMultipleMessage,
  batchMultipleMessage2
};
How can I use gcloud beta functions deploy xxx to deploy these two functions at one time?
Option 1:
For now, I wrote a deploy.sh to deploy these two cloud functions at one time.
TOPIC=batch-multiple-messages
FUNCTION_NAME_1=batchMultipleMessage
FUNCTION_NAME_2=batchMultipleMessage2
echo "start to deploy cloud functions\n"
gcloud beta functions deploy ${FUNCTION_NAME_1} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
gcloud beta functions deploy ${FUNCTION_NAME_2} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
It works, but if the gcloud command line supported deploying multiple cloud functions in one call, that would be the best way.
Option 2:
Use the Serverless framework: https://serverless.com/
If anyone is looking for a better/cleaner/parallel solution, this is what I do:
# deploy.sh
# store deployment command into a string with character % where function name should be
deploy="gcloud functions deploy % --trigger-http"
# find all functions in index.js (looking at exports.<function_name>) using sed
# then pipe the function names to xargs
# then instruct that % should be replaced by each function name
# then open 20 processes where each one runs one deployment command
sed -n 's/exports\.\([a-zA-Z0-9\-_#]*\).*/\1/p' index.js | xargs -I % -P 20 sh -c "$deploy;"
You can also change the number of processes passed to the -P flag. I chose 20 arbitrarily.
This was super easy and saves a lot of time. Hopefully it will help someone!
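Note that the sed pattern above matches exports.name style exports; with the module.exports = { ... } style from the question you would need to adjust the pattern or simply loop over the function names yourself. A rough equivalent along those lines (function names and topic taken from the question, deployments run in parallel as background jobs):
#!/bin/sh
TOPIC=batch-multiple-messages
for FN in batchMultipleMessage batchMultipleMessage2; do
    gcloud beta functions deploy "$FN" \
        --trigger-resource "$TOPIC" \
        --trigger-event google.pubsub.topic.publish &
done
wait   # block until every background deployment has finished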

How can I pipe to a bash alias from an npm script?

I have an alias in my .bashrc for bunyan:
$ alias bsh
alias bsh='bunyan -o short'
This line runs fine in bash:
$ coffee src/index.coffee | bsh
But if I put the same thing in 'scripts':
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "coffee": "coffee src/index.coffee | bsh"
},
and then run npm run coffee, it fails:
> coffee src/index.coffee | bsh
sh: bsh: command not found
events.js:141
throw er; // Unhandled 'error' event
^
Error: write EPIPE
at exports._errnoException (util.js:870:11)
at WriteWrap.afterWrite (net.js:769:14)
So at random I tried putting in || instead of |, and that ran without the error. I can't figure out why, though; I don't have to escape pipe characters in JSON as far as I know.
However, it doesn't actually pipe the output to the bsh alias (with ||, bsh only runs if the coffee command fails, and nothing is piped to it).
The actual fix is to use "coffee":"coffee src/index.coffee | bunyan -o short" -- get rid of the alias completely.
How can I use a bash alias in an npm script?
You can create a function instead of an alias.
function bsh() {
    bunyan -o short
}
export -f bsh
The export will make it available to child processes.
So I had a whole response typed up about using
. ~/.bash_aliases && coffee src/index.coffee | bsh
But it turns out that aliases are barely, if at all, supported in bash scripts. From what I have read, aliases are deprecated in favor of functions...
See this discussion for what convinced me to use functions instead of aliases. I tried for an hour or two to get aliases to work by testing with /bin/bash -c as well as npm run, with no luck. However, using a function as suggested by Diego worked immediately and without problems.
I am including this even though the question is already marked as answered in case someone as stubborn as me winds up here from google and decides to try to make aliases work instead of just using a function.
However, I did run into a problem specifically when trying to use this with npm scripts. Even with the export -f, my functions weren't recognized - I still had to manually include the bash_aliases file, and even then, I got an error about the -f option for export.
In order to actually get this working, I had to take out the function export line and manually include the bash_aliases file...
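If you do go the source-the-file route, the whole pipeline has to run under bash rather than the sh that npm uses by default, and the sourced file has to define bsh as a function rather than an alias. A sketch of what that can look like in package.json, assuming ~/.bash_aliases is the file that defines the bsh function:
"scripts": {
  "coffee": "bash -c '. ~/.bash_aliases && coffee src/index.coffee | bsh'"
}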

Export .MWB to working .SQL file using command line

We recently installed a server dedicated to unit tests, which deploys updates automatically via Jenkins when commits are made, and sends mail when a regression is noticed. This requires our database to always be up to date.
Since our database schema reference is the MWB file, we added some scripts to the deploy step which export the .mwb to a .sql (using Python). This worked fine... but still has some issues.
Our main concern is that the functions attached to the schema are not exported at all, which makes the DB unusable.
We'd like to hack into the Python code to make it export scripts... but didn't find enough information about it.
Here is the only piece of documentation we found. It's not very clear to us, and we didn't find any information about exporting scripts.
All we found is that a db_Script class exists. We don't know where we can find its instances in our execution context, nor whether they can be exported easily. Did we miss something?
For reference, here is the script we currently use for the mwb to sql conversion (mwb2sql.sh).
It calls MySQL Workbench from the command line (we use a dummy X server to flush graphical output).
What we need to complete is the Python part passed in our command-line call to Workbench.
# generate sql from mwb
# usage: sh mwb2sql.sh {mwb file} {output file}
# prepare: set env MYSQL_WORKBENCH
set -e
if [ "$MYSQL_WORKBENCH" = "" ]; then
    export MYSQL_WORKBENCH="/usr/bin/mysql-workbench"
fi
export INPUT=$(cd $(dirname $1);pwd)/$(basename $1)
export OUTPUT=$(cd $(dirname $2);pwd)/$(basename $2)
"$MYSQL_WORKBENCH" \
    --open "$INPUT" \
    --run-python "
import os
import grt
from grt.modules import DbMySQLFE as fe

# take the first physical model in the document and dump its CREATE statements
c = grt.root.wb.doc.physicalModels[0].catalog
fe.generateSQLCreateStatements(c, c.version, {})
fe.createScriptForCatalogObjects(os.getenv('OUTPUT'), c, {})" \
    --quit-when-done
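For completeness, this is roughly how such a script is invoked on a headless box (a usage sketch; the file names and the use of xvfb-run as the dummy X server are assumptions):
export MYSQL_WORKBENCH=/usr/bin/mysql-workbench
xvfb-run sh mwb2sql.sh model.mwb model.sql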

Communicating with interactive processes via Ruby popen

I've been messing around with IO#popen and different programs, and having some trouble with interactive processes.
Here's a stripped down version of the script:
def test(command, string)
  IO.popen(command, 'a+') do |pipe|
    puts "Prompt: #{pipe.read(5)}" # Just to show whether data is read in
    pipe.puts string
  end
end
I'm seeing various behavior with a few different interactive processes, and trying to understand why.
$ test('pt-kill --user user --ask-pass --print', 'password')
=> This successfully reads in the prompt, and the password is successfully written
to the script. Works as desired. (This is a perl script from Percona)
$ test('telnet', 'quit')
=> Blocks indefinitely trying to read the prompt. In the process of hacking around,
found that calling 'pipe.close_write' prior to the read would allow the read to
complete. Why?
$ test('mysql -u user -p -e "SELECT 1 FROM DUAL"', 'password')
=> Echoes full prompt to the screen, but is still blocking on the first read.
Adding a 'pipe.close_write' does nothing.
I've been trying to understand the differences, but am at a loss. Anyone have an explanation?
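For what it's worth, the mysql case can be reproduced outside Ruby: started with -p and no password on the command line, the client reads the password from the controlling terminal rather than from standard input, so a pipe never satisfies the prompt. A quick shell check (user and query are placeholders):
echo 'password' | mysql -u user -p -e 'SELECT 1'
# still prints "Enter password:" and waits, because the prompt is answered
# from /dev/tty, not from the pipe on standard input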

Expect/TCL: pass commands to specific proc/spawn IDs

I am trying to write an expect script that will do the following: open up 13 terminal windows (gnome-terminal, xterm, etc.), where each window connects to a terminal server via ssh (ssh InReach@10.1.6.254) and is provided the password via expect.
I can get this to work fine in a single window. The problem I am having, though, is getting the input passed over to each window.
For instance, I can do
set timeout -1
spawn gnome-terminal -x ssh InReach@10.1.6.254
inside of a while loop and get my 13 windows, but I would like each one to be logged in automatically via expect.
You can try a slightly different approach. Instead of opening the terminal windows in the expect script, open them in a basic shell script, and have each terminal run an expect script to start a single SSH session.
So the expect script could be as simple as this:
#!/usr/bin/expect -f
spawn ssh InReach@10.1.6.254
# ... provide password ...
interact
And the shell script:
#!/bin/sh
for a in `seq 1 13`; do
    gnome-terminal -x ./expect_script
done
When you spawn a process, you need to cache the $spawn_id value that is set by the spawn attempt.
e.g.
if [catch "spawn ssh -l mtc $ub1_ip_address" ub1_pid] {
    Log $ERROR "Unable to spawn ssh to Xubuntu.\n$ub1_pid\n"
    return 0
}
set stored_id $spawn_id
To send a command to one terminal session in particular, do
send -i $stored_id "command"
Then, to expect output from a particular session, you likewise pass its id:
expect {
    -i $stored_id
    [ ... your regexes, globs, etc. ... ]
}
You can find some additional info at http://wiki.tcl.tk/11583
I would also suggest making use of gnome-terminal's ability to specify multiple tabs, including an indication of which is the currently-active one, and a command to be executed. gnome-terminal --help-all is helpful (no pun intended).
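A rough sketch of that (flag spellings vary between gnome-terminal releases, and -e has since been deprecated in newer ones, so treat this as an approximation):
gnome-terminal \
    --tab --active -e "./expect_script" \
    --tab -e "./expect_script" \
    --tab -e "./expect_script"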