I am working with a Puppet class that appends a line to my.cnf if that line isn't already there, and it's not working. Here is the code:
class mysql-server::configure {
    exec { "enable_binlog":
        path    => "/usr/bin/:/usr/sbin/:/usr/local/bin:/bin/:/sbin",
        command => "echo 'log_bin=/var/log/mysql/mysql-bin.log' >> /etc/mysql/my.cnf",
        onlyif  => "grep -c 'log_bin=/var/log/mysql/mysql-bin.log' /etc/mysql/my.cnf",
    }
}
I believe that your onlyif query is wrong. With onlyif, the exec runs only when the command exits 0, i.e. when grep finds the line, which is the opposite of what you want. And while grep -c prints a 0 when no matching line is found, it still returns exit status 1.
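You can see the difference between grep's output and its exit status directly (the file and pattern here are just examples):
$ grep -c no-such-line /etc/hosts
0
$ echo $?
1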
How about
unless => 'grep -q log_bin=/var/log/mysql/mysql-bin.log /etc/mysql/my.cnf'
Note that you probably want to use the file_line type from the stdlib module to do the same thing more efficiently.
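A minimal sketch (the resource title and paths are taken from your exec; requires the puppetlabs-stdlib module):
file_line { 'enable_binlog':
    path => '/etc/mysql/my.cnf',
    line => 'log_bin=/var/log/mysql/mysql-bin.log',
}
file_line is idempotent by design: Puppet appends the line only when it is not already present, no grep needed.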
I curl an endpoint for a json response and write the response to a file.
So far I've got a script that:
1) does the curl if the file does not exist, and
2) otherwise sets a variable
#!/bin/bash
instance="server1"
curl=$(curl -sk https://my-app-api.com | python -m json.tool)
json_response_file="/tmp/file"
if [ ! -f ${json_response_file} ] ; then
    ${curl} > ${json_response_file}
    instance_info=$(cat ${json_response_file})
else
    instance_info=$(cat ${json_response_file})
fi
The problem is that the file may exist but contain a bad response, or be empty.
Possibly using bash until, I'd like to
(1) check (using jq) that a field in the curl response contains $instance, and only then write the file;
(2) retry the curl XX number of times until the response contains $instance;
(3) write the file once the response contains $instance;
(4) set the variable instance_info=$(cat ${json_response_file}) when the above has completed correctly.
I started like this... then got stuck...
until [[ $(/usr/bin/jq --raw-output '.server' <<< ${curl}) = $instance ]]
do
One sane implementation might look something like this:
retries=10
instance=server1
response_file=filename

# define a function, since you want to run this code multiple times
# the old version only ran curl once and reused that result
fetch() { curl -sk https://my-app-api.com; }

instance_info=
for (( retries_left=retries; retries_left > 0; retries_left-- )); do
    content=$(fetch)
    server=$(jq --raw-output '.server' <<<"$content")
    if [[ $server = "$instance" ]]; then
        # Writing isn't atomic, but renaming is; doing it this way makes sure that no
        # incomplete response will ever exist in response_file. If working in a directory
        # like /tmp where other users may have write access, use $(mktemp) to create a
        # tempfile with a random name to avoid security risks.
        printf '%s\n' "$content" >"$response_file.tmp" \
            && mv "$response_file.tmp" "$response_file"
        instance_info=$content
        break
    fi
done
[[ $instance_info ]] || { echo "ERROR: Giving up after $retries retries" >&2; exit 1; }
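If the endpoint needs time to come up, you may also want a short delay between attempts; one extra line at the bottom of the loop body does it (the 5-second interval is an arbitrary choice):
        break
    fi
    sleep 5  # wait before retrying so we don't hammer the endpoint
done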
The following Bourne shell script, given a path, is supposed to test each component of the path for existence; then set a variable comprising only those components that actually exist.
#! /bin/sh
set -x # for debugging
test_path() {
    path=""
    echo $1 | tr ':' '\012' | while read component
    do
        if [ -d "$component" ]
        then
            if [ -z "$path" ]
            then path="$component"
            else path="$path:$component"
            fi
        fi
    done
    echo "$path" # this prints nothing
}
paths=/usr/share/man:\
/usr/X11R6/man:\
/usr/local/man
MANPATH=`test_path $paths`
echo $MANPATH
When run, it always prints nothing. The trace using set -x is:
+ paths=/usr/share/man:/usr/X11R6/man:/usr/local/man
++ test_path /usr/share/man:/usr/X11R6/man:/usr/local/man
++ path=
++ echo /usr/share/man:/usr/X11R6/man:/usr/local/man
++ tr : '\012'
++ read component
++ '[' -d /usr/share/man ']'
++ '[' -z '' ']'
++ path=/usr/share/man
++ read component
++ '[' -d /usr/X11R6/man ']'
++ read component
++ '[' -d /usr/local/man ']'
++ '[' -z /usr/share/man ']'
++ path=/usr/share/man:/usr/local/man
++ read component
++ echo ''
+ MANPATH=
+ echo
Why is the final echo $path empty? The $path variable within the while loop was set incrementally on each iteration just fine.
The pipe runs all commands involved in sub-shells, including the entire while ... loop. Therefore, all changes to variables in that loop are confined to the sub-shell and invisible to the parent shell script.
One way to work around that is putting the while ... loop and the echo into a list that executes entirely in the sub-shell, so that the modified variable $path is visible to echo:
test_path()
{
    echo "$1" | tr ':' '\n' | {
        while read component
        do
            if [ -d "$component" ]
            then
                if [ -z "$path" ]
                then
                    path="$component"
                else
                    path="$path:$component"
                fi
            fi
        done
        echo "$path"
    }
}
However, I suggest using something like this:
test_path()
{
    echo "$1" | tr ':' '\n' |
    while read dir
    do
        [ -d "$dir" ] && printf "%s:" "$dir"
    done |
    sed 's/:$/\n/'
}
... but that's a matter of taste.
Edit: As others have said, the behaviour you are observing depends on the shell. The POSIX standard describes pipelined commands as run in sub-shells, but that is not a requirement:
Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment.
Bash runs them all in sub-shells, but some shells run the last command in the context of the main script; in that case only the preceding commands in the pipeline run in sub-shells.
This should work in a Bourne shell that understands functions (and would work in Bash and other shells too):
test_path() {
    echo $1 | tr ':' '\012' |
    {
        path=""
        while read component
        do
            if [ -d "$component" ]
            then
                if [ -z "$path" ]
                then path="$component"
                else path="$path:$component"
                fi
            fi
        done
        echo "$path" # now runs in the same subshell as the loop
    }
}
The inner set of braces groups the commands into a unit, so path is still only set in the subshell, but it is echoed from that same subshell.
Why is the final echo $path empty?
Until recently, Bash would give all components of a pipeline their own process, separate from the shell process in which the pipeline is run.
Separate process == separate address space, and no variable sharing.
In ksh93 and in recent Bash (with the lastpipe option set via shopt, and only while job control is inactive), the shell will run the last component of a pipeline in the calling shell, so any variables changed inside the loop are preserved when the loop exits.
Another way to accomplish what you want is to make sure that the echo $path is in the same process as the loop, using parentheses:
#! /bin/sh
set -x # for debugging
test_path() {
    path=""
    echo $1 | tr ':' '\012' | ( while read component
        do
            [ -d "$component" ] || continue
            path="${path:+$path:}$component"
        done
        echo "$path"
    )
}
Note: I simplified the inner if. There was no else, so the test can be replaced with a short-circuit. Also, the two path assignments can be combined into one, using the ${var:+...} parameter substitution trick.
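To see the trick in action (directory names are just examples):
path=""
path="${path:+$path:}/usr/share/man"  # path was empty, so the result is just /usr/share/man
path="${path:+$path:}/usr/local/man"  # path is non-empty now: /usr/share/man:/usr/local/man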
Your script works just fine with no change under Solaris 11, and probably also under most commercial Unixes like AIX and HP-UX, because under these OSes the underlying implementation of /bin/sh is provided by ksh. The same would be true if /bin/sh were backed by zsh.
It likely doesn't work for you because your /bin/sh is implemented by one of bash, dash, mksh or busybox sh, which all process each component of a pipeline in a subshell, while ksh and zsh both keep the last element of a pipeline in the current shell, saving an unnecessary fork.
It is possible to "fix" your script so that it works when sh is provided by bash by adding this line somewhere before the pipeline:
shopt -s lastpipe
or better, if you want to keep portability:
command -v shopt > /dev/null && shopt -s lastpipe
This will keep the script working for ksh and zsh, but it still won't work for dash, mksh or the original Bourne shell.
Note that both bash and ksh behaviors are allowed by the POSIX standard.
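If you need something that works in all of these shells, you can sidestep the pipeline entirely by splitting on IFS instead of piping through tr. A sketch (untested on the original Bourne shell, but it uses only portable constructs):
test_path() {
    path=""
    old_ifs=$IFS
    IFS=:
    for component in $1
    do
        [ -d "$component" ] && path="${path:+$path:}$component"
    done
    IFS=$old_ifs
    echo "$path"
}
Since there is no pipe, path is set in the current shell and the final echo sees it, regardless of which shell runs the script.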
I am trying to write a function wrapper for the mysql command.
If .my.cnf exists in the current working directory, I would like to automatically attach --defaults-file=.my.cnf to the command.
Here's what I'm trying:
function mysql {
    if [ -e ".my.cnf" ]; then
        /usr/local/bin/mysql --defaults-file=.my.cnf "$@"
    else
        /usr/local/bin/mysql "$@"
    fi
}
The idea is that I want to be able to use the mysql command exactly as before, except that if the .my.cnf file is present, it gets attached as an argument.
Question: Will I run into any trouble with this method? Is there a better way to do it?
If I specify --defaults-file=foo.cnf manually, that should be used instead of .my.cnf.
Your function as written is perfectly fine. This is a touch DRYer:
function mysql {
    if [ -e ".my.cnf" ]; then
        set -- --defaults-file=.my.cnf "$@"
    fi
    /usr/local/bin/mysql "$@"
}
That set command rewrites the positional parameters, putting your --defaults-file argument at the beginning of the command-line arguments.
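A quick illustration of what that does to the positional parameters (the argument values are made up):
set -- db_name -e 'SELECT 1'         # pretend these are the function's arguments
set -- --defaults-file=.my.cnf "$@"  # prepend the new option
printf '%s\n' "$@"                   # --defaults-file=.my.cnf, db_name, -e, SELECT 1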
Only if the option is not already present:
function mysql {
    if [[ -e ".my.cnf" && "$*" != *"--defaults-file"* ]]; then
        set -- --defaults-file=.my.cnf "$@"
    fi
    /usr/local/bin/mysql "$@"
}
I have two environment variables:
echo $FRONT1_PORT_8080_TCP_ADDR # 172.17.1.80
echo $FRONT2_PORT_8081_TCP_ADDR # 172.17.1.77
I want to inject them into my default.vcl like:
backend front1 {
.host = $FRONT1_PORT_8080_TCP_ADDR;
}
But I get a syntax error on the $ character.
I've also tried user variables, but I can't define them outside vcl_recv.
How can I retrieve my two values in the VCL?
I've managed to template my VCL:
backend front1 {
.host = ${FRONT1_PORT_8080_TCP_ADDR};
}
With a script:
# NB: this assumes no variable values contain whitespace, since
# word-splitting the printenv output would break them apart
envs=$(printenv)
for env in $envs
do
    IFS='=' read -r name value <<< "$env"
    sed -i "s|\${${name}}|${value}|g" /etc/varnish/default.vcl
done
Now you can use the std VMOD (Varnish Standard Module) to get environment variables in the VCL, for example:
set req.backend_hint = app.backend(std.getenv("VARNISH_BACKEND_HOSTNAME"));
See documentation: https://varnish-cache.org/docs/trunk/reference/vmod_std.html#std-getenv
Note: it doesn't work for backend configuration, but could work elsewhere. Apparently backends expect constant strings; if you try, you'll get Expected CSTR got 'std.fileread'.
You can use the fileread function of the std module, and create a file for each of your environment variables.
Before running varnishd, you can run:
mkdir -p /env; \
env | while read envline; do \
k=${envline%%=*}; \
v=${envline#*=}; \
echo -n "$v" >"/env/$k"; \
done
And then, within your Varnish configuration:
import std;
...
backend front1 {
.host = std.fileread("/env/FRONT1_PORT_8080_TCP_ADDR");
.port = std.fileread("/env/FRONT1_PORT_8080_TCP_PORT");
}
I haven't tested it yet. Also, I don't know if giving a string to the port configuration of the backend would work. In that case, converting to an integer should work:
.port = std.integer(std.fileread("/env/FRONT1_PORT_8080_TCP_PORT"), 0);
You can use eval together with echo to expand strings.
Usually you can do something like:
VAR=test # Define variables
echo "my $VAR string" # Eval string
But if you have the text in a file, you can use eval to get the same behaviour:
VAR=test # Define variables
eval echo $(cat file.vcl) # Eval string from the given file
Sounds like a job for envsubst.
Just use standard env var syntax in your config ($MY_VAR) and run:
envsubst < myconfig.tmpl > myconfig.vcl
You can install it with apt-get install gettext on Ubuntu.
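For example (file names here are illustrative), you can also pass envsubst a list of variables, so that only those get substituted and any other $... in the template is left alone:
envsubst '$FRONT1_PORT_8080_TCP_ADDR $FRONT2_PORT_8081_TCP_ADDR' \
    < default.vcl.tmpl > /etc/varnish/default.vcl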
I'm trying to write a zsh function to get the path to a python module.
This works:
pywhere() {
python -c "import $1; print $1.__file__"
}
However, what I'd really like is the dir path without the filename. This doesn't work:
pywhere() {
dirname $(python -c "import $1; print $1.__file__")
}
Note: it works in bash, but not in zsh!
EDIT: this is the error:
~ % pywhere() {
function → dirname $(python -c "import $1; print $1.__file__")
function → }
File "<string>", line 1
import pywhere() {
^
SyntaxError: invalid syntax
Your problem is due to a broken preexec: you aren't quoting the command line properly when you print it for inclusion in the window title.
In the .zshrc you posted, which is not the one you used (don't do that! Always copy-paste the exact file contents and commands that you used), I see:
a=${(V)1//\%/\%\%}
a=$(print -Pn "%40>...>$a" | tr -d "\n")
print -Pn "\ek$a:$3\e\\"
print -P causes prompt expansion. You include the command in the argument. You protect the % characters in the command by doubling them, but that's not enough. You evidently have the prompt_subst option turned on, so print -P causes the $(…) construct in the command line that defines the function to be executed:
python -c "import $1; print $1.__file__"
where $1 is the command line (the function definition: pywhere() { … }).
Rather than attempt to parse the command line, print it out literally. This'll also correct other mistakes: beyond not taking prompt_subst into account, you doubled % signs but should have quadrupled them since you perform prompt expansion twice, and you expand \ sequences twice as well.
function title() {
    a=${(q)1} # show control characters as escape sequences
    if [[ $#a -gt 40 ]]; then a=$a[1,37]...; fi
    case $TERM in
        screen)
            print -Pn "\ek"; print -r -- $a; print -Pn ":$3\e\\";;
        xterm*|rxvt)
            print -Pn "\e]2;$2 | "; print -r -- $a; print -Pn ":$3\a";;
    esac
}
Why not just use this:
python -c "import os, $1; print os.path.dirname($1.__file__)"