Running a command in the background using the Logstash exec output plugin

Does anybody know how to run a command in the background using the Logstash exec output plugin?
I tried this configuration:
input {
  file {
    path => "file.log"
  }
}
output {
  exec {
    command => "./script.sh fff ggg hhh jjj kkk &"
  }
}
And the script content is:
#!/bin/bash
echo "$*" >> file.txt
So finally file.txt receives the & as if it were just another parameter: fff ggg hhh jjj kkk &

According to the Logstash Reference for the exec output plugin:
Use dtach or screen to make it non blocking.
I suggest you use dtach; an excerpt from dtach's man page:
dtach is intended for users who want the detach feature of screen without the other overhead of screen. It is tiny, does not use many libraries, and stays out of the way as much as possible.
Example usage (as sysadmin1138 suggested, you should use the full path):
output {
  exec {
    command => "/usr/bin/dtach -n /tmp/session_name -Ez /absolute/path/script.sh fff ggg hhh jjj kkk"
  }
}
Note 1: You probably need to install dtach first if your system does not ship it by default.
Note 2: You can get the full path of dtach by using the which dtach command (see the snippet after these notes).
Note 3: Definition of -n mode from dtach's man page:
-n Creates a new session, without attaching to it. A new session is created in which the specified program is executed. dtach does not try to attach to the newly created session, however, and exits instead.
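Regarding Notes 1 and 2, a minimal sketch (assuming a Debian/Ubuntu system; substitute your own package manager elsewhere):

sudo apt-get install dtach   # install dtach if it is missing
which dtach                  # prints the full path, e.g. /usr/bin/dtach, to use in the config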
Hope that helps!

You seem to be doing it right, though keep in mind that the current working directory is not always obvious within the context of exec. A full path to the script would be much more robust. Also be aware that the command is run by Ruby's system() function, in case you are interested in potential side effects and constraints.
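If dtach or screen is not available, another workaround is to have the script detach itself, so the plugin returns immediately regardless of how the command is spawned. A rough, untested sketch (this wrapper approach is my own suggestion, not from the Logstash docs):

#!/bin/bash
# wrapper.sh - start the real script in the background and return at once
nohup /absolute/path/script.sh "$@" >/dev/null 2>&1 &

Then point the exec output's command at wrapper.sh instead of script.sh.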

Related

Ignore JSON ordering

In a project, we use 2 IDEs. The project contains hundreds of code files, plus hundreds of special JSON-format files which are constantly reread and rewritten by these IDEs. While we used a single IDE this was not a problem: the files were always written the same way. Unfortunately, the two IDEs save JSON with different key ordering, which produces dozens of spurious changes for Git and a uselessly bloated diff. These files are important and must not be excluded via .gitignore, but they rarely change in substance, so this can probably be handled manually.
So, is there a terminal command to quickly undo/unselect changes for a specific file extension? Or is it perhaps possible for Git to track changes to JSON files without considering the order?
I also had the idea of using a custom script to reorder the JSONs, but it would consume too much CPU, and it would also trigger rereading by the IDE, which is also bad.
Update
I found the following command from another SO question:
git checkout main -- $(git ls-files -- "*.yy")
This workaround isn't handy, but it basically solves the problem. If anybody knows how to make Git ignore JSON ordering, it would be great!
One way to temporarily ignore changes to the JSON files is to tell Git to assume they haven't changed:
git update-index --assume-unchanged file-to-ignore.json
And only when you want to commit, tell Git to really look at the file again:
git update-index --no-assume-unchanged file-to-ignore.json
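Since the project contains hundreds of such files, you would probably apply this in bulk. Something like this should work (untested; the -z/-0 pairing keeps unusual filenames safe):

git ls-files -z -- '*.json' | xargs -0 git update-index --assume-unchanged
git ls-files -z -- '*.json' | xargs -0 git update-index --no-assume-unchanged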
Another option would be to use a pre-commit-hook to sort the json only when committing.
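Yet another option, as a speculative addition of my own (it requires jq and only changes how diffs are displayed, so git status will still list the files as modified): a textconv diff driver that key-sorts JSON before comparing, making git diff order-independent:

echo '*.json diff=json' >> .gitattributes
git config diff.json.textconv 'jq -S .'   # -S sorts object keys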
I'd make a Git pre-commit hook to make sure all JSONs are always formatted the same way. For example, in .git/hooks/pre-commit put:
#!/bin/sh
php git/precommit_hook.php
exit $?
and if you're on a Unix system, make sure the hook is executable: chmod +x .git/hooks/pre-commit
and in git/precommit_hook.php put
<?php
declare(strict_types=1);

if (PHP_VERSION_ID < 70300) {
    fwrite(STDERR, "PHP 7.3 or higher is required to run this script");
    exit(1);
}

// List the staged files (NUL-separated to be safe with unusual filenames).
$changed_files = explode("\x00", rtrim(shell_exec("git diff --name-only --cached -z"), "\x00"));

foreach ($changed_files as $file) {
    if (!file_exists($file)) {
        // File was deleted, skip it
        continue;
    }
    $ext = pathinfo($file, PATHINFO_EXTENSION);
    if ($ext === "json") {
        $json = json_decode(file_get_contents($file), true);
        if (json_last_error() !== JSON_ERROR_NONE) {
            fwrite(STDERR, "JSON Error: " . json_last_error_msg() . " in $file, will not format it\n");
            continue;
        }
        $json = json_encode($json, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE | JSON_THROW_ON_ERROR);
        file_put_contents($file, $json, LOCK_EX);
        // Re-stage the reformatted file so the commit picks up the new content.
        shell_exec("git add " . escapeshellarg($file));
    }
}
Now all *.json files will be committed formatted by PHP's json_encode with the flags JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE | JSON_THROW_ON_ERROR,
no matter which IDE you use :)

Shell: parsing JSON in a loop and outputting combined variables

Just like in my previous thread, I know how to parse simple JSON containing spaces.
Now I have another question: if I have multiple module structures whose keys are the same but whose values differ, I want the output to combine the values from each module. In practice, though, the values from the last module overwrite the previous ones.
My test sample JSON is like:
{
    "WorkspaceName":"aaa bbb ccc ddd eee",
    "ReportFileName":"xxx yyy zzz",
    "StageName":"sit uat prod"
},
{
    "WorkspaceName":"1111 2222 3333 4444 5555",
    "ReportFileName":"6666 7777 8888",
    "StageName":"sit1 uat1 prod1"
}
And the shell script I tried, mian.sh, is as follows:
InitialFile=$WORKSPACE/deployment/configuration/Initial.json
eval $(sed -n -e 's/^.*"\(.*\)":\(".*"\).*$/\1=\2/p' $InitialFile)
ConfigFile="$WorkspaceName"_"$ReportFileName"
echo The Config File is_$ConfigFile
The result is always The Config File is_1111 2222 3333 4444 5555_6666 7777 8888, but I want to get both values: aaa bbb ccc ddd eee_xxx yyy zzz and 1111 2222 3333 4444 5555_6666 7777 8888.
How do I achieve this?
A little background to understand why I'm doing this, and some of my limitations:
I am executing my pipeline on Jenkins, and it executes my mian.sh, so the entry point is mian.sh. In addition, the Jenkins server is maintained by a separate team and we cannot access it directly, so we cannot run shell code on the server itself.
Also, I need to combine the variables in order to match the name of the corresponding configuration file; different results need to match different files for subsequent testing.
Important points for this answer:
Since the OP can't install and use jq, this answer takes an awk approach.
I have provided 3 solutions here: the 1st is a GNU awk approach, the 2nd is a non-GNU awk approach, and the 3rd runs the non-GNU awk code from a shell script.
The first 2 are standalone awk programs to run in a terminal or from an awk script.
Then, as per the OP's request, since their code runs in Jenkins, I have posted a shell script which accepts an argument: the Input_file name to be passed to it.
To save the output into a shell variable, change the 1st line of the 3rd code to StageName=$(awk -v RS= ' and its last line to ' "$1") (see the snippet after the 3rd code).
1st solution: With your shown samples, please try the following GNU awk code. It uses GNU awk's match function with the regex [[:space:]]+"WorkspaceName":"([^"]*)",\n[[:space:]]+"ReportFileName":"([^"]*) to get the required values, creating 2 capturing groups whose contents are stored in an array named arr so the values can be retrieved later as required.
awk -v RS= '
{
  while(match($0,/[[:space:]]+"WorkspaceName":"([^"]*)",\n[[:space:]]+"ReportFileName":"([^"]*)",/,arr)){
    print arr[1]"_"arr[2]
    $0=substr($0,RSTART+RLENGTH)
  }
}
' Input_file
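With the sample JSON from the question, this should print the two combined values that were asked for:

aaa bbb ccc ddd eee_xxx yyy zzz
1111 2222 3333 4444 5555_6666 7777 8888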
2nd solution: With your shown samples, please try the following code, which should work in any POSIX awk. This solution also uses the match function, but it creates no array and has no capturing groups, since the capturing-group capability is specific to GNU awk. Instead, the split function splits the matched text so that only the required parts are kept.
awk -v RS= '
{
  while(match($0,/[[:space:]]+"WorkspaceName":"[^"]*",\n[[:space:]]+"ReportFileName":"[^"]*",/)){
    val=substr($0,RSTART,RLENGTH)
    split(val,arr,"\"WorkspaceName\":\"|\"ReportFileName\":\"|,\n")
    sub(/"$/,"",arr[2])
    sub(/",$/,"",arr[4])
    print arr[2]"_"arr[4]
    $0=substr($0,RSTART+RLENGTH)
  }
}
' Input_file
To run the code from a shell script, try:
#!/bin/bash
awk -v RS= '
{
  while(match($0,/[[:space:]]+"WorkspaceName":"[^"]*",\n[[:space:]]+"ReportFileName":"[^"]*",/)){
    val=substr($0,RSTART,RLENGTH)
    split(val,arr,"\"WorkspaceName\":\"|\"ReportFileName\":\"|,\n")
    sub(/"$/,"",arr[2])
    sub(/",$/,"",arr[4])
    print arr[2]"_"arr[4]
    $0=substr($0,RSTART+RLENGTH)
  }
}
' "$1"

Fuzzing command line arguments [argv]

I have a binary I've been trying to fuzz with AFL; the only thing is, AFL only fuzzes stdin and file inputs, and this binary takes input through its arguments: pass_read [input1] [input2]. I was wondering if there are any methods/fuzzers that allow fuzzing in this manner?
I don't have the source code, so making a harness is not really applicable.
Michal Zalewski, the creator of AFL, states in this post:
AFL doesn't support argv fuzzing, because TBH, it's just not horribly useful in practice. There is an example in experimental/argv_fuzzing/ showing how to do it in a general case if you really want to.
Link to the mentioned example on GitHub: https://github.com/google/AFL/tree/master/experimental/argv_fuzzing
There are some instructions in the file argv-fuzz-inl.h (haven't tried myself).
Bash-only solution
As an example, let's generate 10 random strings and store them in a file:
tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 10 | head -n 10 > string-file.txt
Next, let's read 2 lines at a time from string-file.txt and pass them to our application:
exec 3< string-file.txt            # open string-file.txt on file descriptor 3
while read -r string1 <&3 ; do
    read -r string2 <&3
    pass_read "$string1" "$string2" >> crash_file.txt
done
exec 3<&-                          # close the descriptor
We then have any crashes stored within crash_file.txt for further analysis.
This may not be the most elegant solution, but perhaps it gives you an idea of other possibilities if no existing tool fulfills the current requirements.
I looked at the AFLplusplus repo on GitHub. Inside AFLplusplus/utils/argv_fuzzing/ there is a Makefile; if you run it, you get a .so file (a shared library) that you can use to do argv fuzzing even if you only have the binary. Obviously, you must use AFL_PRELOAD. You can read more in the README.
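A rough sketch of that workflow (untested; the library name argvfuzz64.so and the paths here are my assumptions based on the repository layout, so check the README in that directory):

cd AFLplusplus/utils/argv_fuzzing
make                                  # builds the argvfuzz shared libraries
AFL_PRELOAD=$PWD/argvfuzz64.so \
    afl-fuzz -i in_dir -o out_dir -- ./pass_read

With the library preloaded, the target constructs its argv from AFL's stdin input, so the usual stdin fuzzing effectively fuzzes the arguments.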

How to include files in icarus verilog?

I know the basic `include "filename.v" directive. But I am trying to include a module that is in another folder, and that module further includes other modules present in the same folder. When I try to compile the top-level module, I get an error.
C:\Users\Dell\Desktop\MIPS>iverilog mips.v
./IF/stage_if.v:2: Include file instruction_memory_if.v not found
No top level modules, and no -s option.
Here, I am trying to make a MIPS processor, which is contained in the file "mips.v". The first statement of this file is `include "IF/stage_if.v". And in the IF folder there are numerous files which I have included in stage_if.v, one of which is "instruction_memory_if.v". Below is the directory-level diagram.
-IF
    instruction_memory_if.v
    stage_if.v
+ID
+EX
+MEM
+WB
mips.v
You need to tell iverilog where to look using the -I flag.
In top.v:
`include "foo.v"
program top;
initial begin
foo();
end
endprogram
In foo/foo.v:
task foo;
    $display("This was printed in the foo module");
endtask
Which can be run using the commands:
iverilog -g2012 top.v -I foo/
vvp a.out
>>> This was printed in the foo module
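Applied to the MIPS layout from the question, the compile command would look something like this (untested; it assumes the other stage folders likewise hold files included by their stage modules):

iverilog -I IF/ -I ID/ -I EX/ -I MEM/ -I WB/ mips.v
vvp a.out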

How to download and then use the file in the same tcl script?

I'm new to Tcl and I have the following script:
proc prepare_xml {pdb_id} {
    set filename [exec wget ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz]
    set filename_unzip [exec gunzip "$pdb_id.xml.gz"]
    set ready_xml [exec sed -i "/entry /c\<entry>" "$pdb_id.xml"]
    return $ready_xml
}
The expected output is the downloaded file, uncompressed and modified. However, when I execute the script the first time, it only downloads the file and does not uncompress it. If I execute it a second time, I obtain the expected output plus a second copy of the original downloaded file.
Can anyone help me with this? I've tried the after and vwait commands, but they don't work.
Thank you :)
It's hard to say for sure as you're not describing whether any errors are thrown (that'd be the only reason for the code not to run to completion), but I'd expect something like this to be the right approach:
proc prepare_xml {pdb_id} {
    # Double quotes on next line just because of Stack Overflow highlighter
    set url "ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz"
    set file $pdb_id.xml
    append sedcode {/entry /} "c\\\n" {<entry>}
    exec wget -q -O - $url | gunzip -c | sed $sedcode > $file
    return $file
}
Firstly, I'm keeping the complicated bits in (local) variables to stop the exec line from getting too long. Secondly, I've put all the subprocesses together in one pipeline. Thirdly, I'm using -q and -O - with wget, and -c with gunzip; look up what they do if you don't understand them. Fourthly, I've put the scriptlet for sed in braces where possible to avoid trouble with backslashes, but I've used append and a non-backslashed section to build the pattern because the syntax of c in sed is downright weird (it needs a backslash-newline sequence immediately after it on at least some platforms…)
I'd actually use native Tcl code to extract and transform the data if I was doing it for me, but that's a rather larger change.