How to redirect mysqldump output to a file when using xargs? - mysql

I have the same problem as mentioned in the question below.
Can't take mysqldump of long argument list
Any idea how to redirect the mysqldump result to an output file?

You can redirect the result of xargs with > just like any other shell command.
For example, if I have an xargs pipeline that runs three commands:
% echo "1\n2\n3" | xargs -n 1 "echo"
1
2
3
I can just use > to redirect the output of all three commands:
% echo "1\n2\n3" | xargs -n 1 "echo" > output
% more output
1
2
3
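Applied to the original mysqldump question, a minimal sketch might look like the following (the user, password, database and the tables.txt file are placeholder names; the point is that the redirect is attached to the xargs command, so every mysqldump invocation it spawns writes into the same file):
# tables.txt holds one table name per line (hypothetical file)
cat tables.txt | xargs mysqldump -u someuser -psomepassword somedb > dump.sql
Even if the table list is long enough that xargs has to run mysqldump several times, all of the output still lands in dump.sql.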

Related

Grep single value after match

I have a file containing:
{"id":1,"jsonrpc":"2.0","result":{"speed":0}}
How would I be able to grep the "0" after "speed":?
I have tried 'grep -o -P "speed":{1}', but that's not what I am looking for.
You should use jq (sudo apt-get install jq on raspbian) for this task.
echo '{"id":1,"jsonrpc":"2.0","result":{"speed":0}}' | jq .result.speed
Result: 0
Since you said in your question that you have a file "containing" this line, you might want to use grep first to get only the line you're interested in; otherwise jq might throw an error.
Example file:
abc
{"id":1,"jsonrpc":"2.0","result":{"speed":0}}
123
Running grep "speed" yourfile.txt | jq .result.speed would output 0.
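If you would rather stay with grep alone, a PCRE pattern with \K (which discards everything matched before it) can do the same thing; this is a sketch and assumes a grep built with -P support, such as GNU grep:
grep -oP '"speed":\K[0-9]+' yourfile.txt
# prints: 0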

Grepping a word buried in a <p> on a website

I am having trouble grepping a word on a website. This is the command I'm using:
wget -q http://bcbioinformaticsgrad.ca/our-faculty/james-piret/ | grep 'medical'
which is returning nothing, when it should be returning
[name of the website]:Many recent developments in biological and medical
...
The overall goal of what I'm trying to do is to find a certain word within all the links on the website.
My script is written like this:
#!/bin/bash
#$1 is the parent website
#This pipeline obtains all the links located on a website
wget -qO- $1 | grep -Eoi '<a [^>]+>' | grep -Eo 'href="[^\"]+"' | cut -c 7- | rev | cut -c 2- | rev > .linksLocated
#$2 is the word being looked for
#This loop goes though every link and tries to locate a word
while IFS='' read -r line || [[ -n "$line" ]]; do
    wget -q $line | grep "$2"
done < .linksLocated
#rm .linksLocated
Wget doesn't write the downloaded file to standard output by default, so grep has nothing to search (the -q flag you added also silences wget's own messages).
Add -O - to print the page to stdout:
wget -q http://bcbioinformaticsgrad.ca/our-faculty/james-piret/ -O - | grep 'medical'
I see you used it with the first wget in your script, so just add it to the second one, too.
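With that change, the loop in your script would look something like this (same logic, just with the page sent to stdout and the URL quoted):
while IFS='' read -r line || [[ -n "$line" ]]; do
    wget -qO- "$line" | grep "$2"
done < .linksLocated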
It's also possible to use curl, which does that by default, without any parameters:
curl http://bcbioinformaticsgrad.ca/our-faculty/james-piret/ | grep 'medical'
Edit: this tool is super useful when you actually need to select certain HTML elements in the downloaded page; it might suit some use cases better than grep: https://github.com/ericchiang/pup

How can I get rid of bash: $1: ambiguous redirect on xargs command

I'm trying to execute several files containing SQL statements against a given database:
ls -lah | grep "sql$" | awk '{print $9}' | xargs mysql -uanuser -papassword a_database < $1
But I'm getting the error:
bash: $1: ambiguous redirect
If I change the command run by xargs to a simple echo $1, it works.
I already tried double quotes like this: xargs mysql -uanuser -papassword a_database < "$1", and like this: "xargs mysql -uanuser -papassword a_database < $1", with no luck; the terminal gives another error:
xargs: mysql -uanuser -papassword a_database < {}: No such file or directory
Can you help me please?
The shell processes the < $1 redirection before xargs ever runs, and since $1 is empty here there is nothing to redirect from, hence the "ambiguous redirect" error. You don't need ls -l | grep | awk at all; just use a simple for loop over the *.sql files:
for f in *.sql; do
    mysql -uanuser -papassword a_database < "$f"
done
Here is the xargs method working:
ls -lha | grep "sql$" | awk '{print $9}' | xargs -t -n1 -I{} bash -c "mysql -uanuser -papassword a_database < '{}'"
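If any of the file names might contain spaces or other awkward characters, a null-delimited find | xargs pipeline is a safer variant of the same idea (a sketch, assuming GNU find and xargs):
find . -maxdepth 1 -name '*.sql' -print0 |
    xargs -0 -I{} bash -c 'mysql -uanuser -papassword a_database < "$1"' _ {}
Passing the file name to bash -c as a positional parameter avoids having to worry about quoting inside the command string.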

BATCH: grep equivalent

I need some help with the batch-file equivalent of grep -v Wildcard and grep -o.
This is my code in shell.
result=`mysqlshow --user=$dbUser --password=$dbPass sample | grep -v Wildcard | grep -o sample`
The batch equivalent of grep (not counting third-party tools such as GnuWin32 grep) is findstr.
grep -v finds lines that don't match the pattern. The findstr version of this is findstr /V.
grep -o shows only the part of the line that matches the pattern. Unfortunately, there is no equivalent for this, but you can run the command and then have a check along the lines of
if %errorlevel% equ 0 echo sample

Bash Script Loop through MySQL row and use curl and grep

I have a mysql database with a table:
url | words
And data like, for example:
------Column URL------- -------Column Words------
www.firstwebsite.com | hello, hi
www.secondwebsite.com | someword, someotherword
I want to loop through that table to check if the word is present in the content of the website specified by the url.
I have something like this:
#!/bin/bash
mysql --user=USERNAME --password=PASSWORD DATABASE --skip-column-names -e "SELECT url, keyword FROM things" | while read url keyword; do
    content=$(curl -sL $url)
    echo $content | egrep -q $keyword
    status=$?
    if test $status -eq 0 ; then
        # Found...
    else
        # Not found...
    fi
done
Two problems:
It's very slow: how can I set curl to optimize the load time of each website, not load images, things like that?
Also, is it a good idea to put things like that in a shell script, or is it better to create a PHP script and call it with curl?
Thanks!
As it stands your script will not work as you might expect when you have multiple keywords per row as in your example. The reason is that when you pass hello, hi to egrep it will look for the exact string "hello, hi" in its input, not for either "hello" or "hi". You can fix this without making changes to what's in your database by turning each list of keywords into an egrep-compatible regular expression with sed. You'll also need to remove the | from mysql's output, e.g., with awk.
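For example, the sed expression used in the script below turns a comma-separated keyword list into an alternation group that egrep understands:
echo 'hello, hi' | sed -e 's/, /|/g;s/^/(/;s/$/)/;'
# prints: (hello|hi)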
curl doesn't retrieve images when downloading a webpage's HTML. If the order in which the URLs are queried does not matter to you then you can speed things up by making the whole thing asynchronous with &.
#!/bin/bash
handle_url() {
    if curl -sL "$1" | egrep -q "$2"; then
        echo 1 # Found...
    else
        echo 0 # Not found...
    fi
}
mysql --user=USERNAME --password=PASSWORD DATABASE --skip-column-names -e "SELECT url, keyword FROM things" | awk -F \| '{ print $1, $2 }' | while read url keywords; do
    keywords=$(echo $keywords | sed -e 's/, /|/g;s/^/(/;s/$/)/;')
    handle_url "$url" "$keywords" &
done