OpenShift remote command execution (exec)

I am trying to run the following command from a Windows machine against an OpenShift Docker container that is running Linux:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token <token> -n dev-hg jcmd \
$(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump \
/tmp/heap1.hprof
The jcmd $(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump /tmp/heap1.hprof part is being evaluated on my local Windows machine, where those Linux commands do not exist. I also need the process ID of the application running inside the container, not a process on my local machine.
Any quick help is appreciated.

Try this:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | grep java | grep -v grep | awk '{print \$2}')"
Or even:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | awk '/java/{print \$2}')"

The problem is that the $( ) piece is being interpreted locally. Surrounding it in double quotes won't help, as that kind of syntax is interpreted inside double quotes.
You have to use single quotes instead of double quotes (so $( ) is not interpreted locally), and then compensate for the single quotes that awk itself needs:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token TOKEN -n dev-hg -- /bin/sh -c \
'jcmd $(ps -ef | grep java | grep -v grep | awk '\''{print $2}'\'') GC.heap_dump /tmp/heap1.hprof'
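To see exactly what the remote shell will receive, you can print the argument locally first (a quick sanity check, assuming a POSIX shell such as Git Bash is available on the Windows side; printf just echoes the string back without expanding it):
# The single quotes keep $( ) literal; '\'' closes the quote, emits a literal
# quote character, and reopens the quote, so the awk program arrives intact.
printf '%s\n' 'jcmd $(ps -ef | grep java | grep -v grep | awk '\''{print $2}'\'') GC.heap_dump /tmp/heap1.hprof'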
Please add the tags unix and shell to your question, as this is more of a UNIX question than an OpenShift one.

Related

Why so many open file descriptors with MySQL 5.6.38 on CentOS?

I have two MySQL instances running with --open-files-limit=65536, but lsof reports ~193644 open file descriptors:
$ lsof -n | grep mysql | wc -l
196410
$ lsof -n | grep mysql | grep ".MYI" | wc -l
83240
$ lsof -n | grep mysql | grep ".MYD" | wc -l
74053
$ sysctl fs.file-max
fs.file-max = 790612
$ lsof -n | wc -l
224647
Why are there so many open file descriptors? What could be the root cause, and how can I debug this further?
The problem is with the lsof version. I had lsof-4.87 on CentOS 7, which lists thread information and therefore counts each open descriptor once per thread. I switched to lsof-4.82 and the number dropped.
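If changing the lsof version is not an option, you can sidestep the per-thread duplication by counting descriptors straight from /proc (a rough sketch, assuming the server processes are named mysqld):
# Count the actual open descriptors per mysqld process; /proc/<pid>/fd is not
# affected by lsof listing every thread separately.
for pid in $(pidof mysqld); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l) open fds"
done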

Extract href of a specific anchor text in bash

I am trying to get the href of the most recent production release from the ExifTool history page:
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' | grep -o -E "href=[\"'](.*)[\"'].*Version"
Actual output
href="Image-ExifTool-10.36.tar.gz">Version
This is the output I want:
Image-ExifTool-10.36.tar.gz
With grep -P you can use a lookahead and \K to reset the match start:
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' |
grep -o -P "href=[\"']\K[^'\"]+(?=[\"']>Version)"
Image-ExifTool-10.36.tar.gz
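If your grep lacks PCRE support (-P is a GNU extension), a sed substitution gets the same result, assuming the href attribute is double-quoted as in the output above:
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' |
sed -n 's/.*href="\([^"]*\)">Version.*/\1/p'
Add | head -n 1 at the end if the page lists more than one matching anchor.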

How can I get rid of bash: $1: ambiguous redirect on xargs command

I'm trying to execute several files containing SQL statements against a given database:
ls -lah | grep "sql$" | awk '{print $9}' | xargs mysql -uanuser -papassword a_database < $1
But I get the error:
bash: $1: ambiguous redirect
If I change the xargs command to a simple echo $1, it works.
I already tried double quotes like this: xargs mysql -uanuser -papassword a_database < "$1", and also this: "xargs mysql -uanuser -papassword a_database < $1", with no luck; the terminal gives another error:
xargs: mysql -uanuser -papassword a_database < {}: No such file or directory
Can you help me please?
You don't need ls -l | grep | awk; just use a simple for loop over the *.sql files:
for f in *.sql; do
mysql -uanuser -papassword a_database < "$f"
done
Here is the xargs method working:
ls -lha | grep "sql$" | awk '{print $9}' | xargs -t -n1 -I{} bash -c "mysql -uanuser -papassword a_database < '{}'"
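If you want to keep xargs but avoid embedding the file name inside the quoted command (which breaks on names containing spaces or quotes), you can pass it as a positional parameter instead (a sketch using the same made-up credentials as above):
# Each file name becomes $1 of a fresh sh invocation; the inner shell, not
# xargs, performs the redirection, so there is no ambiguous redirect.
printf '%s\0' *.sql | xargs -0 -n1 sh -c 'mysql -uanuser -papassword a_database < "$1"' _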

BATCH: grep equivalent

I need some help with the batch-file equivalent of grep -v Wildcard and grep -o.
This is my code in shell:
result=`mysqlshow --user=$dbUser --password=$dbPass sample | grep -v Wildcard | grep -o sample`
The batch equivalent of grep (not counting third-party tools like GnuWin32 grep) is findstr.
grep -v finds lines that don't match the pattern. The findstr version of this is findstr /V.
grep -o shows only the part of the line that matches the pattern. Unfortunately, there's no equivalent of this, but you can run the command and then have a check along the lines of:
if %errorlevel% equ 0 echo sample
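Putting the two findstr calls and the errorlevel check together, a batch sketch of the original shell line could look roughly like this (untested; the variable names dbUser/dbPass are carried over from the shell script):
@echo off
setlocal
rem Drop the "Wildcard" line like grep -v, then test for "sample" like grep -o would.
mysqlshow --user=%dbUser% --password=%dbPass% sample | findstr /V "Wildcard" | findstr "sample" >nul
rem findstr sets errorlevel 0 only when the pattern was found.
if %errorlevel% equ 0 (set "result=sample") else (set "result=")
echo result=%result%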

Bash script output JSON variable to file

I am using twurl on Ubuntu's command line to connect to the Twitter Streaming API, and I parse the resulting JSON with the jq processor. I have the following command, which returns the text of tweets sent from London:
twurl -t -d locations=-5.67,50.06,1.76,58.62 language=en -H stream.twitter.com /1.1/statuses/filter.json | jq '.text'
This works great, but I'm struggling to output the result to a file called london.txt. I have tried the following, but still no luck:
twurl -t -d locations=-5.67,50.06,1.76,58.62 language=en -H stream.twitter.com /1.1/statuses/filter.json | jq '.text' > london.txt
As I'm fairly new to Bash scripting, I'm sure I've misunderstood the proper use of '>' and '>>', so if anyone could point me in the right direction that'd be awesome!
twurl -t -d locations=-5.67,50.06,1.76,58.62 language=en -H stream.twitter.com /1.1/statuses/filter.json | jq '.text' > london.txt
With > the file is truncated each time the command writes to it, so previous contents are replaced. If you use >> instead, each write is appended to the end of the file. So try the following rather than the example above; I'm certain it will work:
twurl -t -d locations=-5.67,50.06,1.76,58.62 language=en -H stream.twitter.com /1.1/statuses/filter.json | jq '.text' >> london.txt
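A quick local illustration of the difference (demo.txt is just a throwaway file name):
echo "first"  > demo.txt    # truncates demo.txt, then writes "first"
echo "second" > demo.txt    # truncates again: the file now holds only "second"
echo "third" >> demo.txt    # appends: the file now holds "second" and "third"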
You can also use the tee command to see what is being printed on screen while it is written to the file:
twurl -t -d locations=-5.67,50.06,1.76,58.62 language=en -H stream.twitter.com /1.1/statuses/filter.json | jq '.text'| tee london.txt