ejabberd command creation (match only one XMPP server) - ejabberd

I need some help with ejabberd's ejabberdctl process_rosteritems command.
I want to find all roster entries that belong to the "xmpp.com" Jabber server, that is, entries whose server is ONLY "xmpp.com" and no other XMPP server.
The command
ejabberdctl process_rosteritems delete any any any *#xmpp.com
matches not only xmpp.com but also other servers.
So I need some help creating the correct command.

Sorry, but I don't quite understand your problem.
Let's imagine user1#localhost, who has two local contacts, two contacts from xmpp.com, and two contacts from another host. We can see them all this way:
$ ejabberdctl process_rosteritems list any any *#* *#*
...
user1#localhost user2#localhost
user1#localhost user3#localhost
user1#localhost contact4#xmpp.com
user1#localhost contact5#xmpp.com
user1#localhost contact6#other.com
user1#localhost contact7#other.com
user2#localhost ...
user2#localhost ...
...
Then, if I want only to list (or delete) the contacts of user1#localhost that are from xmpp.com:
$ ejabberdctl process_rosteritems list any any user1#localhost *#xmpp.com
...
user1#localhost contact4#xmpp.com
user1#localhost contact5#xmpp.com
Can you please edit your question with more details, and maybe an example of what you have and what you want?

Let's consider this server as an example:
$ ejabberdctl process_rosteritems list any any *#localhost *#* | grep -v There | grep -v Progress | grep -v Matches | sort
user1#localhost user2#localhost
user1#localhost user4#xmpp.com
user1#localhost user6#other.com
user2#localhost user3#xmpp.com
user3#localhost user66#xmpp.com
user3#localhost user77#xmpp.com
user4#localhost user99#other.com
You want to get the list of users whose contacts are only on xmpp.com? In this case that would be user2#localhost and user3#localhost, right? You can do that:
$ ejabberdctl process_rosteritems list any any *#localhost *#xmpp.com | grep -v There | grep -v Progress | grep -v Matches | sort | awk '{print $1}' | uniq >haveyes.txt
$ ejabberdctl process_rosteritems list any any *#localhost *#* | grep -v There | grep -v Progress | grep -v Matches | sort | grep -v "#xmpp.com" | awk '{print $1}' | uniq >havenot.txt
$ comm -23 haveyes.txt havenot.txt
user2#localhost
user3#localhost
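If you prefer to do it in a single command, the two temporary files can be replaced with bash process substitution. This is just a sketch of the same approach shown above (it assumes bash, since comm reads from the two substituted pipelines; sort -u gives the sorted, deduplicated input that comm needs):
$ comm -23 \
<(ejabberdctl process_rosteritems list any any '*#localhost' '*#xmpp.com' | grep -v -e There -e Progress -e Matches | awk '{print $1}' | sort -u) \
<(ejabberdctl process_rosteritems list any any '*#localhost' '*#*' | grep -v -e There -e Progress -e Matches | grep -v '#xmpp.com' | awk '{print $1}' | sort -u)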

Related

csvgrep from csvkit: search multiple strings

Looking at the docs for csvgrep (https://csvkit.readthedocs.io/en/1.0.2/scripts/csvgrep.html), I don't see a way to search for multiple strings. Am I missing something?
csvgrep -c 1 -m "test" -m " test2" file.csv | csvlook | less -s
You must use regex syntax:
csvgrep -c 1 -r '(test1|test2)' input.csv | csvlook | less -s
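Note that -r performs a regular-expression search, so an unanchored pattern will also match substrings of a cell. If you want the whole cell to equal one of the values, anchoring the alternation should do it (a sketch, with test1 and test2 standing in for your real search strings):
csvgrep -c 1 -r '^(test1|test2)$' input.csv | csvlook | less -s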

OpenShift remote command execution (exec)

I am trying to run the following command from a Windows machine against an OpenShift Docker container running Linux:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token <token> -n dev-hg jcmd \
$(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump \
/tmp/heap1.hprof
It tries to evaluate jcmd $(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump /tmp/heap1.hprof on my local Windows machine, where those Linux commands are not available. Also, I need the process ID of the application running in the container, not one from my local machine.
Any quick help is appreciated.
Try this:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | grep java | grep -v grep | awk '{print \$2}')"
Or even:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | awk '/java/{print \$2}')"
The problem is that the $( ) piece is being interpreted locally. Surrounding it in double quotes won't help, because command substitution is still performed inside double quotes.
You have to replace the double quotes with single quotes (so $( ) is not interpreted locally), run the remote command through a shell, and compensate for the single quotes around the awk program:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token TOKEN -n dev-hg -- /bin/sh -c \
'jcmd $(ps -ef | grep java | grep -v grep | awk '\''{print $2}'\'') GC.heap_dump /tmp/heap1.hprof'
Please add the tags unix and shell to your question, as this is more of a UNIX question than an OpenShift one.
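Another option, if you are running oc from a POSIX shell locally (e.g. Git Bash or WSL rather than cmd.exe), is to keep the double quotes but escape the dollar signs so that the command substitution happens inside the container. This is only a sketch based on the commands above; the [j]ava pattern is simply a trick that avoids the extra grep -v grep step:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <token> -n dev-hg -- /bin/sh -c \
"jcmd \$(ps -ef | awk '/[j]ava/{print \$2}') GC.heap_dump /tmp/heap1.hprof"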

Why so many open file descriptors with MySQL 5.6.38 on CentOS?

I have two MySQL instances running with --open-files-limit=65536, but lsof shows roughly 193644 open file descriptors:
$ lsof -n | grep mysql | wc -l
196410
$ lsof -n | grep mysql | grep ".MYI" | wc -l
83240
$ lsof -n | grep mysql | grep ".MYD" | wc -l
74053
$ sysctl fs.file-max
fs.file-max = 790612
$ lsof -n | wc -l
224647
Why are there so many open file descriptors? What could be the root cause? How can I debug this further?
The problem is with the lsof version. I had lsof-4.87 on CentOS 7, which shows thread information and therefore duplicates the open files once per thread. I switched to lsof-4.82 and the number went down.
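If you want to double-check the real per-process count without changing the lsof version, you can read /proc directly; each process lists its own descriptors there, and threads are not counted separately. A rough sketch, assuming the server binary is named mysqld and that you run it as root:
for pid in $(pidof mysqld); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l)"
done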

Extract href of a specific anchor text in bash

I am trying to get the href of the most recent production release from the ExifTool history page.
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' | grep -o -E "href=[\"'](.*)[\"'].*Version"
Actual output
href="Image-ExifTool-10.36.tar.gz">Version
I want this as the output:
Image-ExifTool-10.36.tar.gz
Using grep -P you can use a lookahead and \K to reset the match start:
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' |
grep -o -P "href=[\"']\K[^'\"]+(?=[\"']>Version)"
Image-ExifTool-10.36.tar.gz
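If your grep does not support -P (for example BSD grep on macOS), a sed substitution over the same pattern also works. A sketch, assuming the newest release is the first href="...">Version match on the page:
curl -s 'http://www.sno.phy.queensu.ca/~phil/exiftool/history.html' |
sed -n 's/.*href="\([^"]*\)">Version.*/\1/p' | head -n 1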

How to log MySQL queries of a specific database - Linux

I have been looking at this post
How can I log "show processlist" when there are more than n queries?
It works fine when I run this command:
mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l
The problem is that it overwrites the file each time.
I also want to run it as a cron job.
I have added this command to the /var/spool/cron/root:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +%F-%H-%M`.log | wc -l) -lt 51 ] && rm plist-`date +%F-%H-%M`.log
but it is not working, or maybe it is saving the log file somewhere outside the root folder.
So my question is: how can I temporarily log all the queries hitting a specific database and table, and save them all in one file?
Note: I am not looking for the slow query log, just a temporary way to see which queries are running against a database.
The solution is:
watch -n 1 "mysqladmin -u root -pXXXXX processlist | grep tablename" | tee -a /root/plist.log
The % character has special meaning in crontab commands, so you need to escape it:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +\%F-\%H-\%M`.log | wc -l) -lt 51 ] && rm plist-`date +\%F-\%H-\%M`.log
If you want to use your original command, but not overwrite the file each time, you can use the -a option of tee to append:
mysql -uroot -e "show full processlist" | tee -a plist-$date.log | wc -l
To run the command every second for a minute, write a shell script:
#!/bin/bash
for i in {1..60}; do
    [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
    sleep 1
done
You can then run this script from cron every minute:
* * * * * /path/to/script
Although if you want to run something continuously like this, cron may not be the best way. You could use /etc/inittab to start the script at boot, and init will automatically restart it if it dies for some reason. Then you would just use an infinite loop:
#!/bin/bash
while :; do
    [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
    sleep 1
done
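For reference, a hypothetical /etc/inittab entry for such a script could look like the following (SysV init only, e.g. CentOS 5/6; on a systemd machine you would instead write a small unit file with Restart=always):
plog:2345:respawn:/path/to/script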