Why so many open file descriptors with MySQL 5.6.38 on CentOS? - mysql

I have two MySQL instances running with --open-files-limit=65536, but the lsof command reports ~193644 open file descriptors:
$ lsof -n | grep mysql | wc -l
196410
$ lsof -n | grep mysql | grep ".MYI" | wc -l
83240
$ lsof -n | grep mysql | grep ".MYD" | wc -l
74053
$ sysctl fs.file-max
fs.file-max = 790612
$ lsof -n | wc -l
224647
Why are there so many open file descriptors? What could be the root cause, and how can I debug this further?

The problem was the lsof version. I had lsof-4.87 on CentOS 7, which lists thread information and therefore counts each open descriptor once per thread. I switched to lsof-4.82 and the number dropped.
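As a cross-check, counting the entries in /proc/<pid>/fd is not affected by lsof's per-thread listing, since each descriptor appears once per process. A minimal sketch, assuming the server process is named mysqld and you run it as root:
for pid in $(pidof mysqld); do
    echo "$pid: $(ls /proc/$pid/fd | wc -l)"
done
If these numbers stay well under --open-files-limit, the lsof count is inflated rather than the server actually holding that many descriptors.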

Openshift remote command execution (exec)

I am trying to run the following command from a Windows machine against an OpenShift Docker container running Linux:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token <token> -n dev-hg jcmd \
$(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump \
/tmp/heap1.hprof
It is trying to evaluate jcmd $(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump /tmp/heap1.hprof on my local Windows machine, which does not have the Linux commands. Also, I need the process ID of the application running in the container, not one on my local machine.
Any quick help is appreciated.
Try this:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | grep java | grep -v grep | awk '{print \$2}')"
Or even:
oc exec -it openjdk-app-1-l9nrx --server https://xxx.cloud.ibm.com:30450 \
--token <dont-share-your-token> -n dev-hg -- /bin/sh -c \
"jcmd $(ps -ef | awk '/java/{print \$2}')"
The problem is that the $( ) piece is being interpreted locally. Surrounding it in double quotes won't help, as command substitution is still interpreted inside double quotes.
You have to replace your double quotes with single quotes (so $( ) is not interpreted locally), and then compensate for awk's single quotes:
oc exec openjdk-app-1-l9nrx -i -t --server https://xxx.cloud.ibm.com:30450 \
--token TOKEN -n dev-hg \
'jcmd $(ps -ef | grep java | grep -v grep | awk '\''{print $2}'\'') GC.heap_dump /tmp/heap1.hprof'
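For reference, each '\'' sequence closes the single-quoted string, emits a literal single quote, and reopens the string, so the remote side receives the awk program intact. A quick local illustration (just echoing the string that gets sent):
echo 'jcmd $(ps -ef | grep java | grep -v grep | awk '\''{print $2}'\'') GC.heap_dump /tmp/heap1.hprof'
# prints: jcmd $(ps -ef | grep java | grep -v grep | awk '{print $2}') GC.heap_dump /tmp/heap1.hprof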
Please add the tags unix and shell to your question, as this is more of a UNIX question than an OpenShift one.

myisamchk: error: 140 when opening MyISAM-table

I get this error when running a repair:
myisamchk: error: 140 when opening MyISAM-table '/var/lib/mysql/zibarsho_karno/wp_yoast_seo_links.MYI'
How can I fix this?
ls *.MYI | sed 's/\.[^.]*$//' | xargs myisamchk -F -U
This saved me here because of the basename "extra operand" error and other issues. Note that -F -U are the --fast and --update-state flags; you can run the command without them.
This is a bug that has been reported since MySQL 5.6 and it still happens in 8.0.11, so in the meantime you can use the workaround: do not pass the .MYI extension.
myisamchk --force --update-state /var/lib/mysql/zibarsho_karno/wp_yoast_seo_links
As user Jesus Uzcanga already mentioned, it is an old bug which hasn't been fixed yet [current version is 8.0.15].
These commands are a workaround when you run them directly in the directory where the .MYI files are located:
ls *.MYI | xargs basename -s .MYI | xargs myisamchk
It strips the extension and runs myisamchk on each MyISAM table.
Or try this:
find /var/lib/mysql/*/* -name '*.MYI' | sed -e 's/\.MYI$//' | xargs -I{} myisamchk -r -f -o {}
ls *.MYI | xargs basename -s .MYI | xargs -I{} myisamchk -r --force {}
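Equivalently, a plain shell loop avoids xargs and the extension handling altogether; a minimal sketch, assuming /var/lib/mysql is the data directory and the tables are not currently being written to:
for idx in /var/lib/mysql/*/*.MYI; do
    myisamchk --force --update-state "${idx%.MYI}"
done
The ${idx%.MYI} expansion strips the suffix, so myisamchk is given the table path without the extension, which is what works around the bug.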

Ping flood (ping -f) vs default ping (e.g. ping -i 0.2)

We currently have a Zabbix app running on CentOS Linux that we use to log our network RTT and packet loss. We got into an internal discussion about what kind of ping we should use.
Ping Flood:
ping -f -c 500 -s 1500 -I {$HOSTVLAN2} {$DEST1IPSEC} | grep packet | awk -F" " '{print $6}' | sed -e s'/%//' -e '/^\s*$/d'
'Default' ping:
ping -c 100 -i 0.2 -s 1500 -I {$HOSTVLAN2} {$DEST1IPSEC} | grep packet | awk -F" " '{print $6}' | sed -e s'/%//' -e '/^\s*$/d'
That's a screenshot we made to compare the packet-loss results.
So we would like an outside view on this case. What do you think?
We touched on several topics regarding network load, DDoS, reliable packet-loss values, etc.
Thanks in advance.
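One hedged note on the parsing used in both commands: the field position of the loss figure can differ between ping implementations, so matching the token containing % on the "packet loss" summary line is more robust, for example:
ping -c 100 -i 0.2 -s 1500 -I {$HOSTVLAN2} {$DEST1IPSEC} \
  | awk '/packet loss/ { for (i = 1; i <= NF; i++) if ($i ~ /%/) { gsub("%", "", $i); print $i } }'
This prints just the numeric loss percentage, whichever ping variant produced the summary line.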

How to log MySQL queries for a specific database - Linux

I have been looking at this post:
How can I log "show processlist" when there are more than n queries?
It works fine when I run this command:
mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l
The problem is that it overwrites the file.
I also want to run it as a cron job.
I have added this command to /var/spool/cron/root:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +%F-%H-%M`.log | wc -l) -lt 51 ] && rm plist-`date +%F-%H-%M`.log
but it is not working, or maybe it is saving the log file somewhere outside the root folder.
So my question is: how can I temporarily log all queries against a specific database and a specific table, and save the complete queries in one file?
Note: I am not looking for the slow query log, just a temporary way to see which queries are running against a database.
The solution is:
watch -n 1 "mysqladmin -u root -pXXXXX processlist | grep tablename" | tee -a /root/plist.log
The % character has special meaning in crontab commands, so you need to escape it:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +\%F-\%H-\%M`.log | wc -l) -lt 51 ] && rm plist-`date +\%F-\%H-\%M`.log
If you want to use your original command, but not overwrite the file each time, you can use the -a option of tee to append:
mysql -uroot -e "show full processlist" | tee -a plist-$date.log | wc -l
To run the command every second for a minute, write a shell script:
#!/bin/bash
for i in {1..60}; do
    [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
    sleep 1
done
You can then run this script from cron every minute:
* * * * * /path/to/script
Although if you want to run something continuously like this, cron may not be the best way. You could use /etc/inittab to run the script when the system boots, and it will automatically restart it if it dies for some reason. Then you would just use an infinite loop:
#!/bin/bash
while :; do
    [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
    sleep 1
done
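On CentOS 7 and later, /etc/inittab is no longer used for this; a systemd unit with Restart=always gives the same boot-time start and automatic restart. A minimal sketch (the unit name and script path are placeholders):
# /etc/systemd/system/plist-logger.service (hypothetical name and path)
[Unit]
Description=Continuously log the MySQL processlist

[Service]
ExecStart=/path/to/script
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable --now plist-logger.service.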

Putting the data passed to xargs twice in one line

tmp-file contains:
database_1
database_2
database_3
I want to run a command like "mysqldump DATABASE > database.sql && gzip database.sql" for each line in the above file.
I've got as far as cat /tmp/database-list | xargs -L 1 mysqldump -u root -p
I guess I want to know how to insert the data passed to xargs more than once (and not just at the end).
EDIT: the following command will dump each database into its own .sql file, then gzip them.
mysql -u root -pPASSWORD -B -e 'show databases' | sed -e '$!N; s/Database\n//' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql
In your own example you use && to run two commands on one line - so why not do
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql && gzip db.sql'
if you really want to do it all in one line using xargs only. Though I believe that
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql'; gzip *.sql
would make more sense.
If you have a multi-core CPU (most of us do these days), then GNU Parallel http://www.gnu.org/software/parallel/ may reduce the run time:
mysql -u root -pPASSWORD -B -e 'show databases' \
| sed -e '$!N; s/Database\n//' \
| parallel -j+0 "mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql"
-j+0 will run as many jobs in parallel as you have CPU cores.
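If installing GNU Parallel is not an option, plain xargs can also run jobs concurrently with -P; a hedged sketch assuming the same credentials (-N suppresses the Database header, so the sed step is not needed):
mysql -u root -pPASSWORD -B -N -e 'show databases' \
  | xargs -P4 -I{} sh -c 'mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql.gz'
-P4 runs up to four dumps at once; adjust it to the number of CPU cores.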