Zabbix issues don't include the IP address?

[comparison screenshot of the two alert messages]
The first Zabbix server's sendSMS log:
$2=PROBLEM: Free disk space is less than 20% on volume /boot
$3=ZQDBA: Free disk space is less than 20% on volume /boot
The second Zabbix server's sendSMS log:
$2=PROBLEM: 1xxxxx.254 Free disk space is less than 20% on volume /boot
$3=ZQ: 1xxxx254 Free disk space is less than 20% on volume /boot
The first server can't get the host IP in the issue.
Thanks.

That seems to be caused by the trigger configuration: one of the triggers uses the {HOST.HOST} or {HOST.NAME} macro in its name, and the other does not.
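For illustration, a sketch of the difference (the exact trigger names here are assumptions based on your log output):

Free disk space is less than 20% on volume /boot
{HOST.NAME}: Free disk space is less than 20% on volume /boot

When Zabbix builds the alert message, {HOST.NAME} expands to the host's visible name; {HOST.IP} would give you the interface address directly. Add one of these macros to the first server's trigger name and its messages should include the host as well.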

Related

How to change a default expression in Zabbix

By default, Zabbix comes with the following trigger expression, which alerts you when the server's disk has less than 5 GB of free space:
{Zabbix server:vfs.fs.size[/,pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"/"} and
(({Zabbix server:vfs.fs.size[/,total].last()}-{Zabbix server:vfs.fs.size[/,used].last()})<5G or {Zabbix server:vfs.fs.size[/,pused].timeleft(1h,,100)}<1d)
I want to know if there is a way to change this 5 GB to a slightly larger number, like 20 GB for example.
I am new to Zabbix and any help is welcome, thanks.
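For reference, a hedged sketch of the edited trigger expression (only the 5G constant is changed to 20G; clone or edit the trigger in the template to apply it):

{Zabbix server:vfs.fs.size[/,pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"/"} and
(({Zabbix server:vfs.fs.size[/,total].last()}-{Zabbix server:vfs.fs.size[/,used].last()})<20G or {Zabbix server:vfs.fs.size[/,pused].timeleft(1h,,100)}<1d)

Zabbix accepts K, M, G, and T suffixes in trigger expression constants, so 20G is valid as written.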

Cassandra Commit log ideal size and default size clarification

From the docs:
commitlog_total_space_in_mb
(Default: 32MB for 32-bit JVMs, 8192MB for 64-bit JVMs) Total space used for commit logs. If the used space goes above this value, Cassandra rounds up to the next nearest segment multiple and flushes memtables to disk for the oldest commitlog segments, removing those log segments. This reduces the amount of data to replay on start-up, and prevents infrequently-updated tables from indefinitely keeping commitlog segments. A small total commitlog space tends to cause more flush activity on less-active tables.
And in cassandra.yaml it is specified:
Total space to use for commit logs on disk.
If space gets above this value, Cassandra will flush every dirty CF
in the oldest segment and remove it. So a small total commitlog space
will tend to cause more flush activity on less-active columnfamilies.
The default value is the smaller of 8192, and 1/4 of the total space
of the commitlog volume.
commitlog_total_space_in_mb: 8192
My question is: what does the following statement mean?
The default value is the smaller of 8192, and 1/4 of the total space
of the commitlog volume.
My commit log is on the same hard drive as the data, but on a different partition, and I have allocated 70 GB to that partition. Should I reduce it to 8 GB, or to 32 GB? The "1/4 of the total space of the commitlog volume" part confuses me, because I'm not sure whether it refers to that partition's space.
I have the default value for commitlog_total_space_in_mb, so what should the ideal commit log partition size be?
P.S.: I know that these two should be on different drives for better performance.
As for your first question about the cassandra.yaml statement, the commit log space will be:
min(8192 MB, 0.25 * total space of the commit log volume)
In your case, since you have allocated 70 GB, a quarter of the volume is 17.5 GB, so the smaller value wins and the commit log space will be 8 GB.
You can avoid that calculation by setting the size explicitly in cassandra.yaml.
As for your second question about the optimal size: in my testing, setting the commit log directory to under 8 GB produces a warning about insufficient disk size, so make it larger than 8 GB and remember that you can always increase it later.
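If you would rather not rely on that calculation at all, a minimal sketch of the cassandra.yaml setting (the 8192 value simply mirrors the default; choose whatever cap fits your disk):

# Cap total commit log space explicitly instead of relying on
# min(8192 MB, 1/4 of the commit log volume)
commitlog_total_space_in_mb: 8192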

MySQL - High CPU long after executing update

I have a MySQL database running in Amazon RDS that serves a site. Around 80 MB of data in one of the tables gets replaced hourly: I load the new data into another table, use RENAME TABLE to swap the tables, and then drop the old one.
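For context, a sketch of the hourly swap (table names are illustrative, not the real schema):

-- load the replacement data into a staging table
CREATE TABLE site_data_new LIKE site_data;
-- ... bulk-load the ~80 MB of new rows into site_data_new ...

-- swap the tables atomically, then drop the old copy
RENAME TABLE site_data TO site_data_old, site_data_new TO site_data;
DROP TABLE site_data_old;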
Most of the time everything is fine: when the hourly update happens, the CPU spikes to around 10% and then drops back to 2% or 3%, which is the consumption under normal site traffic.
Once in a while (with no pattern), however, the CPU spikes to almost 100% right at the time of the update and stays very high until the next update an hour later, long after the update has completed. The update itself takes seconds. During the entire hour, traffic to the site is normal. Also, during that time, SHOW PROCESSLIST is empty: no stuck processes.
A couple of times, I manually triggered the update before the hour ended and the CPU went back to normal.
My question is this: assuming there is something inefficient going on with the data loading, can a bad query or update cause the CPU to stay high long after it finishes executing?
I will run SHOW ENGINE INNODB STATUS next time it happens. When I do, what should I be looking for?
Some details, if useful:
RDS m4.large instance running in Multi-AZ
MySQL version 5.6.27
InnoDB engine
Connection pool used, with the default of 8 connections

Delete records from table to reduce disk space

I tried to free up disk space by deleting some old records, but the free space actually shrank while the DELETE query was running, so I cancelled the query. The disk space is still reduced.
What do I need to do? Is there another way?
Deleting records does not automatically regain disk space, as the space allocated to the database files stays the same. In fact, you may increase the size of the transaction log!
If you have deleted a bunch of records, it may be possible to regain some disk space.
First you'll need to determine how much disk space is actually used and how much is available.
The following script should tell you (taken from DBA Stack Exchange):
SELECT RTRIM(name) AS [Segment Name],
       fileid AS [File Id],
       groupid AS [Group Id],
       filename AS [File Name],
       CAST(size/128.0 AS DECIMAL(10,2)) AS [Size in MB],
       CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2)) AS [Space Used],
       CAST(size/128.0 - (FILEPROPERTY(name, 'SpaceUsed')/128.0) AS DECIMAL(10,2)) AS [Available Space],
       CAST((CAST(FILEPROPERTY(name, 'SpaceUsed')/128.0 AS DECIMAL(10,2)) / CAST(size/128.0 AS DECIMAL(10,2))) * 100 AS DECIMAL(10,2)) AS [Percent Used]
FROM sysfiles
ORDER BY groupid DESC
You can then use the DBCC SHRINKFILE command to shrink the files, e.g.
DBCC SHRINKFILE(1, 240000)
would shrink the file with fileid = 1 to 240,000 MB (roughly 240 GB); the target size argument is given in MB.
Note, however:
Shrinking database files is a bad idea in general, as it causes indexes to become fragmented and hence causes performance problems.
If you do resort to shrinking the file, make sure a reasonable amount of free space is left after the shrink - probably at least 15%.
If at all possible, do not shrink the database file. Shrinking the transaction log file is less of a problem (ensure it is truncated first so there is space to shrink the file).
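If you do shrink, a hedged follow-up for the index fragmentation mentioned above (the table name is illustrative):

-- SHRINKFILE moves pages around and fragments indexes,
-- so rebuild the indexes on affected tables afterwards
ALTER INDEX ALL ON dbo.YourTable REBUILD;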
Read the TechNet article about managing the transaction log: http://technet.microsoft.com/en-us/library/ms345382(v=sql.105).aspx
Maybe consider getting more storage space for your database...

Will a MySQL server stop working if it runs out of disc space, but has a lot of data_free?

I have several tables with very high-volume inserts and deletes. Data stays in a table for about 2 hours before being removed. At any one point there is far less data on the server than available disc space.
However, since disc space isn't freed on row DELETEs, the tables just keep increasing in size.
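For reference, this is how I watch that unused space grow (the schema name is a placeholder):

-- data_free is space allocated to a table but not currently used by rows
SELECT table_name,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'mydb';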
I can get the table sizes back down to reasonable levels by running OPTIMIZE TABLE on them. However, this takes down the system for longer than I'd like and doesn't seem like an efficient solution.
Do I need to do this? Will MySQL recover the disc space in data_free itself when it's about to run out? I don't want to risk just waiting to find out, as I know how difficult it is to recover a MySQL server once it runs out of disc space.
No, MySQL will not recover the disc space. If you run out of disc space, the system will come to a complete standstill and you may lose data.
Some tables may also be marked as crashed and will need to be repaired.
(This happened to me once or twice...)