How to change a default expression in Zabbix

By default, Zabbix comes with the following trigger expression, which alerts you when the server's disk has less than 5GB of free space:
{Zabbix server:vfs.fs.size[/,pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"/"} and
(({Zabbix server:vfs.fs.size[/,total].last()}-{Zabbix server:vfs.fs.size[/,used].last()})<5G or {Zabbix server:vfs.fs.size[/,pused].timeleft(1h,,100)}<1d)
I want to know if there is a way to change this 5GB threshold to a slightly larger number, like 20GB for example.
I am new to Zabbix and any help is welcome, thanks.
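For illustration, the free-space threshold is the literal 5G in the second condition of the expression above, so a sketch of the same trigger with a 20GB threshold (same syntax, only the constant changed; the percentage-based {$VFS.FS.PUSED.MAX.CRIT:"/"} part is untouched) would be:
{Zabbix server:vfs.fs.size[/,pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"/"} and
(({Zabbix server:vfs.fs.size[/,total].last()}-{Zabbix server:vfs.fs.size[/,used].last()})<20G or {Zabbix server:vfs.fs.size[/,pused].timeleft(1h,,100)}<1d)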

Related

Is it possible to bypass the limitation on MySQL Azure?

I am using Azure MySQL version 5.6
When I try to import a large MySQL dump file from a Linux environment into the Azure PaaS (Azure Database for MySQL servers) using this command:
pv DBFILE.sql | mysql -u username@mysqlserver -h mysqlservername.mysql.database.azure.com -pPassword DBNAME
I am getting this message:
"The size of BLOB/TEXT data inserted in one transaction is greater
than 10% of redo log size. Increase the redo log size using
innodb_log_file_size."
Is there any way to bypass this error?
I read in the Microsoft documentation that "innodb_log_file_size" is not configurable. Can I split this large dump file into smaller ones and import them all together? Does it make any difference?
The size of the dump file is not the problem. It won't help to split it up.
The problem is that the size of one BLOB or TEXT value on at least one row is greater than 1/10th the size of the innodb log file. You can't split the data to be less than a single BLOB or TEXT value.
According to the Azure documentation you linked to, the value of innodb_log_file_size is fixed at 256MB. This means you cannot import any row with a BLOB or TEXT value of more than 25.6MB. At least you can't import it to an InnoDB table.
The reason is that the redo log file has a fixed size, and the size of the log file creates a limit on the number of modified pages in the InnoDB buffer pool (not one-to-one, though, because the format of redo log records is not the same as pages in the buffer pool). It's kind of an arbitrary ratio, but the limit on BLOB/TEXT values is meant to avoid a giant BLOB wrapping around and overwriting part of itself in a small redo log, which would leave the MySQL server in a state from which it could not recover after a crash. In MySQL 5.5, this was just a recommended limit. In MySQL 5.6, it became enforced by InnoDB, so an INSERT of a BLOB or TEXT that was too large would simply result in an error.
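As a rough pre-check before exporting, you could scan the source tables for any values that exceed 10% of the fixed 256MB redo log; a sketch, where mytable, payload, and id are placeholders for your own table, column, and key:
SELECT id, LENGTH(payload) AS payload_bytes
FROM mytable
WHERE LENGTH(payload) > 0.10 * 256 * 1024 * 1024  -- 25.6MB, i.e. 10% of a 256MB redo log
ORDER BY payload_bytes DESC;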
Amazon RDS used to have a similar restriction years ago. They only supported a fixed size for innodb_log_file_size; I recall it was 128MB. It was not configurable.
I was at an AWS event in San Francisco years ago, and I found an opportunity to talk to the Amazon RDS Product Manager in the hall between sessions. I gave him feedback that leaving this setting at a relatively small value, without the ability to increase it, was too limiting. It meant that one could only insert BLOB/TEXT values of 12.8MB or less.
I'm sure I was not the only customer to give him that feedback. A few months later, an update to RDS allowed that variable to be changed. But you must restart the MySQL instance to apply the change, just as you would if you ran MySQL yourself.
I'm sure that Azure will discover the same thing, and get an earful of feedback from their customers.

MySQL - 'a bulk size specified must be increased' during copy database attempt

I am trying to copy over a database, and have been able to do so for months without issue. Today however, I ran into an error that says 'A BULK size specified must be increased'. I am using SQLYog.
I didn't find much on Google about this, but it seems as though it should be fixed by increasing bulk_insert_buffer_size through something like 'SET SESSION bulk_insert_buffer_size = 1024 * 1024 * 256'. (I tried with GLOBAL instead of SESSION too.)
This has not worked and I am still getting the error, unfortunately. The only other bit of information I found was the source code where the message is generated: https://github.com/Fale/sqlyog/blob/master/src/CopyDatabase.cpp. Unfortunately, I don't really know what to do with that information. I tried looking through the code to find out which MySQL variables (like bulk_insert_buffer_size) were tied to the variables used in the source code, but I wasn't able to follow it effectively.
Any help would be appreciated.
http://faq.webyog.com/content/24/101/en/about-chunks-and-bulks.html says you can specify BULK size:
The settings for the 'export' tool are available from 'preferences' and for the 'backup' 'powertool' the option is provided by the backup wizard.
You should make sure the BULK size is no larger than your MySQL Server's max_allowed_packet config option. This has a default value of 1MB, 4MB, or 64MB depending on your MySQL version.
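For reference, you can check the server's current limit, and raise it if you have the privileges, with something like the following (the 64MB value is just an example; the setting is in bytes and applies to new connections):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- requires SUPER; reconnect to pick it up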
I'm not a user of SQLYog, but I know mysqldump has a similar concept. Mysqldump auto-calculates the max bulk size by reading the max_allowed_packet.
For what it's worth, bulk_insert_buffer_size is not relevant for you unless you're copying into MyISAM tables. But in general, you shouldn't use MyISAM tables.
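If you're not sure whether any MyISAM tables are involved, a quick check against information_schema (mydb is a placeholder for your schema name) would be:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
  AND ENGINE = 'MyISAM';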

Running into MySQL column width limits on localhost but not on AWS server

I have a MySQL table with 650 columns and 1 row. I know this is a bad design (which I inherited) and it WILL be moved to a simple two-column layout, but at the moment I'm trying to diagnose a different problem to be able to understand it. I don't want to fix the design issues if it will mask the system configuration issue I'm currently facing.
Background: I have a system which takes the following steps in building an instance of a web application:
1. Use a skeleton.sql file to build the "version 0" of the database.
2. Run an upgrade.sh script which looks through an "upgrades" directory and...
3. ...runs each of those upgrade files to change the database structure (including adding new columns to the already-too-wide table), in proper order.
The issue is that on my localhost server (WAMP), while running one of the scripts in step 3, I get an error when attempting to add more columns to that table:
Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126.
Again, I know the table is too "wide" in theory, but I don't know any query I can run to calculate how wide it is. Further, I'm not getting this error on my RDS server when I run this, and since I don't know how to get the "width" of the table I don't know how to compare (though given localhost's VARCHARs are populated with FEWER characters, I expect it to be smaller regardless).
Everything I'm reading on StackOverflow says that all tables have the same "width" limit, regardless of engine. I suspect it's a configuration issue, but if this is a hard limit, why is it working on my RDS server but not localhost?
My only thought was character set differences? Have you looked into that? (Sorry, I can't comment; too new a user.)
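For what it's worth, one rough way to compare the declared width between the two servers is to sum the maximum byte lengths of the string columns from information_schema; this doesn't replicate InnoDB's exact row-format accounting, but it does make a character-set difference visible, since a utf8 VARCHAR counts up to three bytes per character. A sketch, with mydb and wide_table as placeholder names:
SELECT COUNT(*) AS string_columns,
       SUM(CHARACTER_OCTET_LENGTH) AS max_declared_bytes
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME = 'wide_table'
  AND CHARACTER_OCTET_LENGTH IS NOT NULL;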

Average query size

I'm doing profiling on an application I've written, and one thing that I want to know is the average query size. There are times when the app sends batch INSERT statements, and one customer with a remote MySQL server (from a shared webhosting provider) had an extremely low max_allowed_packet setting that was out of his control.
I've got the full query log enabled on a dev server but I'm having trouble finding any tool that reports the average query sizes, just so that I'm aware of what we're using. Also, any advice on good query log analyzers is appreciated too.
You could always divide the size of the query log file by the number of lines in the query log file to get a rough idea of your average size. What matters most is maximum, though.
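If it's easier than parsing the file, the general query log on the dev server can also be written to a table and queried directly; a sketch, assuming you're free to change the logging settings there:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- average and maximum statement size, in bytes
SELECT AVG(LENGTH(argument)) AS avg_bytes,
       MAX(LENGTH(argument)) AS max_bytes
FROM mysql.general_log
WHERE command_type = 'Query';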
Why don't you set max_allowed_packet to 1GB and not worry about it?

How to reclaim space after turning on page compression in SQL 2008?

I have just turned on page compression on a table (SQL 2008 Ent) using the following command:
ALTER TABLE [dbo].[Table1] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE)
The hard drive now has 50GB less free space than before. I'm guessing that I need to run a command to reclaim the space. Does anyone know it?
I feel embarrassed even asking this question, but is it something that could be fixed by shrinking the database in question? As it compressed the pages, perhaps it left free space scattered throughout the file, and the data files just need to be condensed and shrunk to reclaim the space...
If it created a new, compressed copy of the table and then removed the old one from the file, but didn't shrink the file internally, this might also explain your sudden lack of space on the drive.
If this is the case, then a simple "DBCC SHRINKDATABASE('my_database')" should do the trick. NOTE: This may take a long time, and lock the database during that time so as to prevent access, so schedule it wisely.
Have you checked the table size using sp_spaceused?
Disk space used does not equal space used by data. The compression will have affected the log file size (everything has to be logged) and required some free working space (like the rule of thumb that an index rebuild requires free space equal to 1.2 times the largest table).
Another option is that you need to rebuild the clustered index because it's fragmented. This compacts data and is the only way to reclaim space for text columns.
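For example, checking the table's actual space usage and rebuilding the clustered index while keeping the compression setting might look like this (the index name is a placeholder for your own):
EXEC sp_spaceused N'dbo.Table1';
ALTER INDEX [PK_Table1] ON [dbo].[Table1]
REBUILD WITH (DATA_COMPRESSION = PAGE);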
Also, read Linchi Shea's articles on data compression.