Zabbix server and huge database files - MySQL

I have a Zabbix server with a MariaDB database. Today both services (Zabbix and the database) failed, and I found that the root partition had no storage remaining. I sorted the database files by size; the result:
12G /var/lib/mysql/zabbix/trends_uint.ibd
8.9G /var/lib/mysql/zabbix/events.ibd
8.8G /var/lib/mysql/zabbix/trends.ibd
6.3G /var/lib/mysql/zabbix/history.ibd
6.1G /var/lib/mysql/zabbix/history_uint.ibd
2.9G /var/lib/mysql/zabbix/event_recovery.ibd
168M /var/lib/mysql/zabbix/history_str.ibd
And the output of df -h:
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda1 xfs 50G 50G 352K 100% /
How can I delete these files and start the services again?
Thanks in advance.

If you delete these files, you will break Zabbix. Expand the partition if you can. Depending on how long your data retention is, a Zabbix database can grow quite large. Using PostgreSQL with TimescaleDB can help reduce space usage.
If you really want to start over, delete the folder /var/lib/mysql/zabbix/, restart the database service, and create a new zabbix database. All your Zabbix configuration and metrics data will be lost.
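To see how retention drives database size, here is a rough back-of-envelope sketch in shell. The ~90 bytes per history value is an assumed average row cost (check the database sizing section of the Zabbix documentation for your version), and the nvps and retention figures are made-up example numbers:

```shell
# Rough history-table sizing sketch. All inputs are hypothetical examples:
nvps=100           # new values per second collected by the server
history_days=90    # housekeeper retention for raw history
bytes_per_value=90 # assumed average storage cost per history row
size_gb=$(( nvps * 86400 * history_days * bytes_per_value / 1024 / 1024 / 1024 ))
echo "approx history size: ${size_gb}G"
```

At 100 values/second and 90 days of retention this already lands in the tens of gigabytes, which is why shrinking the housekeeper retention periods (or expanding the partition) matters more than deleting files.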

Related

How to track disk usage on Container-Optimized OS

I have an application running on a Container-Optimized OS based Compute Engine instance.
My application runs every 20 minutes, fetches and writes data to a local file, then deletes the file after some processing. Note that each file is less than 100 KB.
My boot disk size is the default 10 GB.
I run into a "no space left on device" error every month or so while attempting to write the file locally.
How can I track disk usage?
I manually checked the size of the folders and it seems that the bulk of the space is taken by /mnt/stateful_partition/var/lib/docker/overlay2.
my-vm / # sudo du -sh /mnt/stateful_partition/var/lib/docker/*
20K /mnt/stateful_partition/var/lib/docker/builder
72K /mnt/stateful_partition/var/lib/docker/buildkit
208K /mnt/stateful_partition/var/lib/docker/containers
4.4M /mnt/stateful_partition/var/lib/docker/image
52K /mnt/stateful_partition/var/lib/docker/network
1.6G /mnt/stateful_partition/var/lib/docker/overlay2
20K /mnt/stateful_partition/var/lib/docker/plugins
4.0K /mnt/stateful_partition/var/lib/docker/runtimes
4.0K /mnt/stateful_partition/var/lib/docker/swarm
4.0K /mnt/stateful_partition/var/lib/docker/tmp
4.0K /mnt/stateful_partition/var/lib/docker/trust
28K /mnt/stateful_partition/var/lib/docker/volumes
TL;DR: use Stackdriver Monitoring and create an alert for disk usage.
Since you are using COS images, you can enable the Stackdriver Monitoring agent simply by setting the google-monitoring-enabled key to true in the GCE instance metadata. To do so, run:
gcloud compute instances add-metadata instance-name --metadata=google-monitoring-enabled=true
Replace instance-name with the name of your instance. Remember to restart your instance for the change to take effect. You don't need to install the Stackdriver Monitoring agent, since it is already installed by default in COS images.
Then you can use the disk usage metric to monitor the usage of your disk.
You can create an alert to get a notification each time the usage of the partition reaches a certain threshold.
Since you are in the cloud, it is usually best to use cloud resources to solve cloud issues.
Docker uses /var/lib/docker to store your images, containers, and local named volumes. Deleting this can result in data loss and possibly stop the engine from running. The overlay2 subdirectory specifically contains the various filesystem layers for images and containers.
To clean up unused containers and images, run:
docker system prune
Monitor the directory with watch:
sudo watch "du -sh /mnt/stateful_partition/var/lib/docker/*"
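If you also want a lightweight local check to run from cron alongside the Stackdriver alert, something like this sketch works. The 80% threshold and the choice of mount point are assumptions, and `df --output` requires GNU coreutils:

```shell
# Print a warning when a filesystem's use% meets a threshold.
# The mount point and threshold passed below are example values.
check_disk() {
  local mount=$1 threshold=$2
  local usage
  # df --output=pcent prints a header line plus the percentage; keep digits only
  usage=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
  if [ "$usage" -ge "$threshold" ]; then
    echo "WARN: ${mount} at ${usage}%"
  else
    echo "OK: ${mount} at ${usage}%"
  fi
}
check_disk / 80
```

From cron, you could pipe the WARN lines to a mailer or webhook; the Stackdriver alert remains the primary mechanism.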

Error 28 from storage engine with MySQL

I am having a storage issue. I received this error:
Got error 28 from storage engine
I have checked the storage capacity; it was still available and not full. What can be the reason for this? I have checked everything with no success.
It is possible that I am running out of space in the main MySQL data directory, or in the MySQL tmp directory. Can someone tell me how to find their locations so I can check them too?
TL;DR
Issue the following commands to inspect the location of your server's data and temporary directories respectively:
SHOW GLOBAL VARIABLES LIKE 'datadir';
SHOW GLOBAL VARIABLES LIKE 'tmpdir';
The values of these variables are typically absolute paths (relative to any chroot jail in which the server is running), but if they happen to be relative paths then they will be relative to the working directory of the process that started the server.
However...
As documented under The MySQL Data Directory (emphasis added):
The following list briefly describes the items typically found in the data directory ...
Some items in the preceding list can be relocated elsewhere by reconfiguring the server. In addition, the --datadir option enables the location of the data directory itself to be changed. For a given MySQL installation, check the server configuration to determine whether items have been moved.
You may therefore also wish to inspect the values of a number of other variables, including:
pid_file
ssl_%
%_log_file
innodb_data_home_dir
innodb_log_group_home_dir
innodb_temp_data_file_path
innodb_undo_directory
innodb_buffer_pool_filename
If your server is not responsive...
You can also inspect the server's startup configuration.
As documented under Specifying Program Options, the server's startup configuration is determined "by examining environment variables, then by processing option files, and then by checking the command line" with later options taking precedence over earlier options.
The documentation also lists the locations of the Option Files Read on Unix and Unix-Like Systems, should you require it. Note that the sections of those files that the server reads are determined by the manner in which the server is started, as described in the second and third paragraphs of Server Command Options.
Once you have found the locations where MySQL stores files, run a command in the shell:
df -Ph <pathname>
where <pathname> is each of the locations you want to test. Some may be on the same disk volume, so they'll show up as the same filesystem in the df output.
[vagrant@localhost ~]$ mysql -e 'select @@datadir'
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
[vagrant@localhost ~]$ df -Ph /var/lib/mysql
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 38G 3.7G 34G 10% /
This tells me that the disk volume for my datadir is the root volume, including the top-level directory "/" and everything beneath that. The volume is 10% full, with 34G unused.
If the volume holding your datadir reaches 100%, you'll start seeing errno 28 issues when you insert new data and MySQL needs to expand a tablespace or write to a log file.
In that case, you need to figure out what's taking so much space. It might be something under the MySQL directory, or, as in my case, your datadir might be part of a larger disk volume where all your other files exist. In that case, any accumulation of files on the system can cause the disk to fill up: log files or temp files, for example, even if they're not related to MySQL.
I'd start at the top of the disk volume and use du to figure out which directories are so full.
[vagrant@localhost ~]$ sudo du -shx /*
33M /boot
34M /etc
40K /home
28M /opt
4.0K /tmp
2.0G /usr
941M /vagrant
666M /var
Note: if your df command told you that your datadir is on a separate disk volume, you'd start at that volume's mount point. The space used by one disk volume does not count toward another disk volume.
Now I see that /usr is taking the most space, of top-level directories. Drill down and see what's taking space under that:
[vagrant@localhost ~]$ sudo du -shx /usr/*
166M /usr/bin
126M /usr/include
345M /usr/lib
268M /usr/lib64
55M /usr/libexec
546M /usr/local
106M /usr/sbin
335M /usr/share
56M /usr/src
Keep drilling down level by level.
Usually the culprit ends up being pretty clear, like some huge 500G log file in /var/log that has been growing for months.
An example of a typical culprit is the http server logs.
Re your comments:
It sounds like you have a separate storage volume for your database storage. That's good.
You just added the du output to your question above. I see that in your 1.4T disk volume, the largest single file by far is this:
1020G /vol/db1/mysql/_fool_Gerg_sql_200e_4979_main_16419_2_18.tokudb$
This appears to be a TokuDB tablespace. There's information on how TokuDB handles full disks here: https://www.percona.com/doc/percona-server/LATEST/tokudb/tokudb_faq.html#full-disks
I would not remove those files. I'm not as familiar with TokuDB as I am with InnoDB, but I assume those files are important datafiles. If you remove them, you will lose part of your data and you might corrupt the rest of your data.
I found this post, which explains in detail what the files are used for: https://www.percona.com/blog/2014/07/30/examining-the-tokudb-mysql-storage-engine-file-structure/
The manual also says:
Deleting large numbers of rows from an existing table and then closing the table may free some space, but it may not. Deleting rows may simply leave unused space (available for new inserts) inside TokuDB data files rather than shrink the files (internal fragmentation).
So you can DELETE rows from the table, but the physical file on disk may not shrink. Eventually, you could free enough space that you can build a new TokuDB data file with ALTER TABLE <tablename> ENGINE=TokuDB ROW_FORMAT=TOKUDB_SMALL; (see https://dba.stackexchange.com/questions/48190/tokudb-optimize-table-does-not-reclaim-disk-space)
But this will require enough free disk space to build the new table.
So I'm afraid you have painted yourself into a corner. You no longer have enough disk space to rebuild your large table. You should never let the free disk space get smaller than the space required to rebuild your largest table.
At this point, you probably have to use mysqldump to dump data from your largest table. Not necessarily the whole table, but just what you want to keep (read about the mysqldump --where option). Then DROP TABLE to remove that large table entirely. I assume that will free disk space, where using DELETE won't.
You don't have enough space on your db1 volume to save the dump file, so you'll have to save it to another volume. It looks like you have a larger volume on /vol/cbgb1, but I don't know if it's full.
I'd dump the whole thing, for archiving purposes. Then dump again with a subset.
mkdir /vol/cbgb1/backup
mysqldump fool Gerg | gzip -c > /vol/cbgb1/backup/Gerg-dump-full.sql.gz
mysqldump fool Gerg --where "id > 8675309" | gzip -c > /vol/cbgb1/backup/Gerg-dump-partial.sql.gz
I'm totally guessing at the arguments to --where. You'll have to decide how you want to select for a partial dump.
After the big table is dropped, reload the partial data you dumped:
gunzip -c /vol/cbgb1/backup/Gerg-dump-partial.sql.gz | mysql fool
If there are any commands I've given in my examples that you don't already know well, I suggest you learn them before trying them. Or find someone to pair with who is more familiar with those commands.

MySQL my.cnf config setup for MyISAM

I currently have a Cloud based server with the following config.
CentOS 7 64-Bit
CPU:8 vCore
RAM:16 GB
MariaDB/MySQL 5.5.5
Unfortunately, I've inherited a MyISAM database and tables that I have no authority to convert to InnoDB, even though the application performs many writes from many connections. The data is WordPress posts, with the typical large text and photos.
I'm experimenting with my.cnf config changes and was wondering if the config I've developed here makes use of the resources in the most efficient way. Is there anything glaring I could increase/decrease to squeeze out more performance?
key_buffer_size=4G
thread_cache_size = 128
bulk_insert_buffer_size=256M
join_buffer_size=64M
max_allowed_packet=128M
query_cache_limit=128M
read_buffer_size=16M
read_rnd_buffer_size=16M
sort_buffer_size=16M
table_cache=128
tmp_table_size=128M
This will depend entirely on the type of data you are storing, the structure and size of your tables and the type of usage your database has. Not to mention the amount of available RAM and the type of disks your server has.
The best recommendation, if you have shell access to the server (which I assume you must, otherwise you couldn't change my.cnf) is to download the mysqltuner script from major.io
Run this script as a user with privileges to access your database, preferably with root privileges on MySQL too (ideally run it under sudo or as root). It will analyse your database access patterns since MySQL's last restart and then give you recommendations for changing the options in my.cnf.
It isn't perfect, but it'll get you much further, and more quickly, than anyone here trying to guess what values would be appropriate for your use case.
And, while not trying to pre-empt the results, I wouldn't be surprised if mysqltuner recommends that you drastically increase the size of your join buffer, table_cache and query_cache_limit.
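For what it's worth, the key_buffer_size=4G in the question already matches the common rule of thumb of roughly 25% of RAM for a server doing mostly MyISAM work. A quick sketch of that arithmetic (the 25% figure is a rule of thumb, not a guarantee, and assumes the box is dedicated to the database):

```shell
# Rule-of-thumb key_buffer_size for a mostly-MyISAM server: ~25% of RAM.
ram_mb=16384                     # 16 GB, as in the question
key_buffer_mb=$(( ram_mb / 4 ))  # 25% of total RAM
echo "key_buffer_size=${key_buffer_mb}M"
```

mysqltuner will refine this based on your actual key-cache hit rate, which is why running it beats static formulas.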

MySQL 5.6 Linux vs Windows performance

The command below takes 2-3 seconds on a Linux MySQL 5.6 server running PHP 5.4:
exec("mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file");
On Windows with a similar configuration it takes 10-15 seconds. The Windows machine has a lot more RAM (16 GB) and a similar hard drive. I installed MySQL 5.6 and made no configuration changes. This is on Windows Server 2012.
What are configurations I can change to fix this?
The database file creates about 40 innodb tables with very minimal inserts.
EDIT: Here is the file I am running:
https://www.dropbox.com/s/uguzgbbnyghok0o/database_14.4.sql?dl=0
UPDATE: On Windows 8 and 7 it was 3 seconds, but on Windows Server 2012 it is 15+ seconds. I disabled System Center 2012 and that made no difference.
UPDATE 2:
I also tried killing almost every service except for MySQL and IIS, and it still performed slowly. Is there something in Windows Server 2012 that causes this to be slow?
Update 3
I tried disabling write-cache buffer flushing and performance is now great.
I didn't have to do this on the other machines I tested with. Does this indicate a bottleneck with how the disk is set up?
https://social.technet.microsoft.com/Forums/windows/en-US/282ea0fc-fba7-4474-83d5-f9bbce0e52ea/major-disk-speed-improvement-disable-write-cache-buffer-flushing?forum=w7itproperf
That is why we call it the LAMP stack, and no doubt why MySQL is so much more popular on Linux than on Windows. But that has more to do with stability and safety; performance-wise, the difference should be minimal. While a Microsoft professional could tune Windows Server explicitly for MySQL by enabling and disabling services, we would rather see the configuration of your my.ini. Contributing factors to consider for MySQL on Windows:
Windows services and policies are sometimes a big impediment to performance because of all sorts of restrictions and protections.
We should also take the Apache (httpd.conf) and PHP (php.ini) configuration into account, as MySQL is tightly coupled with them.
Antivirus: better to disable this when benchmarking performance.
Consider these parameters in my.ini, since you have 40 InnoDB tables:
innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, query_cache_size, innodb_flush_method, innodb_log_file_size, innodb_file_per_table
For example: if the file size of ib_logfile0 is 524288000 bytes, then 524288000 / 1048576 = 500, so innodb_log_file_size should be 500M.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT (note: O_DIRECT is a Unix value; on Windows, MySQL 5.6 defaults to async_unbuffered, which is usually appropriate)
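Pulling those parameters together, here is a sketch of what the relevant my.ini section might look like. Every value is an example to be tuned for the machine; the buffer pool figure assumes a mostly dedicated database server, and on Windows the flush method is best left at its platform default:

```ini
[mysqld]
# ~50-70% of RAM on a dedicated server is a common starting point (assumption)
innodb_buffer_pool_size        = 8G
# match the existing ib_logfile0 size, per the calculation above
innodb_log_file_size           = 500M
# flush redo log to OS cache at commit, fsync about once per second
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
```

Remember that changing innodb_log_file_size on 5.6 requires a clean shutdown before removing the old ib_logfile* files.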
https://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert:
SET autocommit=0;
Most important is innodb_flush_log_at_trx_commit, since this case is about importing a database. Changing it from the default of 1 to 2 can be a big performance booster, especially during data import, as the log buffer is only flushed to the OS file cache on each transaction commit.
For reference :
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/a/72766/60318
http://kvz.io/blog/2009/03/31/improve-mysql-insert-performance/
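The SET autocommit=0 advice can be applied without editing the dump file by wrapping the stream in shell; this is a minimal sketch, and dump.sql and mydb below are placeholder names:

```shell
# Wrap a SQL dump in a single transaction so the import does not
# trigger a log flush per INSERT statement.
wrap_dump() {
  printf 'SET autocommit=0;\n'
  cat "$1"
  printf 'COMMIT;\n'
}
# usage (placeholders): wrap_dump dump.sql | mysql mydb
```

This leaves the original dump untouched, which matters if the same file is also imported elsewhere with default settings.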
Lastly, based on this
mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file
If the mysqldump (.sql) file does not reside on the same host where you are importing, performance will be slow. Consider copying the .sql file to the server where you need to import the database, then try importing without the --host option.
Windows is slower at creating files, period. 40 InnoDB tables involve 40 or 80 file creations. Since they are small InnoDB tables, you may as well set innodb_file_per_table=OFF before doing the CREATEs, thereby needing only 40 file creations.
Good practice in MySQL is to create tables once, and not be creating/dropping tables frequently. If your application is designed to do lots of CREATEs, we should focus on that. (Note that, even on Linux, table create time is non-trivial.)
If these are temporary tables... 5.7 will have significant changes that will improve the performance (on either OS) in this area. 5.7 is on the cusp of being GA.
(RAM size is irrelevant in this situation.)

Saving SQL backup / dump to a remote location

I would like to back up / dump my SQL database (MySQL - InnoDB or XtraDB, depending on whether I use Oracle's MySQL or MariaDB) regularly.
My hosting, for a start, will be 60 GB on SSD, so it will soon fill up with pictures and table rows; space is limited.
I want to dump my database safely, securely, and non-intrusively (meaning without stressing the server when doing it), and I NEED the file to be saved on, say, my local Win7 desktop, not on the server.
mysqldump does the trick, but fills up space on the server; if my database grows to 20 GB and I have 20 GB of pictures, the dump will fill the remaining space or maybe not fit at all.
So what are the ways to save the dump remotely (not on the same server)?
I figure I can dump my tables from phpMyAdmin, but when the tables get to 2 GB or 10 GB (millions of rows), I don't know for sure if that still works.
Thanks!
If you have remote access to your database server, this is only a matter of using mysqldump with the correct host option from a machine with enough disk space to hold your backup.
# if your database server has DNS name :
mysqldump -h my.database-server.local ...
# if you access your database server by its IPv4 address
mysqldump -h 192.168.0.22 ...
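To keep the transfer small and avoid writing anything large on the server, you can also compress the stream on the fly. The host, user, and database names below are placeholders, and --single-transaction assumes InnoDB/XtraDB tables (which matches the question):

```shell
# Stream the dump from the remote server straight into a local gzip file,
# so nothing large is ever written on the database server itself.
# Placeholders: my.database-server.local, myuser, mydb
#   mysqldump -h my.database-server.local -u myuser -p \
#     --single-transaction mydb | gzip > mydb.sql.gz
# The same pipeline pattern, demonstrated with a stand-in for mysqldump:
printf 'CREATE TABLE t (id INT);\n' | gzip > /tmp/demo.sql.gz
gunzip -c /tmp/demo.sql.gz
```

--single-transaction also addresses the "non-intrusive" requirement, since it takes a consistent snapshot without locking InnoDB tables for the duration of the dump.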
Or did I totally miss the point here?