I have a question about getting a huge table from MySQL running on AWS down to my local machine.
I just created a table that is 2.3 GB in size, but I only have 2 GB of free disk space.
This means I cannot even dump the table to a file without hitting error 28 (no space left on device). So I have two choices.
Clean up the disk to free at least another 300+ MB.
I have already tried to delete everything I could.
My database is only about 2.5 GB, but mysqldb1 takes up 4 GB, and I have no idea why.
ubuntu@ip-10-60-125-122:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 5.6G 2.0G 74% /
udev 819M 12K 819M 1% /dev
tmpfs 331M 184K 331M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 827M 0 827M 0% /run/shm
/dev/xvdb 147G 188M 140G 1% /mnt
Split my table into two or more smaller tables that I could dump separately and then put back together later.
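For example, something like this is roughly what I have in mind for option 2 (the database, table, and column names below are only placeholders), writing the dump files to /mnt, which df shows still has about 140 GB free:

# Dump the table in two halves by primary-key range so no single file
# has to hold the whole 2.3 GB at once.
mysqldump -u root -p mydb mytable --where="id < 500000"  > /mnt/part1.sql
mysqldump -u root -p mydb mytable --where="id >= 500000" > /mnt/part2.sql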
I am new to MySQL and hope someone can suggest a safe and easy solution.
Best regards, and let me know if I can do anything to improve my question.
If you're sure that you actually don't have that much data stored in the database, you might want to take a look at this other question here on SO:
MySQL InnoDB not releasing disk space after deleting data rows from table
By default, MySQL doesn't shrink its files when you delete data. If your MySQL is configured for per-table files (innodb_file_per_table), you should be able to reclaim the space by optimizing the tables. Otherwise, you'll have to get all the data onto another machine and recreate the database with per-table files configured.
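A rough sketch of that check-and-optimize path (the database and table names here are only placeholders):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
# With file-per-table enabled, rebuilding a table returns its unused space
# to the operating system (for InnoDB this is a table recreate behind the scenes).
mysql -u root -p -e "OPTIMIZE TABLE mydb.big_table;"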
I have a Zabbix server with a MariaDB database. Today both services (Zabbix and the database) failed, and I found that the root partition had no storage remaining. I sorted the database files by size, and the result is:
12G /var/lib/mysql/zabbix/trends_uint.ibd
8.9G /var/lib/mysql/zabbix/events.ibd
8.8G /var/lib/mysql/zabbix/trends.ibd
6.3G /var/lib/mysql/zabbix/history.ibd
6.1G /var/lib/mysql/zabbix/history_uint.ibd
2.9G /var/lib/mysql/zabbix/event_recovery.ibd
168M /var/lib/mysql/zabbix/history_str.ibd
and the df -h output:
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda1 xfs 50G 50G 352K 100% /
How can I delete these files and start the services again?
Thanks in advance.
If you delete these files, you break Zabbix. Expand the partition if you can. Depending on how long your data retention is, a Zabbix database can grow quite large. Using PostgreSQL with TimescaleDB can help reduce space usage.
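If the underlying virtual disk can be enlarged (for example by your cloud provider), a rough sketch of growing the root partition and its XFS filesystem online looks like this; the device and partition numbers are taken from the df output above and may differ on your host:

# growpart comes from the cloud-utils / cloud-utils-growpart package
sudo growpart /dev/vda 1     # grow partition 1 to fill the enlarged disk
sudo xfs_growfs /            # grow the XFS filesystem mounted on /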
If you really want to, delete the folder /var/lib/mysql/zabbix/, restart the database service, and create a new zabbix database. All your Zabbix configuration and metrics data will be lost.
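Taken literally, those destructive steps look roughly like this (service names vary by distribution, and the schema file path depends on your Zabbix package and version):

sudo systemctl stop zabbix-server mariadb
sudo rm -rf /var/lib/mysql/zabbix
sudo systemctl start mariadb
mysql -u root -p -e "CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;"
# Re-import the Zabbix schema shipped with your package before starting
# zabbix-server again, e.g. the create.sql.gz under /usr/share/doc/zabbix-*.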
I'm trying to create a compute instance on Google Cloud with a 20TB disk attached, but I'm seeing something strange. When I specify the disk size in the gcloud command, I do not see that size reflected when I check it from inside the instance. I've also tried creating new disks and attaching them, and resizing the attached disks, but it never goes above 2TB. Is 2TB the max disk size for compute instances?
$ gcloud compute instances create instance --boot-disk-size 10TB --scopes storage-rw
Created [https://www.googleapis.com/compute/v1/projects/project/zones/us-central1-a/instances/instance].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance us-central1-a n1-standard-1 10.240.0.2 104.154.45.175 RUNNING
$ gcloud compute ssh gm-vcf
Warning: Permanently added 'compute.8994721896014059218' (ECDSA) to the list of known hosts.
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
user#instance:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 2.0T 880M 1.9T 1% /
udev 10M 0 10M 0% /dev
tmpfs 743M 8.3M 735M 2% /run
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
You can attach up to 64TB of Standard Persistent Disk per VM for most machine types; you can refer to this blogpost for details. You also need to resize the file system so that the operating system can access the additional space on your disk. You can refer to this link for steps to resize the disk.
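A sketch of that flow (the disk name and zone are placeholders, and note that an MBR partition table will still cap a single partition at 2 TB):

gcloud compute disks resize my-disk --size 10TB --zone us-central1-a
# Then, inside the VM, grow the partition and filesystem (Debian/ext4 example):
sudo apt-get install -y cloud-guest-utils
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1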
I'd say it's not the disk size but the partition size (you can confirm by running fdisk -l and checking the reported sizes). With an MBR partition table, the maximum partition size is 2 TB. Perhaps it'd be better to use a smaller disk for the system and attach another, bigger one to store the data; that new disk you'd be able to partition as you'd like (e.g. with a GPT label).
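A sketch of that approach, using the instance name from the question and placeholder disk names, assuming a GPT-partitioned ext4 data disk:

gcloud compute disks create data-disk --size 20TB --zone us-central1-a
gcloud compute instances attach-disk instance --disk data-disk --zone us-central1-a
# Inside the VM (the device name may differ; check with lsblk):
sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 -F /dev/sdb1
sudo mkdir -p /data && sudo mount /dev/sdb1 /data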
I am using InnoDB. I have a problem with app performance. When I run mysqltuner.pl I get:
[--] Data in InnoDB tables: 8G (Tables: 1890)
[!!] Total fragmented tables: 1890
OK, I have run mysqlcheck -p --optimize db.
I have determined that innodb_file_per_table is disabled, and the database hasn't been reindexed in the last 2 years. How can I do it?
Make a mysqldump?
Stop the MySQL service?
Enable innodb_file_per_table in my.cnf?
Start the MySQL service?
Import from the mysqldump?
Run mysqlcheck -p --optimize db?
Will everything be ok?
Tables fragmented
Bogus.
All InnoDB tables are always (according to that tool) fragmented.
In reality, only 1 table in 1000 needs to be defragmented.
OPTIMIZE TABLE foo; will defrag a single table. But, again, I say "don't bother".
ibdata1 bloated
On the other hand... If you are concerned about the size of ibdata1 and Data_free (SHOW TABLE STATUS) shows that a large chunk of ibdata1 is "free", then the only cure is painful: As already mentioned: dump, stop, remove ibdata1, restart, reload.
But... If you don't have enough disk space to dump everything, you are in deep weeds.
You have 8GB of tables? (Sum Data_length and Index_length across all InnoDB tables.) If ibdata1 is, say 10GB, don't bother. If it is 100GB, then you have a lot of wasted space. But if you are not running out of disk space, again I say "don't bother".
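One way to get that total (a sketch; run it with any account that can read information_schema):

mysql -u root -p -e "
  SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH)/1024/1024/1024, 2) AS innodb_gb
  FROM information_schema.TABLES
  WHERE ENGINE = 'InnoDB';"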
If you are running out of disk space and ibdata1 seems to have a lot "free", then do the dump etc. But additionally: If you have 1890 tables, probably most are "tiny". Maybe a few are somewhat big? Maybe some table fluctuates dramatically in size (eg, add a million rows, then delete most of them)? I suggest having innodb_file_per_table ON when creating big or fluctuating tables; OFF for tiny tables.
Tiny tables take less space when living in ibdata1; big/fluctuating tables can be cleaned up via OPTIMIZE if ever needed.
Yes, that would do it. Take a backup before you start, and test the entire process out first on a copy of the host, especially the exact mysqldump command; you don't want databases like 'information_schema' in the dump.
Also ensure that other services cannot connect to your db once you start the export - any data they change would be lost during the import.
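A sketch of such a dump command, with placeholder database names and output path; listing your application databases explicitly keeps the system schemas out of the dump:

mysqldump -u root -p --databases app_db1 app_db2 \
  --routines --triggers --single-transaction > /backup/app_databases.sql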
There are detailed instructions in this previous question:
Howto: Clean a mysql InnoDB storage engine?
I have one query that is causing a problem, and according to Google it is because of insufficient temp space.
The same query was working just fine a few days back. Then my website got hacked, and after restoring from a backup I am getting this type of error, even though the database is the same old one.
"Incorrect key file for table /tmp/#sql_xxx_x.MYI" error
1030 Got error 28 from storage engine
In both cases I searched and found that it is because of temp space, but how did temp suddenly become a problem when the same query was working just fine a few days back? I also checked the query with MySQL EXPLAIN, and its output looks good: it says only 144 rows are examined to produce the 20 output rows.
Then I used this command to see how much space I really have for temp,
and it says:
ddfdd#drddrr[~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
3.6T 49G 3.4T 2% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 243M 86M 145M 38% /boot
/usr/tmpDSK 4.0G 3.8G 0 100% /tmp
So where is the problem, and how can I resolve it?
Any advice will be highly appreciated.
Double check that /tmp does not have other files using all of the space and preventing MySQL from creating an on disk tmp table.
Alternatively, you can create a new tmp directory off your root slice, since it has plenty of space, and then change the tmpdir variable in my.cnf to point to it. Note that this is not a dynamic variable, so it will require a restart. Make sure to chown the directory so MySQL can write to it.
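A rough sketch of that relocation (the directory is only an example, and the service name may be mysql or mysqld depending on your distribution):

sudo mkdir -p /var/mysqltmp
sudo chown mysql:mysql /var/mysqltmp
# my.cnf, in the [mysqld] section:
#   tmpdir = /var/mysqltmp
sudo service mysql restart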
I've got a WP database with approx. 10k+ posts. Every time a user saves or posts (an insert or update query), the Apache server CPU spikes to 100% and MySQL spikes to 100% CPU usage, eventually crashing. I'm assuming the culprit here is MySQL; however, there are zero errors in the log and nothing relevant in the slow query log. The wp_posts table is MyISAM-based and not InnoDB (it uses full-text search). Could this be a configuration issue with MyISAM?
Specs:
Wordpress: 3.4.2
Server: Amazon EC2 Small Instance (1.7gb ram, 40% free)
Thanks,
Mike
Issue determined: It was a plugin. Make sure you always test out your plugins!