I am using Windows 7, my computer name is 'COREI5', and I have a 1 TB hard drive.
The hard drive is showing as full, but I was not able to locate which file was huge enough to eat up the drive space. Now I seem to have figured out the source:
C:\ProgramData\MySQL\MySQL Server 5.6\data\COREI5-PC-slow
So it seems this 'COREI5-PC-slow' is the culprit file, as it shows a size of approximately 640 GB. Note that this file is shown as a txt file.
My questions are:
1) Will deleting this file harm my computer? (I am getting the error "You need permission from the computer administrator to make changes.")
2) I am not able to delete this file (even after logging in as administrator).
3) I also tried to grant special permissions, but that is not working.
Any solution?
Note: I am not very savvy with such programs and commands, so please give details or keep it simple.
I suspect the file is the "slow query" log in the MySQL data directory.
To confirm, connect to the MySQL database and run this query:
SHOW VARIABLES LIKE 'slow%'
Variable_name Value
------------------- --------------------------------------------------------------
slow_launch_time 2
slow_query_log OFF
slow_query_log_file C:\ProgramData\MySQL\MySQL Server\MyLaptop-slow.log
I suspect that in your case, slow_query_log is set to ON. If the filename shown for slow_query_log_file matches the file on your system, you can safely turn off the slow_query_log and then delete the file.
To turn off the slow query log:
SET GLOBAL slow_query_log = 0
Re-run the SHOW VARIABLES LIKE 'slow%' query to confirm it's off.
And then you can delete the file from the filesystem. (If you are doing it from the GUI, don't just delete the file and send it to the Recycle Bin. Hold down the Shift key when you click Delete, and it will ask whether you want to "permanently" delete the file.)
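Note that SET GLOBAL only lasts until the MySQL service is restarted. If you want the slow query log to stay off, you would also add the setting to the server configuration file, typically my.ini on Windows (a minimal sketch; the exact file and location depend on your installation):
[mysqld]
slow_query_log = 0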
I'd be concerned that MySQL has logged 640GB worth of slow queries.
It's long_query_time that determines how long a query must run before it's considered slow (slow_launch_time is a separate threshold, related to how long it takes to create a thread, not to query time). There is also a setting, log_queries_not_using_indexes, that sends every query that doesn't use an index into the slow query log, even if it runs faster than long_query_time.
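To see what those thresholds are on your server, you can check the two related settings (standard MySQL variables):
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_queries_not_using_indexes';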
While you're at it, check that the general log is turned off as well.
SHOW VARIABLES LIKE 'general%'
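If it turns out to be ON, you can turn it off the same way (the same caveat about persistence across restarts applies):
SET GLOBAL general_log = 0;
SHOW VARIABLES LIKE 'general%';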
This question might better be asked on dba.stackexchange.com
For hunting down huge space consumers, I recommend TreeSize Free from JAM Software. It has an easy-to-use, old-style Windows Explorer interface that shows the total size of directories and files.
My final objective was to delete the file stated above, and I was able to achieve that with Shift+Delete followed by a restart of the PC.
It worked - thank you once again.
I accidentally truncated a table on my online server, and I don't have a backup of it. Can anyone please help me with what I should do?
Most viable, least work:
- From a backup:
  - Check again whether you actually have one.
  - Ask your hoster whether they do backups; their default configuration for some setups might include a backup you are unaware of, e.g. a database backup for WordPress or a file backup if you have a VM.
Viable in some situations, little work if applicable:
- From the binary logs. Check whether they are enabled (maybe as part of your hoster's default configuration; also, maybe only the hoster can access them, so you may need to ask). They contain the most recent changes to your database, and, if you are lucky, "recent" might reach back far enough to include everything (see the sketch below).
Less viable, more work:
- Try to recover from related data, e.g. history tables, other related tables, or log files (e.g. the MySQL general query log or log files that your application created); you can try to analyze them to figure out what should be in your table.
Least viable, most work, most expensive:
- In theory, since the data is still stored on the hard drive until it is overwritten by new data, you can try to recover it, similar to how tools find lost blocks or deleted files on a hard drive.
- You need to stop any activity on the drive to increase the probability of success. How to do that depends on your configuration and setup. E.g., in shared hosting, freed disk space might be overwritten by other users beyond your control; on the other hand, if you are using InnoDB with innodb_file_per_table disabled, the data is stored in a single file (and the disk space is not freed), so stopping your MySQL server should prevent any remaining recoverable data from being overwritten.
- While there are some tools to help with this, you will likely have to pay someone to do it for you (and even then you only get back the data that hasn't been overwritten so far), so this option is most likely only viable if your data is very valuable.
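If the binary logs do turn out to be enabled, a rough sketch of where to start (the actual file names are whatever your server reports):
SHOW VARIABLES LIKE 'log_bin';   -- ON means binary logging is enabled
SHOW BINARY LOGS;                -- lists the binary log files the server still has
From there, the usual approach is to replay the relevant logs with the mysqlbinlog client tool, stopping just before the TRUNCATE statement; this only helps if the logs reach back far enough to contain the original inserts.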
Please let me know what I am doing wrong, as the website goes down at around 300 concurrent users.
The first thing to consider is this line in the [mysqld] section of your my.cnf:
thread_cache_size=100  # cap suggested for MySQL 8.0 to avoid OOM
This is a dynamic global variable, so it can also be set at runtime with
SET GLOBAL thread_cache_size=100;
to avoid shutdown/restart.
Your top output looks like ~2000 threads were trying to do something, with ~50 running, which may drive context switching through the roof.
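To confirm whether thread creation is really the problem, you can also check the thread counters (standard status variables, a quick sketch):
SHOW GLOBAL VARIABLES LIKE 'thread_cache_size';
SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Connections';
If Threads_created keeps climbing and is a large fraction of Connections, the thread cache is too small and every new connection is paying the cost of spawning a thread.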
Also, please post your error log from any abnormal shutdown; there are likely clues to the leading cause.
I have to change the max_allowed_packet size in MySQL using phpMyAdmin, but I don't know how to do it. When I try set global max_allowed_packet=10M in phpMyAdmin, it gives this error:
#1227 - Access denied; you need the SUPER privilege for this operation
I can't get the SUPER privilege, because the server is not under my control.
So how can I change it?
You will have to set this in the MySQL configuration file as well, generally found here:
/etc/mysql/my.cnf
Example:
max_allowed_packet = 16M
If the server is not in your control, you are going to have to ask for access to said file.
You cannot.
To change it dynamically, as with the SET you tried, you need the SUPER privilege; there is no way around it. And this is a good thing, because (1) the setting is global, which means it affects all connections, and (2) it might jeopardize the server (it makes it easier to DoS a server, for example).
To set it permanently, you need access to the MySQL configuration file and be able to restart the service, as Zak advises.
The real question, however, is why you need such a high limit. Unless you are trying to import a large dump, needing such a limit almost always suggests something was wrongly designed in the first place. If you are importing a dump, try to import smaller pieces at a time, as in the sketch below.
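For example, if the dump contains one enormous multi-row INSERT, splitting it into several smaller statements keeps each packet under the limit (the table and values below are made up for illustration):
INSERT INTO t (id, payload) VALUES (1, 'aaa'), (2, 'bbb');
INSERT INTO t (id, payload) VALUES (3, 'ccc'), (4, 'ddd');
-- rather than a single INSERT ... VALUES (...), (...), ... listing every row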
You can change variables from the "Server variables and settings" page, which is accessible via "Variables" at the top or at [server]/phpmyadmin/server_variables.php
Look up "max_allowed_packet", and hit Edit - default is 4194304 (4MB, in bytes).
Recently, I wrote some Python code to insert HTML text into a table. After my code had written about 200,000 HTML pages, I could use SELECT to retrieve all of that data. However, I found that the MySQL server does not write any data into files. I checked the memory usage and found that the mysqld.exe process consumes more than 1.5 GB of memory. I searched the whole disk for the table name, but only found a 9 KB file related to it. By the way, I also checked the mysql.ini file; the path configuration is correct. Then I used mysqldump to back up the table, and that gave me a SQL file of more than 7 GB. Checking again, I found a 20 GB ibdata file in my datadir folder. What is that file? Why is there no file related to my table? Does MySQL just store the data in memory?
Run SHOW TABLE STATUS and check the storage engine value. It might be using MEMORY, though I would be surprised if it is and you didn't know that (because you would have to set it explicitly).
Maybe you're on InnoDB but haven't enabled innodb_file_per_table, so everything's being written to one file.
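A quick way to check both from the client (the table name below is a placeholder for yours):
SHOW TABLE STATUS LIKE 'your_table';          -- look at the Engine column
SHOW VARIABLES LIKE 'innodb_file_per_table';  -- OFF means all InnoDB data shares one file
If the engine is InnoDB and innodb_file_per_table is OFF, the 20 GB ibdata file in your datadir is where the rows actually live; the 9 KB file you found is just the table definition. Enabling innodb_file_per_table only affects tables created (or rebuilt) after the change.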
I'm looking at this now; any script recommendation will do. I'm using a Rails app, too.
The problem with the scripts currently around is that they all do a full backup. My MySQL database files are 120 MB+ for now, and that will only increase over time, so I wonder if there is an incremental method around.
This recent thread on mysql.com discusses it.
Basically, you have to set the server up to do binary logging and set a threshold for each log at whatever size increment you prefer for backing up. Then upload a complete backup once and start your binary logging from that point forward. After that, upload each log once it is closed and a new one is opened.
It is more complicated than that, but I think that should get you started.
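A minimal sketch of that setup, assuming you can edit the server configuration (file names and sizes below are just examples):
[mysqld]
log-bin = mysql-bin        # enable binary logging
max_binlog_size = 100M     # rotate to a new log file at roughly this size
expire_logs_days = 14      # let the server purge logs you no longer need
Take one full backup first (for example with mysqldump --flush-logs, so the dump lines up with the start of a fresh log file), then copy each closed mysql-bin.NNNNNN file off the server as it rotates; you can force a rotation at any time by running FLUSH LOGS;.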