Windows Server 2019: cannot empty the Recycle Bin

I have a Windows Server 2019 machine with 16 GB of RAM and 400 GB of disk space. One of our apps saves images uploaded by users, which used up all of the disk space. Among other things, I found two directories containing 1 million and 1.2 million text files, so I deleted those as a first step to free up some space. All of them went into the Recycle Bin (at the time I didn't know I could use Shift+Del to delete them directly, bypassing the Recycle Bin). We now have about 30 GB free on the drive, but cannot empty the Recycle Bin.
If I open the Recycle Bin, it just hangs while calculating the number of files and the space used. As it does this it slowly eats up memory until all memory is consumed and the server crashes. If I right-click the Recycle Bin and select Empty, nothing appears to happen, but in Task Manager I can see Windows Explorer slowly eating up memory until the system crashes again. So even without opening the GUI, the Recycle Bin is still calculating things, which consumes memory until the crash.
I also tried PowerShell, running Clear-RecycleBin with the -Force parameter. It appears to hang in the command window, and in Task Manager I can see it processing and, once again, eating up memory until the system crashes.
I'm really stuck here. How can I empty the Recycle Bin without it first counting the files and estimating the amount of data it will remove?
Thanks.
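
A workaround that is often suggested in this situation, sketched here under the assumption that the deleted files live on the C: volume and that nothing else currently in the bin needs to be kept: delete the hidden per-volume $Recycle.Bin folder directly from an elevated Command Prompt. This sidesteps the shell's enumeration entirely, and Windows recreates the folder the next time something is sent to the bin.

rem Run from an elevated Command Prompt (not PowerShell, where $Recycle would be
rem treated as a variable). This removes the bin's backing folder outright instead
rem of asking Explorer to enumerate its contents first.
rd /s /q C:\$Recycle.Bin

Repeat the command for any other drive letter that has its own recycle folder.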

Related

Force Docker to use memory instead of disk space

When I run my docker containers in Google Cloud Run, any disk space they use comes from the available memory.
I'm running several self-hosted GitHub Actions runners on a single local server, and they have worn out my SSD over the past year. The thing is, all the data they write is pointless: none of it needs to be kept. It exists for a few minutes during the build and is then deleted.
Is it possible to run these containers using memory for their disk space instead? That would improve performance and stop putting unnecessary wear on the drive.
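
A minimal sketch of one way to do that with a tmpfs mount; the mount path, size and image name below are assumptions, so adjust them to wherever your runner actually does its work and to the RAM you can spare:

# Back the runner's working directory with RAM instead of the SSD; anything
# written there lives only in memory and disappears with the container.
docker run --rm --tmpfs /home/runner/_work:rw,size=4g my-runner-image

Docker Compose has an equivalent tmpfs: key if you manage the runners that way.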

Trying to import an 11 GB+ SQL file into MySQL

For an ongoing project I need to set up a local server using XAMPP. A colleague exported the MySQL DB from his nginx environment (I assume it is the same MySQL regardless of my Apache setup), but the size is an insane 11 GB+. To prepare, I made the necessary php.ini modifications listed here, added myisam_sort_buffer_size=16384M to my.ini, and also followed a step-by-step tutorial from here. I have 8 GB of DDR4 RAM and an 8th-generation i3, so this should not be a problem. The SQL import was running from 0:30 to 14:30 when I noticed that it had simply stopped.
Unfortunately the shell import command seems to have stopped at line 13149 of the 18061 lines. I see no error messages, and I do not see the imported database in phpMyAdmin. I see the flashing underscore, but no more SQL commands are being executed.
I am wondering if there is a solution to this. I want to ensure that the roughly 14 hours of processing does not go to waste, so my question is:
If I terminate the ongoing but seemingly frozen CMD, can I continue importing the remaining 4912 lines from a separate SQL file?
A 16 GB sort buffer size is absurdly high. I suspect you mean max_allowed_packet, and even for that you probably mean 16M rather than 16G.
If you are certain which line it is stuck on, then yes, you can resume from there, including the line that failed.
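A rough sketch of how that split could be done on Windows without opening the 11 GB file in an editor. The file and database names are illustrative, the skip count assumes the import really did stop at line 13149 (so the resumed file starts with that line), and mysql.exe is assumed to be on the PATH:

rem PowerShell: stream the dump and keep only lines 13149 onward. Set-Content's
rem default encoding may differ from the dump's; add -Encoding to match if the
rem data contains non-ASCII text.
powershell -Command "Get-Content .\full_dump.sql | Select-Object -Skip 13148 | Set-Content .\rest_of_dump.sql"

rem Then import the remainder from a normal Command Prompt:
mysql -u root -p your_database < rest_of_dump.sql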

MySQLDump on Windows performance degrading over time

I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows - I inherited this platform, I didn't design it and changing to MSSQL on Windows or migrating the MySQL instance to a *Nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially - they had a single Disk for the OS, MySQL and the MySQL backup location and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The mysqldump process is taking longer and longer: over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup.
The database itself is fairly large at 58 GB, but the daily growth is only about 100 MB (and unless I'm missing something, it shouldn't take an extra 30 minutes to dump 100 MB of data).
Initially we thought this was storage network I/O. However, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB), and that step is very consistent in the time it takes, which leads me away from suspecting disk I/O (my presumption being that the zip time would also increase if that were the case).
the script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The mysqldump binary is the one shipped with MySQL Server 5.7: C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe
Now to compound matters - I'm primarily a Windows guy (so have limited MySQL experience) and all the Linux Guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is something funky happening with row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: Has anyone experienced anything similar with a gradual increase of time for a MySQL dump over time?
Is there a way on Windows to natively monitor the performance of mysqldump to find where the bottlenecks/locks are? (A rough check is sketched after these questions.)
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
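A rough sketch of such a check, run while a dump is in progress; it assumes the 5.7 mysql.exe client is on the PATH and reuses the %dbuser% and %dbpass% variables from the backup script above:

rem Snapshot the current sessions and InnoDB status while mysqldump is running;
rem long-running transactions or lock waits will show up here if locking is the problem.
mysql --user=%dbuser% --password=%dbpass% -e "SHOW FULL PROCESSLIST; SHOW ENGINE INNODB STATUS\G" > lockcheck.txt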
Eventually I found that the culprit was the underlying storage network.

HDD space on Ubuntu Apache server is running out

I've created an instance in Google Cloud with a 10 GB HDD and 3.75 GB of RAM and hosted the backend/API of a fairly heavy DB-transaction application there. The OS is Ubuntu 14.04 LTS and I'm using the Apache web server with PHP and MySQL for the backend. The problem is that the HDD space has almost run out, and very quickly.
Using Linux (Ubuntu) commands, I've found that my source code (/var/www/html) is about 200 MB and the MySQL data folder (/var/lib/mysql) is 3.7 GB (around 20,000,000 records in my project DB). I'm confused about what is occupying the rest of my HDD space (apart from OS files). As of today, I only have 35 MB left. Once, for testing purposes, I copied the source code to another folder; even then I had the same problem. When I realized that my HDD space was running out, I deleted that folder and freed around 200 MB, but about 10 minutes later that freed space was gone as well!
I figured that some log file, such as the Apache error log, access log, MySQL error log or CakePHP debug log, might be occupying that space, but I disabled and truncated those files long ago and have checked that they are not being created again (they aren't). So where is the space going?
I'm seriously worried about continuing this project on this instance. I thought about adding an additional HDD to remedy the situation, but I need to understand how my HDD space is being occupied first. Any help will be highly appreciated.
You can start by searching for the largest files on your system.
From the / directory, run:
sudo find . -type f -size +5000k -exec ls -lh {} \;
Once you find the files you can start to troubleshoot.
If you get too many files, you can increase +5000k to target only the larger ones.
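If the file-level search doesn't explain the usage, two other quick checks, sketched here with typical options (adjust the paths as needed):

# Per-directory totals on the root filesystem, largest first, to see where the bulk lives:
sudo du -xh --max-depth=1 / | sort -rh | head -20
# Files that have been deleted but are still held open by a running process; their
# space is not freed (and won't show up in find or du) until the process closes them:
sudo lsof +L1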

HG - why does it cause my system to become unresponsive?

I am using hg (Mercurial) to back up some binary and text files (about 12 GB worth). I am pushing to a remote repository, and my system becomes unresponsive (the mouse and keyboard don't do anything).
The CPU is busy but not flat-lined, and I have about 1.5 GB of free memory.
What is causing my system to choke? This is the first time I have pushed the contents to the repository, so there should be about 3.5 GB of total data to transmit (that being how much space the hg repository takes up).
Try running Process Monitor to get a log of what HG is trying to do while your computer is hung.