I am using Mercurial (hg) to back up some binary and text files (about 12 GB worth). When I push to a remote repository, my system becomes unresponsive (the mouse and keyboard don't do anything).
The CPU is busy but not flat-lined, and I have about 1.5 GB of free memory.
What is causing my system to choke? This is the first time I have ever pushed the contents to the repository, so there should be about 3.5 GB of total data to transmit (that's how much space the hg repository is using).
Try running Process Monitor to get a log of what HG is trying to do while your computer is hung.
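If that doesn't reveal anything, hg itself can also report what it is doing during the push. A rough sketch (the bundle path below is just an example):

# show protocol-level detail and progress while pushing
hg push --verbose --debug

# check the local repository for corruption that could stall a push
hg verify

# alternative: write all changesets to a bundle file instead of pushing over the wire
hg bundle --all /path/to/backup.hg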
Related
When I run my docker containers in Google Cloud Run, any disk space they use comes from the available memory.
I'm running several self-hosted github action runners on a single local server, and they have worn out my SSD over the past year. The thing is, all the data they are saving is pointless. None of it needs to be saved. It exists for a few minutes during the build, and then is deleted.
Is it possible for me to instead run all these Docker containers using memory for their disk space? That would improve performance and stop putting unnecessary wear on the drive.
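In case it helps, my understanding is that Docker can back a path with tmpfs (RAM) directly, and the runner's work directory can also be mounted as tmpfs on the host. A rough sketch, with the image name, sizes and paths as placeholders:

# give a single container a RAM-backed scratch directory
docker run --rm --tmpfs /workspace:rw,size=4g,mode=1777 my-build-image sh -c "df -h /workspace"

# or put the runner's work directory on tmpfs on the host (contents are lost on reboot)
sudo mount -t tmpfs -o size=8g tmpfs /opt/actions-runner/_work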
I have a Windows Server 2019 machine with 16 GB RAM and 400 GB of disk space. One of the apps saves images uploaded by users, which used up all of our disk space. Among other things, I found two directories with 1 million and 1.2 million text files, so I deleted these as the first step to free up some space. All of them went into the Recycle Bin (at the time I didn't know I could use Shift+Del to delete them directly, bypassing the Recycle Bin). We now have about 30 GB free on the drive, but cannot empty the Recycle Bin.
If I open the Recycle Bin, it just hangs while calculating the number of files and the space they use. As it does this, it slowly eats up memory until all memory is used and the server crashes. If I right-click the Recycle Bin and select Empty, nothing appears to happen, but in Task Manager I can see Windows Explorer slowly eating up memory until the system crashes again. So even though I don't open the GUI, the Recycle Bin is still calculating things, which eats up memory until it crashes.
I also tried PowerShell, running Clear-RecycleBin with the -Force parameter. It appears to hang in the command window, and in Task Manager I can see it processing and, once again, eating up memory until the system crashes.
I'm really stuck here. How can I empty the Recycle Bin without making it first count the files and estimate the size of the data it will remove?
Thanks.
I'm trying to troubleshoot a mysterious disk I/O bottleneck caused by MySQL.
I'm using the following commands to test disk read/write speed:
#write
dd if=/dev/zero of=/tmp/writetest bs=1M count=1024 conv=fdatasync,notrunc
#read
echo 3 > /proc/sys/vm/drop_caches; dd if=/tmp/writetest of=/dev/null bs=1M count=1024
I rebooted the machine, disabled cron so none of my usual processes are running queries, killed the web server which usually runs, and killed mysqld.
When I run the read test without mysqld running, I get 1073741824 bytes (1.1 GB) copied, 2.19439 s, 489 MB/s. Consistently around 450-500 MB/s.
When I start the mysql service back up and then run the read test again, I get 1073741824 bytes (1.1 GB) copied, 135.657 s, 7.9 MB/s. Consistently around 5 MB/s.
Running SHOW FULL PROCESSLIST in MySQL doesn't show any queries (and I disabled everything that would be running queries anyway). In MySQL Workbench's Server Status tab, I can see InnoDB reads fluctuate between 30-200 reads per second, and 3-15 writes per second, even when no queries are running.
If I run iotop -oPa, I can see that mysqld is racking up about 1 MB of disk reads per second when no queries are running. That seems like a lot considering no queries are running, but at the same time it doesn't seem like enough to make my dd command take so long... The only other thing performing disk I/O is jbd2/sda3-8.
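For completeness, here is how the device-level load could be checked while mysqld is running (sda is just the example device; this assumes the sysstat package is installed):

# per-device utilization, queue size and wait times, refreshed every second
iostat -x 1 sda

# per-process disk read/write rates, refreshed every second
pidstat -d 1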
Not sure if it's related, but if I try to kill the mysql server with service mysql stop it says "Attempt to stop MySQL timed out", and the mysqld process continues running, but I can no longer connect to the DB. I have to use kill -9 to kill the mysqld process and restart the server.
All of this appeared out of the blue. This server had been doing heavy-duty log parsing and high-volume inserts and selects for months, until this past weekend when we started seeing this disk I/O bottleneck.
How can I find out why MySQL is doing so much disk reading when it's essentially idle?
Did you update/delete/insert a large number of rows? If so, consider these "delays" in writing to disk:
The block containing the data is not written back to disk immediately.
Ditto for UNIQUE keys.
Updates to secondary indexes go into the "change buffer"; they get folded into the index blocks even later.
Updates/deletes leave behind a "history list" that needs to be cleaned up after the transaction is complete.
Those things are handled by background tasks that do not show up in the PROCESSLIST. They may be visible as activity on the mysqld process(es), mostly as I/O. (CPU usage is probably minimal.)
Was there a ROLLBACK? Transactions are "optimistic", so a ROLLBACK has to do a lot of work to "undo" changes that were already optimistically applied.
If you abruptly kill mysqld (or turn off the power), then the ROLLBACK occurs after restarting.
SSDs have no "seek" time. HDDs must move the read/write heads by a variable amount; this takes time. If your dd is working on one end of the disk, and mysqld is working on the other end, the "seeking" adds to the apparent I/O time.
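If you want to confirm that it is this background work, something along these lines can show it (a sketch; adjust the login options to your setup):

# purge/flush activity and the history list length
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -iE "history list|pending|modified db pages"

# per-file I/O counters, to see which tablespaces are being read the most
mysql -e "SELECT file_name, count_read, count_write FROM performance_schema.file_summary_by_instance ORDER BY count_read DESC LIMIT 10;"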
This turned out, like many performance problems, to be a multifaceted issue.
Essentially, the nightly system and DB backups (written to a separate HDD RAID array) were running into the next day; the master would then send FLUSH TABLES, causing MySQL jobs and replication work to wait on that. On top of that, an unnecessary side process was copying many gigabytes of text files around the system a few times a day. The result was a ton of context switching as the system tried to copy data for backups while also performing MySQL work (replication and other jobs).
I ended up reducing the number of tables we were replicating (some were unnecessary), cutting down the copying of text files around the system when it wasn't needed, increasing the memory and I/O allocated to the MySQL server, streamlining the MySQL and system backups, and limiting the cron jobs that run MySQL processes so the backups have more time to complete. Even with all that, the backups were barely finishing by 7 AM each morning, so I decided we need to run the MySQL backups only on weekends instead of nightly, which is fine since this is all fairly static data.
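If it's useful to anyone with a similar problem, backup jobs can also be de-prioritized so they interfere less with normal MySQL work; a rough example (the schedule, paths and options are placeholders):

# crontab entry: dump at 2 AM every Saturday, at low CPU and I/O priority so it doesn't starve mysqld
0 2 * * 6  ionice -c2 -n7 nice -n19 mysqldump --single-transaction --all-databases | gzip > /backups/db-$(date +\%F).sql.gz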
I'm looking for some insight into whether or not rsyncing a copy of the data folder from MariaDB while it is running in Docker will provide a usable backup. I'm deploying several containers with mapped folders in a production environment using Docker.
I'm thinking of using rsnapshot for nightly backups, as it does incremental copies using hard links and I can specify the number of daily / weekly / monthly copies to keep. For the code and actual files I suspect this will work wonderfully.
For MariaDB I could run mysqldump every night, but this would essentially create a full new copy of the database each time instead of an incremental one. If I could rsync the data folder and be 100% sure the backup would be fully intact, that would presumably be the better option. Is there any chance this backup method would fail if data were written during the rsync? And would all the files inside the MariaDB data folder change with daily usage (it wouldn't be advantageous if so)?
This is probably a frequent question, but I can't find an exact match right now.
The answer is NO — you can't use filesystem-level copy tools to back up a MySQL database unless the mysqld process is stopped. In a Docker environment, I would expect the container to stop if the mysqld process stops.
Even if there are no queries running, the InnoDB engine is probably doing writes in the background to flush pages from memory into the tablespace, clean up rolled-back transactions, or finish some deferred index merges.
If you try to use rsync or cp or any other filesystem-level tools to copy InnoDB files, you will only get corrupted files that can't be restored.
Some people use LVM snapshots to get an atomic snapshot of the whole filesystem as of a single instant, and this can be used to get quick backups.
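As a rough sketch of the LVM approach (the volume group, logical volume and mount point names here are made up, and this assumes the MySQL datadir lives on an LVM logical volume):

# terminal 1: take a global read lock and leave this session open
mysql -u root -p
#   mysql> FLUSH TABLES WITH READ LOCK;

# terminal 2: snapshot the volume holding the datadir while the lock is held
sudo lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql-data

# terminal 1: release the lock as soon as the snapshot exists
#   mysql> UNLOCK TABLES;

# mount the snapshot read-only, copy it off, then drop the snapshot
sudo mkdir -p /mnt/mysql-snap
sudo mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backups/mysql/
sudo umount /mnt/mysql-snap && sudo lvremove -y /dev/vg0/mysql-snap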
Another useful tool is Percona XtraBackup, which copies the InnoDB tablespace files while it is also copying the InnoDB transaction log continually. Only with both of these in sync can the backup be restored. Read the documentation here: https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html
At my current job, we use Percona XtraBackup to make nightly backups for thousands of MySQL instances. We run Percona Server (not MariaDB) in Docker pods, and Percona XtraBackup runs as another container in the pod. It works very well, and it's free, open-source software.
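For reference, a minimal run looks roughly like this (directories and credentials are placeholders; MariaDB ships an equivalent tool called mariabackup that takes the same basic options):

# take a hot backup while the server is running
xtrabackup --backup --user=backupuser --password=... --target-dir=/backups/full

# apply the copied transaction log so the backup is consistent and restorable
xtrabackup --prepare --target-dir=/backups/full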
I have a batch script I run before our performance tests that does some pre-test setup on our server; it clears log files, starts the proper services, restores the database, sets some app settings and turns on perfmon logging.
My problem: the w3wp process we need to monitor is not always present at the time we turn on perfmon logging. It's pretty much hit-or-miss whether this process ends up in the log. The test takes anywhere from 4 to 18 hours to complete, and I don't know until the test is done whether or not w3wp was monitored (perfmon doesn't seem to pick up new processes, even though my log file is configured to monitor Process(*)), which ends up wasting a lot of time.
Is there a way to force w3wp to get loaded? Is there some command I can call just prior to starting the perfmon logs?
Or, is it possible to configure the perfmon log to monitor processes that may not exist at the time the log is started?
If you install the IIS Admin tools, you can call a command-line app called TinyGet. You can pass in any page on your web server to initialize it; this starts up the process so you can capture it.
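If TinyGet isn't handy, any HTTP client can make the same warm-up request just before the perfmon log is started; for example, with curl installed (the URL is a placeholder):

# hit any page once so IIS spins up the w3wp worker process before logging starts
curl -s http://localhost/yourapp/default.aspx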