Transferring space from Drive C: to Drive D: - partitioning

I am having difficulty figuring out how to transfer space from C: to D:.
I don't know why my drive C: has 444 GB while my drive D: has only 19.4 GB. I don't know whether that kind of partitioning is good or bad, but my D: is now almost completely full.
Please help/guide me to transfer space or change the partitioning of my disks without any negative effects on my laptop. There is no unallocated space.
I have attached a screenshot of my Disk Management. I look forward to your responses. Thank you.

It's good that you've already opened Disk Management. Right-click the C: partition there and choose "Shrink Volume", then enter the amount of space you want to take away from C:. Once that's done, right-click the D: partition and choose "Extend Volume", then enter the amount of space you want to add to it. Keep in mind that Extend Volume only works when the unallocated space sits immediately to the right of the partition being extended; if the space freed from C: is not adjacent to D:, you will need a third-party partitioning tool to move it next to D: first.
Congrats :) you saved your D:
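If you prefer the command line, roughly the same steps can be scripted with diskpart from an elevated Command Prompt. This is only a sketch (the size shown is an example, in MB), and as noted above the extend step only succeeds if the freed space ends up immediately to the right of D:.

diskpart
rem shrink C: by about 200 GB (value is in MB)
select volume C
shrink desired=204800
rem grow D: into the adjacent unallocated space, if there is any
select volume D
extend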
But keep in mind next time that this site is for programming questions; for tech support you should use one of the other support sites.

Related

Offline Data Augmentation in Google Colab - Problems with RAM

What would be the most efficient way to perform OFFLINE data augmentation in Google Colab?
Since I am not in the US I cannot purchase Colab Pro for more RAM, so I am trying to be "smart" about it. For example, once I have loaded 11000 images, first as NumPy arrays and then as a pandas DataFrame built from them, they occupy around 7.5 GB of RAM. The problem is that I tried to del every object (NumPy arrays, tf.data objects, etc.) to check whether RAM usage changes, and it does not change.
Is it better to try to be smart about RAM, or should I write every augmented image to disk and keep nothing in RAM? If the latter, are TFRecords a sensible approach for this?
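A minimal sketch of checking whether del actually gives the RAM back, assuming psutil (which Colab ships by default); note that even after del plus gc.collect() the Python process may hold on to freed pages, so the RAM meter will not necessarily drop:

import gc
import numpy as np
import psutil

def rss_mib():
    # resident memory of the current runtime process, in MiB
    return psutil.Process().memory_info().rss // (1024 * 1024)

print("before:", rss_mib())
images = [np.zeros((224, 224, 3), dtype=np.float32) for _ in range(1000)]  # ~600 MB
print("after allocation:", rss_mib())
del images
gc.collect()  # drop the references and force a collection
print("after del + gc:", rss_mib())  # often lower, but not always back to the start

If the augmented set does not need to live in RAM at all, streaming each augmented example straight into TFRecord shards on disk and reading them back with tf.data is a common and reasonable pattern.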

How to free up space in disk on Colab TPU?

I am training a few deep learning models on Google Colab with the runtime type set to TPU. The RAM and disk status shows that I have used most of my disk storage on Colab. Is there a way to reset it, or to delete something to free up some more disk space? I know I could switch to GPU, which would give me a lot more disk space, but my models take forever to train there, so I would really like to stay on TPU. Thanks in advance!
A few places you might delete with rm -rf to reclaim some space (see the example cell after the list):
5.6G from /usr/local/lib/python2.7
5.3G from /swift
3.0G from /usr/local/cuda-10.1
3.0G from /usr/local/cuda-10.0
2.1G from /tensorflow-2.0.0
1.3G from /usr/local/lib/python3.6/dist-packages/torch
788M from /opt/nvidia
474M from /usr/local/lib/python3.6/dist-packages/pystan
423M from /usr/local/lib/python3.6/dist-packages/spacy
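For example, a Colab cell along these lines (paths taken from the list above; remove only what your runtime does not actually need, since anything deleted is gone until the runtime is reset) would check the sizes and reclaim the space:

!du -sh /swift /tensorflow-2.0.0 /opt/nvidia 2>/dev/null
!rm -rf /swift /tensorflow-2.0.0 /opt/nvidia
!df -h /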
I don't think there is a way to get more space than is available when you first open the Colab document. What is already there is there for a reason: it is what runs your environment. You can still try to remove existing files at your own risk by running the Linux remove command in a cell, like so:
!rm <path>
Otherwise, you'll have to switch to GPU, which offers a whole lot more space at the expense of longer training time. Another option might be to pay for the upgrade, but I don't know whether that only gives you more TPU time or increases your RAM as well.

What's the difference between "memory" and "memory footprint" fields on Chrome's task manager?

I'm using Chrome 64 and noticed that there are two memory-related columns in Chrome's task manager. See the picture below:
I can't find any explanation of the difference between these columns in Chrome, and there are no tooltips available (at least not on macOS). The "Memory footprint" column seems to be new, because I don't recall seeing it before yesterday.
In Chrome, the Memory column represents Shared Memory + Private Memory. If you enable those two columns and add the numbers, you will find they match the Memory column. In your operating system's task manager or activity monitor, you can see that these values match the Shared Memory Size and Private Memory Size.
The Memory Footprint column matches the number of MB reported for the Memory column of the process within the Task Manager or Activity Monitor.
Real Memory in a Mac's Activity Monitor maps to the RSS (Resident Set Size) in Unix. The link below explains this.
https://forums.macrumors.com/threads/memory-vs-real-memory.1749505/#post-19295944
The Memory column in a Mac's Activity Monitor roughly corresponds to the Private Memory Size, though it appears to come out slightly smaller. This column matches the Memory Footprint column in Chrome.
Please note that this answer references Mac because that's what I'm currently using. The column names and answer would change slightly for Linux and Windows system monitor and task manager.
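For a rough cross-check from the OS side, psutil exposes comparable numbers: uss ("unique set size") is the memory private to a process, which is close to what the Memory Footprint column reports, while rss is the resident total including shared pages. A minimal sketch (process names and availability of uss vary by platform):

import psutil

for p in psutil.process_iter(['name']):
    name = (p.info['name'] or '').lower()
    if 'chrome' in name:
        try:
            m = p.memory_full_info()  # may require elevated privileges
            print(p.pid, f"rss={m.rss >> 20} MiB", f"uss={m.uss >> 20} MiB")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass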
As Josh pointed out, it reports the "Private Memory Footprint", as described in Chromium's "consistent memory metrics" document.
Disclaimer: I'm writing this answer as I do some testing and observation because I had this question myself and this is the only relevant result I found through a Google search. Here goes...
I'm comparing the processes in Chrome's task manager with those in Sysinternals' Process Explorer (for Windows). In doing so, I see that the "Memory footprint" in Chrome is identical to the "Private Bytes" shown in Process Explorer for every process ID.
Private Bytes is the amount of memory allocated to a process (though not necessarily actively used by it) that cannot be shared with other processes.
So in line with what Josh and Patrick answered, the memory footprint represents memory reserved entirely for that process.
Unfortunately, I can't come to a conclusion on what "Memory" represents specifically. I would expect it to be equivalent to the "Working Set", but that doesn't match up with what Process Explorer shows.
Things also get a little muddier... If you right-click the column headers in Chrome's task manager, you'll see there's another column available titled "Private memory". If you enable it, you'll see its numbers match the "Memory" column very closely, but not exactly (off by 200K at most). :| That's a confusing title, given that we have already confirmed that "Memory footprint" represents the private memory footprint.
I don't know what the minuscule difference between "Memory" and "Private memory" is, but I speculate that one or both of those columns may represent the private memory allocated to the process that is actively in use (in contrast to the Private Bytes definition I gave above). Or it could be an old calculation kept around for some reason. I really am just guessing here.
Sorry I could not be of more help, but since there seems to be no answer to this out there, I wanted to share what I could figure out and hopefully spur the conversation a bit so someone more knowledgeable can add to it.

Couchbase: Approaching full disk warning while still having a lot of free disk space

We are using Couchbase 3.0 and have around 67 GB of free disk space, which is 35% free. But this morning we got the following warning on the "Cluster Overview" tab of Couchbase's admin page:
Approaching full disk warning. Usage of disk "/" on node "local.node" is around 100%
Even though the admin page shows the correct Disk Overview (Usable Free Space: 67.9 GB), why does this warning appear?
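One way to see which filesystem the warning actually refers to is to compare the root filesystem with the Couchbase data directory; the path below is the default Linux install location and may differ on your node:

df -h /
df -h /opt/couchbase/var/lib/couchbase/data
du -sh /opt/couchbase/var/lib/couchbase/data

If the data directory sits on a separate mount, a nearly full "/" and 67 GB of usable free space can both be true at the same time.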

Swap space used while physical memory is free

I recently migrated between two servers (the new one has lower specs), and it freezes all the time even though there is no load on the server. Below are my specs:
HP DL120G5 / Intel Quad-Core Xeon X3210 / 8GB RAM
free -m output:
             total       used       free     shared    buffers     cached
Mem:          7863       7603        260          0        176       5736
-/+ buffers/cache:       1690       6173
Swap:         4094        412       3681
As you can see, 412 MB of swap is used while almost 80% of the physical RAM is available.
I don't know if this should cause any trouble, but almost no swap was used on my old server, so this does not seem right to me.
I have a cPanel license, so I contacted their support and they noted that I have high iowait. Indeed, when I ran sar I noticed it sometimes exceeds 60%; most often it is around 20%, but sometimes it reaches 60% or even 70%.
I don't really know how to diagnose that. I suspected my drive was slow and might be causing the latency, so I ran a test with dd and got 250 MB/s, so I think the transfer speed is OK; besides, the hardware is supposed to be brand new.
The high load usually happens when I use gzip or tar to extract files (backing up or restoring a cPanel account).
One important thing to mention is that top reports MySQL using 100% to 125% of the CPU, and sometimes much more. If I trace the mysql process I keep getting this error continually:
setsockopt(376, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
I don't know what that means, nor did I find anything useful by googling it.
I forgot to mention that this is a web hosting server, for what it's worth, so it has the standard web hosting setup (Apache, PHP, MySQL, etc.).
So how do I properly diagnose this issue and find a solution, and what might be the possible causes?
As you may have realized by now, the free -m output shows 7603 MiB (~7.4 GiB) USED, not free.
You're out of memory and it has started swapping which will drastically slow things down. Since most applications are unaware that the virtual memory is now coming from much slower disk, the system may very well appear to "hang" with no feedback describing the problem.
From your description, the first process I'd kill in order to regain control would be MySQL. If you have ssh/rsh/telnet connectivity to this box from another machine, you may have to log in from there to get a usable command line to kill it from.
My first thought (hypothesis?) about what's happening is...
MySQL is trying to do something that is not supported with this machine's current configuration. It could be a missing library, an unset environment variable, or any number of things.
That operation allocates some memory, but it fails and does not clean up the allocation when it does. If this were a shell script, it could be fixed by putting a trap command at the beginning that runs a function to release memory and clean up.
The code keeps retrying on failure, so it rapidly uses up all your memory. Referring back to the shell script illustration, the trap function might also prompt you to ask whether you really want to keep retrying.
Not a complete answer, but hopefully it will help.
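For the diagnosis side, a few standard tools give a quick picture of whether the box is I/O-bound or swapping (a sketch; iostat and iotop may need the sysstat and iotop packages installed):

cat /proc/sys/vm/swappiness   # how eagerly the kernel swaps (the default is usually 60)
vmstat 5 5                    # watch the si/so (swap in/out) and wa (iowait) columns
iostat -x 5 3                 # per-device utilization and await times
iotop -o                      # which processes are generating the I/O right now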