OpenShift tools show disk quota has NOT been exceeded, but file saving fails due to disk space

I have the following error in my app logs:
==> app-root/logs/postgresql.log <==
2016-06-22 13:05:40 GMT ERROR: could not extend file "base/16385/123494": Disk quota exceeded
2016-06-22 13:05:40 GMT HINT: Check free disk space.
Running all of Red Hat's suggested commands about disk space usage (including rhc app-tidy) shows that the gear is still under the maximum limit for the free plan:
$ rhc show-app --gears quota
Gear Cartridges Used Limit
------------------------ ---------------------------------- ------ -----
<CUT> python-2.7 postgresql-9.2 cron-1.4 294 MB 1 GB
SSH'ing to my gear:
[<CUT>]\> du -a -h --max-depth=1 | sort -hr
281M .
102M ./app-root
...
Checking inode usage, as also suggested, reveals nothing unusual:
[<CUT>]\> quota -s
Disk quotas for user <CUT> (uid 2372):
Filesystem blocks quota limit grace files quota limit grace
/dev/mapper/EBSStore01-user_home01
281M 0 1024M 18088 0 80000
So everything seems to be good and within the limits, but saving files still fails.
What other tools can I use in OpenShift to see the real disk space usage, or how else could I solve this?
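For completeness, a per-directory inode breakdown is another check worth running, since the quota covers both blocks and file counts (a sketch; du --inodes needs a reasonably recent GNU coreutils, so the find fallback may be needed on older gears):
[<CUT>]\> du --inodes --max-depth=1 . 2>/dev/null | sort -nr | head
[<CUT>]\> for d in */ ; do echo "$(find "$d" -xdev 2>/dev/null | wc -l) $d"; done | sort -nr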

Related

Programmatically check data transfer on IPFS

We are building a desktop app on Electron to share media on IPFS. We want to incentivize the people who, by either an IPFS add or pin, make data available to other users and in effect are "seeding" the data. We want to track how much data is being sent and received by each user, programmatically and periodically.
Is there a standard pattern or a service to be able to do this?
TIA!
On the CLI you can use the ipfs stats bw -p <peer id> command to see the total bytes sent and received between your node and the peer ID you pass in.
$ ipfs stats bw -p QmeMKDA6HbDD8Bwb4WoAQ7s9oKZTBpy55YFKG1RSHnBz6a
Bandwidth
TotalIn: 875 B
TotalOut: 14 kB
RateIn: 0 B/s
RateOut: 0 B/s
See: https://docs.ipfs.io/reference/api/cli/#ipfs-stats-bw
You can use the ipfs.stats.bw method to get the data programmatically from the JS implementation of IPFS, js-ipfs, or via the js-ipfs-http-client talking to the HTTP API of a locally running IPFS daemon.
ipfs.stats.bw will show all traffic between two peers, which can include DHT queries and other traffic that isn't directly related to sharing blocks of data.
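If you'd rather not pull in a library at all, the same numbers are exposed by the daemon's HTTP API; a minimal sketch with curl, assuming a local daemon with its API on the default port 5001 (recent go-ipfs versions require POST):
$ curl -X POST "http://127.0.0.1:5001/api/v0/stats/bw?peer=QmeMKDA6HbDD8Bwb4WoAQ7s9oKZTBpy55YFKG1RSHnBz6a"
# returns JSON along the lines of {"TotalIn":...,"TotalOut":...,"RateIn":...,"RateOut":...}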
If you want info on just blocks of data shared then you can use ipfs bitswap ledger from the command line.
$ ipfs bitswap ledger QmeMKDA6HbDD8Bwb4WoAQ7s9oKZTBpy55YFKG1RSHnBz6a
Ledger for QmeMKDA6HbDD8Bwb4WoAQ7s9oKZTBpy55YFKG1RSHnBz6a
Debt ratio: 0.000000
Exchanges: 0
Bytes sent: 0
Bytes received: 0
See: https://docs.ipfs.io/reference/api/cli/#ipfs-bitswap-ledger
That API is not directly available in js-ipfs or js-ipfs-http-client yet.
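Until it lands there, one workaround (again assuming a locally running go-ipfs daemon with its API on the default port) is to hit the HTTP API endpoint directly:
$ curl -X POST "http://127.0.0.1:5001/api/v0/bitswap/ledger?arg=QmeMKDA6HbDD8Bwb4WoAQ7s9oKZTBpy55YFKG1RSHnBz6a"
# returns JSON with fields along the lines of the CLI output: Peer, Value (debt ratio), Sent, Recv, Exchanged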

Memory usage issues with VPS (ubuntu): MySQL process dies

I'm running a VPS with these specs:
Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-32-generic x86_64)
512 MB RAM
1 CPU
20 GB SSD
If you're wondering, it's a DigitalOcean droplet. It's running TS3, LAMP (with WordPress), OpenVPN, Byobu, and ownCloud.
Now my problem is with MySQL dying on me after 30 minutes to an hour. Usually after a reboot the memory usage is 54% and MySQL doesn't have a problem, but as the memory usage climbs towards 80-89% I start to get issues.
System load: 0.01 Users logged in: 0
Usage of /: 22.1% of 19.56GB IP address for eth0: *****
Memory usage: 90% IP address for as0t0: *****
Swap usage: 0% IP address for as0t1: *****
Processes: 93
As you can see, the memory usage is VERY high, and I've noticed the trend that the MySQL process dies as the memory usage gets higher. However, the swap usage is 0%.
Is there a way to make MySQL and the other processes use the swap?
Would letting MySQL use the swap stop it from dying once memory usage gets this high?
After the high memory usage, the process dies and I get this error:
[2002] SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
The processor load never goes above 25% in most cases. The server also runs on a fast SSD, so using swap wouldn't be a problem, and I don't have that much traffic.
Fixed it by making a swap file of 256 MB. MySQL no longer stops after running out of available memory to work in.
I followed this tutorial by Etel Sverdlov:
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04
and was able to make a swap file. I'll copy the tutorial here in case it gets deleted.
How To Add Swap on Ubuntu 12.04
About Linux Swapping
Linux RAM is composed of chunks of memory called pages. To free up pages of RAM, a “linux swap” can occur, and a page of memory is copied from RAM to preconfigured space on the hard disk. Linux swapping allows a system to harness more memory than is physically available.
However, swapping does have disadvantages. Because hard disks are much slower than RAM, virtual private server performance may slow down considerably. Additionally, swap thrashing can begin to take place if the system gets swamped by too many pages being swapped in and out.
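If you want to see this in action on a live system, vmstat reports swap activity per interval (a quick check, not part of the original tutorial):
vmstat 1 5
# the si/so columns show KiB swapped in from and out to disk each second;
# sustained non-zero values there are the thrashing described above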
Check for Swap Space
Before we proceed to set up a swap file, we need to check if any swap files have been enabled on the VPS by looking at the summary of swap usage.
sudo swapon -s
An empty list will confirm that you have no swap files enabled:
Filename Type Size Used Priority
Check the File System
After we know that we do not have a swap file enabled on the virtual server, we can check how much space we have on the server with the df command. The swap file will take 256 MB; since we are only using about 8% of /dev/sda, we can proceed.
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda 20907056 1437188 18421292 8% /
udev 121588 4 121584 1% /dev
tmpfs 49752 208 49544 1% /run
none 5120 0 5120 0% /run/lock
none 124372 0 124372 0% /run/shm
Create and Enable the Swap File
Now it’s time to create the swap file itself using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
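For reference, the size comes from bs × count: 1,024 bytes × 262,144 blocks (256k) = 256 MiB, which matches the 262140 KiB reported by mkswap below (one 4 KiB page is reserved for the swap header). On systems where it is available, fallocate creates the same file almost instantly (an alternative sketch; the dd command above works everywhere):
sudo fallocate -l 256M /swapfile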
Subsequently we are going to prepare the swap file by creating a linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This file will last on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
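You can test the fstab entry without rebooting by turning the swap off and letting swapon -a re-enable everything listed in fstab (a quick sanity check, not part of the original tutorial):
sudo swapoff /swapfile
sudo swapon -a
swapon -s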
Swappiness in the file should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 will cause swap to act as an emergency buffer, preventing out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
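To confirm everything took effect, a quick check could look like this (expected values are from the steps above; your output may differ slightly):
swapon -s                    # /swapfile should still be listed
free -m                      # the Swap row should show about 255 MB total
ls -l /swapfile              # should show -rw------- owned by root:root
cat /proc/sys/vm/swappiness  # should print 10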
All credit to: Etel Sverdlov at: https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-12-04

Multiple HAProxy instances on OpenShift

I have a Node.js application deployed on OpenShift (bronze plan) with the Web Load Balancer activated; the minimum number of active gears is 3 and the maximum is 16.
Sometimes in the main gear I can see more than one HAProxy instance running, for example now I have:
> ps -ef|grep /usr/sbin/haproxy
3505 37488 1 1 08:46 ? 00:00:01 /usr/sbin/haproxy -f /var/lib/openshift/<APP_ID>/haproxy//conf/haproxy.cfg -sf 37237
3505 149643 1 1 May28 ? 00:09:08 /usr/sbin/haproxy -f /var/lib/openshift/<APP_ID>/haproxy//conf/haproxy.cfg -sf 114873
Looking at the logs, I can't find any error. Is there any explanation for this?
Thanks!
This could be a consequence of executing the HAProxy reload script (/etc/init.d/haproxy). A reload will usually create a new haproxy process to accept new connections; it will also keep the old process alive while there are still open connections to it. Once they are closed, the old haproxy process is terminated.
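You can see this mechanism in the ps output above: the -sf argument is the PID of the previous haproxy process that the new one was started to replace. A rough sketch of what such a reload does (illustrative; the exact script and paths on OpenShift may differ):
# start a new haproxy, telling the old PID to finish serving and exit
/usr/sbin/haproxy -f /path/to/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
# the old process stops listening for new connections but keeps serving
# established ones, and exits on its own once they are all closed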

Google Compute Engine mounting persistent disk issues

I am following this guide https://developers.google.com/compute/docs/troubleshooting#ssherrors, specifically the section about recovering your persistent disk with another VM.
I am trying to follow this part:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging /mnt/myinstance
This is the error I get:
root@debugger:~# mount /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: you must specify the filesystem type
I am unsure of the filesystem type since Google Compute Engine disks are in use, and the original system has already been deleted and the disk attached to another machine, following the Google developers guide I referenced above.
root@debugger:/dev/disk/by-id# parted scsi-0Google_PersistentDisk_marty-wll-debugging -l
Model: Google PersistentDisk (scsi)
Disk /dev/sda: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
This gave me the information that it's "ext4".
However, when I issue the following command I still get an error:
root@debugger:~# mount -t ext4 /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg said:
[ 2452.205447] EXT4-fs (sdb): VFS: Can't find ext4 filesystem
Any ideas?
Thanks for pointing this out; I will update the docs. Try adding -part1 to the end of your device name. This will mount the partition instead of the disk. For your specific case:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging-part1 /mnt/myinstance
Also, there are cleaner aliases, so this should work as well:
mount /dev/disk/by-id/google-myinstance-debugging-part1 /mnt/myinstance
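If you want to confirm which aliases and partitions exist before mounting, listing the by-id directory is a quick check (device names below are illustrative; yours will differ):
ls -l /dev/disk/by-id/
# symlinks without a suffix point at whole disks, while the -part1 entries
# point at the first partition, e.g. google-myinstance-debugging-part1 -> ../../sdb1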

NTFS/GPT Mount exited with Exit Code 13

This is a duplicate post, since I didn't get any help on askubuntu.com.
I have a 1TB external hard drive that I recently formatted to NTFS. It was mounting fine on my Ubuntu 11.10 until just now. I didn't make any changes that would affect my OS or my external HDD.
The error that I get is:
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0).
Failed to mount '/dev/sdb2': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
I did read this and this, but neither helped.
I tried installing ntfsfix but no such package exists anymore.
I have never used this HDD on a Windows machine. If I need to use another machine to fix this, I have access to a Mac.
Any advice?
This is my sudo fdisk -l output:
What in the world is GPT? I didn't do that. It used to be NTFS.
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000586fb
Device Boot Start End Blocks Id System
/dev/sda1 * 2148 961320312 480659082+ 83 Linux
/dev/sda2 961320313 976773167 7726427+ 5 Extended
/dev/sda5 961320314 976773167 7726427 83 Linux
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfd88605
Device Boot Start End Blocks Id System
/dev/sdb1 1 1953525167 976762583+ ee GPT
This is what worked:
I first needed to get ntfs-3g (sudo apt-get install ntfs-3g).
I ran sudo fdisk -l to figure out the device name; mine was /dev/sdb1.
I ran ntfsfix -b /dev/sdb1 and that fixed the problem.
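Put together, the whole sequence looks like this (a sketch assuming your NTFS partition is also /dev/sdb1; check the fdisk output first):
sudo apt-get install ntfs-3g
sudo fdisk -l                        # find the NTFS partition, e.g. /dev/sdb1
sudo ntfsfix -b /dev/sdb1            # repairs $MFTMirr from $MFT and clears the bad-sector list
sudo mount -t ntfs-3g /dev/sdb1 /mnt # re-mount to verify the fix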
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0). Failed to mount '/dev/sda1': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
Please see the 'dmraid' documentation for more details.
Solution:
sudo fdisk -l
sudo ntfsfix /dev/<disk_name>
To find the disk name:
Go to Dashboard -> Disk Utility -> click the disk -> the device name (/dev/...) is shown.
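If you prefer the command line to the Disk Utility GUI, lsblk should give the same information (a quick sketch; lsblk ships with util-linux on stock Ubuntu):
lsblk -f
# lists every block device with its filesystem type, label and mount point;
# pick the NTFS partition, e.g. /dev/sdb1, and pass that to ntfsfix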