An HDD failure occurred, so a new primary HDD was installed and the old HDD was added as a secondary drive.
I'm trying to mount the secondary HDD, but I keep running into errors.
I created /media/qwe/.
I then connected over SSH with PuTTY and ran these commands:
root#chicken [/]# mount /dev/sdb2 /media/qwe
mount: unknown filesystem type 'LVM2_member'
So the mount fails with that error.
root#chicken [/]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup" using metadata type lvm2
Found volume group "VolGroup" using metadata type lvm2
root#chicken [/]# vgs
VG        #PV  #LV  #SN  Attr    VSize  VFree
VolGroup    1    3    0  wz--n-  1.82t      0
VolGroup    1    3    0  wz--n-  1.82t      0
I use cPanel and WHM.
I am trying to recover the MySQL databases that were lost. I managed to mount the sdb1 partition, but I think that's the boot partition, and I don't need that. I need to access the other files!
Any help?
You don't need the file system to get your data back.
Start by taking an image of the failed disk.
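As a rough sketch of that first step, assuming the old drive still shows up as /dev/sdb and you have a location with enough free space for the image (the device name and paths are placeholders, not taken from the question):

# GNU ddrescue copes with read errors better than plain dd; it may need to be installed first
ddrescue -d /dev/sdb /mnt/backup/old-disk.img /mnt/backup/old-disk.map
# If ddrescue is not available and the drive still reads reasonably well, plain dd can do it
dd if=/dev/sdb of=/mnt/backup/old-disk.img bs=64K conv=noerror,sync

Once you have an image, you can work on the copy (loop-mount it or point recovery tools at it) without putting more wear on the failing drive.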
My first problem looked like this:
Writing objects: 60% (9/15)
It froze there for some time with a very low upload speed (in kB/s), then, after a long time, gave this message:
fatal: the remote end hung up unexpectedly
Everything up-to-date
I found something that seemed to be a solution:
git config http.postBuffer 524288000
This created a new problem that looks like this:
MacBook-Pro-Liana:LC | myWebsite Liana$ git config http.postBuffer 524288000
MacBook-Pro-Liana:LC | myWebsite Liana$ git push -u origin master
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 4 threads
Compressing objects: 100% (14/14), done.
Writing objects: 100% (15/15), 116.01 MiB | 25.16 MiB/s, done.
Total 15 (delta 2), reused 0 (delta 0)
error: RPC failed; curl 56 LibreSSL SSL_read: SSL_ERROR_SYSCALL, errno 54
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
Please help, I have no idea what’s going on...
First, Git 2.25.1 made it clear that:
Users in a wide variety of situations find themselves with HTTP push problems.
Oftentimes these issues are due to antivirus software, filtering proxies, or other man-in-the-middle situations; other times, they are due to simple unreliability of the network.
This works for none of the aforementioned situations and is only useful in a small, highly restricted number of cases: essentially, when the connection does not properly support HTTP/1.1.
Raising this is not, in general, an effective solution for most push problems, but can increase memory consumption significantly since the entire buffer is allocated even for small pushes.
Second, it depends on your actual remote (GitHub? GitLab? Bitbucket? An on-premise server?). That remote server might have an incident in progress.
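If raising http.postBuffer did not help, it is usually worth undoing that override and double-checking which remote and protocol the push is actually going to. A minimal sketch (run in the repository that has the problem):

# Remove the local http.postBuffer override that was added earlier
git config --unset http.postBuffer
# Confirm which remote URL and protocol the push targets
git remote -v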
I've been developing an application for some weeks, and it's been running in an OpenShift small gear with DIY 0.1 + PostgreSQL cartridges for several days, including ~5 new deployments. Everything was OK, and each new deploy stopped and started everything in seconds.
Nevertheless, today pushing master as usual stops the cartridge, and it won't start. This is the trace:
Counting objects: 2688, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (1930/1930), done.
Writing objects: 100% (2080/2080), 10.76 MiB | 99 KiB/s, done.
Total 2080 (delta 1300), reused 13 (delta 0)
remote: Stopping DIY cartridge
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Logging in with SSH and running the start action hook manually fails because the database is stopped. Restarting the gear makes everything work again.
The failing deployment has nothing to do with it, since it only adds a few lines of code, nothing about configuration or anything else that might break the boot.
Logs (at $OPENSHIFT_LOG_DIR) reveal nothing. Quota usage seems fine:
Cartridges              Used    Limit
----------------------  ------  -----
diy-0.1 postgresql-9.2  0.6 GB  1 GB
Any suggestions about what I could check?
Oh, dumb mistake. My last working deployment involved a change in the binary name, which now matches the gear name. The stop script, with its ps | grep and so on, was wrong: it was killing not only the application but also the connection. Changing it fixed the issue.
The solution was inspired by this blog post.
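For reference, one way to avoid that kind of accidental match is to record the PID when the application starts and kill only that PID in the stop hook. A rough sketch, assuming the standard OpenShift v2 environment variables; the binary name myapp and the pidfile name are hypothetical, not taken from the question:

# start action hook (fragment): launch the app and remember its PID
nohup ./myapp > "$OPENSHIFT_LOG_DIR/myapp.log" 2>&1 &
echo $! > "$OPENSHIFT_DATA_DIR/myapp.pid"

# stop action hook (fragment): kill only the recorded PID instead of grepping by name
if [ -f "$OPENSHIFT_DATA_DIR/myapp.pid" ]; then
    kill "$(cat "$OPENSHIFT_DATA_DIR/myapp.pid")" && rm "$OPENSHIFT_DATA_DIR/myapp.pid"
fi

Killing by recorded PID means the stop hook can never match the SSH connection or any other process whose command line happens to contain the gear name.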
I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears that init is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with a higher capacity using your PD; however, if a script is causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project= getserialportoutput
If the issue still continues, you can either:
- Make a snapshot of your PD and make a copy of it, or
- Delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk, so you can access it and find what is causing the issue. Visit this link https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit this page http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inode troubleshooting.
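Once the PD is mounted on another VM (say at /mnt/pd, a placeholder path), a quick way to confirm an inode problem and locate the directory responsible looks roughly like this:

# Show inode usage per filesystem; IUse% near 100% confirms the problem
df -i
# Count entries under each top-level directory of the mounted disk to find the offender
for d in /mnt/pd/*; do echo "$d: $(find "$d" | wc -l)"; done | sort -t: -k2 -n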
Make sure the Allow HTTP traffic setting on the VM is still enabled.
Then check which network firewall you are using and its rules.
If your network is set up to use an ephemeral IP, the IP will periodically be released back, which will cause it to change over time. In that case, set it to static/reserved (on the Networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses
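If you prefer to check those from the command line rather than the console, the newer gcloud tool can do roughly the following (this is an assumption on my part; the steps above use the web console, and the address name and region below are placeholders):

# List firewall rules to confirm that tcp:80 (and tcp:22) are still allowed
gcloud compute firewall-rules list
# Reserve a static external IP that can then be assigned to the instance
gcloud compute addresses create my-static-ip --region us-central1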
I am following this guide https://developers.google.com/compute/docs/troubleshooting#ssherrors, specifically the section about recovering your persistent disk with another VM.
I am trying to follow this part:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging /mnt/myinstance
This is the error I get:
root#debugger:~# mount /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: you must specify the filesystem type
I am unsure of the filesystem because Google Compute Engine persistent disks are being used, and the original instance has already been deleted, with its disk attached to another machine following the Google developers guide I referenced above.
parted scsi-0Google_PersistentDisk_marty-wll-debugging -l
root#debugger:/dev/disk/by-id# parted scsi-0Google_PersistentDisk_marty-wll-debugging -l
Model: Google PersistentDisk (scsi)
Disk /dev/sda: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 10.7GB 10.7GB primary ext4
This told me that it is "ext4".
Although, when I issue the following command, I still get an error:
root#debugger:~# mount -t ext4 /dev/disk/by-id/scsi-0Google_PersistentDisk_marty-wll-debugging /mnt/marty-wll
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg said:
[ 2452.205447] EXT4-fs (sdb): VFS: Can't find ext4 filesystem
Any ideas?
Thanks for pointing this out; I will update the docs. Try adding -part1 to the end of your device name. This will mount the partition instead of the disk. For your specific case:
mount /dev/disk/by-id/scsi-0Google_PersistentDisk_myinstance-debugging-part1 /mnt/myinstance
Also, there are cleaner aliases, so this should work as well:
mount /dev/disk/by-id/google-myinstance-debugging-part1 /mnt/myinstance
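If in doubt about which aliases and partition suffixes exist for an attached disk, listing the by-id directory shows them (the grep pattern is just an example):

# Each attached disk appears here, with one extra -partN entry per partition
ls -l /dev/disk/by-id/ | grep google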
I need to set up a Hadoop/HDFS cluster with one namenode and two datanodes. I am aware of the conf/slaves file, which lists the machines the datanodes run on. But how can I specify where Hadoop/HDFS is installed locally on a slave node? And which user account should be used to start HDFS there?
Edit: in the log files, I found the following error when I tried to run start-dfs.sh:
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
The user is expected to be the same as on the master node. The location of the actual data can be modified by changing the dfs.data.dir property in hadoop-site.xml.
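A minimal sketch of that config change, assuming the older hadoop-site.xml style configuration mentioned above (the path /hdfs/data is a placeholder):

<property>
  <name>dfs.data.dir</name>
  <value>/hdfs/data</value>
  <description>Local filesystem directory where this datanode stores its blocks.</description>
</property>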