I have been following the directions on setting up a MySql database on EC2 here: http://aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
I ran into a problem when running:
$ sudo mkfs.xfs /dev/sdh
Cannot stat /dev/sdh: No such file or directory
What could the problem be?
Edit:
I ran the following based on @karudzo's advice:
$ cd /dev
$ sudo /sbin/MAKEDEV sdh
$ sudo mkfs.xfs /dev/sdh
mkfs.xfs: cannot open /dev/sdh: No such device or address
That means that /dev/sdh doesn't exist yet. Try this: http://getsatisfaction.com/cohesiveft/topics/attaching_a_ebs_volume_to_ec2_instance_at_dev_sdh_doesnt_appear_to_attach
You can't create ad-hoc devices for your instances. In the Amazon console you'll need to:
1) Create an EBS volume
2) Attach that volume to your instance at device location /dev/sdh
Then you'll be able to run mkfs.xfs on the block device
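For reference, a rough sketch of those two steps with the AWS CLI (the size, zone, and IDs here are placeholders, substitute your own):
$ aws ec2 create-volume --size 10 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdh
Note that on instances with newer kernels the device may show up as /dev/xvdh even when attached as /dev/sdh.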
Related
I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't figure out which keys and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder for my user, and I checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided (see the sketch below).
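For example, a minimal ~/.ssh/config entry (the alias, IP, and user are placeholders):
Host gce-container
    HostName <instance-external-ip>
    User <user_name>
    IdentityFile ~/.ssh/google_compute_engine
With that in place, sshfs gce-container:/home/<user_name> /mnt/gce-container should authenticate exactly like plain ssh gce-container does.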
If you get this from sshfs:
read: Connection reset by peer
it may help to make your key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.
I'm trying to run a Docker MySQL container with an initialized db according to the instructions provided in this answer: https://stackoverflow.com/a/29150538/6086816. The first run works fine, but on the second run, after the script tries to execute /usr/sbin/mysqld, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What could be the reason for this?
I was facing the same issue. These are the steps I took to resolve it:
First, stop your Docker service: sudo service docker stop
Now, go to the Docker folder on your Linux system: /var/lib/docker
Within the docker folder, go into the volumes folder, which holds the volumes of all your containers (the storage of each container): cd volumes
Inside volumes, run sudo ls and you will find multiple folders with hash names. These folders are your containers' volumes, each named after its hash.
(To get the hash of your container's volume, run docker inspect '<your container ID>'. This prints a JSON document, the config of your Docker container. Search for the Mounts key within this JSON: under Mounts, "Name" is your volume name (the hash) and "Source" is the path where the volume is located.)
Once you have the name of your volume, go into that volume's folder, and within it you will find a _data folder. Go into it.
Finally, within _data, run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your Docker service and then start your Docker container. It will start working.
Note: use sudo with each command while you are inside the Docker folders.
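Condensed into commands, the steps above might look like this (the container ID and volume hash are placeholders; read them off your own docker inspect output first):
docker inspect <your container ID>   # find "Name" (volume hash) and "Source" under Mounts
sudo service docker stop
sudo rm /var/lib/docker/volumes/<volume_hash>/_data/mysql.sock.lock
sudo service docker start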
You should make sure the socket file has been deleted before you start MySQL. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file; you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
This is a glitch with docker.
Execute following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
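One caveat: if the stale socket files live in an anonymous volume, removing the containers alone may not clear them, since docker rm keeps volumes by default. A variation on the commands above (my addition, not part of the original answer) is to pass -v so the associated anonymous volumes are removed too:
docker stop $(docker ps -a -q)
docker rm -v $(docker ps -a -q)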
I just faced the same problem.
After much research, here is a summary of my solution:
Find the host location of the Docker files:
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the file mysql.sock.lock
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: be sure you don't have any other mysql.sock... files besides those two. In that case, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container indicated exactly the same problem as described, repeated a few times.
I wanted to take a look around by running the container in interactive mode with docker start -i mysql_container in one bash window, while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps somebody.
I am trying to view disk space on a VM instance to which I have attached a Persistent Disk.
From the Cloud console, the disk is shown as attached.
But using the command df -h, I am not able to see the attached Persistent Disk.
Strangely though, I am able to cd to the mounted Persistent Disk.
Any ideas?
After a new disk is attached to a running instance, it is not mounted automatically. You can find the disk under /dev/disk/by-id. Once the disk is attached, you can format and mount it using the following command:
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/<disk_name> <mount_point>
More information can be found here: Attaching a persistent disk to an instance
I executed the commands below and it worked:
sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0Google_PersistentDisk_s4vm1-www.stutzen.co
The above command formats the disk and prints a success message to the console. The disk ID will differ based on the name provided when creating the disk. You can view all attached disks by executing the command below:
ls /dev/disk/by-id/
Then execute the command below to mount the disk at the required location:
sudo mount -o discard,defaults /dev/disk/by-id/scsi-0Google_PersistentDisk_s4vm1-www.stutzen.co /mnt/d1/
Make sure the /mnt/d1/ directory already exists; if not, create it with:
sudo mkdir -p /mnt/d1
src: https://cloud.google.com/compute/docs/disks/persistent-disks#formatting
I added an additional disk, named "apps", on a running instance.
I followed these steps to add the disk:
sudo mkdir -p /apps
List the disk IDs with:
ls -l /dev/disk/by-id
I found the disk ID and used it in the following commands.
sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0Google_PersistentDisk_apps
sudo mount -o discard,defaults /dev/disk/by-id/scsi-0Google_PersistentDisk_apps /apps
df -h now lists the volume.
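To make the mount survive reboots (an extra step beyond the list above), you could also add an entry along these lines to /etc/fstab; nofail lets the instance boot even if the disk is detached:
/dev/disk/by-id/scsi-0Google_PersistentDisk_apps /apps ext4 discard,defaults,nofail 0 2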
What does this command show you:
sudo fdisk -l
Maybe try df -ah.
I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos).
I installed kubernetes according to the guide I found here and created the json for the pod using my images.
When I execute sudo ./kubecfg list /pods I get the following error:
F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods
EDIT: Update
Instead of running the commands myself, I integrated them into the Vagrantfile (as such).
This makes Kubernetes work fine. HOWEVER, after some time my vagrant ssh connection gets closed off. When I reconnect, any kubernetes command I run results in the same error as above.
EDIT 2: Update
I managed to get it running again, however I am unsure whether it will run smoothly.
I had to re-execute the following commands.
sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
I believe it is in fact the apiserver that needs restarting.
What is the source of this "timeout"? (And where can I find logs for this?)
Kubernetes development is moving insanely fast right now, so this could be out of date by tomorrow. With that in mind, the Kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides, but there are a few tips that I have learned doing this myself.
The first thing to note is that kubecfg is being deprecated in favor of kubectl. So for future reference, if you want to get info about a pod you would run something like:
./kubectl get pods
With kubectl you will also need to set an environment variable so kubectl knows how to talk to the apiserver:
export KUBERNETES_MASTER=http://IPADDRESS:8080
The easiest way to debug exactly what is going on, if you are using CoreOS, is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit, you can look at what's going on by running:
journalctl -f -u kube-apiserver
from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:
systemctl start kube-apiserver
On CoreOS you should look at the logs using journalctl.
For example, if you wish to see the etcd logs, which Kubernetes relies on for storing the state of its minions, run journalctl _COMM=etcd; similarly, journalctl _COMM=apiserver will show you the logs from the apiserver, one of the key components in Kubernetes.
You can also get the last few log entries if you run systemctl status apiserver.
Based on errordeveloper's advice, my recent installation ran into a similar problem.
Using systemctl status apiserver and sudo systemctl start apiserver, I managed to get the environment up and running again.
I have an AWS instance running CentOS 6.5. It has been updated, secured, and set up for web hosting (LAMP). I attached an EBS volume to the instance and mounted it under /data.
Two questions:
How can I get MySQL to use the /data directory as its database storage location? (I don't want to run the program from the /data directory, just put the .sql file there.)
How can I do the same for my web site? I plan on running a wordpress site and its current location is in the /var/www/html directory. I want to change this to /data/site.
I want to keep the web site files and database on a separate volume: /data. If my instance was to get corrupt or inaccessible, I can attach the EBS volume to a new instance.
I have read dozens of tutorials and articles on how to move MySQL to a different directory, but nothing is working; MySQL refuses to start up afterwards. Can I keep MySQL installed as is, but have it read/write the database in a different directory like /data, which is a mounted EBS volume, or is this not possible at all on Linux?
Here are some of the tutorials and articles I have been following/testing with:
aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
spruce.it/noise/setting-up-a-proper-lamp-stack-on-aws-ec2-ebs/
EDIT:
This is what I am doing.
Create a new instance using this ami: https://aws.amazon.com/marketplace/pp/B00IOYDTV6?ref=cns_srchrow
Once the instance is up, I run updates using: sudo yum update -y
Once updated, I set it up as a LAMP web server using these instructions: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
In addition to the above steps, I allow port 80 tcp connections on the built-in firewall. I run these commands: sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT and sudo service iptables save
Once this is done, I test my site at http://IP-ADDRESS (this shows me the Apache Test Page)
Once LAMP is installed, I install the MySQL Server by running this: yum install mysql-server
After that is installed, I proceed to the "To secure the MySQL server" instructions in the previous Amazon document.
Next, I install PHPMyAdmin using these two tutorials: http://tecadmin.net/installing-apache-mysql-php-on-centos-redhat/# and http://tecadmin.net/how-to-install-phpmyadmin-on-centos-using-yum/
At this point, I have a fully functioning web server. Now, I want to use the AWS EBS volume to store all the databases and website files. First, I attach the newly created AWS EBS volume, using this tutorial: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
THIS IS WHERE THE PROBLEMS START.
Using the information in this tutorial: aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1, it says FAILED.
So one thing you can do is the following, which avoids copying all the directories. You need to make sure that all permissions are set up correctly for it to work:
MySQL data dir:
mv /var/lib/mysql /var/lib/mysql.orig
mkdir -p /<your-new-ebs-mountpoint>/var/lib/mysql
chown mysql.mysql /<your-new-ebs-mountpoint>/var/lib/mysql
chmod 700 /<your-new-ebs-mountpoint>/var/lib/mysql
etc configs:
mkdir -p /<your-new-ebs-mountpoint>/etc
cp /etc/my.cnf /<your-new-ebs-mountpoint>/etc/my.cnf
mv /etc/my.cnf /etc/my.cnf.orig
ln -s /<your-new-ebs-mountpoint>/etc/my.cnf /etc/my.cnf
logs:
mkdir -p /<your-new-ebs-mountpoint>/var/log
mv /var/log/mysqld.log /var/log/mysqld.log.orig
touch /<your-new-ebs-mountpoint>/var/log/mysqld.log
chown mysql.mysql /<your-new-ebs-mountpoint>/var/log/mysqld.log
chmod 640 /<your-new-ebs-mountpoint>/var/log/mysqld.log
ln -s /<your-new-ebs-mountpoint>/var/log/mysqld.log /var/log/mysqld.log
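Note that these steps by themselves don't tell mysqld where the new data directory lives. One way to do that (my assumption, not spelled out in the steps above) is to set datadir in the relocated my.cnf, which /etc/my.cnf now symlinks to:
[mysqld]
datadir=/<your-new-ebs-mountpoint>/var/lib/mysql
And if you want to keep your existing databases rather than initialize fresh ones, copy the contents of /var/lib/mysql.orig into the new directory (preserving ownership) before starting mysqld.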