FIWARE Lab: Context Broker instance disk space

I have deployed an instance of the Context Broker on FIWARE Lab. The instance has 40 GB assigned, but when I execute the command
df -h
I see the following:
/dev/vda1 4,8 GB
tmpfs 1,9 GB
Where are the other GBs?
Best Regards
Best Regards

That's a known bug in the Orion instance. Please check this answer, which might solve your problem:
How to extend the default partition after creating a VM instance?

You could try sudo fdisk -l (or just fdisk -l as the root user) to see whether there is space on the VM's disks that is not yet partitioned or mounted.
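For example, on a typical FIWARE Lab image the root filesystem is /dev/vda1; a minimal sketch of growing it into the unallocated space, assuming the cloud-utils growpart tool is installed and the filesystem is ext4 (both are assumptions, check your image first):
# Compare the disk size with the partition size
sudo fdisk -l
# Grow partition 1 of /dev/vda to fill the disk (growpart is part of cloud-utils)
sudo growpart /dev/vda 1
# Grow the ext4 filesystem to match the new partition size
sudo resize2fs /dev/vda1
# Verify the new size
df -h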

Related

Google compute engine, instance dead? How to reach?

I have a small instance running in GCE. I had some trouble with MongoDB, so after a few tries I decided to reset the instance. But it didn't seem to come back online, so I stopped the instance and restarted it.
It is a Bitnami MEAN stack, which starts Apache and related services at startup.
But I can't reach the instance: no SCP, no SSH, no web service running. When I try to connect via SSH (in GCE) it times out; I can't make a connection on port 22. The information panel says 'The instance is booting up and sshd is not running yet', which is possible of course, but I can't reach the instance in any way, not even after waiting an hour :) I'm not sure what's happening if I can't connect to it somehow :(
There is some activity in the console: some CPU usage, mostly 0%, some incoming traffic but no outgoing.
I hope someone can give me a hint here!
Update 1
After the helpful tip from Serhii, I found this in the logs:
Booting from Hard Disk 0...
[ 0.872447] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inodes that were part of a corrupted orphan linked list found.
/dev/sda1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda1 requires a manual fsck
Update 2
So, I need to fsck the drive.
I created a snapshot, made a new disk from that snapshot, and added the new disk as an extra disk to another instance. Now that instance won't boot, with the same problem; removing the extra disk fixed it again. So adding the disk makes it crash even though it isn't the boot disk?
First, have a look at Compute Engine -> VM instances -> NAME_OF_YOUR_VM -> Logs -> Serial port 1 (console) and try to find errors and warnings that could be connected to a lack of free space or to SSH. It would be helpful if you updated your post with this information. If your instance has run out of free space, follow these instructions.
You can try to connect to your VM via the serial console by following this guide, but keep in mind that:
The interactive serial console does not support IP-based access restrictions such as IP whitelists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address.
You can find more details in the documentation.
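For reference, a minimal sketch of pulling the serial log and enabling the interactive serial console with gcloud (instance and zone names are placeholders):
# Dump what the VM wrote to serial port 1 during boot
gcloud compute instances get-serial-port-output YOUR_VM --zone YOUR_ZONE --port 1
# Enable and open the interactive serial console
gcloud compute instances add-metadata YOUR_VM --zone YOUR_ZONE --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port YOUR_VM --zone YOUR_ZONE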
Have a look at the Troubleshooting SSH guide and the Known issues for SSH in browser. In addition, Google provides a troubleshooting script for Compute Engine to identify issues with SSH login/accessibility of your Linux-based instance.
If you still have a problem, try to use your disk on a new instance.
EDIT: It looks like your test VM is trying to boot from the disk that you created from the snapshot. Try to follow this guide.
If you still have a problem, you can try to recreate the boot disk from a snapshot to resize it.
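Regarding Update 2: once the snapshot disk is attached to a healthy rescue VM as a secondary (non-boot) disk, a sketch of the repair itself, assuming the disk shows up as /dev/sdb (check with lsblk first):
# Identify the attached disk; do not mount it before repairing
lsblk
# Force a check of the damaged partition and auto-answer yes to fixes
sudo fsck -fy /dev/sdb1
After a clean fsck, detach the disk and use it (or a fresh snapshot of it) as the boot disk of a new instance.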

How to trace restarts of a MySQL container?

I have MySQL running in Kubernetes with one ReplicaSet, and it keeps crashing at random times with exit code 137. Memory consumption is 82% at the time of the crash.
I couldn't find anything in syslog, the MySQL error log, or the kubelet log other than the restart message.
The instance already has 64 GB, and right after the restart it is able to handle the application requests, so increasing memory should not be the actual solution.
Also, the monitoring tools say only 82% of the memory is being used at the time of the crash.
How does Kubernetes calculate the memory consumption of a pod?
How can I find out why it is crashing?
You can use kubectl logs your-pod -c container-name -n your-namespace to see your logs, and kubectl describe pod your-pod -n your-namespace to see the pod's events.
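Exit code 137 is 128 + 9, i.e. the container was killed with SIGKILL, which in Kubernetes usually means the cgroup OOM killer; note that the kernel's memory accounting includes page cache, so a pod can be OOM-killed while monitoring shows its working set below 100%. A minimal sketch of commands to confirm this (pod, container, and namespace names are placeholders):
# Logs from the previous, crashed container instance
kubectl logs your-pod -c container-name -n your-namespace --previous
# Look for "Last State: Terminated, Reason: OOMKilled" in the output
kubectl describe pod your-pod -n your-namespace
# Working-set memory as the metrics pipeline reports it (requires metrics-server)
kubectl top pod your-pod -n your-namespace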

Instances in instance-group rebooting when given sudo poweroff

I have a strange problem whereby instances in an instance group reboot themselves when I give them a sudo poweroff command (I'm doing this in a startup script, if that makes any difference...)
I've also tried the more elaborate gcloud compute instances delete -q --zone europe-west1-c $HOSTNAME to no avail.
What is the correct way to do this?
Instance groups spawn and restart instances on demand as required by their management policy. When an instance goes down, the policy will wake it again if necessary; when an instance is deleted, another one will be created in its place.
Removing an instance from an instance group requires modifying the instance group, as described here. Resizing the instance group depends on the management policy (see the sketch after this list):
For replica-pool-managed instance groups, check here
For autoscaler-managed instance groups, check here
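For example, instead of poweroff from inside the VM, shrinking or leaving the group from the outside would look like this sketch with today's gcloud (the group name is a placeholder):
# Scale the managed instance group down; deleted VMs are not recreated
gcloud compute instance-groups managed resize your-group --size 0 --zone europe-west1-c
# Or remove this VM from the group without deleting it
gcloud compute instance-groups managed abandon-instances your-group --instances $HOSTNAME --zone europe-west1-c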
Hope this helps.

Switch EBS volume between instances

Here is the situation:
In instance A, I have an EBS volume where my MySQL DB data is located; it was created based on this http://qugstart.com/blog/amazon-web-services/how-to-set-up-db-server-on-amazon-ec2-with-data-stored-on-ebs-drive-formatted-with-xfs/
I want to move the DB onto a separate instance B, so I have created the instance and installed MySQL already.
Both instances and the volume are in the same region.
My question is: if I detach the EBS volume from instance A and attach it to instance B, will it work automatically, or do I have to take any precautionary steps?
If you are following the instructions from the linked blog, you don't have to shut down the instance to detach the EBS volume. You only need to shut down your EC2 instance if the EBS volume is the root volume, i.e. /dev/sda1, /dev/sda, or /dev/xvda.
Having said that, you do need to shut down your MySQL service on instance A before detaching the volume:
service mysqld stop
Then you can bring up another instance B, attach the EBS volume where your data is, and then mount it. (Assuming you are attaching it to /dev/sdh or /dev/xvdh)
echo "/dev/sdh /vol xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir -m 000 /vol
sudo mount /vol
You can move EBS volumes, but before you detach one from the original server, you should stop that server.
When you attach the volume to the new server, look in the EC2 console to see where it is attached (e.g. /dev/xvdb). Then all you need to do is mount it somewhere. Your MySQL server's data directory should point to that mount location:
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_datadir
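For instance, a minimal my.cnf fragment pointing MySQL at a data directory on the mounted volume (the /vol/mysql path is an assumption based on the mount point used above):
# in /etc/my.cnf (or /etc/mysql/my.cnf, depending on the distribution)
[mysqld]
datadir=/vol/mysql
Restart MySQL afterwards (service mysqld restart) so it picks up the new data directory.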
I have been able to easily detach EBS volumes from one instance and reattach them to another running instance with no problems at all.
I would certainly make sure you first terminate any programs that may have open files on, or are otherwise using, that volume before detaching it.
I'm not very familiar with MySQL, but I assume that when you attach the new volume you will need to let MySQL know about the database and where it is. In SQL Server you would do this by 'attaching' it to a running SQL Server instance; MySQL probably has a similar process.

How to get a new EC2 instance to mount an existing volume on which there is a MySQL database?

Several months ago, I followed http://aws.amazon.com/articles/1663 and got it all running. Then my PC crashed, I lost the keypair (http://stackoverflow.com/questions/7949835/accessing-ec2-instance-after-losing-keypair), and I could no longer access the instance.
I now want to launch a new instance, mount this MySQL/DB volume left over from before, and see if I can get to the data on it. How can I go about doing that?
You outlined the correct approach to this problem already, and the author of the article you referenced, Eric Hammond, has written another one detailing this very process; see Fixing Files on the Root EBS Volume of an EC2 Instance. It boils down to:
start another EC2 instance
stop the EC2 instance you can't access anymore
detach the EBS volume from the stopped instance
attach the EBS volume to the running instance
SSH into the running instance
mount the EBS volume in the running instance
perform whatever fixes are necessary, e.g. adjust the /var permissions in your case
Please see Eric's instructions for details on how to do this from the command line; obviously you can achieve all steps up to the SSH access via the AWS Management Console as well, removing the need to install the Amazon EC2 API Tools in case they aren't readily available already.
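As a sketch, the same sequence with the modern AWS CLI instead of the old EC2 API Tools (all IDs and device names are placeholders):
# Stop the instance you can no longer access
aws ec2 stop-instances --instance-ids i-0oldinstance
# Move the data volume over to the running instance
aws ec2 detach-volume --volume-id vol-0dbvolume
aws ec2 attach-volume --volume-id vol-0dbvolume --instance-id i-0newinstance --device /dev/sdf
# Then, on the running instance (the device often appears as /dev/xvdf):
sudo mkdir /mnt/dbvol
sudo mount /dev/xvdf /mnt/dbvol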