I can't seem to find a way to start a shell using all the servers set up in conf/servers
I have only found a way to submit jobs to the cluster using /bin/snappy-job.sh, where I specify the lead location, but I would like to try a real-time shell to run some tests against the whole cluster.
Thank you,
Saif
Please see this link. It explains how to start a spark-shell and connect it to the snappy store.
http://snappydatainc.github.io/snappydata/connectingToCluster/#using-the-spark-shell-and-spark-submit
Essentially you need to provide the locator property; this is the same locator you used to start the snappy cluster.
$ bin/spark-shell --master local[*] --conf snappydata.store.locators=locatorhost:port --conf spark.ui.port=4041
Note that with the above, a separate compute cluster is created to run your program. The snappy cluster is not used for computation when you run your code from this shell. The required table definitions and data are fetched efficiently from the snappy store.
In the future we might make this shell connect to the snappy cluster in such a way that it uses the snappy cluster itself as its compute cluster.
I have a small instance running in GCE. I had some trouble with MongoDB, so after a few tries I decided to reset the instance. But... it didn't seem to come back online, so I stopped the instance and restarted it.
It is a Bitnami MEAN stack which starts Apache and other services at startup.
But... I can't reach the instance! No SCP, no SSH, no web service running. When I try to connect via SSH (in GCE) it times out; I can't make a connection on port 22. The information panel says 'The instance is booting up and sshd is not running yet', which is possible of course... But I can't reach the instance in any way, not even after an hour's wait :) Not sure what's happening if I can't connect to it somehow :(
There is some activity in the console... some CPU usage, mostly 0%, some incoming traffic but no outgoing...
I hope someone can give me a hint here!
Update 1
After the helpful tip from Serhii... I found this in the logs...
Booting from Hard Disk 0...
[ 0.872447] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
/dev/sda1 contains a file system with errors, check forced.
/dev/sda1: Inodes that were part of a corrupted orphan linked list found.
/dev/sda1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
fsck exited with status code 4
The root filesystem on /dev/sda1 requires a manual fsck
Update 2...
So, I need to fsck the drive...
I created a snapshot, made a new disk from that snapshot, and added the new disk as an extra disk to another instance. Now that instance won't boot, showing the same problem... Removing the extra disk fixed it again. So adding the disk makes it crash even though it isn't the boot disk?
First, have a look at Compute Engine -> VM instances -> NAME_OF_YOUR_VM -> Logs -> Serial port 1 (console) and try to find errors and warnings that could be connected to a lack of free space or SSH. It would be helpful if you updated your post with this information. If your instance has run out of free space, follow these instructions.
You can try to connect to your VM via Serial console by following this guide, but keep in mind that:
The interactive serial console does not support IP-based access restrictions such as IP whitelists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address.
You can find more details in the documentation.
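For example, enabling and opening the interactive serial console from the Cloud SDK looks roughly like this (the instance name and zone are placeholders; see the guide above for details):
$ gcloud compute instances add-metadata INSTANCE_NAME --zone ZONE --metadata serial-port-enable=TRUE
$ gcloud compute connect-to-serial-port INSTANCE_NAME --zone ZONE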
Have a look at the Troubleshooting SSH guide and Known issues for SSH in browser. In addition, Google provides a troubleshooting script for Compute Engine to identify issues with SSH login/accessibility of your Linux-based instance.
If you still have a problem, try to use your disk on a new instance.
EDIT: It looks like your test VM is trying to boot from the disk that you created from the snapshot. Try to follow this guide.
If you still have a problem, you can try to recreate the boot disk from a snapshot to resize it.
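As a rough sketch of that approach (the instance and disk names are placeholders, and the device path depends on the attach order, so check it with lsblk first), you can attach the damaged disk to a working rescue VM as a secondary, non-boot disk and repair the filesystem without mounting it:
$ gcloud compute instances attach-disk rescue-vm --disk damaged-disk --zone ZONE
$ gcloud compute ssh rescue-vm --zone ZONE
$ lsblk                    # inside the rescue VM: identify the attached disk, e.g. /dev/sdb1
$ sudo fsck -fy /dev/sdb1  # repair the filesystem (device name is an example)
$ exit
$ gcloud compute instances detach-disk rescue-vm --disk damaged-disk --zone ZONE
Once fsck reports the filesystem as clean, detach the disk and use it as the boot disk of a new instance.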
So I have MySQL installed on my EC2 instance, but when I try to start it I get the following error:
ubuntu$ mysql --version
mysql Ver 14.14 Distrib 5.6.19, for debian-linux-gnu (x86_64) using EditLine wrapper
ubuntu$ mysql start
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
There are many different ways of setting up an EC2 AMI with MySQL, including using any of the pre-configured AMIs supplied by Amazon.
The default Getting Started AMI provided by Amazon uses Fedora Core 4, and you can install MySQL by using yum:
shell> yum install mysql
This installs both the MySQL server and the Perl DBD::mysql driver for the Perl DBI API.
Alternatively, you can use one of the AMIs that include MySQL within the standard installation.
Finally, you can also install a standard version of MySQL downloaded from the MySQL Web site. The installation process and instructions are identical to any other installation of MySQL on Linux. See Chapter 2, Installing and Upgrading MySQL.
The standard configuration for MySQL places the data files in the default location, /var/lib/mysql. The default data directory on an EC2 instance is /mnt (although on the large and extra large instances you can alter this configuration). You must edit /etc/my.cnf to set the datadir option to point to the larger storage area.
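For example, the relevant part of /etc/my.cnf would look something like this (the /mnt/mysql-data path is only an illustration; point it at whichever directory you created on the larger volume, and stop the server and move any existing data files there first):
[mysqld]
# store the data files on the larger /mnt storage instead of /var/lib/mysql
datadir=/mnt/mysql-data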
Important
The first time you use the main storage location within an EC2 instance it needs to be initialized. The initialization process starts automatically the first time you write to the device. You can start using the device right away, but the write performance of the new device is significantly lower on the initial writes until the initialization process has finished.
To avoid this problem when setting up a new instance, you should start the initialization process before populating your MySQL database. One way to do this is to use dd to write to the file system:
root-shell> dd if=/dev/zero of=initialize bs=1024M count=50
The preceding command creates a 50GB file on the file system and starts the initialization process. Delete the file once the process has finished.
The initialization process can be time-consuming. On the small instance, initialization takes between two and three hours. For the large and extra large drives, the initialization can take 10 or 20 hours, respectively.
In addition to configuring the correct storage location for your MySQL data files, also consider adjusting the following settings in your instance before you save the instance configuration for deployment (a small example configuration follows the list):
Set the MySQL server ID, so that when you use it for replication, the ID information is set correctly.
Enable binary logging, so that replication can be initialized without starting and stopping the server.
Set the caching and memory parameters for your storage engines. There are no limitations or restrictions on which storage engines you use in your EC2 environment. Choose a configuration, possibly one of the standard configurations provided with MySQL, appropriate for the instance on which you expect to deploy. The large and extra large instances have RAM that can be dedicated to caching. Be aware that if you choose to install memcached on the servers as part of your application stack, you must ensure there is enough memory for both MySQL and memcached.
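A minimal sketch of these settings in /etc/my.cnf (the values are illustrative only; pick a server ID that is unique within your replication topology and size the cache to the instance's RAM):
[mysqld]
server-id=1                    # unique ID used for replication
log-bin=mysql-bin              # enable binary logging
innodb_buffer_pool_size=4G     # example InnoDB cache size; tune to available memory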
Once you have configured your AMI with MySQL and the rest of your application stack, save the AMI so that you can deploy and reuse the instance.
My free trial period has been cancelled but I would like to continue using my existing engines.
The console says my instance is terminated and is no longer running.
Rebooting the instance gives a 'The resource .... is not ready' error.
How can I continue using my engines with exact same IP setting and other configurations?
Once an instance is in a 'TERMINATED' state it can no longer be booted. You will need to recreate an instance with the same configuration, IP address and boot disk as you indicate. See this FAQ for more information about the terminated state: https://cloud.google.com/compute/docs/troubleshooting#terminate
To retain your existing IP address you will need to promote it to a static IP address resource. You can then reassign this address resource to your new instance.
$ gcloud compute addresses create address-name --addresses IP_ADDRESS --region REGION
See this article for the exact steps:
https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip
To migrate the existing data on your disk you could create a snapshot and then restore that snapshot when creating a new instance:
$ gcloud compute disks snapshot DISK
See this article for the detailed steps:
https://cloud.google.com/compute/docs/disks#creating_snapshots
Finally to migrate all the associated configuration and metadata you could use the describe subcommand in the Cloud SDK:
$ gcloud compute instances describe INSTANCE
This prints out the entire configuration of your existing instance, which you can then use when recreating it as a new instance.
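Putting these pieces together, a rough sketch of the recreation steps might look like this (the disk, snapshot, instance, and address names are placeholders):
$ gcloud compute disks create restored-disk --source-snapshot SNAPSHOT_NAME --zone ZONE
$ gcloud compute instances create new-instance --zone ZONE --disk name=restored-disk,boot=yes --address address-name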
The exact steps are quite similar to the process of migrating an instance from one zone to another. You could essentially follow the guide for that process but recreate your new instance in the same zone if you prefer not to move the location of your data. The steps for migrating an instance across zones can be found here:
https://cloud.google.com/compute/docs/instances#moving_an_instance_between_zones
Each node in my cluster can have more than one component, where the components are:
PSQL
Mongo Config server
Mongo shards
Redis
Celery Workers
Python processing node
And so on...
The UI nodes are under AWS auto scaling and cannot be run on any other nodes. We can configure one or more components on a node via some CLI commands that we have built. Just to give you an idea, we have commands like:
Turn off/on Redis
Turn off/on PSQL
Add another shard (can be done only if shard is running on this node)
etc.
So each CLI command's execution depends on the components installed on that node. Moreover, each CLI command's interaction is different: some take just one parameter, some may need a lot more. Now, as the cluster grows in size, there is a requirement to execute these commands centrally somehow. I think this can probably be done as follows:
Build a tab specifically for the super user admin, where he can see all nodes and, after selecting one of the nodes, select from all the possible CLI commands
Depending on the CLI command, an Expect script would be run on that node
Now, I know this is all quite messy; I was hoping to find out whether there's a simple utility/framework that helps simplify all this, if possible?
I run a pretty customized cluster for processing large amounts of scientific data based on a basic LAMP design. In general, I run a separate MySQL server with around 128GB of RAM and about 1TB of storage. Separately, I run a head node that serves as an NFS mount point for the data input of my process, and a web server to display results. Finally, I typically have a few compute nodes that get their jobs from a MySQL table, get the data from NFS, do some heavy lifting, then put results into MySQL.
I have come across a dataset I would like to process which is pretty large (1TB of input data), and I don't really have the hardware on hand to handle it. As a result, I began investigating Google Compute Engine and the like, and the prospect of scaling instances to process these data rapidly, with the results stored in a MySQL instance. Upon completion, the MySQL tables could be dumped from the cloud and brought up locally for analysis. I would have no problem deploying a MySQL server, along with the rest of the LAMP pieces and the compute nodes, but I can't quite figure out how I would do this in the cloud.
A major sticking point seems to be the lack of a read/write NFS share which would allow me to get the data onto several instances, crunch it, then push the results to MySQL. This is a necessary step for me, as I could queue hundreds of jobs from the webserver, then have the instances (as many as 50-100) pick the jobs up by connecting to a centralized MySQL instance to find out which jobs an instance needs to do and where the data is, process the data (there is a file conversion that happens which makes the write part necessary), crunch it, then load the results into MySQL. I hope I'm explaining my situation clearly. This seems like a great example of a CPU-intensive process that would scale nicely in the cloud, I just can't seem to put all the pieces together... Any input is appreciated!
It sounds quite possible; I've been doing similar things in GCE for a while now.
NFS mount - you just need to configure it as you would normally. Set up the NFS server on the head node, and then configure the clients on the slave nodes to mount it. Here and here are some basic configuration instructions for CentOS 6 that I used to get NFS up and running.
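As a rough sketch of what that involves on CentOS 6 (the export path /data and the network range are assumptions; adjust them to your project's internal network):
$ sudo yum install -y nfs-utils                  # on the head node (NFS server)
$ echo "/data 10.240.0.0/16(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
$ sudo service rpcbind start && sudo service nfs start
$ sudo yum install -y nfs-utils                  # on each compute node (NFS client)
$ sudo mkdir -p /mnt/data
$ sudo mount -t nfs headnode:/data /mnt/data     # 'headnode' is the server's hostname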
Setting up a LAMP stack is very straightforward. These machines run pretty much vanilla Linux distros, so you can just use yum or apt-get to install components.
For the cluster, you will probably end up having an image for the head node that you use once, and then another image for the slave nodes that you replicate for each one.
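For example (the image, disk, and instance names are placeholders), once a slave node is configured you can capture it as an image and launch more instances from it:
$ gcloud compute images create slave-image --source-disk slave-template-disk --source-disk-zone ZONE
$ gcloud compute instances create slave-1 slave-2 slave-3 --image slave-image --zone ZONE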
For the scheduler, I've used Condor and SGE successfully, but I'm sure the other ones would work just as well.
Hope this helps.