Elastic Beanstalk confused about application versions?

When I run eb deploy I get
ERROR: You cannot have more than 500 Application Versions. Either remove some Application Versions or request a limit increase.
However, there is currently only one version of the application, and I can't delete that one because it is deployed. Below is a screenshot of my terminal:
My research so far indicates this should work, but a bug in EB seems unlikely. What am I missing?

I believe the limit of 500 is on the number of application versions across all applications in the given region for your AWS account. You can verify this by executing:
aws elasticbeanstalk describe-application-versions --region <your region> | grep VersionLabel | wc -l
You can also ask AWS to increase the limit on the number of application versions if you choose not to delete any of your existing ones.
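If you decide to prune old versions instead, a minimal sketch using the AWS CLI (the application name and version label below are placeholders; list the versions first and pick unused ones):
# list all versions registered for one application
aws elasticbeanstalk describe-application-versions --region <your region> --application-name <your app>
# delete an unused version; --delete-source-bundle also removes the bundle from S3
aws elasticbeanstalk delete-application-version --region <your region> --application-name <your app> --version-label <old version label> --delete-source-bundle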

Related

Elastic Beanstalk - Environment properties not saving

I have a load balanced EB environment, running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account, as a pre-production test. This went as expected, and updated the software with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via EB CLI results in a healthy instance. I say generally because occasionally this shows as degraded - to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables, to point to the production database. When I change this via the configuration>software section, no changes are stored. When I hit 'apply' the environment starts to transition. When this is complete, the instance health has degraded and the changes made to the configuration are not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did - I had no problems there.
Any suggestions on how to get past this?
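For reference, the same environment properties can also be set from the EB CLI rather than the console; a minimal sketch, where the variable names and values are placeholders for your actual production settings:
# apply the properties as an environment update
eb setenv DB_HOST=prod-db.example.com DB_USER=app_user DB_PASS=changeme
# confirm what the environment currently holds
eb printenv
Comparing eb printenv output before and after the update makes it easy to see whether a change actually persisted.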

Instance is overutilized. Consider switching to the machine type: g1-small

I created a new f1-micro instance with Ubuntu 16.04. I haven't logged in yet as I have not figured out how to create the SSH key pair yet. But after two days, the Dashboard now shows:
Instance "xxx" is overutilized. Consider switching to the machine type: g1-small
Why is this happening? Isn't an f1-micro similar to an EC2 t2.nano? I have a t2.nano running a Node.js web site (with nginx, pm2, etc.) and my CPU credit has been consistently at the maximum of 150 during this period with only me as a test user.
I started the f1-micro to run the same Node application to see which is more cost-effective. The parameter that was cloudy to me was the unexplained "0.2 virtual CPU". Is 0.2 of a CPU virtually unusable? Would 0.5 (g1-small) be significantly better?
To address your connection problem, perhaps temporarily until you figure out manual key management, you might want to try SSH from the browser, which is possible from the Cloud Platform Console, or use the gcloud CLI to assist you.
https://cloud.google.com/compute/docs/instances/connecting-to-instance
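A minimal sketch of the gcloud route (the instance name and zone below are placeholders); gcloud generates and propagates an SSH key for you the first time you connect:
gcloud compute ssh my-instance --zone us-central1-a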
Once you get access via the terminal I would run 'top' or 'ps'.
Example of using ps to find the top CPU users:
ps -eo pid,stat,%cpu,time,command --sort=-%cpu | head -10
Example of running top to find the top memory users:
top -b -n 1 -o %MEM | head -20
Google Cloud also offers a monitoring product called Stackdriver which would give you this information in the Cloud console but it requires an agent to be running on your VM. See the getting started guide if this sounds like a good option for you.
https://cloud.google.com/monitoring/quickstart-lamp
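Installing the monitoring agent was a two-liner at the time of writing; the script URL below is the one documented in the quickstart, so check the guide above for the current method:
# download and run the Stackdriver monitoring agent installer
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh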
Once you get access to the resource usage data you should be able to determine whether 1) the VM isn't powerful enough to run your Node.js server, or 2) something else got started on the host unexpectedly and is the source of your usage.

How to install MySQL 5.7 on Amazon ec2

How am I able to install MySQL 5.7 in the cloud on Amazon EC2?
Most of the Amazon Machine Images (AMIs) that I see either lack a MySQL server entirely or ship an older version such as MySQL Server 5.5.
I want to use the latest and greatest.
This is a relatively quick setup of MySQL 5.7.14 on Red Hat Enterprise Linux version 7 (RHEL7).
I am not affiliated with AWS; I just enjoy using their services.
Make sure you have an AWS EC2 account. Note that even though Amazon requires a credit card on file, there will be no charges incurred for the first year if you adhere to their Free Tier terms. Typically this means a single micro-instance (1 GB RAM) server running 24/7.
Launch of EC2 RHEL instance
Step 1: On AWS EC2 click "Launch Instance" and select "Red Hat Enterprise Linux 7.2 (HVM), SSD Volume Type - ami-775e4f16" as seen in the picture below. Note that the available or promoted AMIs (Amazon Machine Images) rotate over time; the one named here is current as of this writing, and its AMI ID is shown in the text above.
Normally I would have chosen the Amazon Linux AMI as my distro of choice. I don't do that anymore, as it is Amazon's own hodge-podge and it is never entirely clear which package manager, and therefore which packages, apply to it. So I stick with RHEL now.
On the "Choose an Instance Type" screen, select a free-tier eligible instance type as seen below:
Click Next. On the next Details screen click "Next" to accept defaults. On the storage screen change the size to 16GB and click "Next". Then "Next" again on Tag info. Next comes the "Configure Security Group" screen pictured below:
Accept the "Create a new security group" radio button for now. Note that SSH port 22 is open to all (Anywhere) IP addresses via the 0.0.0.0/0 CIDR. Other options include "My IP" (detected for you) or "Custom". Rest assured that with the next screen, access will be locked down based on the security keys we will set up. There is an "Add Rule" button under the open ports for adding ports such as MySQL 3306 or HTTP 80, but for now we will skip that. Note the security group name; fill one in or accept the default for now. It is not critical to get this right, as the security group can be changed later for a running instance. Click "Review and Launch".
Then click "Launch" (fear not, it is not actually going to launch yet), and the next screen appears:
Note that as I already have some key pairs generated, it defaults to "Choose an existing key pair" in the first drop-down. Otherwise, you "Create a new key pair" with a given reminder name and proceed to "Download Key Pair". At this point you have the key pair as a .pem file. Treat that file with the utmost security, saving it to a place where you will not lose it, preferably in a password-protected area such as under your operating system user directory.
When you finally click "Launch Instances" on this same screen, the launch takes place in relation to that key pair (either just created or a pre-existing one). Note that generating a key pair might be a task you perform just once a year; whether you reuse a key pair again and again is up to you.
After you launch the instance, it takes roughly five minutes to come live. Under the Instances left menu item, you know the instance is live when the Instance State reads "running" and the Status Checks read "2/2 passed":
Remember the .pem file that you downloaded? Just once, you need to create a .ppk file out of it for PuTTY, an SSH client program that will communicate over an encrypted connection with your running instance. For that we use the puttygen tool that works in harmony with PuTTY. So we run puttygen, load the .pem downloaded minutes before, and generate the .ppk file with a "Key passphrase" such as "I & love%ancHovies2_fjdi8Ha". Below is a picture of puttygen:
See also the AWS EC2 page entitled Connecting to Your Linux Instance from Windows Using PuTTY. The steps are: File / Load private key. Change the filter to All Files (*.*). Find the .pem file. Hit "Open", then "OK". Type in a passphrase. Click "Save private key" and save it in the same folder as a .ppk file alongside your .pem file. As mentioned, this might be something you do only once a year.
Now run PuTTY, the SSH client. Use the Session / Host Name as something like
ec2-user@ec2-www-xxx-yyy-zzz.us-west-2.compute.amazonaws.com
So it is basically ec2-user@ followed by the Public DNS name that is seen under Instances on the EC2 control panel. As for specifying the PuTTY .ppk file, it would look like the below, with the .ppk file chosen next to the Browse button:
Go back to the Session upper left hierarchy shown below, give this a profile name under Saved Sessions, and hit "Save". Hereafter when you load PuTTY, you merely load the session by name:
Don't forget that just about all you are doing here is saving the .ppk reference into a friendly-named profile. And you may occasionally need to change the Host Name (certainly when you save an instance image on EC2 and come back up with a new instance IP address on a subsequent launch).
Ok, it is not easy. But it is what it is.
When you click Open it will attempt to connect to your RHEL instance. Hit Yes on the signature warning. Enter the prior saved .ppk Key Passphrase, and you should be sitting at a Linux prompt.
MySQL Installation (I will put these notes on GitHub)
URL01: Download MySQL Yum Repository
URL02: Chapter 1 Installing MySQL on Linux Using the MySQL Yum Repository
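The gist of those two links, as a minimal sketch for RHEL7 (the exact release RPM name below may have rotated since this was written, so check the MySQL Yum Repository page for the current one):
# add the MySQL Yum repository, then install and start the 5.7 server
wget https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
sudo yum localinstall mysql57-community-release-el7-9.noarch.rpm
sudo yum install mysql-community-server
sudo systemctl start mysqld
sudo systemctl enable mysqld
# 5.7 generates a temporary root password on first start; find it, then secure the install
sudo grep 'temporary password' /var/log/mysqld.log
mysql_secure_installation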
You now have MySQL 5.7.14 loaded and running on EC2 with a database and user set up. Your servers need to be imaged; see this answer of mine for creating images (AMIs). Back up your data; see the EC2 documentation, such as Best Practices for Amazon EC2.
Back to security: best practices certainly suggest not opening up your database to direct connections by allowing port 3306 through your Security Groups. How you choose to adhere to that is up to you, for example by going through a PHP, Java, or other programming API. Note that various database client programs, such as MySQL Workbench, can connect through SSH tunnels. In addition, various development libraries exist that support SSH tunnels, but they are not terribly easy to develop against (mainly due to fiddly key handling and limited developer experience with them); for instance, there is one for C# here.
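To illustrate the SSH tunnel approach (the key file and hostname below follow the placeholders used earlier), you forward a local port to MySQL on the instance and point Workbench or the mysql client at 127.0.0.1:
# forward local port 3307 to MySQL on the EC2 host; nothing but SSH is exposed
ssh -i my-keypair.pem -N -L 3307:127.0.0.1:3306 ec2-user@ec2-www-xxx-yyy-zzz.us-west-2.compute.amazonaws.com
# in another terminal, connect through the tunnel
mysql -h 127.0.0.1 -P 3307 -u myuser -p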
In addition, AWS has RDS and other database offerings that are less hands-on than rolling your own like the above. The reason many developers target EC2 is that you get a full-blown server for your other programming initiatives.
If you do modify the Security Groups as mentioned before, please consider using IP ranges based on CIDR entries, and use caution before over-exposing your datastores or over-granting access. These are much the same best practices you would follow for on-premises work.
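For example, a sketch of opening 3306 only to a specific range with the AWS CLI (the group name and CIDR below are placeholders):
aws ec2 authorize-security-group-ingress --group-name my-db-sg --protocol tcp --port 3306 --cidr 203.0.113.0/24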
Concerning this MySQL section, my GitHub notes for the above few pictures are located Here.
I had the same issue, but I didn't want to use Red Hat or any OS other than the Amazon Linux AMI. So here is the process to install MySQL 5.7 and upgrade from an older version.
Short path (without screenshots)
wget https://dev.mysql.com/get/mysql57-community-release-el6-11.noarch.rpm
yum localinstall mysql57-community-release-el6-11.noarch.rpm
yum remove mysql55 mysql55-common mysql55-libs mysql55-server
yum install mysql-community-server
service mysqld restart
mysql_upgrade -p
Long path (with screenshots)
First of all, just to validate, you can check the current version of MySQL that is installed.
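One common way to do that, assuming the mysql client is already on the path:
mysql --version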
Then, you should download the repo package for EL6 (release 11):
wget https://dev.mysql.com/get/mysql57-community-release-el6-11.noarch.rpm
Next, make a localinstall:
yum localinstall mysql57-community-release-el6-11.noarch.rpm
This is probably the key to a successful installation: you should remove the previous MySQL 5.5 packages.
yum remove mysql55 mysql55-common mysql55-libs mysql55-server
Finally, you can install MySQL 5.7
yum install mysql-community-server
Restart the MySQL Server and upgrade your database
service mysqld restart
mysql_upgrade -p
You can validate your installation by authenticating to MySQL.
sudo yum install mysql57-server

Setting up multiple instances of couchbase on fedora

I am trying to set up multiple instances of Couchbase (couchbase-server-enterprise-4.5.0-DP1-centos7.x86_64.rpm) on Fedora 21.
I am following the steps in the URL below to set up the multiple instances of Couchbase.
http://docs.couchbase.com/admin/admin/Install/rhel-multiple-instances.html
I have completed the first step and am able to launch Couchbase at http://localhost:8091/ui/index.html.
I have installed Couchbase only once. Is that fine, or do I need to install one more instance?
I am not sure how to proceed from step 2 onwards.
If I don't create any user defined ports in /opt/couchbase/etc/couchbase/static_config, will it be a problem?
I have set up the nofile parameter as below in the /etc/security/limits.conf file, as mentioned in step 2.
Also, in step 4 it is mentioned that there is only one /opt/couchbase/etc/couchbase/static_config file. How can I set a different short_name parameter in that file?
How does one instance of Couchbase identify another instance of Couchbase?
Couchbase really is much happier, and you will be too, when it is on its own OS, even if that is in something like Docker or a VM. So I strongly recommend that you get something like Docker or VirtualBox to play with. Getting multiple instances of Couchbase to run on the same OS is just not worth the hassle given the ease of the other tools I already mentioned.
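To illustrate the Docker route as a minimal sketch (this uses the official couchbase image from Docker Hub with its default tag; pin a tag matching the version you want), two independent nodes can run side by side on one host:
# each container gets its own admin UI port on the host
docker run -d --name cb1 -p 8091:8091 couchbase
docker run -d --name cb2 -p 18091:8091 couchbase
Browse to http://localhost:8091 and http://localhost:18091 to set each node up; the containers keep their data and ports isolated, which is exactly the part that is painful with two installs on one OS.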
That said, for development purposes, one of the best things about Couchbase is that you can develop on one node of Couchbase, but deploy against a much larger production cluster even with Multi-Dimensional Scaling enabled. It works very very well.

Compute Engine VM instance group got wiped out?

I'm new to GCE and want to migrate my web site there. I created a VM instance group for it. I installed all the packages and set it up a couple of days ago. But today I noticed my VM instance group has a different name (a different postfix, to be exact), and the disk has been wiped empty. Is it possible to restore its state, or at least make sure it won't get wiped out again? I'm surprised that GCE wiped out everything, and I wonder if I'm missing something in the setup.
A few details in case they are related:
I'm using a trusty image for the VM.
The storage chosen is a regular persistent disk.
It was working with an ephemeral IP, and yesterday I started using Cloud DNS to host my domain. I should have used a static IP, but that mistake shouldn't cause the VM instance group to be flushed...
I'm using cloud sql as the database service.
Maybe I should just use a single VM instance, given that I don't have much traffic now?
Any help will be greatly appreciated~