Elastic Beanstalk: Rolling updates through the Eclipse SDK

Does deploying via Eclipse to Elastic Beanstalk work with "Rolling updates" enabled on the environment?
I am trying to use the Eclipse SDK to push out the updates I make to Elastic Beanstalk in a rolling fashion, but somehow all my instances get updated at once instead of rolling.

Rolling updates are not supported in Beanstalk for application version changes; they are supported only for environment configuration changes. See the forum threads below. As of today, web deployments (version updates) cause a brief downtime because Beanstalk updates all servers at once.
The relevant AWS forum threads:
https://forums.aws.amazon.com/thread.jspa?messageID=502158&#502158
https://forums.aws.amazon.com/thread.jspa?messageID=328344&#328344
https://forums.aws.amazon.com/thread.jspa?messageID=506438&#506438
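For completeness, the rolling updates that do apply to environment configuration changes are controlled by option settings in the aws:autoscaling:updatepolicy:rollingupdate namespace. A minimal sketch with the AWS CLI, assuming a placeholder environment named my-env (verify the option names and values against the current Elastic Beanstalk option reference):
# Enable rolling updates for configuration changes (this does not affect
# application version deployments, which still update all instances at once).
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:autoscaling:updatepolicy:rollingupdate,OptionName=RollingUpdateEnabled,Value=true \
    Namespace=aws:autoscaling:updatepolicy:rollingupdate,OptionName=MaxBatchSize,Value=1 \
    Namespace=aws:autoscaling:updatepolicy:rollingupdate,OptionName=MinInstancesInService,Value=1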
For application version changes, you can do something like this:
https://forums.aws.amazon.com/message.jspa?messageID=258782

Related

Unable to create the application version in AWS Elastic Beanstalk

I am trying to deploy a node-based application in beanstalk (I have done so successfully before), but every time I try to upload the zip file I get an error simply saying "Unable to create the application version." I know this alone is not very helpful, but it's literally all AWS is giving me - would anyone know how to troubleshoot this further to find possible causes?
I was having a similar problem. It ended up being an issue with the latest version of Firefox (96.0.2 released on Jan. 20). Once I switched browsers to Chrome, I was able to upload new application versions fine.
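If the console upload keeps failing, one way to take the browser out of the picture is to upload the bundle to S3 and create the application version with the AWS CLI. A rough sketch, with placeholder bucket, application, and version names:
# Upload the zipped source bundle to S3 (bucket and key are placeholders).
aws s3 cp app.zip s3://my-deploy-bucket/app-v1.zip

# Create the application version from the uploaded bundle.
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label v1 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=app-v1.zip
If this also fails, the CLI error output is usually more specific than the console's generic "Unable to create the application version." message.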

Legacy GCE and GKE metadata requests from google_daemon/manage_addresses.py

I have an old Debian Compute Engine instance (created and running since December 2013) and got an email warning about the turndown of Legacy GCE and GKE metadata server endpoints (more details at https://cloud.google.com/compute/docs/migrating-to-v1-metadata-server).
I followed the directions for locating the process and found that the requests were coming from /usr/share/google/google_daemon/manage_addresses.py. The script seems to be the same as what's at https://github.com/gtt116/gce/blob/master/google_daemon/manage_addresses.py (also with what's in that directory).
I don't recall installing this, so I'm imagining it came with the provided Debian image I used in 2013.
Does anyone know what this manage_addresses.py script is, what it does, and what I should do with it now that the legacy metadata server endpoints are turning down? Is it safe to just stop running it? Or is there a new script I should replace it with? Or should I just try to update it myself to use the new endpoint?
I dug around and was able to trace /usr/share/google/google_daemon/manage_addresses.py as being installed by a package called google-compute-daemon. A search for that brought me to https://github.com/GoogleCloudPlatform/compute-image-packages#troubleshooting which explains that google-compute-daemon has been replaced with python-google-compute-engine. That led me to https://cloud.google.com/compute/docs/images/install-guest-environment. I followed the instructions there and manually installed the guest environment.
I noticed during installation that it said it was removing the google-compute-daemon package (and a package called google-startup-scripts), so this seems like the right thing. And I'm no longer seeing any requests to the legacy endpoints. So it seems that at some point the old guest environment failed to update.
TL;DR: If you have this problem, follow the instructions at https://cloud.google.com/compute/docs/images/install-guest-environment#installing_guest_environment to manually update the guest environment.
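For anyone auditing their own scripts, the practical difference between a legacy request and a current one is the API path and the required header. A quick way to compare from the instance itself (these are the standard metadata endpoints, not anything specific to manage_addresses.py):
# Current (v1) endpoint: requires the Metadata-Flavor header.
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/?recursive=true"

# Legacy-style request (v1beta1 path, no header): this is what the turndown
# warning is about, and what the old guest environment was still sending.
curl "http://metadata.google.internal/computeMetadata/v1beta1/instance/network-interfaces/?recursive=true"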

click to deploy Hadoop on GCE not working

I'm trying "Click-to-deploy Hadoop on Google Compute Engine" here
Unfortunately this doesn't seems to work : either the process stops almost immediately, or it's like it's frozen.
message displayed is
Deployment may take 3 to 10 minutes to complete, depending on the size of your cluster
Creating deployment
In any case, I never get a cluster. I tried several zones and Hadoop versions; nothing works.
Any thoughts?
The problem is occurring because your Cloud project does not have a project id associated with it, but only a project number, which is true for some long-standing Cloud projects.
https://developers.google.com/console/help/new/#projectnumber
You can fix this by going into Developers Console, selecting your project from the project list, selecting Billing & settings from the left-hand navigation, and adding the project id there.
The following URL should take you there directly:
https://console.developers.google.com/project/_/settings
Thanks,
-Matt
A few items to help diagnose the problem:
Go to the Compute Engine instance list and check if there are any instances created for the deployment (a CLI alternative is sketched below this list).
Check if there are any errors raised in the JavaScript console of your browser.
BTW, what browser and version are you using?
Thanks.
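If you prefer the command line for that first check, listing the project's instances with the Cloud SDK looks roughly like this (assuming gcloud is installed and authenticated):
# Any VMs the deployment managed to create should appear in this list.
gcloud compute instances list --project <projectid>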
No instances were deployed (however, I can, and have, deployed Compute Engine VM instances).
I have a 404 in the console:
POST https://console.developers.google.com/m/deploy?pid=1090158225078&cmd=custom…ion=europe-west1&app=hadoop&xsrf=R5Ezthkrr1L8xU1STye3sXUiHiA:1414055456964 404 (Not Found)
on Chrome, Windows 7.
I tried Firefox too: no 404 in the console, but the same effect: no deployment at all.
The "customdeploy" command should not be returning a 404, so let's check if there's something going on with your Cloud project.
Click to Deploy uses the preview version of Deployment Manager on the backend. Let's check the objects (if any) that Deployment Manager has created for the Hadoop deployment.
To do this, you will need to:
Install the Google Cloud SDK (if you have not already)
Add the preview component
Query for Deployment Manager templates
Query for Deployment Manager deployments
Install the Google Cloud SDK:
Instructions are here: https://cloud.google.com/sdk/
Add the preview component:
gcloud components update preview
Query for Deployment Manager templates
gcloud preview --project=<projectid> deployment-manager templates list
Query for Deployment Manager deployments
gcloud preview --project=<projectid> deployment-manager deployments --region europe-west1 list
One last question. Is this a relatively "new" or "old" Google Cloud project? Sometimes old projects need a feature to be enabled that is automatically enabled on new projects.
Thanks.

Switching where my domain points to for safe production deploys

Say I have one prod environment and one dev environment in Elastic Beanstalk. I deploy my code to dev and it works and all's well, but when I deploy to production I get an error (note this is possible since sometimes instances get corrupted during deploys and Apache breaks). What are the pros and cons of this solution:
have 2 prod environments that you toggle between on every deploy
deploy to the one not being used
if the deploy works, point yourdomain.com to the new production and if not, your old production is safe
Now, is SEO a concern? If I switch my domain between two Elastic Beanstalk environments, would SEO be harmed?
The following solution is one that I have used many times without incident, but remember to always test your solutions before production use.
The solution will use the following environment names which you should map to internal DNS names:
PROD01.elasticbeanstalk.com > www.example.com
PROD02.elasticbeanstalk.com
DEV01.elasticbeanstalk.com > dev-www.example.com
Typically, after developing and testing your application locally, you will deploy your application to AWS Elastic Beanstalk environment DEV01. At this point, your application will be live at URL dev-www.example.com.
Now that you have tested your application, it is easy to edit your application, redeploy, and see the results.
When you are satisfied with the changes you made to your application, you can deploy it to your PROD02.elasticbeanstalk.com production environment. Using the Application Versions page, promote the code running on DEV01 onto PROD02. Using your hosts file, make sure everything is in order, and then perform the URL swap.
This will switch the PROD01.elasticbeanstalk.com and PROD02.elasticbeanstalk.com environment URLs seamlessly, with zero downtime for your application.
Once you've made sure all your traffic has switched, you can then update your original production environment following the same method, switch back, and remove PROD01.elasticbeanstalk.com to prevent the extra cost (or you can leave it if you don't mind the extra spend).
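The same promotion and swap can be scripted with the AWS CLI. A minimal sketch using the environment names above (the version label is a placeholder; verify the flags against your CLI version):
# Deploy the already-tested application version onto the idle production environment.
aws elasticbeanstalk update-environment \
  --environment-name PROD02 \
  --version-label <tested-version-label>

# Once PROD02 reports healthy, swap the environment CNAMEs so traffic following
# www.example.com -> PROD01.elasticbeanstalk.com now lands on the new code.
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name PROD01 \
  --destination-environment-name PROD02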

Why does my custom beanstalk keep restarting?

I am trying to customize the default AMI of Beanstalk, but every time, the server restarts after some random amount of time. I went so far as to change nothing at all, but nothing works.
I have tried the following:
Find the instance running Beanstalk, create an AMI from it, change the Beanstalk configuration to use that AMI: crashing.
Create a new instance from the same AMI Beanstalk uses, create an AMI from it, modify the configuration: crashing.
I have tried both stopping the instance before creating the AMI and creating the AMI from the running instance.
Edit: I found the answer here: Can't generate a working customized EC2 AMI from Amazon Beanstalk sample appl
From personal experience: point the health status page at a dummy, static .html file. Although not recommended, this will prevent the health checks from restarting the machine, and you can inspect it more closely from the inside.
AWS captures into the S3 logs only output written via java.util.logging, which means console logging is not transferred.
That said, make sure you define a private key in your environment config so you can SSH in easily and see its output (the location changes: for Tomcat 7 it is at /opt/tomcat7; for Tomcat 6 it is under /usr/share/tomcat6).
Just to add to what aldrinleal wrote (can't comment yet): in the past, I would often find that a failed health check would also disable my site. By which I mean: if the health check points at your actual app and that app throws an exception, you don't actually get to see anything; the environment just reports a failed state. Only after I changed to a static file for the health check did I manage to see the errors.
Now, obviously this is more of a problem with a dev environment, and you can always just pull the logs. But especially in the beginning, as someone new to AWS/Beanstalk, this helped me a lot.
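If you would rather set the static health check page from the command line than through the console, it is an environment option setting. A hedged sketch with the AWS CLI, assuming a placeholder environment name and a /health.html file that you add to your app (the namespace and option name are taken from the Elastic Beanstalk option reference, so double-check them for your platform):
# Point the load balancer health check at a static page so an application
# exception does not mark instances unhealthy (and get them recycled) while you debug.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings 'Namespace=aws:elasticbeanstalk:application,OptionName=Application Healthcheck URL,Value=/health.html'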