Can the instance used for the Packer build be encrypted? (Not talking about the AMI made from it) - packer

Due to some security policies, we can’t spin up instances that are not encrypted. When I run a Packer build, it first tries to create an instance named “Packer Builder” that is not encrypted, and the build fails. Is there a way to tell Packer to encrypt the actual instance it runs the provisioners on?
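One approach that may work here, assuming the amazon-ebs builder is in use: Packer's launch_block_device_mappings setting applies to the temporary build instance itself (unlike ami_block_device_mappings, which shapes the resulting AMI), so marking the launch volume as encrypted should let the builder instance satisfy the policy. A minimal HCL sketch; the region, source AMI, and device name are placeholders, and the device name must match the root device of your source AMI:

source "amazon-ebs" "encrypted_builder" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami    = "ami-0123456789abcdef0"   # placeholder source AMI
  ssh_username  = "ec2-user"
  ami_name      = "my-encrypted-build"

  # Encrypt the root volume of the temporary "Packer Builder" instance.
  # The device name is often /dev/xvda or /dev/sda1 depending on the source AMI.
  launch_block_device_mappings {
    device_name           = "/dev/xvda"
    volume_size           = 8
    volume_type           = "gp2"
    encrypted             = true
    delete_on_termination = true
  }
}

build {
  sources = ["source.amazon-ebs.encrypted_builder"]
}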

Related

Elastic Beanstalk - Environment properties not saving

I have a load balanced EB environment, running a PHP application on an Apache server.
We have successfully deployed the identical software to a test environment in this AWS account as a pre-production test. This went as expected, and updated the software with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via EB CLI results in a healthy instance. I say generally because occasionally this shows as degraded - to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables to point to the production database. When I change these via the Configuration > Software section, no changes are stored. When I hit 'Apply' the environment starts to transition; when this is complete, the instance health has degraded and the changes made to the configuration are not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did - I had no problems there.
Any suggestions on how to get past this?
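One thing worth trying to narrow this down: set the environment properties from the EB CLI instead of the console and check whether they persist after the update. A minimal sketch with hypothetical property names:

# Hypothetical property names; run from the project directory bound to this environment.
eb setenv DB_HOST=prod-db.example.com DB_NAME=myapp_prod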

Creating a Staging VM in Google Compute Engine

I'm trying to set up a Staging VM for a site that's in production that I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while. The VM it's hosted on is running an old version of PHP. Obviously this all needs to be fixed up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy - that way you will have an exact snapshot of the drive without any errors. You can do it while the VM is running too, if stopping it is out of the question.
Create a VM from the snapshot - select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM (otherwise it will change when the VM restarts, which is likely to happen while you're reconfiguring things). You can change the VM's specs at this time too - nothing stops you from adding/removing CPUs, RAM, etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to a new domain. Depending on the SSL certificate, you can either use an external one or one provided by GCP (the most convenient solution).
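Roughly, the gcloud equivalent of those steps looks like this (a sketch; the VM, snapshot, IP, and zone names are placeholders, and the boot disk is assumed to share the VM's name):

# Stop the VM (optional but recommended) and snapshot its boot disk.
gcloud compute instances stop prod-vm --zone=us-central1-a
gcloud compute disks snapshot prod-vm --snapshot-names=staging-snap --zone=us-central1-a

# Reserve a static external IP so it survives restarts.
gcloud compute addresses create staging-ip --region=us-central1

# Create the staging VM with its boot disk built from the snapshot.
gcloud compute instances create staging-vm \
  --zone=us-central1-a \
  --source-snapshot=staging-snap \
  --machine-type=e2-medium \
  --address=staging-ip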
If you already own a domain you want to use for staging you can host it in Cloud DNS or at some other provider - just point it to the external IP you just reserved.
If you will be hosting your domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
You can also consider creating a new VM, using it as a template for a group of VMs (a managed, autoscaled instance group), and putting an external HTTPS load balancer in front of it. But this adds a little complexity, so it's just an idea for when you need to handle a lot more traffic.

How to upload my servers to a virtual machine?

I have 2 servers that I am working on locally. The first is a front-end in Vue.js, and the second is a back-end in Flask. From the client I make API requests to the second.
I have to upload these two to a remote Linux VM (Debian), for which I have credentials and to which I can successfully connect via PuTTY.
How do I transfer my 2 directories to the VM?
Then, should I just change the address that the client uses for API requests to the server, or will I have to do something else?
You can copy directories using the scp or sftp protocols. In your case, this can be done most easily with the WinSCP software.
Both scp and sftp (implemented by WinSCP), as well as ssh (implemented by PuTTY), use the SSH protocol. PuTTY gives you a remote terminal (i.e. you can give commands to the server), while WinSCP uploads, downloads and manages files on it.
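If you prefer the command line, the same transfer can be done with scp (a sketch; directory names, user, and host are placeholders):

# Copy both project directories into the remote home directory over SSH.
scp -r ./frontend ./backend debian@your-vm-ip:/home/debian/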
If you are developing something, it is likely that you will need to do this deployment more regularly. These tools are only good for one-off deployments; in professional environments deployment is automated and happens quickly.
It is very likely that you also have some database in your project. Here the most common options are either some DB-level synchronization, or dumping the database into files and synchronizing at the file level. But that is already another topic.
It is also unlikely that you will need two different VMs for the Vue.js and Flask apps. You could wire them together on a single VM, which would make your task far easier.
You will likely have a hard time getting your deployment working well on your server. This is all just the beginning. But don't worry - after you've learnt it all, it will be easy!

Are custom metadata values for GCE instance stored securely?

I was wondering if custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template whose custom metadata is loaded from a config file I keep on my local machine; the metadata is then made available to a systemd unit when the VM starts up (via cloud-config).
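(For concreteness, that template step looks roughly like this; the template name, metadata key, and file name are hypothetical, and user-data is the key cloud-init reads on Container-Optimized OS:)

# Load the cloud-config (carrying the environment values) into the template's custom metadata.
gcloud compute instance-templates create my-service-template \
  --metadata-from-file=user-data=cloud-config.yaml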
In essence this means environment variable values (some containing creds, which don't change very much) are uploaded by me and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
"Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance."
Since the request and response are not leaving the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
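For example, any process on the instance can read a custom metadata attribute with a plain HTTP request (the attribute name below is hypothetical):

# No credentials needed beyond being on the VM; only the Metadata-Flavor header is required.
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/db-password"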
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.

Java EE application deployment on Amazon EC2

We have a Java EE application (EAR file deployed on JBoss, MySQL, MongoDB) which we would like to deploy on an Amazon EC2 instance. I have several questions regarding deployment best practices.
1) What is the most commonly used Linux AMI which we can rely on for a robust deployment? (There are so many Linux variants, and I am not sure which AMI is commonly used - is it Fedora, CentOS, Red Hat, SUSE ...)
2) How do we handle production upgrades (EAR file modifications or schema upgrades)? Are there any tools available to handle the installation or rollback of these changes?
3) What kind of data backup capability is available for the database?
4) Should I rely on Amazon RDS for MySQL support?
5) How should I handle support for MongoDB?
This is the first time I am hosting a web app, and I would appreciate some input on how to manage the production instance.
I agree with Mark Robinson's answer: Use whichever Unix variant you're most comfortable with. It may pay to pick one with decent cloud support. For my site I use Ubuntu.
I have a common image which is the base of every version deploy I do. I have www.mysite.com pointing to an Elastic IP so I can decide which instance it goes to. The common image has all the software I need installed (Postgres/PostGIS/Tomcat/etc), but the database and web server data folders are symlinked to Elastic Block Store (EBS) volumes.
When it comes time to do a deploy, I start a new instance up, freeze and snapshot the EBS volumes on production, and make new volumes from the snapshots. I point my new instance at the new volumes and then install whatever I need onto it. Once I've smoke tested everything successfully, I can switch the Elastic IP to point to the new instance and everything keeps on going.
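The final cut-over is a single API call; a rough sketch using the current AWS CLI with placeholder IDs (the tooling available at the time would have differed):

# Re-associate the Elastic IP with the freshly smoke-tested instance.
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0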
I'll note that I currently have the advantage where only I can modify the database; no users can. This will become a problem shortly.
If you use the XFS filesystem on top of the EBS volume, you can tell XFS to freeze the file system (so no updates happen), call the EC2 API to snapshot the volume, and then unfreeze the file system. The result is that the snapshot is taken quickly and sent to S3. I have a nightly script which does this.
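The core of such a nightly script might look roughly like this (a sketch using the current AWS CLI; the mount point and volume ID are placeholders):

#!/bin/sh
# Freeze the XFS filesystem so no writes land mid-snapshot.
xfs_freeze -f /data

# Ask EC2 to snapshot the underlying EBS volume (the snapshot is stored in S3).
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "nightly snapshot of /data"

# Unfreeze immediately; the snapshot completes in the background.
xfs_freeze -u /data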
If RDS looks like it will suit your needs then use it. Amazon is building lots of solid tools quickly and this will ease your scalability issues if you have any.
I'm sorry, I have no idea.
Good question!
1) I would recommend going with whatever Linux variant you are most comfortable with. If you have someone who is really keen on CentOS, go with that. Once you have selected your AMI, take it and customize it to how you want it. Then save that AMI as your base layout. It will make rolling out new machines much easier and save your bacon if EC2 goes down.
2) Upgrades with EC2 can be très cool. Instead of upgrading a live system, take your pre-configured AMI, update that, and save it as myAMI-1.1 (or whatever). That way, you can flip over to the new system almost instantly AND roll back to a previous version in case something breaks. You can also back up DB instances to S3. It's cheap at about $0.10/GB/month.
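Baking a new, versioned AMI from a configured instance is a single call; a rough sketch with placeholder values, using the current AWS CLI rather than the tooling of the time:

# Snapshot the configured instance into a new, versioned AMI.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "myAMI-1.1" \
  --description "JBoss stack, updated EAR"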
3) It depends where you are storing your DB. If you are storing it on your EC2 instance's local storage you are in trouble: that storage is not persistent, so if your machine crashes, you lose everything. I'm not familiar with Amazon's DB system, but you should also look into Elastic Block Store. It's basically an actual hard drive you can write to. When you want to upgrade your schema, do a full DB dump to S3 and then upgrade your actual schema. If something goes wrong, you can pull the previous version out of S3.
4) & 5) I have never used those so I can't help you.
1) What is the most commonly used Linux AMI which we can rely on for a robust deployment? (There are so many Linux variants, and I am not sure which AMI is commonly used - is it Fedora, CentOS, Red Hat, SUSE ...)
2) How do we handle production upgrades (EAR file modifications or schema upgrades)? Are there any tools available to handle the installation or rollback of these changes?
3) What kind of data backup capability is available for the database?
4) Should I rely on Amazon RDS for MySQL support?
5) How should I handle support for MongoDB?
Any Linux AMI will do the job; all you need is a JRE (assuming development work is not required). If you need to monitor the JVM's behavior, install JConsole.
The easiest and most painless way is to SSH in, transfer the updated class file/EAR file (depending on the number of changes) to your home directory, copy it over the existing one in the Tomcat deployment directory, and restart Apache (make sure you tested locally before uploading to production).
It depends on which database you are using. If you are using MySQL, just do a scheduled backup that writes to your home directory, so that from time to time you can SSH in and download a copy for backup purposes.
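A minimal sketch of such a scheduled backup (database name, user, password, and paths are placeholders; in practice credentials would live in a protected option file rather than on the command line):

# Crontab entry (crontab -e): nightly dump at 02:30.
# 30 2 * * * /home/ec2-user/backup-db.sh
#
# backup-db.sh: consistent dump of an InnoDB database, compressed into ~/backups (assumed to exist).
mysqldump --single-transaction -u backup_user -p'secret' mydb \
  | gzip > "$HOME/backups/mydb-$(date +%F).sql.gz"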
I would not consider relying on Amazon RDS for MySQL support, for 2 reasons: MySQL is small enough to be manageable, and I would want total control of the database - why pay more when you can do it yourself free of charge?
The usage of MongoDB should align with the purpose of your application and the benefits you gain from it. I would recommend using MongoDB for static data retrieval (state, country, area, etc.), while MySQL is used for transactional data only.
If you can live with deploying your Java EE application on TomEE instead of JBoss, Boxfuse does what you want.
For your Java EE application you literally only have to execute (TomEE uses WAR files instead of EAR files):
boxfuse run my-tomee-app-1.0.war -env=prod
This will:
Create an AMI containing TomEE and your application, ready to boot
Create an Elastic IP or ELB
Create a security group with the correct ports defined
Create an auto-scaling group
Launch your instance(s)
Any subsequent update will be done as a zero downtime blue/green deployment.
More info: https://boxfuse.com/blog/javaee-aws