I was wondering if custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information used to configure the apps that run on the instance.
We use Container-Optimized OS images to run microservices, and we configure the containers with environment variables for things like credentials for DB connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best approach I have come up with so far is to create an instance template whose custom metadata is populated from a file I keep on my local machine; that metadata is then made available to a systemd unit when the VM starts up (via cloud-config).
In essence, this means the environment variable values (some containing creds, and which don't change very much) are uploaded by me and are then pulled from the instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
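For reference, the flow looks roughly like this; the template name, image family, and metadata keys below are placeholders rather than the exact values I use:

# Create the instance template with the env values baked into custom metadata
gcloud compute instance-templates create my-service-template \
  --image-family=cos-stable --image-project=cos-cloud \
  --metadata-from-file=user-data=cloud-init.yaml,app-env=env-vars.txt

# At boot, the systemd unit defined in cloud-init reads the values back out
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-env"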
Many thanks for your help
According to the Compute Engine documentation:
Is metadata information secure?
When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance.
Since the request and response never leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
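To make that concrete, here is a minimal sketch of how anything running on the VM can read every custom metadata value, with no credentials beyond access to the instance itself:

# Run from inside the VM; returns all custom metadata values, including any embedded creds
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true"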
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
I'm trying to set up a staging VM for a site that's in production and that I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while, and the VM it's hosted on is running an old version of PHP. Obviously this all needs to be fixed up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site there? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy; that way you will have an exact snapshot of the disk without any inconsistencies. You can do it while the VM is running too if stopping it is out of the question.
Create a VM from the snapshot: select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM (otherwise it will change when the VM restarts, which is likely to happen since you're going to do some configuration). You can change the VM's specs at this point too; nothing stops you from adding or removing CPUs, RAM, etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to the new domain. Depending on the SSL certificate, you can either use an external one or one provided by GCP (the most convenient solution). A gcloud sketch of these steps is shown below.
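The same steps can be done from the gcloud CLI; the names, zone, and machine type below are assumptions, and the snapshot is taken of the boot disk (which usually shares the instance's name):

# Optional: stop the VM first for a fully consistent snapshot
gcloud compute instances stop prod-vm --zone=us-central1-a

# Snapshot the boot disk, then create a new disk from that snapshot
gcloud compute disks snapshot prod-vm --snapshot-names=staging-snap --zone=us-central1-a
gcloud compute disks create staging-disk --source-snapshot=staging-snap --zone=us-central1-a

# Reserve a static external IP and create the staging VM from the cloned disk
gcloud compute addresses create staging-ip --region=us-central1
gcloud compute instances create staging-vm --zone=us-central1-a \
  --machine-type=e2-medium --disk=name=staging-disk,boot=yes --address=staging-ip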
If you already own a domain you want to use for staging, you can host it in Cloud DNS or at some other provider; just point it at the external IP you've reserved.
If you will be hosting the domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
You can also consider turning the new VM into an instance template, creating a managed (autoscaled) instance group from it, and putting an external HTTPS load balancer in front of it. But this adds some complexity, so it's only an idea in case you need to handle a lot more traffic.
We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just at startup. For that purpose, I need it to be able to authenticate pretty much on every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is used, it needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to fetch at startup. The application could then use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but I can't really see any real advantage to it over a simple user/password setup.
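For reference, this is roughly what the AppRole flow I'm weighing up looks like (the role name and TTLs are just placeholders):

# One-time setup by an operator
vault auth enable approle
vault write auth/approle/role/my-app token_ttl=20m token_max_ttl=1h
vault read auth/approle/role/my-app/role-id
vault write -f auth/approle/role/my-app/secret-id

# At runtime the app trades role_id + secret_id for a short-lived token
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"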
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
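As a minimal sketch of that flow with the AWS CLI (the parameter name is a placeholder), the same get-parameter call works on the instance via the attached role and on a developer machine via whatever local AWS credentials/profile are configured:

# Store a secret as an encrypted SecureString parameter
aws ssm put-parameter --name /myapp/db-password --type SecureString --value 's3cr3t'

# Retrieve and decrypt it at runtime
aws ssm get-parameter --name /myapp/db-password --with-decryption \
  --query Parameter.Value --output text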
I have an Azure App Service and an Azure Storage account. I know there is a server/VM behind the App Service, but I have not explicitly started a machine.
I'm trying to import data from an Access database which will be regularly uploaded to a file share in my storage account. I'd like to use an Azure WebJob to do the work in the background.
I'm trying to use DAO to read the data:
// DAO COM interop (e.g. Microsoft.Office.Interop.Access.Dao) with the Access Database Engine installed
string path = @"\\server\share\folder\datbase.mdb";
DBEngine dbe = new DBEngine();
Database db = dbe.OpenDatabase(path);
DAO.Recordset rs = db.OpenRecordset("select * from ...");
This works when I run it locally, but when I run it in my WebJob and point it at a file share in my storage account, it cannot find the file. I assume this is because DBEngine knows nothing of Azure storage account names and access keys, doesn't send them, and so Azure Storage doesn't respond.
So what I'd like to try is to map an Azure Storage file share onto the server underlying my App Service. I've tried a number of different things, but have received variations of "Access Denied" each time. I have tried:
Running net use T: \\name.file.core.windows.net\azurefileshare /u:name key from the App Service console in the Azure Portal
Running net use from a process within my WebJob
Invoking WNetAddConnection2 from within my WebJob
Looks like the server is locked down tight. Does anyone have any ideas on how I might be able to map the fileshare onto the underlying server?
Many thanks
As far as I know, an Azure web app runs in a sandbox, and we cannot map an Azure file share into an Azure web app, so Azure File storage is hard to use directly from an Azure web app. From my experience, there are the workarounds below for you. Hope this gives you some tips.
1) Keep using Azure File storage, but choose an Azure VM or Cloud Service as the host service.
2) Still choose an Azure web app as the host service, but include the Access DB in the solution and upload it with the web app.
3) Choose SQL Azure as the database instead; there is an article on migrating an Access database to SQL Azure that could help.
In the end, as Jambor rightly says, the App Service VM is locked down tight.
However, it turns out that the App Service VM comes with some local temporary storage for the use of the various components running on the VM.
This is at D:\local\Temp\ and can be written to by a web job.
Interestingly, this is a logical folder on a different share/drive from D:\local and the size of this additional storage is dependent on the App Service's scale.
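One way to use that temp area is to have the WebJob pull the .mdb down from the file share first and then point the DAO code at the local copy. A minimal sketch of a WebJob run.cmd, assuming a SAS URL for the file has been generated and exposed to the job as a hypothetical MDB_SAS_URL setting:

rem Download the Access database from the Azure file share to local temp storage
powershell -Command "Invoke-WebRequest -Uri '%MDB_SAS_URL%' -OutFile 'D:\local\Temp\database.mdb'"
rem The DAO code can then open D:\local\Temp\database.mdb instead of a UNC path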
I'm trying to solve the problem described by CWE-798, specifically how to allow my application to authenticate to a database securely. I would like to set a MySQL password within mysqld and push that information out to a PHP application server. This entails communicating the new password from mysqld to PHP before a PHP instance attempts to connect to mysqld.
(I did read through the suggested approaches on mitre.org and have some knowledge of privileged access management; however, NONE of the recommendations actually solves the problem.)
Unless this is initiated within mysqld, e.g. using its event scheduler, I need to maintain some sort of script outside MySQL which will itself need credentials to connect, thus defeating the objective.
My problem is that I don't know how to get MySQL to initiate a client connection to the application in order to inject the new password; it does not appear to provide a standard function for invoking a URL or for executing a program.
Is my only option to implement a UDF?
The vulnerability you're describing primarily relates to applications that are in the hands of users who can freely inspect what they've been given, such as a desktop application or a mobile app. If you ship credentials in there you must take great pains to encrypt them, and then prevent that encryption from being cracked by protecting your key, but since all of this has to happen on the user's hardware you're fighting a battle you may never win.
This is how DVD encryption was cracked: the key for decrypting DVD data was stored in desktop player software and was eventually uncovered.
Server-side code has different concerns. Here you want to avoid hard-coding credentials into your application not because you're concerned about hostile users per se, though that can be an issue, but because you do not ever want your credentials to end up in a version control system.
One way to ensure this never happens is to keep the credentials in a file external to your application, such as a config file that the application references at runtime. Most frameworks have some kind of configuration file (.yml, .ini, .xml) that defines how they connect to the database. This file should be stored on the server and only on the server: not on developer workstations, not in your version control, and especially not somewhere shared.
You can go down the road of using something like ZooKeeper to manage your configuration files, but the investment of time required makes this a futile exercise unless you're managing hundreds of servers.
So the short answer here is: don't put your credentials in your code, or store them with your code. Put them in a config file that's kept on the server and the server alone.
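As a minimal sketch of that setup for a PHP/MySQL app (the paths, file name, and web-server group are assumptions, not a standard):

# Keep the DB credentials in a root-owned config file readable only by the web server user
sudo install -d -m 750 -o root -g www-data /etc/myapp
sudo tee /etc/myapp/db.ini > /dev/null <<'EOF'
[database]
host = localhost
user = app_user
password = change-me
EOF
sudo chown root:www-data /etc/myapp/db.ini && sudo chmod 640 /etc/myapp/db.ini

# Make sure any local copy of the file can never be committed
echo 'db.ini' >> .gitignore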
I'm thinking about upgrading my company's integration server by putting the repos on a separate disk that would be shared with a backup server. Like so:
[Main Integration Server] ---R/W--- [Repo Vdisk] ---R/O--- [Backup Integration Server]
My problem is that, according to the GCE docs, if I attach the same Vdisk to more than one instance, all instances must access the disk in read-only mode. What I'm looking to do is have one instance access it in read-write mode and the other in read-only mode.
Is this at all possible without powering up a third instance to act as a sort of "storage server"?
As you quoted from the docs and as mentioned in my earlier answer, if you attach a single persistent disk to multiple instances, they must all mount it in read-only mode.
Since you're looking for a fully managed storage alternative so you don't have to run and manage another VM yourself, consider using Google Cloud Storage and mounting your bucket with gcsfuse, which makes it look like a regular mounted filesystem.
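A minimal sketch, assuming gcsfuse is installed on both VMs, the bucket name below is a placeholder, and the VMs' service accounts have access to the bucket:

# One-time: create the bucket
gsutil mb gs://my-repo-bucket

# Main integration server: read/write mount
mkdir -p /mnt/repos && gcsfuse my-repo-bucket /mnt/repos

# Backup server: read-only mount (ro is passed through as a FUSE mount option)
mkdir -p /mnt/repos && gcsfuse -o ro my-repo-bucket /mnt/repos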