For our product we have decided to use a secret management tool (AWS Secrets Manager) to securely store and manage all our secrets, such as DB credentials, passwords, and API keys.
This way, the secrets are not stored in code, in the database, or anywhere else in the application. However, we have to provide AWS credentials (an Access Key ID and Secret Access Key) to programmatically access the Secrets Manager APIs.
Now the biggest question is where to keep this initial trust, i.e. the credentials used to authenticate to AWS Secrets Manager. This is a bootstrapping problem: we still have to maintain something outside the secret store, in a configuration file or similar. If that is compromised, I feel there is no real point in storing everything in a secret management tool.
I read the AWS SDK developer guide and understand that there are some standard ways to store AWS credentials: in environment variables, in a credentials file with different profiles, or by using IAM roles for Amazon EC2 instances.
We don't run or host our application in the Amazon cloud; we just want to use the AWS Secrets Manager service. Hence, configuring IAM roles might not be the solution for us.
Are there any best practices, or a best place, to keep these initial trust credentials?
If you're accessing secrets from an EC2 instance, an ECS container, or a Lambda function, you can use IAM roles with a policy that allows access to Secrets Manager.
If an IAM role is not an option, you can use federated login to get temporary credentials (for an IAM role) with a policy that allows access to Secrets Manager.
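Either way, once the role or federated session has placed temporary credentials in the SDK's default credential chain, fetching a secret is straightforward. A minimal Python/boto3 sketch (the region and secret name are made up for illustration):

```python
import json
import boto3

# boto3 resolves credentials from the default chain: an attached IAM role
# (EC2/ECS/Lambda) or temporary credentials obtained via federation/STS.
secrets = boto3.client("secretsmanager", region_name="us-east-1")

# "my-app/db-credentials" is a hypothetical secret name used for illustration.
response = secrets.get_secret_value(SecretId="my-app/db-credentials")
db_creds = json.loads(response["SecretString"])
print(db_creds["username"])
```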
As @Tomasz Breś said, you can use federation if you are already using an on-premises auth system like Active Directory or Kerberos.
If you do not have any type of credentials already on the box, you are left with two choices: store your creds in a file and use file system permissions to protect them, or use hardware like an HSM or TPM to encrypt or store your creds.
In any case, when you store creds on the box (even AD/Kerberos), you should ensure only the application owner has access to that box (in the case of a standalone app rather than a shared CLI). You should also harden the box by turning off all unnecessary software and access methods.
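As a rough sketch of the file-plus-permissions option in Python (the path and file format are made up), the application can refuse to load credentials unless the file is owned by it and locked down to mode 600:

```python
import json
import os
import stat

# Hypothetical path; the idea is simply a file readable only by the app's user.
CREDS_PATH = "/etc/myapp/aws-credentials.json"

def load_credentials(path=CREDS_PATH):
    st = os.stat(path)
    # Refuse to use the file if anyone besides the owner can read or write it.
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path} must be readable by the owner only (chmod 600)")
    if st.st_uid != os.getuid():
        raise PermissionError(f"{path} must be owned by the application user")
    with open(path) as f:
        return json.load(f)
```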
For Cloud Functions, is it better to use the firebase config command and then read values in code (for example, functions.config().stripe.secret_key), or to use Google Secret Manager? From the documentation I can't tell; I only understood that the one thing not to use is the local env, and that firebase functions config is actually server-side, so nothing is exposed.
In addition to @John Hanley's comment, storing secrets in .env files is not recommended, as environment variables are not a secure way to keep sensitive information such as API keys, credentials, passwords, and certificates; the values can easily be read back out.
Environment variables stored in .env files can be used for function configuration, but you should not consider them a secure way to store sensitive information such as database credentials or API keys. This is especially important if you check your .env files into source control.
To help you store sensitive configuration information, Cloud Functions for Firebase integrates with Google Cloud Secret Manager. This encrypted service stores configuration values securely, while still allowing easy access from your functions when needed.
It is recommended to create and use a secret manager to secure your sensitive information. You can check this documentation on configuring your environment and storing and accessing sensitive configuration information for Firebase.
You can check this documentation on Secret Manager for more details on how to manage and secure your secret.
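As a rough illustration of reading a value from Secret Manager (here using the generic Google Cloud Python client rather than the Firebase Functions SDK; the project and secret names are made up):

```python
from google.cloud import secretmanager

# Hypothetical project and secret names used for illustration.
PROJECT_ID = "my-project"
SECRET_ID = "stripe-secret-key"

client = secretmanager.SecretManagerServiceClient()
name = f"projects/{PROJECT_ID}/secrets/{SECRET_ID}/versions/latest"

# Access the latest version of the secret; the caller's service account needs
# the roles/secretmanager.secretAccessor role on the secret.
response = client.access_secret_version(request={"name": name})
secret_value = response.payload.data.decode("utf-8")
```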
I was wondering whether custom metadata for Google Compute Engine VM instances is an appropriate place to store sensitive information for configuring apps that run on the instance.
We use Container-Optimized OS images to run microservices, and we configure the containers with environment variables for things like credentials for DB connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment. The best I have come up with so far is to create an instance template whose custom metadata is loaded with config values from a file I keep on my local machine; that metadata is then made available to a systemd unit when the VM starts up (via cloud-config).
In essence, environment variable values (some containing creds, and which don't change very much) are uploaded by me and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering whether there are any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
Is metadata information secure?
When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance.
Since the request and response do not leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
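That caveat is easy to demonstrate: any process on the VM can read custom metadata with a plain HTTP request and no credentials. A small Python sketch (the attribute name is hypothetical):

```python
import requests

# Any process on the VM can read instance metadata; no credentials are needed,
# only the Metadata-Flavor header. "db-password" is a hypothetical custom key.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/attributes/db-password"
)

value = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"}).text
print(value)
```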
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to authenticate on pretty much every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is used, it needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to fetch on startup. The application could then use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but I can't really see any advantage to it over a simple user/password setup.
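For reference, AppRole login with the hvac Python client looks roughly like this (the role ID, secret ID, and secret path are placeholders); the practical difference from userpass is mainly that the role ID and secret ID can be delivered and rotated through separate channels:

```python
import hvac

# Placeholders; in practice the role_id is baked into the deployment and the
# secret_id is delivered separately (and can be short-lived or single-use).
client = hvac.Client(url="https://vault.example.com:8200")
client.auth.approle.login(role_id="my-role-id", secret_id="my-secret-id")

# Read a secret from the KV v2 engine (the path is hypothetical).
secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
print(secret["data"]["data"]["username"])
```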
What's the best solution for this use-case? Any advise would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
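A minimal boto3 sketch of that, assuming a SecureString parameter named /my-app/db-password (hypothetical) and either an instance role or locally configured credentials:

```python
import boto3

# On EC2 the SDK picks up the instance role automatically; outside EC2 it falls
# back to environment variables or a credentials file/profile.
ssm = boto3.client("ssm", region_name="us-east-1")

# "/my-app/db-password" is a hypothetical SecureString parameter name.
param = ssm.get_parameter(Name="/my-app/db-password", WithDecryption=True)
db_password = param["Parameter"]["Value"]
```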
I have a Microsoft Access database that I use to track client specifications. I would like to store the front-end and back-end database in the cloud and use it in real time. What recommendations and cheap options are available?
One option is to host the front-end and back-end on an AWS t2.small instance running Windows, and then access it via Remote Desktop as you would any other machine.
There is an option to use your Access database locally and update the data in the cloud, using Google Drive or similar services to share it among several users; it is an asynchronous solution, though.
Deployment of a multi-user Access app among geographically separated users is done via Remote Desktop Services, aka Citrix, aka Terminal Services.
In any multi-user deployment the app is split: each user has their own front-end file, and all connect to the one single back-end file. This remains true in an RDS deployment.
This is a mainstream and very functional method of deployment. Lots of organisations do virtual desktops now, and there are hosting companies that will host a single app. Whether one views this as "cheap" is in the eye of the beholder, but any other approach is generally a kludge, typically round-robin, where only one user at a time can use the app. Also, one does not put the WAN in the middle: both the front-end and back-end files sit on the RDS host, so the user simply needs Win10 and does not need any Access or Office license on their local PC.
I am making an Android application that has its own MySQL database on my server. How can I map the WireCloud user to the user in my own database? The point of this is to recognize which user is consuming a widget deployed on WireCloud.
I suppose that WireCloud uses MongoDB?
The best way to map WireCloud users to the users in your own database is to use a single authentication source.
WireCloud is based on Django, so you can use any of the methods Django supports for customising authentication. This includes third-party modules (e.g. django-auth-ldap for authenticating against an LDAP server) and the integration with the FIWARE IdM provided by WireCloud.
It's technically possible to make WireCloud use your database directly, but I don't recommend that, because such an integration would be a pain to maintain. In my opinion, the best options are migrating your app to use the FIWARE IdM, or creating a custom authentication backend that authenticates users against your database.
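As a rough sketch of that last option, a custom Django authentication backend would look something like the following (check_my_database is a placeholder for your own MySQL lookup); it would then be added to AUTHENTICATION_BACKENDS in the WireCloud settings:

```python
from django.contrib.auth.backends import BaseBackend
from django.contrib.auth.models import User


def check_my_database(username, password):
    # Placeholder: replace with a real lookup against your own MySQL user table.
    return False


class MyDatabaseBackend(BaseBackend):
    """Hypothetical backend that authenticates against an external user table."""

    def authenticate(self, request, username=None, password=None):
        if username is None or password is None:
            return None
        if not check_my_database(username, password):
            return None
        # Mirror the external user into Django so WireCloud can attach data to it.
        user, _created = User.objects.get_or_create(username=username)
        return user

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```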