AWS authentication to Vault - aws-sdk

We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just at startup. For that purpose, I need it to be able to authenticate pretty much on every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is done needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to fetch at startup. The application could then use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but can't really see any real advantage to it over a simple user/password setup (see the sketch below).
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
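For context on what AppRole actually involves: the login is a single HTTP call that exchanges a role_id/secret_id pair for a client token, per Vault's documented HTTP API. A minimal sketch using Java's built-in HTTP client; the Vault address and the environment variable names are placeholders, not from the original post.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AppRoleLogin {
    public static void main(String[] args) throws Exception {
        // Vault's AppRole login endpoint: POST /v1/auth/approle/login
        // exchanges role_id + secret_id for a client token.
        String body = "{\"role_id\":\"" + System.getenv("VAULT_ROLE_ID")
                + "\",\"secret_id\":\"" + System.getenv("VAULT_SECRET_ID") + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://vault.example.com:8200/v1/auth/approle/login")) // placeholder address
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The response JSON contains auth.client_token, which the app then
        // sends as the X-Vault-Token header on subsequent secret reads.
        System.out.println(response.body());
    }
}

Compared to userpass, the practical difference is mainly that secret_id delivery can be constrained (TTLs, use limits, response wrapping) rather than being a long-lived password.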

The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned IAM role can use those credentials to access the Parameter Store and retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to it.
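To illustrate, a minimal sketch of reading a SecureString parameter with the AWS SDK for Java v2. The default credentials chain picks up the instance role on EC2 and a local profile or environment variables on a developer machine, so the same code works in both places; the parameter name is a placeholder.

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

public class ReadParameter {
    public static void main(String[] args) {
        // Credentials come from the default provider chain:
        // instance role on EC2, profile/env vars on a dev machine.
        try (SsmClient ssm = SsmClient.create()) {
            String value = ssm.getParameter(GetParameterRequest.builder()
                    .name("/myapp/db-password") // hypothetical parameter name
                    .withDecryption(true)       // decrypt SecureString via KMS
                    .build())
                .parameter().value();
            System.out.println(value);
        }
    }
}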

Related

How can I port-forward in OpenShift without using the oc client? Is there a way to use the Java client to port-forward to a pod, just like "oc port-forward"?

I need to access a Postgres database from my Java code, which resides in an OpenShift cluster, without initiating port forwarding manually through the oc port-forward command.
I have tried using the OpenShift Java client's connection factory class to get the connection by passing the server URL and the username/password I use to log in to the console, but it didn't help.
(This is mostly just a more detailed version of Will Gordon's comment, so credit to him.)
It sounds like you are trying to expose a service (specifically Postgres) outside of your cluster. This is very common.
However, the best method to do so depends a bit on your physical infrastructure, because we are by definition trying to integrate with your networking. Look at the docs for Getting Traffic into your Cluster. Routes are probably not what you want, because Postgres is a TCP protocol. But one of the other options in that chapter (Load Balancer, External IP, or NodePort) is probably your best option, depending on your networking infrastructure and needs.
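That said, if what you want is specifically the programmatic equivalent of oc port-forward (e.g. for development), the Fabric8 Kubernetes client, which the OpenShift Java client builds on, exposes it directly. A minimal sketch; the namespace, pod name, and ports are placeholders:

import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.LocalPortForward;

public class PgPortForward {
    public static void main(String[] args) throws Exception {
        // Authenticates using the current kubeconfig / oc login context.
        try (KubernetesClient client = new KubernetesClientBuilder().build();
             LocalPortForward forward = client.pods()
                     .inNamespace("my-namespace")  // placeholder
                     .withName("postgres-pod")     // placeholder
                     .portForward(5432, 15432)) {  // pod port 5432 -> localhost:15432
            // JDBC can now reach Postgres at jdbc:postgresql://localhost:15432/mydb
            System.out.println("Forwarding on localhost:" + forward.getLocalPort());
            Thread.sleep(60_000); // keep the tunnel open while the app works
        }
    }
}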

Where to keep the Initial Trust credentials of a Secrets Management tool?

For our product we have decided to implement a secrets management tool (AWS Secrets Manager) that will securely store and manage all our secrets, such as DB credentials, passwords, API keys, etc.
This way, the secrets are not stored in code, in the database, or anywhere in the application. We have to provide AWS credentials - an access key ID and secret access key - to programmatically access the Secrets Manager APIs.
Now the biggest question that arises is: where do we keep this initial trust, the credentials used to authenticate to AWS Secrets Manager? This is a bootstrapping problem. Again, we have to maintain something outside of the secret store, in a configuration file or somewhere. I feel that if this is compromised, then there is no real point in storing everything in a secrets management tool.
I read the AWS SDK developer guide and understand that there are some standard ways to store AWS credentials, like storing them in environment variables, in a credentials file with different profiles, or by using IAM roles for Amazon EC2 instances.
We don't run/host our application in the Amazon cloud; we just want to use the AWS Secrets Manager service from the AWS cloud. Hence, configuring IAM roles might not be the solution for us.
Are there any best practices for (or a best place to keep) the initial trust credentials?
If you're accessing secrets from an EC2 instance, an ECS container, or a Lambda function, you can use roles with a policy that allows access to Secrets Manager.
If an IAM role is not an option, you can use federated login to get temporary credentials (an IAM role) with a policy that allows access to Secrets Manager.
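In either case the application code stays the same, because the AWS SDK's default credentials chain resolves the role or federated credentials automatically. A minimal sketch with the AWS SDK for Java v2; the secret name is a placeholder:

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class ReadSecret {
    public static void main(String[] args) {
        // No keys in code: credentials come from the default provider chain
        // (instance/task/Lambda role, or temporary federated credentials).
        try (SecretsManagerClient sm = SecretsManagerClient.create()) {
            String secret = sm.getSecretValue(GetSecretValueRequest.builder()
                    .secretId("prod/myapp/db") // hypothetical secret name
                    .build())
                .secretString();
            System.out.println(secret);
        }
    }
}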
As @Tomasz Breś said, you can use federation if you are already using an on-premises auth system like Active Directory or Kerberos.
If you do not have any type of credentials already on the box, you are left with two choices: store your creds in a file and use file system permissions to protect them, or use hardware like an HSM or TPM to encrypt or store your creds.
In any case, when you store creds on the box (even AD/Kerberos), you should ensure only the application owner has access to that box (in the case of a standalone app and not a shared CLI). You should also harden the box by turning off all unnecessary software and access methods.

AWS RDS MySQL database username and password sufficient for commercial security

I'm new to cloud computing so this might be an obvious question. I have a desktop Java application that will connect to an AWS RDS MySQL database using JDBC. Is using the endpoint, username and password for the database the preferred commercial way of connecting to the database?
To encrypt communication, I plan to use SSL.
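For reference, a minimal sketch of such an SSL-enabled JDBC connection, assuming MySQL Connector/J 8.x; the endpoint, database, and user names are placeholders, and the RDS CA certificate must be in the client's truststore for VERIFY_CA to succeed:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RdsConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");             // placeholder
        props.setProperty("password", System.getenv("DB_PASSWORD"));
        props.setProperty("sslMode", "VERIFY_CA");         // require TLS, verify the RDS CA
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://mydb.abc123.us-east-1.rds.amazonaws.com:3306/mydb", props)) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}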
You could open your database instance to the outside, using regular credentials. But, a safer way to proceed might be to create an endpoint in AWS, possibly running in Java, which would expose one or more APIs which in turn would hit the MySQL database running in RDS. That is, you would not expose the RDS instance to the outside world directly, but only internally to this API, also running in AWS. Then, your desktop Java application would talk to this intermediary application when it needs to access the database.
The advantage of this suggestion is that it lessens the risk of your RDS instance being attacked via something like a DoS (denial of service) attack. Of course, the API you create on top of the database could also be attacked, but Java web applications running in a container (and similar applications in other languages) were designed to be exposed to the outside; database instances much less so.

Are custom metadata values for GCE instance stored securely?

I was wondering if custom metadata for google compute engine VM instances was an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template with config values loaded, from a file I keep on my local machine, into the VM custom metadata, which is then made available to a systemd unit when the VM starts up (cloud-config).
In essence, this means environment variable values (some containing creds) are uploaded by me (they don't change very much) and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there are any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
"Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leaves the physical host running the virtual machine instance."
Since the request and response do not leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
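To make the exposure concrete: any process on the VM can read a custom metadata value with one unauthenticated HTTP call against the documented metadata endpoint. A minimal sketch with Java's built-in HTTP client; the attribute name is a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetadataRead {
    public static void main(String[] args) throws Exception {
        // Only the Metadata-Flavor header is required; no credentials at all.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://metadata.google.internal/computeMetadata/v1/instance/attributes/db-creds")) // placeholder key
                .header("Metadata-Flavor", "Google")
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}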

AWS SQS to receive message from outside of AWS

My company has a messaging system which sends real-time messages in JSON format. It's not built on AWS and will not have any VPN connection to AWS.
Our team is trying to use AWS SQS to receive these messages, after which DynamoDB would process the JSON messages into TSV, then load into RDS.
However, as per the FAQ, SQS can only receive messages from within AWS.
https://aws.amazon.com/sqs/faqs/
Q: Who can perform operations on a message queue?
Only an AWS account owner (or an AWS account that the account owner has delegated rights to) can perform operations on an Amazon SQS message queue.
In order to use SQS, one way I can think of is to create a public-facing EC2 instance, which receives messages and passes them over to SQS.
My questions here are:
Is my idea correct?
If it's correct, can you share any details on how to build an application on this EC2 instance to achieve that functionality? (I have no experience with application development; your insights are really appreciated!)
Is there any easier/better option in AWS that can achieve the goal of receiving messages in my use case?
Is my idea correct?
No, it isn't.
You're misinterpreting the (admittedly somewhat unclear) information in the FAQ.
SQS is accessible and usable from anywhere on the Internet. Its only exposed interface is HTTP(S). In fact, from inside EC2, SQS is not accessible unless the EC2 instance actually has outbound access to the Internet.
The point being made in the documentation is not that you need to be "inside" AWS to use queues, but rather that you need to be in possession of an authorized set of AWS credentials in order to work with queues.¹
If you have an AWS account, you have credentials, and you can use SQS. There is no requirement that you access the queue from "inside" AWS.
Choose the endpoint closest to your servers (for lowest latency) and you should find it open and accessible, from anywhere.
¹Queues can be configured to allow anonymous access after they are created. (Don't do it; I'm just saying it is possible.) This section of the FAQ seems to be referring to a subset of operations, such as creating queues.
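As a concrete illustration of "accessible from anywhere with credentials", a minimal sketch of sending a message with the AWS SDK for Java v2 from any machine with internet access; the region and queue URL are placeholders:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class SendFromOutside {
    public static void main(String[] args) {
        // Credentials come from the default chain (env vars, shared
        // credentials file, etc.); no VPN or EC2 instance is needed.
        try (SqsClient sqs = SqsClient.builder().region(Region.US_EAST_1).build()) {
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
                    .messageBody("{\"event\":\"hello\"}")
                    .build());
        }
    }
}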
I was not able to write to SQS from an external service. I found some partial explanations but got stuck at the role creation.
The alternative I found is using the AWS services Lambda + API Gateway to write to SQS.
This tutorial was extremely helpful, explaining all the steps in great details:
https://startupnextdoor.com/adding-to-sqs-queue-using-aws-lambda-and-a-serverless-api-endpoint/
You can access SQS from anywhere once you have the proper permissions, through an access key & secret key or an IAM role.
SQS is not specific to a VPC.
It is clear that you are trying to do this:
Take messages from your company's messaging system and send them to SQS.
Your method (using EC2 as a bridge) is not wrong; however, you don't need EC2 to connect to SQS.
All AWS services can be accessed through the AWS API (e.g. Python boto3, etc.) from the internet, as long as you provide the correct credentials. So you can put your "middleware" anywhere, as long as you are able to establish a connection to the said services.
So there are lots more options available to you, e.g. triggering from your messaging system, using AWS Lambda, etc.
Thanks for sharing the information and your insights with me!
I have tested the solution below, which works for my use case:
created an endpoint in AWS API Gateway, which is able to receive messages from the company messaging system, a system that does not carry AWS credentials
created a Lambda function triggered by API Gateway, so once a message arrives, Lambda digests the JSON message, converts it to TSV, and then loads it into RDS
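For anyone following the same route, a minimal sketch of what that Lambda handler could look like in Java, assuming the aws-lambda-java-events and Jackson libraries; the RDS insert itself is omitted and all names are illustrative:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// API Gateway -> Lambda step: parse the incoming JSON body and
// flatten its values into one TSV line (the RDS load is left out).
public class MessageHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(
            APIGatewayProxyRequestEvent event, Context context) {
        try {
            JsonNode json = MAPPER.readTree(event.getBody());
            StringBuilder tsv = new StringBuilder();
            json.fields().forEachRemaining(
                    f -> tsv.append(f.getValue().asText()).append('\t'));
            // ... insert the TSV row into RDS here, e.g. via JDBC ...
            return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody("ok");
        } catch (Exception e) {
            return new APIGatewayProxyResponseEvent().withStatusCode(400).withBody("bad request");
        }
    }
}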