When a user enters their API token in the browser, I need to save that token securely. What is a good approach to encrypting users' tokens?
I was considering using AWS Secrets Manager to store each user's token through its API, but it turns out to be really expensive: $0.40 per secret per month.
I might encrypt the user tokens in MySQL and store the master secret in the .env file.
Is there an alternative approach?
Since you're already using AWS services, it makes sense to take advantage of more resilient cloud-native solutions.
With SSM Parameter Store you only pay for the underlying AWS resources managed or created by AWS Systems Manager. Note that Parameter Store, like most other AWS services, uses KMS for encryption and decryption.
Additional alternatives:
Cache SSM parameters on the instance, for example with https://github.com/alexcasalboni/ssm-cache-python
Use credstash (DynamoDB + KMS)
Use S3 with server-side and client-side encryption: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Based on your usage you will need to do the math: KMS is not free, but it has a decent monthly free tier.
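As a minimal sketch of the Parameter Store option: each user's token goes into a SecureString parameter, which SSM encrypts with KMS for you. The parameter path `/myapp/tokens/...` is just an example naming scheme, and the client is injected so you can swap in a cached one.

```python
# Sketch: per-user tokens as SecureString parameters in SSM Parameter Store.
# The /myapp/tokens/... path is an assumed naming convention, not required.

def save_token(ssm, user_id, token):
    """Store a user's token; SSM encrypts SecureString values with KMS."""
    ssm.put_parameter(
        Name=f"/myapp/tokens/{user_id}",
        Value=token,
        Type="SecureString",  # encrypted with the account's default KMS key
        Overwrite=True,
    )

def load_token(ssm, user_id):
    """Fetch and decrypt a user's token."""
    resp = ssm.get_parameter(
        Name=f"/myapp/tokens/{user_id}",
        WithDecryption=True,
    )
    return resp["Parameter"]["Value"]

# Usage on AWS (requires boto3 and configured credentials):
#   import boto3
#   ssm = boto3.client("ssm")
#   save_token(ssm, "user-123", "sk-example-token")
#   print(load_token(ssm, "user-123"))
```

Injecting the client also makes it easy to wrap it with one of the caching layers mentioned above.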
For your scenario, you can save the token in AWS DynamoDB, encrypted at rest with AWS KMS.
In addition to the Lambda charges, AWS KMS will mainly cost $1 per month per CMK, plus about $0.03 per 10,000 on-demand encryption and decryption requests; the free tier covers 20,000 requests/month.
For more details about pricing, refer to the AWS KMS pricing page.
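A minimal sketch of that DynamoDB + KMS approach: since API tokens are well under KMS's 4 KB direct-encrypt limit, you can call `kms.encrypt` on the token itself and store the ciphertext blob in the item. The table name `user_tokens` and key alias `alias/token-key` are assumptions for illustration.

```python
# Sketch: token encrypted with a KMS CMK, ciphertext stored in DynamoDB.
# Table "user_tokens" and alias "alias/token-key" are assumed names.

def store_token(kms, table, user_id, token, key_id="alias/token-key"):
    # Direct KMS encrypt is fine here because tokens are < 4 KB.
    blob = kms.encrypt(KeyId=key_id, Plaintext=token.encode())["CiphertextBlob"]
    table.put_item(Item={"user_id": user_id, "token_cipher": blob})

def fetch_token(kms, table, user_id):
    item = table.get_item(Key={"user_id": user_id})["Item"]
    # Symmetric KMS decrypt does not need the KeyId; it is embedded in the blob.
    plaintext = kms.decrypt(CiphertextBlob=item["token_cipher"])["Plaintext"]
    return plaintext.decode()

# Usage on AWS (requires boto3 and configured credentials):
#   import boto3
#   kms = boto3.client("kms")
#   table = boto3.resource("dynamodb").Table("user_tokens")
#   store_token(kms, table, "user-123", "sk-example-token")
#   print(fetch_token(kms, table, "user-123"))
```

Every read and write is one KMS request, which is what the per-10,000-requests pricing above applies to.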
Related
I am doing some research for a mobile app I want to develop, and was wondering whether I could get feedback on the following architecture. Within my future app users should be able to authenticate and register themselves via the mobile app and retrieve and use their settings after a successful authentication.
What I am looking for is an architecture in which user accounts are managed by AWS Cognito, but all application related information is stored in a MySQL database hosted somewhere else.
Why host the database outside of AWS? Because of high costs / vendor lock-in / for the sake of learning about architecture rather than going all-in on AWS or Azure
Why not build the identity management myself? Because in the end I want to focus on the app and not spend a lot of energy on something that AWS can already provide me with (yeah, I know, not quite in line with my last argument above, but otherwise all my time goes into the database AND IAM).
One of my assumptions in this design (please correct me if I am wrong) is that it is only possible to retrieve data from a MySQL database with 'fixed credentials'. Therefore, I don't want the app (the user's device) to make these queries (but do this on the server instead) as the credentials to the database would otherwise be stored on the device.
Also, to make it (nearly) impossible for users to run queries on the database with a fake identity, I want the server to retrieve the user ID from AWS Cognito (rather than trusting the ID token from the device) and use this in the SQL query. This should protect the service from a fake user ID injected by the device/user.
Are there functionalities I have missed in any of these components that could make my design less complicated or which could improve the flow?
Is that API (the one in step 3) managed by AWS API Gateway? If so, your Cognito user pool can be set as an authorizer in API Gateway, and the gateway will then take care of the token verification automatically (authorizers enable you to control access to your APIs using Amazon Cognito user pools or a Lambda function).
You can also do the token verification in a Lambda if you need to verify something else in the token.
Regarding the connection from NodeJS (assuming that is an AWS Lambda): that will work fine, but keep security in mind, as your customers' data will travel outside AWS. Try to use tools like AWS Secrets Manager to keep your database passwords safe, and rotate them from time to time in your Lambda.
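If you do verify the token in a Lambda yourself, the check boils down to validating the JWT against the user pool's published JWKS. A hedged sketch using PyJWT (`pip install pyjwt[crypto]`); the region, pool ID, and app client ID are placeholders:

```python
# Sketch: verifying a Cognito ID token inside a Lambda, if you opt to do
# the check yourself instead of relying solely on an API Gateway authorizer.

def issuer_url(region, user_pool_id):
    """Issuer claim Cognito puts in its tokens; the JWKS lives at
    <issuer>/.well-known/jwks.json."""
    return f"https://cognito-idp.{region}.amazonaws.com/{user_pool_id}"

def verify_cognito_token(token, region, user_pool_id, app_client_id):
    import jwt  # PyJWT, with the crypto extra installed

    issuer = issuer_url(region, user_pool_id)
    jwks = jwt.PyJWKClient(f"{issuer}/.well-known/jwks.json")
    key = jwks.get_signing_key_from_jwt(token).key
    # Signature, expiry, audience and issuer are all checked here. The "sub"
    # claim is the server-side user ID you can safely use in your SQL query,
    # which is exactly the fake-ID protection described in the question.
    claims = jwt.decode(token, key, algorithms=["RS256"],
                        audience=app_client_id, issuer=issuer)
    return claims["sub"]
```

Using the verified `sub` claim (rather than any ID the device sends alongside the token) is what prevents a client from querying another user's rows.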
In a Cloud Function I need to retrieve a bunch of key-value pairs to process. Right now I'm storing them as a JSON file in Cloud Storage.
Is there any better way?
Env variables don't suit because (a) there are too many kv pairs, (b) the same Cloud Function may need different sets of kv pairs depending on the incoming params, and (c) those kv pairs can change over time.
BigQuery seems like overkill, also given that some kv pairs have a few levels of nesting.
Thanks!
You can use Memorystore, but note that it's not persistent; see the FAQ.
Cloud Memorystore for Redis provides a fully managed in-memory data
store service built on scalable, secure, and highly available
infrastructure managed by Google. Use Cloud Memorystore to build
application caches that provide sub-millisecond data access. Cloud
Memorystore is compatible with the Redis protocol, allowing easy
migration with zero code changes.
Serverless VPC Access enables you to connect from the Cloud Functions environment directly to your Memorystore instances.
Note: Some resources, such as Memorystore instances, require connections to come from the same region as the resource.
Update
For persisted storage you could use Firestore.
See the tutorial Use Cloud Firestore with Cloud Functions.
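Firestore fits the constraints well: documents hold nested maps, each named document can be a different kv set, and values can be updated without redeploying the function. A minimal sketch, assuming a collection named `configs` and `google-cloud-firestore` in `requirements.txt` (both assumptions):

```python
# Sketch: loading a named set of key-value pairs from Firestore inside a
# Cloud Function. The "configs" collection name is an assumed convention.

def load_config(db, set_name):
    """Return one kv set as a dict; Firestore documents support nested maps."""
    snapshot = db.collection("configs").document(set_name).get()
    if not snapshot.exists:
        raise KeyError(f"no config set named {set_name!r}")
    return snapshot.to_dict()

def handler(request):
    # Entry point for an HTTP Cloud Function: the incoming request parameter
    # selects which kv set to use, per constraint (b) in the question.
    from google.cloud import firestore
    db = firestore.Client()
    return load_config(db, request.args.get("set", "default"))
```

Because the selection happens per request, one deployed function can serve any number of kv sets, and edits in Firestore take effect on the next invocation.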
I've been looking for info on how/if Forge encrypts data at rest. We have some customers with sensitive models who are asking the question.
Is data at rest encrypted?
If so, what method of encryption is used, and is it on by default?
If not, is this a planned feature for the future?
The Forge REST API uses HTTPS, which means data is transferred between the client and server (both ways) over SSL/TLS. SSL encrypts the data for you automatically using a 'trusted' certificate. Here is a complete article on the protocol if you are interested in reading more about it.
Edited based on comments below: if we are talking about storage, all the data stored on the Forge servers is encrypted with your developer keys. Forge encrypts your data at the object level as it writes it to disk and decrypts it for you when you access it.
If we use Azure API management premium do we need to create a backup (disaster recovery) strategy?
It is replicated in as many separate regions as you want.
In the past, with the non-premium tier, we have called the API Management REST API to back up to Azure Blob storage.
Obviously, you should always have a DR strategy but just wondering if it is overkill in this scenario.
Azure API Management offers an SLA on Proxy/Gateway uptime, so if you have API Management deployed in multiple regions, the Proxy will continue to run, automatically failing over to non-affected regions.
However, the Publisher Portal, Developer Portal, and Management REST endpoint are still hosted only in the master region. If there is a region-wide disaster in the master region of your service, they will not be accessible, which means you cannot add new APIs/operations and new customers cannot subscribe to your service.
If one of the additional regions is impacted, the Proxy/Gateway will sync up to the latest configuration before starting up.
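So a periodic backup is still worthwhile for the master-region configuration. A hedged sketch of calling the Management REST API backup operation mentioned above; the `api-version` value and the way you obtain the bearer token are assumptions to check against the current ARM documentation:

```python
# Sketch: triggering an API Management backup to Azure Blob storage via the
# ARM Management REST endpoint. api-version is an assumed value; verify it.
import json
import urllib.request

API_VERSION = "2021-08-01"  # assumption: use the version your tooling supports

def backup_url(subscription_id, resource_group, service_name):
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.ApiManagement"
        f"/service/{service_name}/backup?api-version={API_VERSION}"
    )

def backup_body(storage_account, container, backup_name, access_key):
    # Fields for storage-key auth; managed-identity access is also possible.
    return {
        "storageAccount": storage_account,
        "containerName": container,
        "backupName": backup_name,
        "accessKey": access_key,
    }

def trigger_backup(url, body, bearer_token):
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {bearer_token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    # Backup is a long-running operation; poll the operation the response points to.
    return urllib.request.urlopen(req)
```

Scheduling something like this (and restoring into a new instance during a drill) covers the gap the SLA does not: losing the master region's configuration surface.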
I have a PHP/MySQL application test-deployed on a server, with a domain name that I own. For this to become a real-world, scalable product, I decided to use Amazon Web Services. However, I'm new to cloud services (this is my first), and for the past 2 days, after going through the tutorials and "how to start" guides on Amazon, I've still been unable to grasp: what exactly should I do so that I can keep my present domain name and use Amazon's services? My users should be able to access my product using, let's say, www.xyz.com, which is the name I own. My PHP code gets some data from the client, which it then stores in a SQL DB. This is the existing, working setup.
Now, how do I get my PHP code to use Amazon Web Services and store the data in a database that Amazon provides? My product's DB will be continuously growing, and I will pay for whatever is used. Also, if I decide to have Amazon serve the PHP too, does Amazon host my code? In that case, what will the domain name be?
To summarize, my biggest concern is the domain name I've bought, and I've seen no documentation on how to go forward in such a case.
This is the only part I have been unable to figure out, rest was clear from the documentation..
Thanks for your help!
Amazon Web Services (AWS) is a cloud platform composed of multiple services that jointly enable you to host infrastructure and applications on it. It's not a single offering that magically does everything for you. In order to achieve your goal you will want to do the following:
Use Amazon Elastic Compute Cloud (EC2) to spin up servers that host your PHP application. They will handle the incoming traffic for you. Have a look at this link to get started.
In order to store data you will want to use some sort of database. AWS offers various database types. Since you are looking for a SQL-type database, you will want to use RDS. This service allows you to provision a functional database and relieves you of certain administrative tasks.
In order to use your current domain, you can transfer its registration to AWS Route 53, or keep your current registrar and simply point the domain's name servers at a Route 53 hosted zone. Just Google 'Route53 domain transfer' and the documentation will show you how to do it.
There are many whitepapers available that show architectural patterns across the AWS cloud. I suggest you read them so you can get a better understanding of the platform.
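To tie the RDS step to your existing PHP config: once the instance exists, its endpoint hostname and port are what replace "localhost" in your connection settings. A small sketch of looking them up with boto3; the instance identifier `myapp-db` is an assumption:

```python
# Sketch: fetching an RDS instance's endpoint so your PHP config can point
# at it. The "myapp-db" identifier is an assumed example name.

def rds_endpoint(rds, db_instance_id):
    resp = rds.describe_db_instances(DBInstanceIdentifier=db_instance_id)
    inst = resp["DBInstances"][0]
    return inst["Endpoint"]["Address"], inst["Endpoint"]["Port"]

# Usage on AWS (requires boto3 and configured credentials):
#   import boto3
#   host, port = rds_endpoint(boto3.client("rds"), "myapp-db")
#   # In PHP these become the PDO DSN: "mysql:host=<host>;port=<port>;dbname=..."
```

The point is that your PHP code keeps using plain MySQL connections; only the hostname changes.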
To get started quickly, I recommend using AWS Elastic Beanstalk for your purposes:
Amazon Web Services (AWS) comprises dozens of services, each of which
exposes an area of functionality. While the variety of services offers
flexibility for how you want to manage your AWS infrastructure, it can
be challenging to figure out which services to use and how to
provision them.
With Elastic Beanstalk, you can quickly deploy and manage applications
in the AWS Cloud without worrying about the infrastructure that runs
those applications. AWS Elastic Beanstalk reduces management
complexity without restricting choice or control. You simply upload
your application, and Elastic Beanstalk automatically handles the
details of capacity provisioning, load balancing, scaling, and
application health monitoring.
Learn more about it here
Regarding the domain, you could transfer it to Route 53
OR
route your domain traffic by using Route 53 name servers.