I'm deploying a Node.js app on GAE that connects to a Cloud SQL instance.
Following the docs, I'm told to store the user/password for the database inside app.yaml:
env_variables:
  MYSQL_USER: YOUR_USER
  MYSQL_PASSWORD: YOUR_PASSWORD
  MYSQL_DATABASE: YOUR_DATABASE
  # e.g. my-awesome-project:us-central1:my-cloud-sql-instance
  INSTANCE_CONNECTION_NAME: YOUR_INSTANCE_CONNECTION_NAME
Is this really a good place to store the password?
Storing secrets in app.yaml risks leaking them (e.g., it's not uncommon to find them accidentally checked in on GitHub). Storing secrets in a .gitignored file that you weave into app.yaml at deploy time is one approach. Another is to store the secrets in an Entity in the Datastore.
For many of my apps, I store secrets in an Entity called Config, which stores stringified JSON. This simplifies the admin UI for editing them down to a single textarea, deferring the need for a more complicated UI.
For an example of this approach with a more full-featured UI, check out the Khan Academy 'snippets' app. https://github.com/Khan/snippets
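A minimal sketch of the Config-entity pattern described above - assuming the @google-cloud/datastore client, an entity of kind Config with name key "secrets" (both of these names are assumptions), and a single value property holding the stringified JSON:

import { Datastore } from "@google-cloud/datastore";

const datastore = new Datastore();

// Load every secret in one read; the admin UI edits the same JSON blob.
async function loadConfig(): Promise<Record<string, string>> {
  const key = datastore.key(["Config", "secrets"]); // kind/name are assumptions
  const [entity] = await datastore.get(key);
  if (!entity) {
    throw new Error("Config entity not found; create it via the admin UI");
  }
  // entity.value holds stringified JSON, e.g. '{"MYSQL_PASSWORD":"..."}'
  return JSON.parse(entity.value);
}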
Google does not have a dedicated service for this (yet). I asked support about this before, and their suggestion was to store the data in the Datastore (encrypted).
What you should do:
put app.yaml in .gitignore, then
set your secrets in app.yaml, then
run gcloud app deploy.
You don't need to have app.yaml in your version control to still "set" your environment variables in GAE.
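Either way, the app reads the values the same way at runtime. A minimal sketch, assuming the mysql2 driver and App Engine standard's Cloud SQL unix socket path:

import { createPool } from "mysql2/promise";

// These names match the env_variables block in app.yaml above.
const pool = createPool({
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE,
  // App Engine standard exposes Cloud SQL through a unix socket.
  socketPath: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`,
});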
Yes, we do the same. There is not much difference between storing credentials in environment variables or in a file. I find a file more convenient (though that's totally subjective). In terms of security, you can always play with file permissions: create a user that will run the app, and grant read access only to that user.
For Cloud Functions, is it better to use the firebase config command and then read values (for example, functions.config().stripe.secret_key), or to use Google Secret Manager? From the documentation I can't tell; I only understood that the one thing not to use is a local .env, and that Firebase functions config is actually server-side, so nothing is exposed.
In addition to @John Hanley's comment, storing environment variables in .env files is not recommended, as it is not a secure way to store sensitive information such as API keys, credentials, passwords, certificates, and other sensitive data: anything in those files is stored unencrypted and can be read by whoever obtains them.
Environment variables stored in .env files can be used for function configuration, but you should not consider them a secure way to store sensitive information such as database credentials or API keys. This is especially important if you check your .env files into source control.
To help you store sensitive configuration information, Cloud Functions for Firebase integrates with Google Cloud Secret Manager. This encrypted service stores configuration values securely, while still allowing easy access from your functions when needed.
It is recommended to use Secret Manager to secure your sensitive information. You can check the documentation on configuring your environment and on storing and accessing sensitive configuration information for Firebase.
You can also check the Secret Manager documentation for more details on how to manage and secure your secrets.
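As a rough sketch of that integration - assuming Cloud Functions for Firebase (2nd gen) and a secret already created with firebase functions:secrets:set STRIPE_SECRET_KEY - a function would declare and read it like this:

import { onRequest } from "firebase-functions/v2/https";
import { defineSecret } from "firebase-functions/params";

const stripeSecretKey = defineSecret("STRIPE_SECRET_KEY");

// Listing the secret makes Firebase mount it at runtime; the value is
// never written into the deployed source or the function's env config.
export const charge = onRequest({ secrets: [stripeSecretKey] }, (req, res) => {
  const key = stripeSecretKey.value(); // only readable inside the handler
  // ... call Stripe with `key` ...
  res.send("ok");
});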
I created a project from the https://fiware-tutorials.readthedocs.io/en/latest/time-series-data.html tutorial and just changed the entity names and types, and everything worked correctly. But after some time (usually a day) all the entities in Orion disappear (although the data in QuantumLeap persists), and I cannot get the entity properties with this command:
curl -X GET \
--url 'http://localhost:1026/v2/entities?type=Temp'
What is the problem? Is there some restriction on tutorial projects?
The tutorials have been written as an introduction to NGSI, not as a robust architectural solution. The idea is just to get something "quick and dirty" up and running on a developer's machine and various shortcuts have been taken. Indeed the docker-compose files all hold the following disclaimer:
WARNING: Do not deploy this tutorial configuration directly to a production environment
The tutorial docker-compose files have not been written for production deployment and will not
scale. A proper architecture has been sacrificed to keep the narrative focused on the learning
goals, they are just used to deploy everything onto a single Docker machine. All FIWARE components
are running at full debug and extra ports have been exposed to allow for direct calls to services.
They also contain various obvious security flaws - passwords in plain text, no load balancing,
no use of HTTPS and so on.
This is all to avoid the need of multiple machines, generating certificates, encrypting secrets
and so on, purely so that a single docker-compose file can be read as an example to build on,
not use directly.
When deploying to a production environment, please refer to the Helm Repository
for FIWARE Components in order to scale up to a proper architecture:
see: https://github.com/FIWARE/helm-charts/
Perhaps the most relevant factor here in answering your question: there is typically no volume persistence - the tutorials clean up after themselves where possible to avoid leaving data on a user's machine unnecessarily.
If you have lost all your entity data when connecting to Orion, my guess here is that the MongoDB database has exited and restarted for some reason. Since there is deliberately no persistent volume set up, this would mean that all previous entities are lost on the restart.
A solution on how to persist volumes and fix this behaviour can be found in answers to another question on this site - something like:
version: "3.9"
services:
mongodb:
image: mongo:4.4
ports:
- 27017:27017
volumes:
- type: volume
source: mongodb_data_volume
target: /data/db
volumes:
mongodb_data_volume:
external: true
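Note that because the volume is declared external: true, it must be created once up front (docker volume create mongodb_data_volume) before the stack starts; with that in place, Orion's entity data in MongoDB survives a container restart.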
For our product we have decided to implement a secret-management tool (AWS Secrets Manager) that will securely store and manage all our secrets, such as DB credentials, passwords, and API keys.
This way the secrets are not stored in code, in the database, or anywhere else in the application. We have to provide the AWS credentials - Access Key ID and Secret Access Key - to programmatically access the Secrets Manager APIs.
Now the biggest question that arises: where do we keep this initial trust - the credentials used to authenticate to AWS Secrets Manager? This is a bootstrapping problem. Again, we have to maintain something outside of the secret store, in a configuration file or somewhere similar. I feel that if this is compromised, there is no real point in storing everything in a secret-management tool.
I read the AWS SDK developer guide and understand that there are some standard ways to store AWS credentials, like storing them in environment variables, in a credentials file with different profiles, or by using IAM roles for Amazon EC2 instances.
We don't run/host our application in the Amazon cloud; we just want to use the AWS Secrets Manager service. Hence, configuring IAM roles might not be the solution for us.
Are there any best practices for (or a best place to keep) the initial trust credentials?
If you're accessing secrets from an EC2 instance, an ECS Docker container, or a Lambda function, you can use roles with a policy that allows access to Secrets Manager.
If an IAM role is not an option, you can use federated login to get temporary credentials (an IAM role) with a policy that allows access to Secrets Manager, as in the sketch below.
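A rough sketch of that flow, assuming the AWS SDK for JavaScript v3; the role ARN and secret name are placeholders:

import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

async function readDbSecret(): Promise<string> {
  // Exchange the bootstrap identity for short-lived credentials.
  const sts = new STSClient({ region: "us-east-1" });
  const { Credentials } = await sts.send(
    new AssumeRoleCommand({
      RoleArn: "arn:aws:iam::123456789012:role/secrets-reader", // placeholder
      RoleSessionName: "app-bootstrap",
      DurationSeconds: 900, // short-lived by design
    })
  );

  // Use the temporary credentials to fetch the actual application secret.
  const sm = new SecretsManagerClient({
    region: "us-east-1",
    credentials: {
      accessKeyId: Credentials!.AccessKeyId!,
      secretAccessKey: Credentials!.SecretAccessKey!,
      sessionToken: Credentials!.SessionToken!,
    },
  });
  const out = await sm.send(
    new GetSecretValueCommand({ SecretId: "prod/db-credentials" }) // placeholder
  );
  return out.SecretString ?? "";
}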
As @Tomasz Breś said, you can use federation if you are already using an on-premises auth system like Active Directory or Kerberos.
If you do not have any credentials already on the box, you are left with two choices: store your creds in a file and use file-system permissions to protect them, or use hardware like an HSM or TPM to encrypt or store your creds.
In any case, when you store creds on the box (even AD/Kerberos), you should ensure only the application owner has access to that box (in the case of a standalone app, not a shared CLI). You should also harden the box by turning off all unnecessary software and access methods.
I am trying to set up a Keycloak server for our organisation. I have a couple of questions.
How can we use our existing user database to authenticate users (User Federation)? Keycloak only has LDAP/Kerberos options. Is there any custom plugin that can be used for MySQL user authentication, or can we use the existing connectors (LDAP/Kerberos) via some adapter for the database?
Is it possible to have multiple identity providers within a Keycloak environment (Keycloak as the IdP for some services, and Google via Keycloak as the IdP for others)?
I have followed the official documentation, but for some reason I am not able to view the content of the link. Any helpful links to a proper guide would be great.
Check out Keycloak custom user federation.
It lets you use a different data source (or process) during Keycloak's username/password login.
See:
https://github.com/keycloak/keycloak-documentation/blob/master/server_development/topics/user-storage/simple-example.adoc
https://tech.smartling.com/migrate-to-keycloak-with-zero-downtime-8dcab9e7cb2c (GitHub: https://github.com/Smartling/keycloak-user-migration-provider)
The first link explains how to configure an external DB in Keycloak.
The second link (with some changes) can be adapted like this:
you create a custom federation implementation,
it calls your service,
your service queries your DB,
your service returns the result.
The second example (my suggestion) abstracts your custom code (the federation process, your service) away from Keycloak: Keycloak only calls your service, and everything else is your implementation. A sketch of such a service follows.
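To make that shape concrete, here is a hypothetical sketch of the service the federation provider would call, assuming Express, the mysql2 driver, and bcrypt-hashed passwords (the endpoint, table, and column names are all made up):

import express from "express";
import { createPool } from "mysql2/promise";
import bcrypt from "bcrypt";

const app = express();
app.use(express.json());
const pool = createPool({ host: "db", user: "auth", database: "users" });

// The custom federation provider POSTs credentials here and only needs
// a yes/no answer plus whatever profile data Keycloak should import.
app.post("/auth/validate", async (req, res) => {
  const { username, password } = req.body;
  const [rows] = await pool.query(
    "SELECT password_hash, email FROM users WHERE username = ?", // assumed schema
    [username]
  );
  const user = (rows as any[])[0];
  if (user && (await bcrypt.compare(password, user.password_hash))) {
    res.json({ valid: true, email: user.email });
  } else {
    res.status(401).json({ valid: false });
  }
});

app.listen(8081);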
You should implement your own User Storage SPI to integrate your MySQL database as an external user store:
https://www.keycloak.org/docs/latest/server_development/index.html#_user-storage-spi
I answered a similar question regarding existing user databases and Keycloak authentication (link here).
I published my own solution as a multi-RDBMS implementation (Oracle, MySQL, PostgreSQL, SQL Server) to solve simple database-federation needs, supporting bcrypt and several other types of hashes.
It is a configurable Keycloak custom provider; you only need to set some SQL queries and it is ready to use.
It is already compatible with the new Keycloak Quarkus distribution.
Feel free to clone, fork, contribute, or do whatever you need to solve your issue.
GitHub repo:
https://github.com/opensingular/singular-keycloak-database-federation
We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to authenticate pretty much on every request. It's worth mentioning that the app might also run on a developer machine, so whatever authentication is done needs to work on the EC2 instance as well as in the local development environment.
I'm currently leaning towards creating a username and password and storing them in Vault for the application to fetch at startup. The application could then use that username/password to authenticate to Vault whenever it needs to.
I'm also considering AppRole, but I can't really see any advantage to it over a simple user/password setup.
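For reference, the AppRole flow I'm weighing is just two HTTP calls; a rough sketch of the raw flow against Vault's HTTP API (shown in TypeScript for brevity, though our app is Java; the KV v2 path secret/data/myapp is an example):

// Assumes a runtime with global fetch (Node 18+) and VAULT_ADDR set.
const VAULT_ADDR = process.env.VAULT_ADDR ?? "http://127.0.0.1:8200";

// Exchange a role_id/secret_id pair for a short-lived client token.
async function vaultLogin(roleId: string, secretId: string): Promise<string> {
  const res = await fetch(`${VAULT_ADDR}/v1/auth/approle/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ role_id: roleId, secret_id: secretId }),
  });
  const body = await res.json();
  return body.auth.client_token; // renewable, with a TTL set by policy
}

// Read a secret with the token; KV v2 nests the payload under data.data.
async function readSecret(token: string): Promise<Record<string, string>> {
  const res = await fetch(`${VAULT_ADDR}/v1/secret/data/myapp`, {
    headers: { "X-Vault-Token": token },
  });
  const body = await res.json();
  return body.data.data;
}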
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
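A minimal sketch of reading from the Parameter Store, assuming the AWS SDK for JavaScript v3 and a SecureString parameter named /myapp/db-password (hypothetical):

import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({ region: "us-east-1" });

async function getDbPassword(): Promise<string> {
  const out = await ssm.send(
    new GetParameterCommand({
      Name: "/myapp/db-password", // hypothetical parameter name
      WithDecryption: true, // decrypt SecureString values via KMS
    })
  );
  return out.Parameter?.Value ?? "";
}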