I am trying to figure out what is involved in writing a console application that will run as part of a VSTS Release task. The program will read a connection string (secret) from a preconfigured Key Vault, then connect to an Azure SQL DB using that connection string and apply some changes.
Currently I have my web apps connecting to Key Vault and the Azure SQL Server using Azure AD application token authentication, so I know what is involved on that front.
When you check "Allow scripts to access OAuth token" on the agent settings page, can this token be used (via ADAL) to connect to Key Vault and SQL Server?
(Assuming the VisualStudioSPNxxx has the appropriate access to the above resources).
If not, what should I be looking for?
The VSTS token ("Allow scripts to access OAuth token") can't be used to connect to Key Vault.
You need to register an app with Azure Active Directory, enable it to communicate with Azure Active Directory and Key Vault, and then get the connection string dynamically.
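A minimal sketch of that flow, assuming the classic Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) packages; the vault URL, secret name, client id and client secret are placeholders for the AAD app you register:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

static class KeyVaultHelper
{
    public static async Task<string> GetConnectionStringAsync()
    {
        // Authenticate as the registered AAD application (client credentials flow).
        var client = new KeyVaultClient(async (authority, resource, scope) =>
        {
            var authContext = new AuthenticationContext(authority);
            var credential = new ClientCredential("<app-client-id>", "<app-client-secret>");
            var token = await authContext.AcquireTokenAsync(resource, credential);
            return token.AccessToken;
        });

        // Read the secret that holds the SQL connection string.
        var secret = await client.GetSecretAsync(
            "https://my-vault.vault.azure.net/", "SqlConnectionString");
        return secret.Value;
    }
}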
For more information, you can refer to: Protecting Secrets using VSTS and Azure Key Vault
This is made relatively easy now with Variable Groups - https://learn.microsoft.com/en-us/vsts/pipelines/library/variable-groups?view=vsts
You can link secrets by connecting your Azure Key Vault to a variable group and then use those variables as you would normally use any variable in a script/task.
I am trying to execute the Oracle Cloud Infrastructure REST APIs from the Postman application. I have followed their documentation and completed the setup, but I am getting authentication errors. I need some information regarding the OCI headers.
You can go quickly to https://www.postman.com/oracledevs/workspace/oracle-cloud-infrastructure-rest-apis/overview, fork the API you want, and start using it.
No need for complex manual steps.
For the credentials, fork "OCI Credentials" and input your data, as shown here:
Oracle Cloud Infrastructure credentials for Postman
Please refer to the post below for details.
https://redthunder.blog/2019/07/10/calling-oci-apis-from-postman/
In short, you need to create a public/private key pair and upload the public key to OCI via the console, then use the private key on your client (Postman) to authenticate your requests.
I have a .NET Core app deployed as a Docker container on Google Cloud Run, and this app needs to connect to Cloud SQL (MySQL).
When using the private IP address, it's not working.
When using the public IP, it's working, but it's not a good solution for production.
This is my connection string:
"ConnectionString": "server=10.4.16.6;database=mydb;user=root;pwd=mypwd"
When I create the app, I'm able to select the database I need to connect to:
But this is not helping it connect.
The relevant docs only explain how to do it for Python and Java explicitly.
If you do not want to use a public IP, then you need to rely on a service account to connect to Cloud SQL. However, the .NET MySQL driver has no understanding of GCP IAM and service accounts, so you will need to use a proxy called the Cloud SQL Proxy, which does understand IAM and service accounts.
The flow will basically look like this:
Your app -> regular MySQL port -> Cloud SQL Proxy (installed in the app's network or locally) -> Cloud SQL
You will need to do the following:
Create a service account
Assign the role of Cloud SQL Client to the created service account
Download the service account key in the json format
Set env variable GOOGLE_APPLICATION_CREDENTIALS=C:\Downloaded.json
Download Cloud SQL Proxy
Run it: `cloud_sql_proxy -instances=projectname:regionname:instanceid=tcp:3306`
At this point your MySQL proxy is ready to accept connections on 3306; modify the connection string to point to localhost (or wherever you installed the Cloud SQL Proxy).
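For example, a minimal sketch assuming the MySqlConnector (or MySql.Data) package and that the proxy is listening locally on port 3306; the database name and credentials are the placeholders from the question:

using MySqlConnector;

// Same connection string as before, but pointed at the local Cloud SQL Proxy
// instead of the instance's private IP.
var connectionString = "server=127.0.0.1;port=3306;database=mydb;user=root;pwd=mypwd";

using var connection = new MySqlConnection(connectionString);
connection.Open();
// Queries run as usual; the proxy forwards the traffic to Cloud SQL using the
// service account from GOOGLE_APPLICATION_CREDENTIALS.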
Learn more at About the Cloud SQL Proxy
You can create the Cloud Run app from the console (and select the Cloud SQL Connection) or from the gcloud command line and specify
--add-cloudsql-instances <INSTANCE-NAME> --set-env-vars INSTANCE-CONNECTION-NAME="INSTANCE_CONNECTION_NAME"
These settings automatically enable and configure the Cloud SQL Proxy. You can connect to the proxy from your ASP.NET Core app via a Unix domain socket using the format: /cloudsql/INSTANCE_CONNECTION_NAME.
I used the following connection string in my appsettings.json and it worked for me:
"Server=/cloudsql/INSTANCE_CONNECTION_NAME;Database=DB_NAME;Uid=USER_NAME;Pwd=PASSWORD;Protocol=unix"
NB: Make sure you have given the service account that your Cloud Run app runs under the Cloud SQL Client role in IAM.
For our product we have decided to implement a Secret Management tool (AWS secrets manager) that will securely store and manage all our secrets such as DB credentials, passwords and API keys etc.
In this way the secrets are not stored in code, in the database, or anywhere else in the application. We have to provide AWS credentials (an access key ID and secret access key) to programmatically access the Secrets Manager APIs.
Now the biggest question that arises is: where do we keep this initial trust, i.e. the credentials used to authenticate to AWS Secrets Manager? This is a bootstrapping problem. Again, we have to maintain something outside of the secret store, in a configuration file or somewhere else. I feel that if this is compromised, there is no real point in storing everything in a secret management tool.
I read the AWS SDK developer guide and understand that there are some standard ways to store AWS credentials, like storing them in environment variables, in a credentials file with different profiles, or by using IAM roles for Amazon EC2 instances.
We don't run/host our application in the Amazon cloud; we just want to use the AWS Secrets Manager service from AWS. Hence, configuring IAM roles might not be the solution for us.
Are there any best practices for (or a best place to keep) the initial trust credentials?
If you're accessing secrets from an EC2 instance, an ECS Docker container, or a Lambda function, you can use roles with a policy that allows access to Secrets Manager.
If an IAM role is not an option, you can use federated login to get temporary credentials (an IAM role) with a policy that allows access to Secrets Manager.
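Either way, the application code does not need to embed keys; the SDK resolves whatever credentials are available. A minimal sketch using the AWSSDK.SecretsManager package (the secret name and region are placeholders):

using System.Threading.Tasks;
using Amazon;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

static class SecretReader
{
    public static async Task<string> GetDbConnectionStringAsync()
    {
        // Credentials come from the standard chain: environment variables,
        // the shared credentials file, or an IAM role / federated temporary credentials.
        using var client = new AmazonSecretsManagerClient(RegionEndpoint.USEast1);
        var response = await client.GetSecretValueAsync(new GetSecretValueRequest
        {
            SecretId = "prod/db-connection-string"
        });
        return response.SecretString;
    }
}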
As @Tomasz Breś said, you can use federation if you are already using an on-premises auth system like Active Directory or Kerberos.
If you do not have any type of credentials already on the box, you are left with two choices: store your creds in a file and use file-system permissions to protect them, or use hardware like an HSM or TPM to encrypt or store your creds.
In any case, when you store creds on the box (even AD/Kerberos), you should ensure only the application owner has access to that box (in the case of a standalone app and not a shared CLI). You should also harden the box by turning off all unnecessary software and access methods.
We're using Vault to store our application secrets and config. When our app (Java) starts, a script does all the magic of getting the secrets and config from Vault and storing them locally for the application to read. The script authenticates to Vault using an AWS IAM role.
Now we're getting to a situation where the application needs to read secrets from Vault on the go, not just on startup. For that purpose, I need it to be able to do the authentication pretty much on every request. It's worth mentioning that the app might also run on the developer machine, so whatever authentication done - it needs to work on the EC2 instance as well as the local development environment.
I'm currently leaning towards creating a username and password, store them in Vault for the application to get when starting up. Then the application could use that username/password to authenticate to Vault when it needs.
I'm also considering AppRole, but can't really see any real advantage to it over a simple user/password setup.
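For reference, a rough sketch of what an AppRole login against Vault's HTTP API (POST /v1/auth/approle/login) looks like, shown in C# just for illustration; the Vault address, role_id and secret_id are placeholders:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class VaultLogin
{
    public static async Task<string> AppRoleLoginAsync(HttpClient http)
    {
        var body = new StringContent(
            "{\"role_id\":\"<role-id>\",\"secret_id\":\"<secret-id>\"}",
            Encoding.UTF8, "application/json");
        var response = await http.PostAsync(
            "https://vault.example.com:8200/v1/auth/approle/login", body);
        response.EnsureSuccessStatusCode();
        // The JSON response contains auth.client_token, which is then used for subsequent reads.
        return await response.Content.ReadAsStringAsync();
    }
}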
What's the best solution for this use case? Any advice would be highly appreciated!
Thanks,
Yosi
The AWS recommendation for storing secrets is to use AWS Systems Manager Parameter Store.
Software running on an Amazon EC2 instance with an assigned Role can use those credentials to access the Parameter Store to retrieve application secrets.
The Parameter Store can also be used outside of EC2, but some AWS credentials will still be needed to authenticate to the Parameter Store.
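For example, a minimal sketch using the AWSSDK.SimpleSystemsManagement package (the parameter name is a placeholder); on EC2 the client picks up credentials from the instance's assigned role automatically:

using System.Threading.Tasks;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

static class ParameterReader
{
    public static async Task<string> GetDbPasswordAsync()
    {
        using var ssm = new AmazonSimpleSystemsManagementClient();
        var result = await ssm.GetParameterAsync(new GetParameterRequest
        {
            Name = "/myapp/db-password",
            WithDecryption = true   // decrypt SecureString parameters via KMS
        });
        return result.Parameter.Value;
    }
}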
I have an Azure App Service and an Azure Storage Account. I know there is a server/vm behind the app service, but I have not explicitly started a machine.
I'm trying to import data from an access database which will be regularly uploaded to a fileshare in my storage account. I'd like to use an Azure WebJob to do the work in the background.
I'm trying to use DAO to read the data:
// Uses the Access DAO COM interop (Microsoft.Office.Interop.Access.Dao)
string path = @"\\server\share\folder\database.mdb";
DBEngine dbe = new DBEngine();
Database db = dbe.OpenDatabase(path);
DAO.Recordset rs = db.OpenRecordset("select * from ...");
This works when I run it locally, but when I try to run it in my WebJob accessing a file share in my storage account, it does not find the file. I assume this is because DBEngine knows nothing of Azure, Azure account names, or security keys, doesn't send them, and so Azure Storage doesn't respond.
So what I'd like to try is to see if I can map an Azure Storage Fileshare onto the server underlying my App Service. I've tried a number of different things, but have received variations of "Access Denied" each time. I have tried:
Running net use T: \\name.file.core.windows.net\azurefileshare /u:name key from the App Service console in the Azure Portal
Running net use from a process within my WebJob
Invoking WNetAddConnection2 from within my WebJob
Looks like the server is locked down tight. Does anyone have any ideas on how I might be able to map the fileshare onto the underlying server?
Many thanks
As far as I know, Azure Web Apps run in a sandbox, so we cannot map an Azure file share onto an Azure Web App; Azure File storage is therefore not a good fit when Azure Web App is the host service. From my experience, the workarounds below are available to you. Hope this gives you some tips.
1) Use Azure file storage, but choose Azure VM or Cloud service as host service.
2) Still choose Azure web app as host service, but include the access db in the solution and upload to Azure web app.
3) Choose SQL Azure as the database instead. Here is an article that can help you migrate the Access database to SQL Azure.
In the end, as Jambor rightly says, the App Service VM is locked down tight.
However, it turns out that the App Service VM comes with some local temporary storage for the use of the various components running on the VM.
This is at D:\local\Temp\ and can be written to by a web job.
Interestingly, this is a logical folder on a different share/drive from D:\local and the size of this additional storage is dependent on the App Service's scale.
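A sketch of one way a WebJob can use this: copy the .mdb from the Azure file share into D:\local\Temp with the storage SDK, then let DAO open the local copy. This assumes the classic WindowsAzure.Storage package; the connection-string setting name, share name, and file name are placeholders:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

// Connection string for the storage account (the app setting name here is hypothetical).
var account = CloudStorageAccount.Parse(
    Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
var share = account.CreateCloudFileClient().GetShareReference("azurefileshare");
var remoteFile = share.GetRootDirectoryReference().GetFileReference("database.mdb");

// Download to the App Service's local temporary storage.
string localPath = Path.Combine(@"D:\local\Temp", "database.mdb");
remoteFile.DownloadToFile(localPath, FileMode.Create);

// DAO can now open the local copy:
// Database db = new DBEngine().OpenDatabase(localPath);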