I am using Firebase Authentication in my Next.js app. I have stored my service account credentials in a file called secret.json, and I want to hide those credentials behind my next.config.js file. How can I access the credentials in that secret.json file? Perhaps the approach is the same for other apps, not just Next.js ones. What is the common way to achieve this, or is there a Next.js-specific way?
You might consider storing your private key as an environment variable, which Next.js has built-in support for. You can then avoid the risk of exposing your secrets in next.config.js, and services like Heroku and Vercel make it easy and secure to store your env vars in production.
To initialize Firebase on your server, you need just 3 things from your secret.json file:
project_id
client_email
private_key - store this as an env var (e.g., FIRESTORE_PRIVATE_KEY)
You can then use the firebase-admin package to initialize Firebase on your server:
import { cert, initializeApp } from 'firebase-admin/app'

// projectId and clientEmail come straight from secret.json; only the private key is read from the environment
const serviceAccount = {
  projectId: 'my-project',
  clientEmail: 'myServiceAccount@my-project.iam.gserviceaccount.com',
  // If the key was stored with literal "\n" sequences, you may need to restore real newlines, e.g.
  // process.env.FIRESTORE_PRIVATE_KEY.replace(/\\n/g, '\n')
  privateKey: process.env.FIRESTORE_PRIVATE_KEY,
}

const credential = cert(serviceAccount)
initializeApp({ credential })
Saving the private_key as its own env var also avoids problems that arise from trying to save/parse the entire secret.json as an env var (e.g., an ENAMETOOLONG error), and it does not require any string manipulation.
I built the input file (decoded the base64 file into a .p12 file) as CERTIFICATE_PATH, P12_PASSWORD is the password stored in a secret, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get the "1 item imported" success message, but when I run it from a *.yml file in a GitHub Actions workflow, I get the error "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH - the file that contains the cert.p12 data
KEYCHAIN_PATH - TEMP/app-signing.keychain-db
Another possible reason in GitHub Actions is that you are using the wrong environment.
Take a look at: Difference between GitHub's "Environment" and "Repository" secrets?
Set the right environment:
environment: production
Found the issue: I was passing the wrong cert file. Once I added the correct file to the security import step, I was able to get it working.
I am having issues setting up the Yii2 basic framework with IBM Bluemix.
I was using this link as a guide: https://github.com/Twanawebtech/yii2App-Bluemix/blob/master/README.md
Step 1a: clone the code and setup
Open your terminal
$ git clone https://github.com/Twanawebtech/yii2App-Bluemix.git
$ cd yii2App-Bluemix
Step 1b: Set cookie validation key in config/web.php file to some random secret string:
'request' => [
    // !!! insert a secret key in the following (if it is empty) - this is required by cookie validation
    'cookieValidationKey' => '<secret random string goes here>',
],
Step 1c: You can then access the application through the following URL:
http://localhost/yii2App-Bluemix/web
You should see the Congratulations screen.
Step 1d: Manifest
Go back to the yii2App-Bluemix folder and open the manifest file.
Change the name and host entries to the name you want to call your app.
Go to the command prompt and, using the Cloud Foundry CLI (once you have set up the Cloud Foundry CLI), enter:
cf login
API endpoint: https://api.ng.bluemix.net
enter Email Address:
enter Password:
Select a space (if you have more than one space allocated)
Step 2: Push your code to Bluemix
$ cf push yii2App-Bluemix -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
Step 3: Access your app by entering the following URL into your browser:
http://yii2basic.mybluemix.net/yii
However, I don't get the Congratulations screen; instead I get the following, which is the source code of the index.php file found in the web folder.
What am I missing, or what did I forget to do?
When I simply run the following code, I always get this error.
import uuid
import boto3

s3 = boto3.resource('s3')
bucket_name = "python-sdk-sample-%s" % uuid.uuid4()
print("Creating new bucket with name:", bucket_name)
s3.create_bucket(Bucket=bucket_name)
I have saved my credential file in
C:\Users\myname\.aws\credentials, from where Boto should read my credentials.
Is my setting wrong?
Here is the output from boto3.set_stream_logger('botocore', level='DEBUG').
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Skipping environment variable credential check because profile name was explicitly set.
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Looking for credentials via: env
2015-10-24 14:22:28,773 botocore.credentials [DEBUG] Looking for credentials via: shared-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: config-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: ec2-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: boto-config
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: iam-role
Try specifying the keys manually:
s3 = boto3.resource('s3',
                    aws_access_key_id=ACCESS_ID,
                    aws_secret_access_key=ACCESS_KEY)
For security reasons, make sure you don't include your ACCESS_ID and ACCESS_KEY in the code directly.
Consider using environment configs and injecting them into the code, as suggested by @Tiger_Mike.
For prod environments, consider using rotating access keys:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey
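For example, here is a minimal sketch of injecting the keys from environment variables instead of hard-coding them (the variable names below are only an assumption; use whatever names you actually export):

import os
import boto3

# Hypothetical variable names -- match whatever your deployment exports
s3 = boto3.resource(
    's3',
    aws_access_key_id=os.environ['MY_AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['MY_AWS_SECRET_ACCESS_KEY'],
)

Note that if you export the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables, boto3 picks them up automatically without being passed any keys at all.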
I had the same issue and found out that the format of my ~/.aws/credentials file was wrong.
It worked with a file containing:
[default]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYYYYYYYYYYY
Note that there must be a profile named "[default]". Some official documentation makes reference to a profile named "[credentials]", which did not work for me.
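As a quick sanity check (just a sketch, assuming the [default] profile above is in place), a client created with no explicit keys should now resolve credentials from that file:

import boto3

# No keys passed -- boto3 falls back to the [default] profile in ~/.aws/credentials
s3 = boto3.client('s3')
print(s3.list_buckets()['Buckets'])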
If you are looking for an alternative way, try adding your credentials using the AWS CLI.
From the terminal, type:
aws configure
Then fill in your keys and region.
Make sure your ~/.aws/credentials file in Unix looks like this:
[MyProfile1]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
[MyProfile2]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
Your Python script should look like this, and it'll work:
from __future__ import print_function
import boto3
import os
os.environ['AWS_PROFILE'] = "MyProfile1"
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"
ec2 = boto3.client('ec2')
# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])
Source: https://boto3.readthedocs.io/en/latest/guide/configuration.html#interactive-configuration.
I also had the same issue. It can be solved by creating a config and a credentials file in the home directory. Below are the steps I took to solve it.
Create a config file:
touch ~/.aws/config
In that file, I entered the region:
[default]
region = us-west-2
Then create the credentials file:
touch ~/.aws/credentials
Then enter your credentials:
[Profile1]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
After setting all of this up, here is my Python file to connect to the bucket. Running this file will list all of the bucket's contents.
import boto3
import os
os.environ['AWS_PROFILE'] = "Profile1"
os.environ['AWS_DEFAULT_REGION'] = "us-west-2"
s3 = boto3.client('s3', region_name='us-west-2')
print("[INFO:] Connecting to cloud")
# Retrieves all regions/endpoints that work with S3
response = s3.list_buckets()
print('Regions:', response)
You can also refer to the links below:
Amazon S3 with Python Boto3 Library
Boto 3 documentation
Boto3: Amazon S3 as Python Object Store
From the terminal, type:
aws configure
Then fill in your keys and region.
After this, use whichever environment you need in the next step. You can have multiple keys depending on your account and manage multiple environments or sets of keys.
import boto3
aws_session = boto3.Session(profile_name="prod")
# Create an S3 client
s3 = aws_session.client('s3')
Create an S3 client object with your credentials
AWS_S3_CREDS = {
"aws_access_key_id":"your access key", # os.getenv("AWS_ACCESS_KEY")
"aws_secret_access_key":"your aws secret key" # os.getenv("AWS_SECRET_KEY")
}
s3_client = boto3.client('s3',**AWS_S3_CREDS)
It is always good to get the credentials from the OS environment.
To set the environment variables, run the following commands in the terminal.
On Linux or macOS:
$ export AWS_ACCESS_KEY="aws_access_key"
$ export AWS_SECRET_KEY="aws_secret_key"
On Windows:
c:System\> set AWS_ACCESS_KEY="aws_access_key"
c:System\> set AWS_SECRET_KEY="aws_secret_key"
Exporting the credentials also works. On Linux:
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXX"
export AWS_ACCESS_KEY_ID="XXXXXXXXXXX"
These instructions are for a Windows machine with a single user profile for AWS. Make sure your ~/.aws/credentials file looks like this:
[profile_name]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
I had to set the AWS_DEFAULT_PROFILE environment variable to the profile_name found in your credentials file.
Then my Python code was able to connect, e.g. with this example:
import boto3
# Let's use Amazon S3
s3 = boto3.resource('s3')
# Print out bucket names
for bucket in s3.buckets.all():
print(bucket.name)
I work for a large corporation and encountered this same error, but needed a different workaround. My issue was related to proxy settings. I had my proxy set up, so I needed to set no_proxy to whitelist AWS before I was able to get everything to work. You can set it in your bash script as well if you don't want to muddy up your Python code with os settings.
Python:
import os
os.environ["NO_PROXY"] = "s3.amazonaws.com"
Bash:
export no_proxy="s3.amazonaws.com"
Edit: The above assumes the US East S3 region. For other regions, use s3.[region].amazonaws.com, where region is something like us-east-1 or us-west-2.
If you have multiple AWS profiles in ~/.aws/credentials, like...
[Profile 1]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
[Profile 2]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
Follow two steps:
Make the one you want to use the default by running export AWS_DEFAULT_PROFILE="Profile 1" in the terminal (quote the name, since it contains a space).
Make sure to run the above command in the same terminal from which you use boto3 or open your editor. (Consider the following scenario.)
Scenario:
Suppose you have two terminals open, called t1 and t2.
If you run the export command in t1 and open JupyterLab (or anything else) from t2, you will get the NoCredentialsError: Unable to locate credentials error.
Solution:
Run the export command in t1 and then open JupyterLab (or anything else) from the same terminal, t1.
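Alternatively, if you would rather not depend on which terminal exported the variable, here is a sketch of selecting the profile directly in code (it assumes the profile is named Profile 1, as in the credentials file above):

import boto3

# Selecting the profile in code removes the dependency on the shell environment
session = boto3.Session(profile_name='Profile 1')
s3 = session.client('s3')
print(s3.list_buckets()['Buckets'])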
In the case of MLflow, a call to mlflow.log_artifact() will raise this error if you cannot write to the AWS S3/MinIO data lake.
The reason is that credentials are not set up in your Python env (as these two env vars):
os.environ['DATA_AWS_ACCESS_KEY_ID'] = 'login'
os.environ['DATA_AWS_SECRET_ACCESS_KEY'] = 'password'
Note that you may also access MLflow artifacts directly using the MinIO client (which requires a separate connection to the data lake, apart from MLflow's connection). This client can be started like this:
import os
import minio

minio_client_mlflow = minio.Minio(os.environ['MLFLOW_S3_ENDPOINT_URL'].split('://')[1],
                                  access_key=os.environ['AWS_ACCESS_KEY_ID'],
                                  secret_key=os.environ['AWS_SECRET_ACCESS_KEY'],
                                  secure=False)
I have solved the problem like this:
aws configure
Afterwards I manually entered:
AWS Access Key ID [None]: xxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: just hit enter
After that, it worked for me.
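To confirm which identity boto3 ends up using after aws configure, one quick check (a sketch, not specific to this answer) is an STS call:

import boto3

# Raises NoCredentialsError if boto3 still cannot find credentials;
# otherwise prints the account and ARN of the resolved identity
print(boto3.client('sts').get_caller_identity())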
boto3 looks for the credentials in a folder like
C:\ProgramData\Anaconda3\envs\tensorflow\Lib\site-packages\botocore\.aws
You should save two files in this folder: credentials and config.
You may want to check out the general order in which boto3 searches for credentials in this link. Look under the Configuring Credentials subheading.
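If it is unclear which of those sources boto3 is actually picking up, here is a small sketch for inspecting the resolved credentials (get_credentials() returns None when nothing was found):

import boto3

session = boto3.Session()
creds = session.get_credentials()
if creds is None:
    print("No credentials found by any provider")
else:
    # 'method' reports the provider that won, e.g. 'shared-credentials-file' or 'env'
    print("Credentials resolved via:", creds.method)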
If you're sure you configured your AWS correctly, just make sure the user running the project can read from ~/.aws, or run your project as root.
I just had this problem. This is what worked for me:
pip install botocore==1.13.20
Source: https://github.com/boto/botocore/issues/1892
In the case of using AWS: in my case I had to add the following policy to the IAM role to allow EC2 tags to be read by the EC2 instances. That eliminated the Unable to locate credentials error:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:DescribeTags",
            "Resource": "*"
        }
    ]
}
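With that policy attached to the instance role, here is a sketch of the call it unblocks (the region and instance ID are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # region is an assumption
# Reads the instance's tags; allowed by the ec2:DescribeTags statement above
tags = ec2.describe_tags(
    Filters=[{'Name': 'resource-id', 'Values': ['i-0123456789abcdef0']}]  # placeholder instance ID
)
print(tags['Tags'])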
I added the MySQL and phpMyAdmin cartridges to my OpenShift PHP app.
After the MySQL cartridge was added, I saw a page that says:
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
but I have no idea what it means.
When I access the MySQL database through phpMyAdmin,
I see 127.8.111.1 as the DB host, so I configured my Symfony 2 app (parameters.yml):
parameters:
    database_driver: pdo_mysql
    database_host: 127.8.111.1
    database_port: 3306
    database_name: <some_database>
    database_user: admin
    database_password: <some_password>
Now when I access my web page it throws an error, which I believe is related to the MySQL connection. Can someone show me the proper way of doing the above?
EDIT: It seems the MySQL connection works fine, but somehow
Error 101 (net::ERR_CONNECTION_RESET): Unknown error
is thrown.
The code I use, which works very well to make my apps run both on localhost and on OpenShift without changing the database config parameters every time I move between them, is this:
<?php
# app/config/params.php
if (getEnv("OPENSHIFT_APP_NAME")!='') {
    $container->setParameter('database_host', getEnv("OPENSHIFT_MYSQL_DB_HOST"));
    $container->setParameter('database_port', getEnv("OPENSHIFT_MYSQL_DB_PORT"));
    $container->setParameter('database_name', getEnv("OPENSHIFT_APP_NAME"));
    $container->setParameter('database_user', getEnv("OPENSHIFT_MYSQL_DB_USERNAME"));
    $container->setParameter('database_password', getEnv("OPENSHIFT_MYSQL_DB_PASSWORD"));
}?>
This tells the app that, if it is in the OpenShift environment, it needs to load a different username, host, database, etc.
Then you have to import this file (params.php) from your app/config/config.yml file:
imports:
- { resource: parameters.yml }
- { resource: security.yml }
- { resource: params.php }
...
And that's it. You will never have to touch this file or parameters.yml when you move between OpenShift and localhost.
Connection URL: mysql://$OPENSHIFT_MYSQL_DB_HOST:$OPENSHIFT_MYSQL_DB_PORT/
OpenShift exposes environment variables to your application containing the host and port information for your database. You should reference these environment variables in your configuration instead of hard-coding values. I am not a Symfony expert, but it looks to me like you would need to do the following in order to use this information in your app:
Create a pre-start hook for your application and export variables in Symfony's expected format. Add the following to the .openshift/action_hooks/pre_start_php-5.3 file in your application's git repo:
export SYMFONY__DATABASE__HOST=$OPENSHIFT_MYSQL_DB_HOST
export SYMFONY__DATABASE__PORT=$OPENSHIFT_MYSQL_DB_PORT
Symfony uses this pattern to identify external configuration in the environment, and will make this configuration available for use in your YAML configuration:
parameters:
    database_driver: pdo_mysql
    database_host: "%database.host%"
    database_port: "%database.port%"
EDIT:
Another option to expose this information for use in the YAML configuration is to import a php file in your app/config/config.yml:
imports:
- { resource: parameters.php }
In app/config/parameters.php:
$container->setParameter('database.host', getEnv("OPENSHIFT_MYSQL_DB_HOST"));
$container->setParameter('database.port', getEnv("OPENSHIFT_MYSQL_DB_PORT"));