I am using Firebase Authentication in my Next.js app. I have stored my service account credentials in a file called secret.json, and I want to hide those credentials behind my next.config.js file. How can I access the credentials in the secret.json file? Maybe the approach is the same not only for Next.js apps but also for other apps. What is the common way to achieve this, or is there a Next.js-specific way?
You might consider storing your private key as an environment variable, which Next.js has built-in support for. You can then avoid the risks of exposing your secrets in next.config.js and services like Heroku and Vercel make it easy & secure to store your env vars in production.
To initialize Firebase on your server, you need just 3 things from your secret.json file:
project_id
client_email
private_key - store this as an env var (e.g., FIRESTORE_PRIVATE_KEY)
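For example, the key could live in a local env file that Next.js loads automatically (a sketch; the .env.local file name and the variable name are just conventions/assumptions, and the file should not be committed):
# .env.local
FIRESTORE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
In production you would set the same variable through your host's dashboard or CLI (e.g., Vercel environment variables or Heroku config vars).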
You can then use the firebase-admin package to initialize Firebase on your server:
import { cert, initializeApp } from 'firebase-admin/app'
const serviceAccount = {
  projectId: 'my-project',
  clientEmail: 'myServiceAccount@my-project.iam.gserviceaccount.com',
  privateKey: process.env.FIRESTORE_PRIVATE_KEY,
}

const credential = cert(serviceAccount)

initializeApp({ credential })
Saving the private_key as its own env var also avoids problems that arise from attempting to save/parse the entire service-account JSON as an env var (e.g., the ENAMETOOLONG error) and doesn't require you to do any string manipulation.
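As a small follow-up sketch, once the default app is initialized you can grab a service from it; getFirestore here is just one example from firebase-admin, and the collection name is hypothetical:
import { getFirestore } from 'firebase-admin/firestore'

const db = getFirestore() // uses the default app initialized above

export async function listUsers() {
  const snapshot = await db.collection('users').get()
  return snapshot.docs.map((doc) => doc.data())
}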
I am programmatically starting an IPFS node using JS ipfs-core (npm package) with a custom repository using a different storage backend (similar to S3). Once the node is started on the AWS instance, I want to send requests to it from a remote client written in Java.
java-ipfs-http-client can connect to the API port, but the API and Gateway services do not get started when the node is created this way. The Java server will be running on a different machine.
Is it possible to access the IPFS node started programmatically with ipfs-core from a Java server running on a different instance?
Found the solution.
When we initialize the node programmatically, we need to start the API/Gateway manually, in the following way.
import * as IPFS from 'ipfs-core'
import { HttpApi } from 'ipfs-http-server'
import { HttpGateway } from 'ipfs-http-gateway'
async function startIpfsNode () {
  const ipfs = await IPFS.create()

  const httpApi = new HttpApi(ipfs)
  await httpApi.start()

  const httpGateway = new HttpGateway(ipfs)
  await httpGateway.start()
}
startIpfsNode()
This will start the IPFS node along with the API and Gateway.
The API and Gateway ports can be configured programmatically in the following way:
const ipfs = await IPFS.create()
await ipfs.config.set('Addresses.API', '/ip4/127.0.0.1/tcp/5002');
await ipfs.config.set('Addresses.Gateway', '/ip4/127.0.0.1/tcp/9090');
Once the API is started, the IPFS node can be accessed from a Java program using java-ipfs-http-client.
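A minimal client-side sketch with java-ipfs-http-client, assuming the node's API multiaddr is reachable from the Java machine (the IP below is a placeholder, the port matches the config above, and Addresses.API would need to listen on a non-loopback interface for remote access):
import io.ipfs.api.IPFS;
import io.ipfs.api.MerkleNode;
import io.ipfs.api.NamedStreamable;

public class IpfsClientExample {
    public static void main(String[] args) throws Exception {
        // connect to the remote node's API port (5002 as configured above)
        IPFS ipfs = new IPFS("/ip4/203.0.113.10/tcp/5002");

        // add a small file and print the resulting hash
        NamedStreamable.ByteArrayWrapper file =
                new NamedStreamable.ByteArrayWrapper("hello.txt", "hello from java".getBytes());
        MerkleNode added = ipfs.add(file).get(0);
        System.out.println("added: " + added.hash);

        // read the file back through the same API
        byte[] contents = ipfs.cat(added.hash);
        System.out.println(new String(contents));
    }
}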
GCP: Terraform is installed on a 'test-instance' VM in GCP project-A. Using Terraform, how do I deploy an instance in project-B?
I was able to do it using the gcloud command; does anyone know how to do it with Terraform?
provider "google" {
project = "project-b"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {}
}
}
The problem you are facing is around access control.
You are trying to run Terraform from a VM that lives in project-A, and the Terraform code wants to create a new VM (or other resource) in project-B.
By default, the service account attached to the project-A VM does not have enough rights to create any resource in project-B. To solve this, you can create a service account at the folder level (or org level) that has permission to create VMs in the required projects, and then attach that service account to the VM that runs Terraform.
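For example, attaching such a service account to the Terraform VM could look like this (a sketch; the account name is hypothetical, and the instance has to be stopped while its service account is changed):
gcloud compute instances set-service-account test-instance \
  --zone us-central1-c \
  --service-account terraform-deployer@project-a.iam.gserviceaccount.com \
  --scopes cloud-platform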
Hope this helps.
I suggest you use Terraform Variables using .tfvars files and multiple Terraform Workspaces. You can then switch between workspaces and apply the tfvars for each particular project separately.
e.g.
# variables.tf
variable "project_id" {
type = string
}
And then use the variable in your terraform config:
# main.tf
provider "google" {
project = var.project_id
region = "us-central1"
zone = "us-central1-c"
}
The tfvars will then look like this:
# vars/dev.tfvars
project_id = "my-dev-project"
Full invocation within your workspace (see the docs) can then be done using plan/apply as you would normally do:
terraform workspace select dev
terraform plan -var-file vars/dev.tfvars
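A second workspace/tfvars pair for the other project follows the same pattern (the file name and project ID here are just examples):
# vars/prod.tfvars
project_id = "project-b"
and then:
terraform workspace new prod
terraform plan -var-file vars/prod.tfvars
terraform apply -var-file vars/prod.tfvars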
How to deploy HashiCorp Vault in OpenShift without using persistent volumes (PV)?
In the OpenShift cluster, as a normal user (not a cluster admin), I need to deploy the Vault server. I followed the URL, but it uses a persistent volume (/vault/file) in its vault.yaml file, which requires permission to create persistent volumes that my account does not have. So I removed the PV mount paths and set a local path in vault-config.json, like below, but I am seeing the error shown further down.
{"backend":
{"file":
{"path": "/tmp/file"}
},
...
...
}
Is it possible to run the Vault server without a PV as a normal user, e.g. using a local file path (/tmp/file) as the backend storage?
What is the alternative way to deploy HashiCorp Vault in OpenShift without a PV?
Below is the error when running with the PV:
--> Scaling vault-1 to 1
--> FailedCreate: vault-1 Error creating: pods "vault-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
error: update acceptor rejected vault-1: pods for rc 'dev-poc-environment/vault-1' took longer than 600 seconds to become available
How to deploy HashiCorp Vault in OpenShift without using persistent volumes (PV)?
You can use the in-memory storage backend, as mentioned here. Your Vault config then looks something like this:
$ cat config.hcl
disable_mlock = true

storage "inmem" {}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui = true
max_lease_ttl = "7200h"
default_lease_ttl = "7200h"
api_addr = "http://127.0.0.1:8200"
But with this, data/secrets are not persistent.
Another way is to use the file storage backend, so that all the secrets are encrypted and stored at the mentioned path.
So now the storage config changes to:
storage "file" {
path = "ANY-PATH"
}
POINTS TO BE NOTED HERE:
The path defined should have read/write permissions for data/secrets.
This could be any path inside the container, just to avoid a dependency on a persistent volume.
But what is the problem with this model? When the container restarts, all the data will be lost, because the container's filesystem is not persisted.
No high availability: the filesystem backend does not support high availability.
So what would be the ideal solution? Something that makes our data highly available, which is achieved by using a dedicated storage backend such as a database.
For simplicity, let us take PostgreSQL as backend storage.
storage "postgresql" {
connection_url = "postgres://user123:secret123!#localhost:5432/vault"
}
So now the config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "postgresql" {
  connection_url = "postgres://vault:vault@vault-postgresql:5432/postgres?sslmode=disable"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui = true
max_lease_ttl = "7200h"
default_lease_ttl = "7200h"
api_addr = "http://127.0.0.1:8200"
So choosing a database-backed storage backend helps you persist your data even if the container restarts.
As you are specifically looking for a solution in OpenShift, create a PostgreSQL container using the template provided and make Vault point to it using the service name, as explained in the config.hcl above.
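A sketch of creating that PostgreSQL service from the built-in OpenShift template (the service name, user, password, and database are assumptions that should match your connection_url; the ephemeral template avoids needing a PV, though its data is also lost if the database pod is deleted, and Vault's PostgreSQL backend additionally expects its vault_kv_store table to be created):
oc new-app postgresql-ephemeral \
  -p DATABASE_SERVICE_NAME=vault-postgresql \
  -p POSTGRESQL_USER=vault \
  -p POSTGRESQL_PASSWORD=vault \
  -p POSTGRESQL_DATABASE=vault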
Hope this helps!
I'm running a NodeJS Express app.
Currently I have a dev env, a test env, and a prod env.
However, the DB connection settings are in the code. Is there a secure, best-practice way to store the DB config and all other configs in JSON format by declaring them in a module (separately for each env, or all in one module to be exported; maybe a default.json, dev.json, prod.json, etc.) and then requiring them accordingly by setting the correct configuration for the correct environment in app.js?
I would like to achieve this without depending on any 3rd-party package like dotenv or nconf.
Most of the main NodeJS hosting providers use a simple environment variable. You can use this:
process.env.NODE_ENV
To define it yourself, for example as 'development' on your local machine, you can do:
NODE_ENV=development node yourapp.js
With this, I suggest using a config tool like nconf (there are some good competitors). You can do it like this, for example:
nconf
  .argv()                                          // takes arguments from the CLI
  .file('./env.' + process.env.NODE_ENV + '.json') // takes from the specific env file
  .file('package', './package.json');              // takes from package.json
Here the priority goes from the most important to the least:
1) argv
2) specific environment file
3) package.json
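A quick usage sketch once that hierarchy is set up (the 'db' key is just an example):
const nconf = require('nconf'); // the same instance configured above

const dbConfig = nconf.get('db'); // resolved from argv, then the env file, then package.json
console.log(dbConfig);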
You can require a file based on the environment.
const env = 'test'; // This value can be taken from config or .env
const configs = require(`../path/${env}`);
console.log('DB Config', configs.DB_PATH);
Depending on your environment, you can load the corresponding file, and the value for the environment can be retrieved from .env or any other config.
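A minimal sketch of that idea with one JSON file per environment (the file names and keys are assumptions):
// config/index.js
const env = process.env.NODE_ENV || 'dev'; // 'dev' | 'test' | 'prod'

// e.g. config/dev.json, config/test.json, config/prod.json
module.exports = require(`./${env}.json`);
Elsewhere in the app you would then do const { DB_PATH } = require('./config'); with no 3rd-party dependency.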
To make a server-to-server interaction with a Google API work in development, I need to put my service-account JSON key in the root of myapp and set
ENV["GOOGLE_APPLICATION_CREDENTIALS"] = "api-test-key.json"
and all is good.
But in production on Heroku there are only config vars, and the googleauth credentials_loader.rb insists on a file.
I managed to put the JSON's contents in a heroku config var nicely and can get it by calling
puts ENV["GOOGLE_APPLICATION_CREDENTIALS"]
What to do?
There is this question where they figured out a workaround, but that is for OAuth2, not for service-account-type JSON:
How to upload a json file with secret keys to Heroku
We configure our Google API credentials via the application.yml file, found at root/config/application.yml.
In it we have:
AUTH_URI: https://accounts.google.com/o/oauth2/auth
CLIENT_SECRET: xxxxxxx
TOKEN_URI: https://accounts.google.com/o/oauth2/token
CLIENT_EMAIL: xxxx
REDIRECT_URIS: http://localhost:3000/signin/connect
CLIENT_X509_CERT_URL: https://www.googleapis.com/robot/v1/metadata/x509/901506026811-q0am03i585627ptu5r38o7cpkt8pk98l@developer.gserviceaccount.com
CLIENT_ID: xxx
AUTH_PROVIDER_X509_CERT_URL: https://www.googleapis.com/oauth2/v1/certs
JAVASCRIPT_ORIGINS: http://localhost:3000
PLUS_LOGIN_SCOPE: https://www.googleapis.com/auth/plus.login https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/plus.circles.read
You could maybe load in your config settings here?
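If the loader insists on a key file or key IO, one hedged option (a sketch using googleauth's make_creds, not necessarily the only approach) is to build the service-account credentials directly from env vars, so no JSON key file has to live in the repo or on Heroku:
require "googleauth"
require "stringio"
require "json"

# hypothetical: these three values are stored as Heroku config vars / application.yml entries;
# the private key must keep its newlines intact (or be stored with literal \n and converted back)
key_json = {
  type:         "service_account",
  project_id:   ENV["GOOGLE_PROJECT_ID"],
  client_email: ENV["GOOGLE_CLIENT_EMAIL"],
  private_key:  ENV["GOOGLE_PRIVATE_KEY"],
}.to_json

credentials = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: StringIO.new(key_json),
  scope:       "https://www.googleapis.com/auth/drive"
)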