I am trying to deploy my application to Heroku from my GitHub repository.
I placed my dotenv values in the config vars, but one value refers to a JSON file that Heroku cannot access:
GCS_KEYFILE = file.json
And this is the JSON:
{
"type": "service_account",
"project_id": "",
"private_key_id": "",
"private_key": "=\n-----END PRIVATE KEY-----\n",
"client_email": "",
"client_id": "",
"auth_uri": "",
"token_uri": "",
"auth_provider_x509_cert_url": "",
"client_x509_cert_url": "
}
How can I make Heroku access the file?
Update
I tried the answer below, but that didn't work for me. In the config vars I tried to add the JSON directly. Now I get the following error message:
2020-04-14T14:40:53.370477+00:00 app[web.1]: Error: Could not authenticate request
2020-04-14T14:40:53.370493+00:00 app[web.1]: ENOENT: no such file or directory, open '/app/{
type": "service_account",
"project_id": "",
"private_key_id": "",
"private_key": "=\n-----END PRIVATE KEY-----\n",
"client_email": "",
"client_id": "",
"auth_uri": "",
"token_uri": "",
"auth_provider_x509_cert_url": "",
"client_x509_cert_url": "
2020-04-14T14:40:53.370500+00:00 app[web.1]: at /app/node_modules/gcs-resumable-upload/build/src/index.js:235:19
2020-04-14T14:40:53.370501+00:00 app[web.1]: at /app/node_modules/google-auto-auth/index.js:27:9
2020-04-14T14:40:53.370501+00:00 app[web.1]: at /app/node_modules/google-auto-auth/index.js:233:9
This is the error message when I try the answer below:
Error: You have to specify credentials key file for Google Cloud Storage to work.
The code I am trying to deploy can be found here.
Heroku does not give you a persistent filesystem, so this is not trivial.
It looks like there is a workaround using the heroku-google-application-credentials-buildpack, but I have solved this in a different way: I package my application in a Docker image (which includes the JSON file wherever the app expects it) and push/deploy that image to Heroku.
The JSON file (which contains sensitive information) is not in my code repository; it is only copied into the image when the image is built.
Hope this helps.
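If you would rather keep deploying straight from Git, the idea behind that buildpack can also be done by hand: paste the entire JSON into a single config var and write it out to a file at startup, before the storage client initializes. A minimal sketch, assuming a config var named GCS_CREDENTIALS_JSON (a hypothetical name) that holds the raw service-account JSON:

// write-keyfile.js - run before the Google Cloud Storage client is created.
const fs = require('fs');
const path = require('path');

// GCS_CREDENTIALS_JSON is assumed to hold the full service-account JSON.
const credentials = process.env.GCS_CREDENTIALS_JSON;
if (!credentials) {
  throw new Error('GCS_CREDENTIALS_JSON config var is not set');
}

// Write the keyfile to the app's ephemeral filesystem and point
// GCS_KEYFILE at it, matching the variable the app already uses.
const keyfilePath = path.join(__dirname, 'gcs-keyfile.json');
fs.writeFileSync(keyfilePath, credentials);
process.env.GCS_KEYFILE = keyfilePath;

Require this module at the very top of the entry file so the keyfile exists before any upload code runs.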
Related
So as far as I know, I need to ignore the node_modules folder,
create a new database in phpMyAdmin in cPanel and place the credentials in the .env file,
then upload the zip file to the folder of the subdomain and extract it,
export the database from local and import it in cPanel,
create a new Node.js application in cPanel as follows:
and run npm install.
When I test it using Postman as follows:
Method: POST
URL: {url}/createUser
payload: json
I receive: 503 Service Unavailable
but when I change the URL to the local one, it works.
This is my project structure:
And this is my package.json:
{
"name": "",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"bcrypt": "^5.1.0",
"express": "^4.18.2",
"jsonwebtoken": "^8.5.1",
"mysql": "^2.18.1"
},
"devDependencies": {
"dotenv": "^16.0.3",
"nodemon": "^2.0.20"
}
}
.env file:
DB_HOST = 127.0.0.1
DB_USER = {{user}} <- private
DB_PASSWORD = {{password}} <- private
DB_DATABASE = {{database}} <- private
DB_PORT = 3306
PORT = 3000
On localhost, I'm using this command to run the API: nodemon dbServer
So what seems to be the problem here?
First, please remove the .env file from your project and copy all the variables with their values into a text editor.
Then click on Add Environment Variable in cPanel and fill in each variable and value one by one.
Please also add this code to your package.json file:
"scripts": {
"test": "nodemon app.js",
"start": "node app.js"
},
Then start your app.
Please don't forget to select the script under "Run JS Script" and choose the "start" option.
Now your app will be installed and working on that particular domain.
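For reference, here is a minimal sketch of what the entry file could look like so that the environment variables set in cPanel are actually picked up. The file name dbServer.js and the /createUser route are taken from the question; the handler body is purely illustrative:

// dbServer.js - minimal sketch using the dependencies from package.json.
require('dotenv').config(); // does nothing if no .env file exists; useful locally
const express = require('express');
const mysql = require('mysql');

const app = express();
app.use(express.json());

const db = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_DATABASE,
  port: Number(process.env.DB_PORT),
});

app.post('/createUser', (req, res) => {
  // Illustrative handler; the real logic lives in the project.
  res.status(201).json({ ok: true });
});

// Read PORT from the environment rather than hardcoding 3000;
// this matters on hosts that assign their own port.
app.listen(process.env.PORT || 3000);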
I'm trying to build a custom image for an AWS EKS managed node group. Note: my custom image (Ubuntu) already has MFA and private-key-based authentication enabled.
I cloned the GitHub repository from the URL below to build the EKS-related changes.
git clone https://github.com/awslabs/amazon-eks-ami && cd amazon-eks-ami
Next I made a few changes to run the Makefile:
cat eks-worker-al2.json
{
"variables": {
"aws_region": "eu-central-1",
"ami_name": "template",
"creator": "{{env `USER`}}",
"encrypted": "false",
"kms_key_id": "",
"aws_access_key_id": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_access_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"aws_session_token": "{{env `AWS_SESSION_TOKEN`}}",
"binary_bucket_name": "amazon-eks",
"binary_bucket_region": "eu-central-1",
"kubernetes_version": "1.20",
"kubernetes_build_date": null,
"kernel_version": "",
"docker_version": "19.03.13ce-1.amzn2",
"containerd_version": "1.4.1-2.amzn2",
"runc_version": "1.0.0-0.3.20210225.git12644e6.amzn2",
"cni_plugin_version": "v0.8.6",
"pull_cni_from_github": "true",
"source_ami_id": "ami-12345678",
"source_ami_owners": "00012345",
"source_ami_filter_name": "template",
"arch": null,
"instance_type": null,
"ami_description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
"cleanup_image": "true",
"ssh_interface": "",
"ssh_username": "nandu",
"ssh_private_key_file": "/home/nandu/.ssh/template_rsa.ppk",
"temporary_security_group_source_cidrs": "",
"security_group_id": "sg-08725678910",
"associate_public_ip_address": "",
"subnet_id": "subnet-01273896789",
"remote_folder": "",
"launch_block_device_mappings_volume_size": "4",
"ami_users": "",
"additional_yum_repos": "",
"sonobuoy_e2e_registry": ""
After adding the user and private key, the build fails with the error below.
Logs:
amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain.
For me, the fix was to just change the AWS region, or delete the AWS region setting, in Packer.
I am unable to create an OVA using Packer in VirtualBox with id_rsa. From the host machine I am able to SSH to the vbox host using the same private key. The error is:
"Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
Using "ssh_password" the OVA is created successfully, but my objective is to create an OVA using the private key.
{
"builders": [{
"type": "virtualbox-ovf",
"source_path": "/root/Documents/OVA_idrsa.ova",
"ssh_username": "support",
"ssh_private_key_file": "id_rsa",
"ssh_pty": "true",
"ssh_port": 22,
"vrdp_bind_address": "0.0.0.0",
"guest_additions_mode": "disable",
"virtualbox_version_file": "",
"headless": true,
"ssh_skip_nat_mapping": "true",
"boot_wait": "120s",
"ssh_wait_timeout": "1000s",
"shutdown_command": ""
}]
}
I have tried using ssh_password instead and it was successful, but with the private key file the issue is recurrent.
I am working on an old web service where I generate the REST endpoint documentation, complying with the OAS standard, using a custom tool. Using this OAS JSON file I can deploy the API to Azure API Management services through the portal, and it all works fine. However, I need to automate this process and hence need to use ARM templates to deploy all web services to Azure APIM. I have been looking into the examples provided at https://learn.microsoft.com/en-us/azure/templates/microsoft.apimanagement/service/apis but just can't seem to wrap my head around how to use a local OAS.json file or a file in GitHub.
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
}
},
"variables": {
"apiManagementServiceName": "price-capture"
},
"resources": [
{
"apiVersion": "2018-01-01",
"type": "Microsoft.ApiManagement/service/apis",
"name": "[variables('apiManagementServiceName')]",
"properties": {
"displayName": "Service display Name",
"apiRevision": "1",
"description": "API description",
//need help since it's not a swagger url
//wondering if there is a way to ref a local file like the option
//provided in the portal when we register api's manually.
"serviceUrl": "----",
"path": "----",
"protocols": [
"https"
],
"isCurrent": true,
"apiVersion": "v1",
"apiVersionDescription": "apiVersionDescription"
}
}
]
}
You can deploy and configure an entire API on API Management via ARM templates, but you cannot use a local file to provide the OpenApi/Swagger.
In your case the OpenApi/Swagger needs to be publicly accessible so that Resource Manager can read it; if the GitHub URL is freely accessible, it should work.
I typically store the OpenApi/Swagger to a storage account and use the SAS token to access it from the ARM template.
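For what it's worth, here is a sketch of that storage-account step in Node.js using the @azure/storage-blob package: upload the definition, then print a read-only SAS URL that the ARM template can consume. The account, container, and file names below are placeholders:

// upload-spec.js - upload the OpenAPI file and emit a short-lived SAS URL.
const {
  BlobServiceClient,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions,
} = require('@azure/storage-blob');

const account = process.env.STORAGE_ACCOUNT; // placeholder env vars
const accountKey = process.env.STORAGE_KEY;
const container = 'specs';
const blobName = 'oas.json';

async function main() {
  const credential = new StorageSharedKeyCredential(account, accountKey);
  const service = new BlobServiceClient(
    `https://${account}.blob.core.windows.net`,
    credential
  );
  const blob = service.getContainerClient(container).getBlockBlobClient(blobName);

  await blob.uploadFile('./oas.json'); // local OpenAPI definition

  // Read-only SAS token, valid for one hour.
  const sas = generateBlobSASQueryParameters(
    {
      containerName: container,
      blobName,
      permissions: BlobSASPermissions.parse('r'),
      expiresOn: new Date(Date.now() + 60 * 60 * 1000),
    },
    credential
  ).toString();

  console.log(`${blob.url}?${sas}`); // feed this URL to the ARM template
}

main().catch(console.error);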
You can check out this blog for details on automating API deployment in APIM:
https://blog.eldert.net/api-management-ci-cd-using-arm-templates-linked-template/
You can deploy the API using an Azure Resource Manager template of type Microsoft.ApiManagement/service/apis, and to use an OpenAPI/Swagger definition you need to specify the contentValue and contentFormat parameters of the template:
{
  "name": "awesome-api-management/petstore",
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2018-06-01-preview",
  "properties": {
    "path": "petstore",
    "contentValue": "petstore swagger file contents here", // or its URL
    "contentFormat": "swagger-json" // or swagger-link-json if externally available
  }
}
I don't think it's possible to deploy the API configs via templates.
I've been trying to figure this out myself, and I'm pretty sure you can't include the actual APIs you want in the service.
From what I can tell, you can't do that with the Git repo either, because that needs authentication that is manually created in the portal.
I think the only thing you can automate with the ARM template is the actual API Management service itself; then you need to use the Azure API to add and configure the APIs on it.
However, I have yet to figure out how to do that myself.
I actually have a service ticket open to get help on that.
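For completeness, and with the caveat that I have not verified this myself: the portal presumably calls the management REST API under the hood, so a script along these lines might work for adding an API from an externally reachable definition. The resource names and api-version below are assumptions to check against the current documentation:

// create-api.js - sketch of an APIM "create or update API" call.
// Requires Node 18+ (global fetch) and an AAD bearer token with
// rights on the resource group; all IDs below are placeholders.
const subscription = process.env.AZ_SUBSCRIPTION_ID;
const resourceGroup = 'my-rg';
const serviceName = 'my-apim';
const apiId = 'petstore';

const url =
  `https://management.azure.com/subscriptions/${subscription}` +
  `/resourceGroups/${resourceGroup}` +
  `/providers/Microsoft.ApiManagement/service/${serviceName}` +
  `/apis/${apiId}?api-version=2021-08-01`;

async function createApi(token, specUrl) {
  const res = await fetch(url, {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      properties: {
        format: 'openapi-link', // pull the spec from a public URL
        value: specUrl,
        path: 'petstore',
      },
    }),
  });
  if (!res.ok) throw new Error(`APIM create failed: ${res.status}`);
  return res.json();
}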
The API has changed slightly, so this works now.
The YAML file (calculatorApiFile) needs to be uploaded to blob storage first, but this can be done as part of the deployment pipeline.
{
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2019-01-01",
"name": "[concat(parameters('service_name'), '/b12b1d5ab8204cg6b695e3e861fdd709')]",
"dependsOn": [
"[resourceId('Microsoft.ApiManagement/service', parameters('service_name'))]"
],
"properties": {
"displayName": "Calculator",
"apiRevision": "1",
"description": "A simple Calculator ",
"path": "calc",
"value": "[concat(parameters('containerUri'), parameters('calculatorApiFile'), parameters('containerSasToken'))]",
"format": "openapi-link",
"protocols": [
"https"
],
"isCurrent": true
}
}
I figured out the answer: all I had to do was write an Azure Function that fetches the oas.yaml file from a private GitHub repository.
"variables":{
"swagger_json":"[concat(parameters('url_of_azurefunctionwithaccesskey'),'&&githuburi='parameter('raw_url'),'&githubaccesstoken=',parameter('personalaccesstoken')]"
},
"resources": [
{
"type": "Microsoft.ApiManagement/service/apis",
"name": "[concat(parameters('apimName') ,'/' ,parameters('serviceName'))]",
"apiVersion": "2018-06-01-preview",
"properties": {
"apiRevision": "[parameters('apiRevision')]",
"path": "pricecapture",
"contentValue": "[variables('swagger_json')]",
"contentFormat": "openapi-link"
}
}]
The Azure function that I had to write was something like this:
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using System.IO;
using System.Text;
public static async Task<HttpResponseMessage> Run(HttpRequest req, ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
var gitHubUri = req.Query["githuburi"];
var gitHubAccessToken = req.Query["githubaccesstoken"];
if (string.IsNullOrEmpty(gitHubUri))
{
var errorcontent = new StringContent("please pass the raw file content URI (raw.githubusercontent.com) in the request URI string", Encoding.ASCII);
return new HttpResponseMessage
{
StatusCode = HttpStatusCode.BadRequest,
Content = errorcontent
};
}
else if (string.IsNullOrEmpty(gitHubAccessToken))
{
var errorcontent = new StringContent("please pass the GitHub personal access token in the request URI string", Encoding.ASCII);
return new HttpResponseMessage
{
StatusCode = HttpStatusCode.BadRequest,
Content = errorcontent
};
}
else
{
var strAuthHeader = "token " + gitHubAccessToken;
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Accept", "application/vnd.github.v3.raw");
client.DefaultRequestHeaders.Add("Authorization", strAuthHeader);
var response = await client.GetAsync(gitHubUri);
return response;
}
}
If you load your YAML into a variable, it can be passed to the ARM template and used as the value:
deploy.bat:
SETLOCAL EnableDelayedExpansion
set API_DEPLOYMENT=<deployment name>
set API_GROUP=<deployment group>
set API=<api file path.yml>
set OPENAPI=
for /f "delims=" %%x in ('type %API%') do set "OPENAPI=!OPENAPI!%%x\n"
call az deployment group create -n %API_DEPLOYMENT% -g %API_GROUP% --mode Complete -f deploy.json -p openApi="!OPENAPI!"
ENDLOCAL
deploy.json (note the use of replace)
...
{
"type": "Microsoft.ApiManagement/service/apis",
"apiVersion": "2020-12-01",
"name": "[variables('apiName')]",
"properties": {
"path": "[variables('service')]",
"apiType": "http",
"displayName": "[variables('apiDisplayName')]",
"format": "openapi",
"value": "[replace(parameters('openApi'), '\\n', '\n')]"
},
...
},
...
I created a new Node Package to start sharing a project that I'm working on, but I'm having a bit of trouble getting my require statement to work.
Project: https://github.com/kcjonson/indigo
The issue that I'm having is that requiring my module this way:
var indigo = require('indigo');
does not work, but requiring it by a more explicit path like:
var indigo = require('indigo/lib/indigo');
works just fine.
I assume this is an issue with my package.json file which is as follows:
{
"author": {
"name": "Kevin Jonson",
"email": "kcjonson#gmail.com",
"url": "http://kevinjonson.com"
},
"name": "indigo",
"description": "Node.js Facade for Perceptive Home Automations Indigo home automation servers python REST API",
"version": "0.0.7",
"repository": {
"type": "git",
"url": "git://github.com/kcjonson/indigo.git" },
"directories": {
"lib": "./lib"
},
"main:": "lib/indigo.js",
"license": "MIT",
"private": false
}
I've successfully added it to NPM and running npm install on the project that is using it does download the correct latest version and places it in the node_modules directory as expected.
Any help would be appreciated, thanks in advance.
I am not sure if that is going to work, but it's still worth a try, I guess:
Try removing 'directories'.
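Also worth checking: the package.json in the question spells the entry-point key as "main:" with a stray colon inside the quotes. npm only recognizes "main", so require('indigo') falls back to looking for index.js in the package root, which would explain why the explicit lib/indigo path works. A quick way to see what Node actually resolves:

// Run from a project that has 'indigo' installed in node_modules;
// prints the absolute path of the file require('indigo') would load.
console.log(require.resolve('indigo'));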