How to connect to an IPFS node started programmatically using ipfs-core from a Java server?

I am programmatically starting an IPFS node using the JS ipfs-core npm package, with a custom repository that uses a different storage backend (similar to S3). Once the node is started on the AWS instance, I want to send requests to it from a remote client written in Java.
java-ipfs-http-client can connect to the API port, but the API and Gateway services do not get started when the node is created this way. The Java server will be running on a different machine.
Is it possible to access an IPFS node started programmatically with ipfs-core from a Java server running on a different instance?

Found the solution.
When we initialize the node programmatically, we need to start the API and Gateway manually, as follows:
import * as IPFS from 'ipfs-core'
import { HttpApi } from 'ipfs-http-server'
import { HttpGateway } from 'ipfs-http-gateway'

async function startIpfsNode () {
  const ipfs = await IPFS.create()

  const httpApi = new HttpApi(ipfs)
  await httpApi.start()

  const httpGateway = new HttpGateway(ipfs)
  await httpGateway.start()
}

startIpfsNode()
This will start the IPFS node along with the API and Gateway.
The API and Gateway addresses/ports can be changed programmatically in the following way:
const ipfs = await IPFS.create()
await ipfs.config.set('Addresses.API', '/ip4/127.0.0.1/tcp/5002');
await ipfs.config.set('Addresses.Gateway', '/ip4/127.0.0.1/tcp/9090');
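Note that these values are written to the repo config and are picked up when the API/Gateway are started. Since the Java client runs on a different machine, the API also has to listen on an externally reachable interface rather than 127.0.0.1. Below is a minimal sketch of passing the same overrides directly to IPFS.create; the 0.0.0.0 binding and port numbers are assumptions, so expose them carefully (for example, only inside your VPC):

const ipfs = await IPFS.create({
  config: {
    Addresses: {
      // Listen on all interfaces so a remote client can reach the API and Gateway.
      API: '/ip4/0.0.0.0/tcp/5002',
      Gateway: '/ip4/0.0.0.0/tcp/9090'
    }
  }
})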
Once the API is started, the IPFS node can be accessed from a Java program using java-ipfs-http-client.
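For example, a minimal Java sketch along the lines of the java-ipfs-http-client README; the host and port are assumptions matching the API address configured above:

import io.ipfs.api.IPFS;
import io.ipfs.api.MerkleNode;
import io.ipfs.api.NamedStreamable;
import io.ipfs.multihash.Multihash;

import java.io.IOException;
import java.util.List;

public class IpfsClientExample {
    public static void main(String[] args) throws IOException {
        // Multiaddr of the API port exposed by the JS node (hypothetical host).
        IPFS ipfs = new IPFS("/ip4/10.0.0.5/tcp/5002");

        // Add a small file and print its hash.
        NamedStreamable.ByteArrayWrapper file =
                new NamedStreamable.ByteArrayWrapper("hello.txt", "Hello IPFS".getBytes());
        List<MerkleNode> added = ipfs.add(file);
        Multihash hash = added.get(0).hash;
        System.out.println("Added: " + hash);

        // Read the content back through the same API.
        byte[] data = ipfs.cat(hash);
        System.out.println(new String(data));
    }
}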

Related

gcloud functions deploy issue

I am trying to deploy a Cloud Function, based on the GitHub link below, to back up the Datastore.
https://github.com/portsoc/cloud-simple-datastore-backup/blob/master/index.js
After updating the variable BUCKET_NAME with my Cloud Storage bucket name, I run it in Cloud Shell with the command node index.js and it backs up the Datastore successfully.
But when I go on to run the command below to deploy it:
gcloud functions deploy main \
  --runtime nodejs12 --trigger-http --allow-unauthenticated \
  --region=asia-southeast2
After a while, it gives me the error below:
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
Any suggestions on this?
Cloud Functions have a specific set of signatures that must be used.
I'm less familiar with JavaScript/Node.js, but I think the script you reference is intended to be invoked directly (as you do with node index.js), and that is incompatible with Cloud Functions.
Please review Write Cloud Functions to understand the signature type that you will need. You will probably have to tweak the authentication in the example to better meet your needs, too.
Almost certainly you don't want --allow-unauthenticated either.
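For reference, an HTTP-triggered function in the Node.js runtime just exports a handler with the (req, res) signature. A minimal sketch (not the poster's code); as far as I know the exported name has to match the entry point, which defaults to the name passed to gcloud functions deploy:

// index.js - minimal HTTP-triggered Cloud Function with entry point "main"
exports.main = async (req, res) => {
  // Always send a response, otherwise the invocation hangs until it times out.
  res.status(200).send('OK');
};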
After changing the code to the below, I can deploy it.
const { GoogleAuth } = require("google-auth-library");

// fill in your bucket name here:
const BUCKET_NAME = "gs://testinbbk10";

exports.myfunction = async (req, res) => {
  try {
    const auth = new GoogleAuth({
      scopes: "https://www.googleapis.com/auth/cloud-platform",
    });
    const client = await auth.getClient();
    const projectId = await auth.getProjectId();
    console.log(`Project ID is ${projectId}`);
    const res2 = await client.request({
      method: "POST",
      url: `https://datastore.googleapis.com/v1/projects/${projectId}:export`,
      data: {
        outputUrlPrefix: BUCKET_NAME,
      },
    });
    console.error("RESPONSE:");
    console.log(res2.data);
  } catch (error) {
    console.error("ERROR");
    console.error(error);
  }
};
But when I try to access the provided link (deploy link), it shows me the error: could not handle the request.
I am confused about how to properly deploy this as a Cloud Function. I just want to deploy one simple Cloud Function to back up the Datastore in Google Cloud.

How to access environment variables from JSON file?

I am using Firebase authentication in my Next.js app. I have stored my service account credentials in a file called secret.json. I want to hide those credentials in my next.config.js file. How can I access the credentials from the secret.json file? This is probably the same approach not only for Next.js apps but also for other apps. What is the common way to achieve this, or is there a Next.js-specific way?
You might consider storing your private key as an environment variable, which Next.js has built-in support for. You can then avoid the risks of exposing your secrets in next.config.js and services like Heroku and Vercel make it easy & secure to store your env vars in production.
To initialize Firebase on your server, you need just 3 things from your secret.json file:
project_id
client_email
private_key - store this as an env var (e.g., FIRESTORE_PRIVATE_KEY)
You can then use the firebase-admin package to initialize Firebase on your server:
import { cert, initializeApp } from 'firebase-admin/app'

const serviceAccount = {
  projectId: 'my-project',
  clientEmail: 'myServiceAccount@my-project.iam.gserviceaccount.com',
  privateKey: process.env.FIRESTORE_PRIVATE_KEY,
}

const credential = cert(serviceAccount)

initializeApp({ credential })
Saving the private_key as its own env var also avoids problems that arise from attempting to save/parse the entire service account JSON as a single env var (e.g., an ENAMETOOLONG error) and doesn't require you to do any string manipulation.
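One caveat that depends on your platform (an assumption, not part of the answer above): some dashboards store multi-line env values with literal \n escapes, in which case you would convert them back to real newlines before passing the key to cert():

// Only needed if FIRESTORE_PRIVATE_KEY was saved with escaped "\n" sequences.
const privateKey = (process.env.FIRESTORE_PRIVATE_KEY || '').replace(/\\n/g, '\n')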

PuppeteerSharp Does Not Work in Service Fabric Stateless Service

I am developing a web crawler that can render JavaScript websites, so I decided to use PuppeteerSharp, a .NET port of Puppeteer, the popular Node.js headless Chrome API. I am running Service Fabric's local development cluster on a Windows 10 development machine and have one stateless service in my solution.
I've created a Data folder under the service project's PackageRoot folder and put the .local-chromium folder contents there (containing the chrome.exe executable) so that it deploys as an independent data package of the service.
I've also placed this XML config line in ServiceManifest.xml file:
<DataPackage Name="Data" Version="1.0.0" />
So far it looks good, and the headless browser content is copied to the SF cluster's Data package directory properly.
Then, in my stateless service code, I try to launch the Puppeteer Chromium executable as follows:
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = true,
    ExecutablePath = _chromiumPath // e.g. $"{context.CodePackageActivationContext.GetDataPackageObject("Data").Path}\.local-chromium\Win64-706915\chrome-win\chrome.exe"
});

using (var page = (await browser.NewPageAsync()))
{
    Response renderResponse;
    try
    {
        renderResponse = await page.GoToAsync(webPage.AbsoluteUri, timeout);
        if (renderResponse.Status != System.Net.HttpStatusCode.OK)
        {
            return new RenderResult(RenderStatus.OtherFailure);
        }
        // other code
    }
    catch (TimeoutException)
    {
        return new RenderResult(RenderStatus.Timeouted);
    }
}
On this line: using (var page = (await browser.NewPageAsync())), my code (thread) simply hangs without returning; in the Debug console I see many thread exits, but no exception occurs. I was previously getting a System.IO.FileNotFoundException while I was fixing some other errors around correctly copying the Chromium folder contents, but those errors are gone now, so it seems the code finds the .exe but somehow cannot start PuppeteerSharp's headless mode.
Does that mean I cannot simply run an external Chromium .exe binary with Service Fabric's native application model? Should I use Docker and Linux containers instead?

deploy hashicorp vault without persistent storage in openshift

How can I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
I need to deploy the Vault server in the OpenShift cluster as a normal user (not a cluster admin). I followed the URL, but its vault.yaml file uses a persistent volume (/vault/file), which requires permissions to create persistent containers that my account does not have. So I removed the PV mount paths in vault-config.json as shown below, but I am seeing the below error.
{"backend":
{"file":
{"path": "/tmp/file"}
},
...
...
}
Is it possible to create the Vault server without a PV as a normal user, e.g. using a local file path (/tmp/file) as the backend storage?
What is the alternative way to deploy HashiCorp Vault in OpenShift without a PV?
Below is the error when running with a PV:
--> Scaling vault-1 to 1
--> FailedCreate: vault-1 Error creating: pods "vault-1-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
error: update acceptor rejected vault-1: pods for rc 'dev-poc-environment/vault-1' took longer than 600 seconds to become available
How can I deploy HashiCorp Vault in OpenShift without using persistent volumes (PVs)?
You can use the in-memory storage backend, as mentioned here. Your Vault config then looks something like this:
$ cat config.hcl
disable_mlock = true

storage "inmem" {}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui = true
max_lease_ttl = "7200h"
default_lease_ttl = "7200h"
api_addr = "http://127.0.0.1:8200"
But with this, data/secrets are not persistent.
Another way is to use the file storage backend, so that all secrets are stored encrypted at the configured path.
So now your storage stanza changes to:
storage "file" {
path = "ANY-PATH"
}
POINTS TO BE NOTED HERE:
The path defined should have read/write permissions for data/secrets.
This could be any path inside the container, just to avoid the dependency on a persistent volume.
But what is the problem with this model? When the container restarts, all the data will be lost, as the container's filesystem does not persist data.
No high availability: the filesystem backend does not support high availability.
So what should be the ideal solution? Something that makes our data highly available, which is achieved by using dedicated backend storage, such as a database.
For simplicity, let us take PostgreSQL as backend storage.
storage "postgresql" {
connection_url = "postgres://user123:secret123!#localhost:5432/vault"
}
So now the full config looks something like this:
$ cat config.hcl
disable_mlock = true

storage "postgresql" {
  connection_url = "postgres://vault:vault@vault-postgresql:5432/postgres?sslmode=disable"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_disable   = 0
  tls_cert_file = "/etc/service/vault-server/vault-server.crt"
  tls_key_file  = "/etc/service/vault-server/vault-server.key"
}

ui = true
max_lease_ttl = "7200h"
default_lease_ttl = "7200h"
api_addr = "http://127.0.0.1:8200"
So choosing a backend storage helps you persist your data even if the container restarts.
As you are specifically looking for a solution on OpenShift, create a PostgreSQL container using the template provided and make Vault point to it using the service name, as explained in the config.hcl above.
Hope this helps!

generateDataKey error Signature expired on AWS KMS?

I am working with my client, so I cloned the git repo and built the application, which uses AWS KMS to generate a data key.
It all works well on the live server, but it fails in my local environment.
Here is the code snippet and the resulting error.
const AWS = require('aws-sdk');

AWS.config.update({ region: 'eu-central-1' });

const kms = new AWS.KMS({ apiVersion: '2014-11-01' });

kms.generateDataKey({
  KeyId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
  KeySpec: 'AES_256',
}).promise()
  .catch(err => {
    console.error('generateDataKey error', err.message, err.stack);
    throw err;
  })
  .then(data => {
    console.log(data);
  });
Is there a way to fix this error?
"GenerateDataKey error Signature expired...."
When you send a request signed using the AWS SigV4 protocol (to KMS or any other AWS service), the request includes a timestamp from when the signature was generated. The tolerance is 5 minutes. This mechanism is in place to make replay attacks harder (they essentially have a smaller window in which to be performed). More information here.
Since the same request works fine on your server but fails locally, I think the clock on your local workspace is off by more than five minutes.
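The usual fix is to resynchronize the local clock (e.g. via NTP or Windows time sync). As a sketch of a workaround, the JavaScript SDK v2 also has a correctClockSkew option that lets it compensate for a skewed client clock; the region below is just carried over from the snippet above:

const AWS = require('aws-sdk');

// Ask the SDK to detect local clock skew and adjust request timestamps
// when a request fails with a time-related signature error.
AWS.config.update({
  region: 'eu-central-1',
  correctClockSkew: true,
});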