How to add files to IPFS using an ingress URL from the command-line tool ipfs-cluster-ctl?

I have deployed ipfs-cluster as a StatefulSet with 3 replicas in an Azure Kubernetes cluster. I created a LoadBalancer service for IPFS and used the load balancer service IP to add files with ipfs-cluster-ctl, like below:
ipfs-cluster-ctl --host /ip4/20.94.98.25/tcp/9094 add test.txt
The above command returns a CID as output. A sample output looks like:
added Qmeeyj7hldjsj9XCoLSK6dY7ZTVTt8YcjfHXAuTzhCrz test.txt
Now I have created an ingress using HAProxy for the ipfs-cluster service and tried to access the added files using the ingress URL. A sample URL looks like:
http://ipfs.testing.example.com/ipfs/Qmeeyj7hldjsj9XCoLSK6dY7ZTVTt8YcjfHXAuTzhCrz
The above URL works fine and shows the file content of test.txt.
But now I need to use the ingress URL instead of the load balancer IP to add files with ipfs-cluster-ctl. I can't find any reference on how to achieve this. Can anyone please guide me on adding files to IPFS using the ingress URL?
Thanks in Advance!
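For reference, ipfs-cluster-ctl's --host flag takes a multiaddress, so pointing it at a DNS name rather than an IP would look roughly like the sketch below. This assumes the ingress also routes this hostname to the cluster REST API on port 9094 (the /ipfs/... path above serves the gateway, not the API) and that a /dns4 multiaddress is accepted; treat it as an untested suggestion rather than a confirmed answer.
# hypothetical: requires an ingress rule that forwards this hostname to the ipfs-cluster REST API (port 9094)
ipfs-cluster-ctl --host /dns4/ipfs.testing.example.com/tcp/80 add test.txt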

Related

Can't install namespace Helm chart

I'm trying to put together a helm chart for provisioning namespaces/projects in OpenShift.
Helm version is 3.9.3
The templates folder has YAML files for the namespace, compute quota, docker pull secret, and a rolebinding for a service account.
The testvalues.yaml file is very simple:
namespace:
  name: "mytest"
  team: "DevOps"
  description: "Test Namespace Created with Helm"
When I try to run helm upgrade --install testnamespace ./namespaceChart --values testvalues.yaml, I get the error "namespaces 'mytest' not found".
However, if I remove the quota, secret, and rolebinding files from the templates directory (leaving only namespace.yaml) and run the same command, it works fine and an empty namespace is created. If I then re-add the other resource YAML files and run the same command a third time, it works and adds the missing resources accordingly.
The install order is supposed to create the namespace first, correct? It seems like it's not creating the namespace correctly, or not waiting until it is done before trying the other resources.
I've tried adding the --create-namespace option to the command and that doesn't work either.
Is there something I'm missing? Can I target only the namespace.yaml file on the first round, then just run the command again to complete the rest?
I realized my problem while typing this question up.
My namespace YAML was using:
kind: Project
apiVersion: project.openshift.io/v1
Because that is what our current project spaces show when I inspect their YAML in the Console UI.
Once I switched to:
kind: Namespace
apiVersion: v1
Everything gets set up perfectly fine in one shot. I'm guessing this is because Helm doesn't recognize the "Project" kind as equivalent to a namespace and doesn't place it at the top of the creation order, hence the "not found" error: it actually sees the quota as the first resource to build.
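For illustration, a minimal templates/namespace.yaml matching the values file above might look like the following sketch (the label and annotation keys are assumptions, not taken from the original chart):
# templates/namespace.yaml -- sketch, using the values from testvalues.yaml above
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace.name }}
  labels:
    team: {{ .Values.namespace.team | quote }}
  annotations:
    description: {{ .Values.namespace.description | quote }}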

Using the AWS CLI without a home directory

I need to use the AWS CLI on an OpenShift cluster that is quite restricted: it looks like the home directory is set to /, while the user in the container does not have permission to write to /.
The only directory writeable by that user is /tmp. I need to use the AWS CLI from within a pod of this OpenShift cluster, and I came across the environment variables AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE, so I would place a credentials file and a config file in /tmp.
When running aws configure list-profiles with this setup, only the one profile from AWS_SHARED_CREDENTIALS_FILE is listed, not the one from AWS_CONFIG_FILE.
So it looks to me like AWS_CONFIG_FILE is not respected by the AWS CLI.
Do you have an idea why these files might not be respected by the aws executable? Is there a way to pass the location of these files directly to the CLI as a parameter or something similar?
Instead of configuring files for the AWS CLI, I would assume you could set the following two environment variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and issue your CLI commands immediately:
bruno@pop-os ~> export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
bruno@pop-os ~> export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bruno@pop-os ~> aws cloudformation list-stacks --region us-east-2
{
    "StackSummaries": []
}
To answer this part:
"So it looks to me like AWS_CONFIG_FILE is not respected by aws cli."
The AWS CLI does respect it; from the documentation:
You can specify a non-default location for the config file by setting
the AWS_CONFIG_FILE environment variable to another local path.
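Regarding the file-based setup from the question itself, pointing both variables at the writable /tmp directory would look something like this sketch (the exact file names under /tmp are assumptions):
# sketch: explicit config/credentials locations under the writable /tmp
export AWS_CONFIG_FILE=/tmp/aws_config
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws_credentials
aws configure list-profiles
# alternatively, overriding HOME makes the CLI look for /tmp/.aws/config and /tmp/.aws/credentials
export HOME=/tmp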

What is a good way to deploy secret Java key stores in an OpenShift environment?

We have a Java web application that is supposed to be moved from a regular deployment model (install on a server) into an OpenShift environment (deployment as docker container). Currently this application consumes a set of Java key stores (.jks files) for client certificates for communicating with third party web interfaces. We have one key store per interface.
These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files and on startup it will read certificates from them and then use them to communicate with the third-party systems.
Now when moving to an OpenShift deployment, we have one Docker image with the application that is going to be used for all environments (development, test and production). All configuration is given as environment variables. However, we cannot pass jks files as environment variables; they need to be mounted into the Docker container's file system.
As these certificates are a secret we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC).
Secrets don't seem to work for us, as they are pretty much just key-value pairs that you can mount as a file or have handed in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to start a shell container mounting the PVC and copy the files into it manually using the OpenShift command-line tools, but I was hoping for a somewhat less manual solution.
Have you found a clever solution to this or a similar problem where you needed to get files into a container?
It turns out that I misunderstood how secrets work. They are indeed key-value pairs that you can mount as files. The value can, however, be any base64-encoded binary that will be mapped as the file contents. So the solution is to first encode the contents of the JKS file to base64:
cat keystore.jks | base64
Then you can put this into your secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  keystore.jks: "<base 64 from previous command here>"
Finally you can mount this into your docker container by referencing it in the deployment configuration:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
      - name: "my-container"
        ...
        volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: "my-secret"
          items:
          - key: keystore.jks
            path: keystore.jks
This will mount the secret volume secrets at /mnt/secrets and make the entry with the name keystore.jks available as the file keystore.jks under /mnt/secrets.
I'm not sure if this is really a good way of doing it, but it is at least working here.
You can add and mount the secrets as stated by Jan Thomä, but it's easier like this, using the oc command-line tool:
./oc create secret generic crnews-keystore --from-file=keystore.jks=$HOME/git/crnews-service/src/main/resources/keystore.jks --from-file=truststore.jks=$HOME/git/crnews-service/src/main/resources/truststore.jks --type=opaque
This can then be added via the UI: Applications -> Deployments -> [your deployment] -> "Add config files",
where you can choose what secret you want to mount where.
Note that the name=value pairs (e.g. truststore.jks=) will be used like filename=base64-decoded content.
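If you prefer to stay on the command line for the mount step as well, oc set volume should achieve the same thing as the UI; a sketch, where the deployment config name, volume name, and mount path are assumptions:
oc set volume dc/crnews-service --add --name=keystore-volume \
  --type=secret --secret-name=crnews-keystore --mount-path=/mnt/keystores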
My generated base64 was multi-line and I was getting the same error.
The trick is to use the -w0 argument to base64 so that the whole encoded output is on one line:
base64 -w0 ssl_keystore.jks > test
The above will create a file named test containing the base64 on one line; copy and paste it into a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: staging-ssl-keystore-jks
  namespace: staging-space
type: Opaque
data:
  keystore.jks: your-base64-in-one-line
Building upon what both @Frischling and @Jan-Thomä said, and in agreement with Frischling as his way was easier and took care of both the trust and cert keystores: after adding the keystores as a secret, under Applications -> Deployments -> [your deployment's name], click the Environment link and add the following system properties:
Name: JAVA_OPTS_APPEND
Value: -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
As indicated, this effectively appends the keystore file paths and passwords to the Java options used by the application, for example JBoss/WildFly or Tomcat.
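The same setting can also be written directly into the deployment configuration instead of the web console; a sketch, reusing the paths and placeholder passwords from above (to be placed under spec.template.spec.containers in the DeploymentConfig):
# sketch: JAVA_OPTS_APPEND as a container environment variable
env:
- name: JAVA_OPTS_APPEND
  value: >-
    -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks
    -Djavax.net.ssl.keyStorePassword=changeme
    -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
    -Djavax.net.ssl.trustStorePassword=changeme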

AWS Elastic Beanstalk application folder on EC2 instance after deployed?

My context
I'm having errors in my deployment using AWS EB with my Flask application.
Now I'm inside the EC2 instance via eb ssh and need to explore the deployed source code of the application.
My problem
Where is the deployed application folder?
The source code is zipped and placed in the following directory:
/opt/elasticbeanstalk/deploy/appsource/source_bundle
There is no file extension but it is in the zip file format:
[ec2-user@ip ~]$ file /opt/elasticbeanstalk/deploy/appsource/source_bundle
/opt/elasticbeanstalk/deploy/appsource/source_bundle: Zip archive data, at least v1.0 to extract
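Since it is a zip archive, its contents can be listed or extracted directly; a sketch (the extraction target directory is arbitrary):
unzip -l /opt/elasticbeanstalk/deploy/appsource/source_bundle
unzip /opt/elasticbeanstalk/deploy/appsource/source_bundle -d /tmp/source_bundle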
Alternatively, by searching for a specific/unique filename from the source code, we can find the location of our application folder, which in AWS EB turns out to be:
/opt/python/current
/opt/python/bundle/2/app
P.S. To search for YOUR_FILE.py:
find / -name YOUR_FILE.py -print

How to use Google API keys based on Heroku application name

I've created a few different "environments" for my app that is hosted on Heroku, so I have:
appName-staging.heroku.com
appName-production.heroku.com
I want to use different Google API keys for these applications. How do I do this?
I've created a google.yml file that looks like:
development:
  api_key: 'ABCXYZ'
production:
  api_key: 'DEFXYZ'
So I use ABCXYZ when developing locally and DEFXYZ for appName-production.heroku.com.
The question is, how do I get appName-staging.heroku.com to use a different key? Since every application deployed to Heroku is considered to be in "production", both appName-staging.heroku.com and appName-production.heroku.com use the same key.
You could add a heroku config variable to each environment, allowing you to identify each one from within the app.
Something along the lines of:
$ heroku config:add APP_NAME_ENV=production --app appName-production
$ heroku config:add APP_NAME_ENV=staging --app appName-staging
Then you could grab the current environment from within your app using:
ENV['APP_NAME_ENV']
And if you've got your YAML file as a hash called something like GOOGLE_KEYS, the following would return the correct key for a given environment:
GOOGLE_KEYS[ENV['APP_NAME_ENV']]
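For completeness, here is a sketch of how that GOOGLE_KEYS hash might be built from the google.yml file in a Rails-style initializer (the Rails setup, file location, and constant name are assumptions, not from the original answer):
# config/initializers/google_keys.rb -- sketch, assuming a Rails app
require 'yaml'
GOOGLE_KEYS = YAML.load_file(Rails.root.join('config', 'google.yml'))
# e.g. GOOGLE_KEYS[ENV['APP_NAME_ENV']]['api_key'] returns 'DEFXYZ' when APP_NAME_ENV=production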
The previous answer definitely works, but it doesn't account for the potential security risks that come with checking files containing private keys into source control. Having your google.yml file in source control will allow anyone with access to your repo to see your private API keys.
A more secure solution would be to delete the google.yml file and create the same environment variable on your staging and production apps, with different values:
$ heroku config:add GOOGLE_API_KEY=<production key> --app appName-production
$ heroku config:add GOOGLE_API_KEY=<development key> --app appName-staging
Then, whenever it is needed, you can refer to it in code via:
ENV['GOOGLE_API_KEY']
This will allow you to share code without sharing your private API keys.
Some more information on using environment variables on Heroku can be found at https://devcenter.heroku.com/articles/config-vars