OpenShift router with custom template: permission denied - openshift

I've followed the example shown in https://docs.openshift.com/enterprise/3.2/install_config/install/deploy_router.html#using-configmap-replace-template but the new Pod crashes after start with a "Permission denied" when opening the map files during templating (generation of haproxy-config and the map files).
As soon as I remove the TEMPLATE_FILE env var, a new Pod is started and everything works again - it almost seems like a different account is used when a custom template is set.
I0405 11:03:35.627827 1 template.go:260] Starting template router (v3.9.0-alpha.4+9ab7a71)
I0405 11:03:35.630984 1 metrics.go:157] Router health and metrics port listening at 0.0.0.0:1936
I0405 11:03:35.636222 1 router.go:228] Router is including routes in all namespaces
E0405 11:03:35.837826 1 limiter.go:137] error creating config file /var/lib/haproxy/conf/os_route_http_redirect.map: open /var/lib/haproxy/conf/os_route_http_redirect.map: permission denied

It can be one of two things:
You're using a 3.9 router but looked up the instructions for a 3.2 release. That document tells you how to copy the original configuration file from a router that matches your release, so make sure your custom template is based on the appropriate router version.
There's a bug in that release (you're using an alpha.4 build).
I would recommend first retrying with a template that matches your router version, since HAProxy has been upgraded in 3.9; if that still doesn't work, look through the openshift/origin GitHub issues and ask there.
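Roughly, rebuilding the ConfigMap from a version-matched template would look like the sketch below; the pod name placeholder, the customrouter ConfigMap name and the /var/lib/haproxy/conf/custom mount path are assumptions taken from the docs example, so adjust them to your deployment:
# copy the stock template out of a router pod that matches your actual router image
oc rsh <router-pod-name> cat haproxy-config.template > haproxy-config.template
# edit haproxy-config.template as needed, then rebuild the ConfigMap from it
oc delete configmap customrouter
oc create configmap customrouter --from-file=haproxy-config.template
# point the router at the custom template again and let it redeploy
oc set env dc/router TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template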

Related

Unable to resolve .local domains with getent even though avahi-resolve-host-name succeeds

Trying to set up a network printer with CUPS.
Followed online documentation that stated:
To discover or share printers using DNS-SD/mDNS, setup .local hostname
resolution with Avahi and restart cups.service.
Followed directions for setting up Avahi to the point where avahi-browse --all --ignore-local --resolve --terminate and avahi-resolve-host-name my-domain.local are both working.
But getent hosts my-domain.local fails to resolve. This results in CUPS failing to print because it can't find my-printer.local.
I read the nss-mdns GitHub page and saw a note that made me think I didn't need an /etc/mdns.allow file:
nss-mdns has a simple configuration file /etc/mdns.allow for enabling
name lookups via mDNS in other domains than .local.
Note: The "minimal" version of nss-mdns does not read /etc/mdns.allow under any circumstances. It behaves as if the file
does not exist.
In the recommended configuration, no /etc/mdns.allow file is present.
But then I saw the last note in that section:
If, during a request, the system-configured unicast DNS (specified in
/etc/resolv.conf) reports an SOA record for the top-level local name,
the request is rejected. Example: host -t SOA local returns something
other than Host local not found: 3(NXDOMAIN). This is the unicast SOA
heuristic.
I tested that out on my machine and sure enough, I was getting something OTHER than Host local not found....
I added an /etc/mdns.allow file with a line for .local. and one for .local, and now I can ping my-printer.local.
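For reference, the resulting /etc/mdns.allow is just these two lines; this assumes the hosts line in /etc/nsswitch.conf uses the full mdns module rather than mdns_minimal, which (per the note quoted above) never reads this file:
.local.
.local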

Cannot GET /api/forge/oauth/callback

I'm trying to test out this demo on my own Windows machine: https://github.com/Autodesk-Forge/forge-bim360-clashissue
I've successfully got the template running with these commands:
npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=<<YOUR CALLBACK URL>>
npm run nodemon
I've added a new app within the Forge My Apps interface.
I've added the provisions for the BIM 360 Account interface.
I can connect to my localhost, and when I press ALLOW to authenticate and log in to the Autodesk account, I get redirected to the following URL with the following error:
http://localhost:3000/api/forge/oauth/callback?code=TOAq...
Cannot GET /api/forge/oauth/callback
How can I get past this error?
It looks like a configuration mismatch. You have configured the callback to be http://localhost:3000/api/forge/oauth/callback but according to https://github.com/Autodesk-Forge/forge-bim360-clashissue/blob/master/server/endpoints/oauth.endpoints.js#L72 your server actually expects the callback on a different URL: http://localhost:3000/api/forge/callback/oauth.
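In other words, assuming you keep the route as it is defined in the repo and the default port 3000, the callback you configure (both in the FORGE_CALLBACK_URL variable and as the Callback URL of your Forge app) would need to be something like:
set FORGE_CALLBACK_URL=http://localhost:3000/api/forge/callback/oauth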

Keycloak authentication with Electron App

Hi, I've been stuck on this for days! I'm trying to use Keycloak to authenticate my Electron app after converting my React app using this guide.
When I run 'npm run electron:dev', Keycloak redirects to the login page. However, when I run 'npm run electron:prod', this fails.
Logs from the Keycloak server show:
[Server:server-one] 08:58:31,575 WARN [org.keycloak.events] (default task-3) type=LOGIN_ERROR, realmId=codingpedia, clientId=my-ui, userId=null, ipAddress=127.0.0.1, error=invalid_redirect_uri, redirect_uri=file:///home/mycompany/john/projects/boilerplate-javascript-electron/app/build/index.html
Notice that the redirect_uri is 'file:///...' which I believe to be the cause of it.
I've also tried changing the below, but it does not resolve the problem.
// import createHistory from 'history/createBrowserHistory';
import createHistory from 'history/createHashHistory';
Why is this working in dev but not in prod? Is there something I'm missing? Thank you in advance!
It works in dev but not in prod most likely because, in prod, the "index.html" file is loaded straight from your computer (file:///home/mycompany/john/projects/boilerplate-javascript-electron/app/build/index.html), so that file:// URL is what gets sent to Keycloak as the redirect_uri.
This Stack Overflow thread tells you how to properly set the redirect_uri parameter through the admin console.
Note: make sure you can remotely access your index.html in prod, using a browser or any other HTTP client (a plain HTTP GET).
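As a sketch only (the exact values depend on how your packaged app is served, so treat these as placeholders): in the admin console, under Clients -> my-ui -> Settings, the Valid Redirect URIs list has to contain whatever the packaged app actually sends, for example:
file:///home/mycompany/john/projects/boilerplate-javascript-electron/app/build/index.html
http://localhost:3000/*
The first entry matches the redirect_uri from the log above; the second would only apply if you served the production build over HTTP instead of loading it from disk.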

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology: there are few tutorials and few solutions to a lot of questions, and the tutorial does not mention the possible error cases you can hit when following the commands, which means there is also no documented solution for those errors.
I have joined the Composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question there without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I have to run composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice#0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would not be any restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again; I also deleted my .bna and my network card files so everything would be recreated, but the same error resulted.
My other attempt was to update the business network, but that didn't work either: the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the Playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require creating another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched names, but I'm fully aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed the permission rules to my own
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my .bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my network admin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric, starting it up again and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but it failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - check that once you do get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
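One quick way to verify which cards (and therefore which business network names) you actually have imported is the card list command; this is just a sanity check, not part of the fix:
composer card list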
The procedure for the upgrade would normally be the one below.
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to your permissions.acl to include the system ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
    description: "System ACL to permit all access"
    participant: "org.hyperledger.composer.system.Participant"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}

rule NetworkAdminUser {
    description: "Grant business network administrators full access to user resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "**"
    action: ALLOW
}

rule NetworkAdminSystem {
    description: "Grant business network administrators full access to system resources"
    participant: "org.hyperledger.composer.system.NetworkAdmin"
    operation: ALL
    resource: "org.hyperledger.composer.system.**"
    action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to confirm that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

Hosting a keystonejs app with openshift

I keep getting a 503 but no errors in the log when trying to host my keystone.js app on OpenShift. Has anyone successfully hosted a Keystone app with them? Everything works fine on localhost.
I am using a fresh install of keystone.js with no blog or cloudinary.
You're providing very little information to go on for a definitive answer. What options are you passing to keystone.init()? Are you using dotenv? If so, what are you setting there? Did you set any environment variables using rhc set-env?
I ask because a common (though by no means the only) culprit of 503 errors in Node.js applications on OpenShift is a port number overriding OpenShift's. Keystone looks at process.env.PORT before it looks at process.env.OPENSHIFT_INTERNAL_PORT. So, if you have PORT set in your .env or with rhc set-env, it will take precedence over OPENSHIFT_INTERNAL_PORT.
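To rule that out, you could check what is actually set on the gear; the commands below assume the same rhc client used elsewhere in this answer and are only a quick sanity check:
rhc env list -a your_app_name
# only if you find a PORT entry you added yourself by mistake:
rhc env unset PORT -a your_app_name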
I came across a similar question on the KeystoneJS Google Group. In that other case the developer had added a MongoDB cartridge to his app, but had not set the connection string for the cartridge in Keystone.
If this is your case as well, you need to set the Keystone mongo option in keystone.init() or using keystone.set('mongo', 'connection_string'). When you created the cartridge you got a URL and some credentials. OpenShift passes these to your application in environment variables. You can build the mongo connection string as follows:
var connectionString = process.env.OPENSHIFT_MONGODB_DB_USERNAME + ":" + process.env.OPENSHIFT_MONGODB_DB_PASSWORD + "@" + process.env.OPENSHIFT_MONGODB_DB_HOST + '/' + process.env.OPENSHIFT_APP_NAME;
keystone.set('mongo', connectionString);
or
keystone.init({
  ...
  mongo: connectionString,
  ...
});
Or you can use rhc set-env to set the MONGO environment variable as follows:
rhc set-env MONGO=mongodb://{username}:{password}@{connection url}/{dbname} -a your_app_name
The connection URL above is the one you got from OpenShift when you created the cartridge. It looks like a standard MongoDB URL (e.g. mongodb://127.6.85.129:27017/).
These are just my best guesses, given that your question is a bit thin on details. You may want to post some more specifics so we can more accurately assess your problem.