Hosting a KeystoneJS app with OpenShift

I keep getting a 503, but no errors in the log, when trying to host my KeystoneJS app on OpenShift. Has anyone successfully hosted a Keystone app with them? Everything works fine on localhost.
I am using a fresh install of keystone.js with no blog or cloudinary.

You're providing very little information to go on for a definitive answer. What options are you passing to keystone.init()? Are you using dotenv? If so, what are you setting there? Did you set any environment variables using rhc set-env?
I ask because a common (though by no means the only) culprit of 503 errors in Node.js applications on OpenShift is a port number overriding OpenShift's. Keystone looks at process.env.PORT before it looks at process.env.OPENSHIFT_INTERNAL_PORT. So, if you have PORT set in your .env or with rhc set-env, it will take precedence over OPENSHIFT_INTERNAL_PORT.
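For example, here is a minimal sketch (the app name and fallback port are placeholders, not from your setup) that makes OpenShift's port win over any stray PORT value:
var keystone = require('keystone');

keystone.init({
    'name': 'my-app', // placeholder app name
    // Prefer OpenShift's internal port; fall back to PORT, and then to a
    // local development default, only when it is not set.
    'port': process.env.OPENSHIFT_INTERNAL_PORT || process.env.PORT || 3000
});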
I came across a similar question on the KeystoneJS Google Group. In that other case the developer had added a MongoDB cartridge to his app, but had not set the cartridge's connection string in Keystone.
If this is your case as well, you need to set the Keystone mongo option in keystone.init() or using keystone.set('mongo', 'connection_string'). When you created the cartridge you got a URL and some credentials. OpenShift passes these to your application in environment variables. You can build the mongo connection string as follows:
// Note the mongodb:// scheme and the '@' between the credentials and the host.
var connectionString = 'mongodb://' +
    process.env.OPENSHIFT_MONGODB_DB_USERNAME + ':' +
    process.env.OPENSHIFT_MONGODB_DB_PASSWORD + '@' +
    process.env.OPENSHIFT_MONGODB_DB_HOST + ':' +
    process.env.OPENSHIFT_MONGODB_DB_PORT + '/' +
    process.env.OPENSHIFT_APP_NAME;
keystone.set('mongo', connectionString);
or
keystone.init({
    ...
    mongo: connectionString,
    ...
});
Or you can use rhc set-env to set the MONGO environment variable as follows:
rhc set-env MONGO=mongodb://{username}:{password}@{connection url}/{dbname} -a your_app_name
The {connection url} above is the host and port you got from OpenShift when you created the cartridge. OpenShift reports it as a standard MongoDB URL (e.g. mongodb://127.6.85.129:27017/), so here you would use just the 127.6.85.129:27017 part.
These are just my best guesses, given that your question is a bit thin on details. You may want to post some more specifics so we can more accurately assess your problem.

Related

SQL not working when I use the network host on my phone or any other device

I am currently making a web app and I'm using WebStorm for my front and back end. My stack is as follows: Vue 3 (with axios), Node.js (with Express and cors, as well as mysql and mysql2), and of course MySQL, for which I am using a server on AWS.
Below is my code for allowing more than one origin to connect to the Node.js backend.
const corsOptions = {
    // origins are matched exactly, so no trailing slash on any entry
    origin: ["http://localhost:3000", "http://192.168.56.1:3000", "http://10.3.14.231:3000"]
}
The last two entries in the array are the "Network" links I get when I run npm run dev -- --host. I then get this in the terminal:
> Network: http://192.168.56.1:3000/
> Network: http://10.3.14.231:3000/
> Local: http://localhost:3000/
So far, when I open any of the network links on my phone, the SQL part doesn't work. The display and front end come up just fine, and when I create an account Firebase works, but nothing is sent to MySQL.
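For reference, here is roughly how those corsOptions would be wired into the Express backend (a minimal sketch; the actual server code and port are not shown in the question):
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors(corsOptions)); // corsOptions as defined above

// Listening on 0.0.0.0 (all interfaces) is what makes the API reachable
// from a phone on the same network, not just from localhost.
app.listen(5000, '0.0.0.0'); // 5000 is a placeholder port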
If there are any questions, please ask for clarification. I don't know if this will be an issue when I actually launch the app, but I can't find anything else on this problem.

SFTP using pysftp on OpenShift

I have a Django application running on OpenShift. From the OpenShift server I move a file to a private server. I can do this by setting hostkeys to None and using a password; however, that password changes every month, so I need to use SSH keys.
I have the following on the private server: known_hosts, id_rsa, id_rsa.pub.
When I try to connect from openshift I receive the error "No Known Hostkeys."
I know that since this is a dockerized application running in the cloud this might be a bit tricky to answer, but I could really use some help.
Thank you,
I have attempted to put the id_rsa.pub from the private server into a file and use hostkeys.load(id_rsa.pub) and then connect without a password.
Setup
/opt/app-root/src/.ssh/known_hosts - I have the known_hosts from the private server
/views.py -
id_rsa_pub = "known_hosts"
id_rsa_pub = settings.STATICFILES_DIRS[0] + '/' + id_rsa_pub
known_hosts = '/opt/app-root/src/.ssh/known_hosts'
cnopts = pysftp.CnOpts()
print("id_rsa_pub below:")
print(id_rsa_pub)
cnopts.hostkeys.load(known_hosts)
with pysftp.Connection(host=host, username=username,
                       private_key=id_rsa_pub, cnopts=cnopts) as srv:
id_rsa_pub is located in static files
The error is "pysftp.exceptions.HostKeysException: No Host Keys Found"
Alright, this was quick.
I never solved the hostkey issue; however, if you use private_key=id_rsa_pub and you have a path to it on OpenShift in your src somewhere, the connection will go through. Make sure to set cnopts.hostkeys = None.
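A minimal sketch of that working setup (the host, username, and paths below are placeholders, not taken from the original app; note that pysftp's private_key parameter expects the private key file, e.g. id_rsa, not the .pub):
import pysftp

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None  # skip host key verification entirely

host = 'private.server.example'                # placeholder
username = 'deploy'                            # placeholder
private_key = '/opt/app-root/src/.ssh/id_rsa'  # key file shipped with the app

with pysftp.Connection(host=host, username=username,
                       private_key=private_key, cnopts=cnopts) as srv:
    srv.put('/tmp/report.csv', '/incoming/report.csv')  # example transfer
Keep in mind that setting hostkeys to None disables host key checking, which is why the "No Host Keys Found" error goes away; it also means the server's identity is no longer verified.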
Thanks

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation / tutorial describing how to set up this remote server properly, or on puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For the Docker part you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an SSH server is running on your localhost and that keys have been generated.
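On macOS that means enabling Remote Login and having a key pair in place; for example (standard macOS/OpenSSH commands, not taken from the original answer):
sudo systemsetup -setremotelogin on   # start the built-in SSH server
ssh-keygen -t rsa                     # generate a key pair if you don't already have one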
I didn't find any documentation for puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default PATH. If you ssh into a machine to run a specific command (rather than a shell), you get that default PATH. It does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo, and restart sshd afterwards for the change to take effect)
create a file ~/.ssh/environment with the PATH that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
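For example (output truncated to the relevant line):
$ ssh localhost env
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin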

How to get gliderlabs/registrator running on Bluemix

I'm trying to get gliderlabs/registrator running on Bluemix, but I'm having issues, as the container won't start with:
400 The plain HTTP request was sent to HTTPS port
What I think is happening is that my Docker host is running on tcp://containers-api.eu-gb.bluemix.net:8443, so the Docker REST APIs are HTTPS. However, I suspect gliderlabs/registrator uses HTTP by default.
So, anyone got any ideas on how to get this to work?
Steve
Looking at that package, it uses the library github.com/fsouza/go-dockerclient to access the Docker remote API, specifically the NewClientFromEnv() call. Per the readme for go-dockerclient, it should pick up the env vars for HTTPS if they're there, i.e. make sure you're exporting all three env vars: DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH.
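For example (the host value comes from the question above; the cert path is a placeholder for wherever your Bluemix certificates live):
export DOCKER_HOST=tcp://containers-api.eu-gb.bluemix.net:8443
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/path/to/your/bluemix/certs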
Another possibility, per the comments about registrator: you may wish to check that you're using gliderlabs/registrator:master instead of gliderlabs/registrator:latest. Just pulled both to check, and "latest" is 14 months old, vs. 6 days for "master".

Restarting a MySQL server managed by Ambari

I have a scenario where I need to change several parameters of a Hadoop cluster managed by Ambari to document the performance of a particular application. The change in the configs entails a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all Hadoop service components, but I am not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is it the case that a mere stop and start of mysqld on the appropriate machine is enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when it is carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
Found a solution to the problem. I used the following commands, via the Ambari REST API, to change configurations and restart services from the backend.
Log in to the host on which the Ambari server is running and use the bundled configs.sh script as described below.
Modifying configuration files
#!/bin/bash
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site, etc. PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE; an example invocation follows.
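For instance, assuming the script above is saved as change_config.sh (the property name and value below are illustrative only), setting a Hive property on cluster c1 would look like:
./change_config.sh c1 hive-site hive.exec.parallel true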
Restarting host components
curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HIVE",
      "component_name": "MYSQL_SERVER",
      "hosts": "node3.cluster.com"
    },
    {
      "service_name": "HDFS",
      "component_name": "HDFS_CLIENT",
      "hosts": "node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
Reference Links:
Restarting components
Modifying configurations
Hope this helps!