Is it possible to recover the credentials generated during installation? In particular, those for the JBoss BPM Suite console and dashboard.
The creation takes so long that I have to refresh the page, and because of that I never get the green frame with all the details.
If you have the rhc command-line tool installed, then you can simply do
rhc ssh --app <app_name> --command 'env'
Otherwise, go to your application's console page (through the OpenShift web UI), grab the SSH connection link (Remote Access), and use it to open a terminal connection in your application space.
Then type "env" and you should see a bunch of environnement variables, your credentials should be stored in one of them.
I have created a new OpenShift account for a new application I'm developing.
I have added a MongoDB cartridge for the database, and a Tomcat cartridge for the Java web application.
I now need to connect to the database from my Java web app, but I am missing two connection details:
$OPENSHIFT_MONGODB_DB_HOST
$OPENSHIFT_MONGODB_DB_PORT
As far as I know, I have to type rhc env list -a the_name_of_my_app in the console, but my application seems to have no environment variables set.
What can I do?
Apparently, the default environment variables are visible only via SSH.
In order to see them, you have to type rhc ssh <appid-as-seen-on-openshift-console> followed by env.
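For example, a minimal session could look like this (the app name is the placeholder from the question; the variable prefix comes from the two variables listed above):

rhc ssh the_name_of_my_app
env | grep OPENSHIFT_MONGODB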
You can see the environment variables by SSHing into your OpenShift gear. You can also use OpenShift's port-forwarding feature to set up a local connection to your database; see the sketch after the link below.
OpenShift blog link for port forwarding
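A short sketch of the port-forwarding approach (the app name is a placeholder; rhc prints the actual local ports it picks):

rhc port-forward -a <app_name>

Once the forward is up, you can point a local MongoDB client at the forwarded address using the credentials from the gear's environment variables.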
How can I check what every gear contains?
I'm currently using HAProxy, MySQL, and a Python web service.
I'd like to know how to check where each cartridge is.
You can use the "rhc app show --gears" command to show which cartridge is installed on which gear (along with other information about them). Or you can use "rhc app show --gears ssh" to show the ssh connection information for all of your gears.
I am having issues with setting up OpenShift, and I get the following error after connecting to my server domain:
Command:
User$ rhc setup --server=app-domain.rhcloud.com
Result:
The server has rejected your connection attempt with an older SSL protocol.
Pass --ssl-version=sslv3 on the command line to connect to this server.
I am not sure what this is telling me to do. I tried following the instruction literally, but the command is not recognized.
Any ideas?
You should not pass rhc setup the --server flag unless you are running your own OpenShift Origin or OpenShift Enterprise broker. For OpenShift Online, just run the rhc setup command with no other options and it will set up fine. If that command messed up your express.conf file (which it should not have), you just need to delete your ~/.openshift/express.conf file and then run rhc setup again without any flags. Basically, you tried to point rhc to your gear as if it were an OpenShift Online broker, which will not work.
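If you do need to reset, the steps boil down to (the path is rhc's default config location mentioned above):

rm ~/.openshift/express.conf
rhc setup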
I ended up answering this on another forum post:
The only way this worked for me was to actually create an SSH key locally with ssh-keygen -p, without rhc setup, and "not" give it a password. I then went back to OpenShift, clicked "add a key", and pasted the contents of my RSA public key file.
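For reference, the sequence described above looks roughly like this (the file names are ssh-keygen's defaults; leave the passphrase empty when prompted):

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub

The output of the second command is what gets pasted into the "add a key" form.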
There is obviously some kind of bug with authentication on OpenShift, or the installation is not right.
It would be good to find out what is going on and why it works when I do it this way.
As was explained in the answer to this question: https://stackoverflow.com/questions/11730590/what-are-some-of-the-tricks-to-using-openshift, it should be possible to SSH into some of the other gears when using a scaled app with OpenShift.
Unfortunately the link mentioned there (https://openshift.redhat.com/community/faq/can-i-access-my-applications-gear) seems to be gone.
Via [my app url]/haproxy-status/ I can see the names of the other gears. They are long names like gear-[long number]-[app name]. Using that name I can no longer ssh into them when I'm ssh'ed into the main gear; ssh just returns immediately without any error.
If I do ssh blala, the same thing happens, so it looks like ssh has been replaced by a no-op command on the primary gear?
When I examine the haproxy conf file, I see entries like:
server gear-[long number]-[app name] ex-std-node[number].prod.rhcloud.com:[number] check fall 2 ...
I tried ssh'ing into this ex-std-node... address as well, both from the main/primary application gear and from my desktop, but it didn't work in either case.
How can I get shell access to my other gears?
This command shows how to access individual gears:
rhc app show <appname> --gears
The last column of output is the SSH URL. It is of the form $UUID@$UUID-$NAMESPACE.rhcloud.com. You can ssh into them directly, and they are also accessible via ssh from the "head" gear; they have to be, since git pushes are synchronized from the head gear to the others via ssh.
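So, with the values from that last column, a direct connection looks like this (UUID and namespace are placeholders for your own gear's values):

ssh $UUID@$UUID-$NAMESPACE.rhcloud.com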
Where does Hudson CI get the user to run cmd.exe?
I'm trying to start and stop some remote services on various slaves, and special credentials that are different from what Hudson is using are needed. I can't find a place to override the user. I've tried running the server as various users, but it doesn't change anything.
Any other ideas?
Since you want to start and stop the services on the remote machine, you need to log in with these credentials on the remote machine, because I haven't found a way to start and stop a service on a remote machine otherwise.
There are different ways to do that. You can create a slave that runs on the remote machine with the correct credentials. You can even create more than one slave for the same machine without any issues, so you can use different credentials for the same machine. These slaves can then fire off the net stop and net start commands, as sketched below.
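Such a build step could be a plain batch command along these lines (the service name is just a placeholder):

net stop "My Service"
net start "My Service"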
You can also use the SSH plugin. This allows you to configure pre- and post-build SSH scripts. You 'just' need an SSH server on the Windows machine. The password for the connection will be stored encrypted.
Use a command-line tool. So far I haven't found an on-board Windows tool for a scripted login to the remote machine. I would use plink for that task; plink is the scripted version of PuTTY. PuTTY supports different connection types, so you can also use the built-in Telnet service (not recommended, since Telnet does not encrypt the connection). The disadvantage is that you will have the password unencrypted in the job configuration.
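A rough plink one-liner for such a build step (host, user, password, and service name are all placeholders):

plink -ssh -pw secret user@remotehost "net stop MyService"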
We had a similar problem, and I resorted to using PsExec. To my advantage, our machines live on a separate LAN behind two firewalls, so I was OK with unencrypted passwords floating around. I had also explored SSH with PuTTY, which seemed to work but was not straightforward.
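For reference, a PsExec call for this kind of task looks something like the following (host, account, password, and service name are placeholders):

psexec \\remotehost -u DOMAIN\user -p secret net stop MyService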
If someone can help with a single-line runas command, that could work too.
You don't say how your slaves are connected to Hudson, but I'll assume it's through the "hudson slave" service, since that's probably the most popular way to connect Windows slaves.
If so, the CMD.EXE is run with the same permissions as the user running the service. This can be checked by:
1. run services.msc
2. double-click hudson-slave service
3. go to Log On tab
By default, the slave service runs as "LocalSystem", which is the most powerful account on the system. It should be able to do whatever you need it to do. (i.e. start/stop services)
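If you would rather check from the command line, the same information is available via sc (assuming the service is actually registered as hudson-slave; services.msc shows the real name):

sc qc hudson-slave

The SERVICE_START_NAME field in the output is the account the service, and therefore cmd.exe, runs under.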