OpenShift - SSH to a specific gear

If I have more than one gear, I know how to check the logs of a specific gear, but I don't know how to connect over SSH to a specific gear.
I'm looking for something like:
rhc ssh -g gear_id
Why?
Because I'm having issues when I'm trying to execute this
f = open(os.environ['OPENSHIFT_DATA_DIR']+'/myfile.sh', 'w+')
I'm getting an error trying to access that path:
IOError: [Errno 2] No such file or directory
I think it's because the code is being executed in a different gear, so I want to check the value of OPENSHIFT_DATA_DIR in every gear.

To see all the gears in an application:
rhc app show --app <app-name> --gears ssh
Then, to ssh into a specific gear, just grab one of the SSH URLs shown and run:
ssh <SSH-URL>
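To answer the OPENSHIFT_DATA_DIR part of the question, you can loop over all the gears and print the variable from each one. A minimal sketch, assuming rhc app show --gears ssh prints one user@host string per line:
# Print each gear's OPENSHIFT_DATA_DIR
for gear in $(rhc app show --app <app-name> --gears ssh); do
  ssh "$gear" 'echo "$OPENSHIFT_DATA_DIR"'
done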


EB: Trigger container commands / deploy scripts on configuration change

I am running my web server on Elastic Beanstalk and using Papertrail for logging. I use the official .ebextensions script to set Papertrail up during deployment, but I have a problem. I use environment variables as part of the hostname that remote_syslog sends to Papertrail. This works fine during deployment, when the 01_set_logger_hostname container command is triggered, but it breaks whenever I change environment variables by modifying the environment's configuration, since an eb config call only restarts the application server and does not re-run any of the deployment scripts, including the .ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?
This is now possible on AWS Linux 2 based environments with Configuration deployment platform hooks.
For example, you can make a shell script .platform/confighooks/predeploy/predeploy.sh that will run on all configuration changes. Make sure the file is marked executable in git, or Elastic Beanstalk will fail with a permission denied error.
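As a concrete illustration, a hypothetical hook that re-applies the Papertrail hostname from the question could look like the sketch below. The /opt/elasticbeanstalk/bin/get-config helper is the standard way to read environment properties on Amazon Linux 2, but treat the exact paths and keys as assumptions to verify:
#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# Hypothetical sketch: re-apply the Papertrail hostname on every configuration change.
logger_config="/etc/log_files.yml"
# Read the environment property with the Amazon Linux 2 get-config helper:
some_variable="$(/opt/elasticbeanstalk/bin/get-config environment -k SOME_VARIABLE)"
instid="$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)"
myhostname="${some_variable}_${instid}"
if [ -f "$logger_config" ]; then
  sed -i "s/hostname:.*/hostname: $myhostname/" "$logger_config"
fi
To mark it executable in git:
chmod +x .platform/confighooks/predeploy/predeploy.sh
git update-index --chmod=+x .platform/confighooks/predeploy/predeploy.sh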

ssh AWS, Jupyter Notebook not showing up on web browser

I am trying to use ssh to connect to the AWS "Deep Learning AMI for Amazon Linux", and everything works fine except Jupyter Notebook. This is what I got:
ssh -i ~/.ssh/id_rsa ec2-user@yy.yyy.yyy.yy
gave me
Last login: Wed Oct 4 18:01:23 2017 from 67-207-109-187.static.wiline.com
=============================================================================
       __|  __|_  )
       _|  (     /   Deep Learning AMI for Amazon Linux
      ___|\___|___|
The README file for the AMI ➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜➜ /home/ec2-user/src/README.md
Tests for deep learning frameworks ➜➜➜➜➜➜➜➜➜➜➜➜ /home/ec2-user/src/bin
=============================================================================
1 package(s) needed for security, out of 3 available
Run "sudo yum update" to apply all updates.
Amazon Linux version 2017.09 is available.
Then
[ec2-user@ip-xxx-xx-xx-xxx ~]$ jupyter notebook
[I 16:32:14.172 NotebookApp] Writing notebook server cookie secret to /home/ec2-user/.local/share/jupyter/runtime/notebook_cookie_secret
[I 16:32:14.306 NotebookApp] Serving notebooks from local directory: /home/ec2-user
[I 16:32:14.306 NotebookApp] 0 active kernels
[I 16:32:14.306 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0
[I 16:32:14.306 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 16:32:14.307 NotebookApp] No web browser found: could not locate runnable browser.
[C 16:32:14.307 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0
Copying http://localhost:8888/?token=74e2ad76eee284d70213ba333dedae74bf043cce331257e0 into a browser gets me
"can’t establish a connection to the server at localhost:8888." on Firefox, and
"This site can’t be reached localhost refused to connect." on Chrome.
Further,
jupyter notebook --ip=yy.yyy.yyy.yy --port=8888 gives
Traceback (most recent call last):
File "/usr/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.4/dist-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/lib/python3.4/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/lib/python3.4/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python3.4/dist-packages/notebook/notebookapp.py", line 1296, in initialize
self.init_webapp()
File "/usr/lib/python3.4/dist-packages/notebook/notebookapp.py", line 1120, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/lib64/python3.4/dist-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/lib64/python3.4/dist-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 99] Cannot assign requested address
Not sure if this will be helpful (is it only for MXNet? I am not familiar with MXNet): Jupyter_MXNet
localhost will only work when trying to use jupyter (or well, anything) from the machine itself. In this case, it seems you're trying to access it from another machine.
You can do that with the switch --ip=a.b.c.d, where a.b.c.d is the public address of your EC2 instance, or use 0.0.0.0 to make it listen on all interfaces. (On EC2 the public address is usually NATed rather than bound to the instance's interface, which is why binding it directly fails with Errno 99 above; 0.0.0.0 is the safer choice.)
You can also use --port=X to define a particular port number to listen to.
Just remember that your security group must allow access from the outside into your choice of IP/Port.
For example:
jupyter notebook --ip=a.b.c.d --port=8888
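An alternative that avoids opening any port in the security group is SSH local port forwarding; a sketch using the key and address from the question:
# Forward local port 8888 to the notebook server on the instance
ssh -i ~/.ssh/id_rsa -L 8888:localhost:8888 ec2-user@yy.yyy.yyy.yy
With the tunnel up, the http://localhost:8888/?token=... URL printed by the notebook works as-is in your local browser.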
Well, there are a few things happening here.
ISSUE # 1 - Localhost
As Y. Hernandez said, you are trying to access the URL incorrectly. You should replace localhost with the public IP address of your AWS VM (the same IP you used to ssh in).
ISSUE # 2 - Jupyter needs proper configuration
But even then, this might not run, because Jupyter is not completely configured out of the box on the Deep Learning AMI for Amazon Linux. You need to complete the following steps (of course there are multiple ways of doing the same thing; this is just one such way).
Configure Jupyter Notebook -
$ jupyter notebook --generate-config
Create certificates for our connections in the form of .pem files.
$ mkdir certs
$ cd certs
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
You’ll get asked some general questions after running that last line. Just fill them out with some general information or keep pressing enter.
Next we need to finish editing the Jupyter configuration file we created earlier, so change to the .jupyter folder. You can use nano, vi, or your favorite editor.
$ cd ~/.jupyter/
$ nano jupyter_notebook_config.py
Insert this code at the top of the file -
c = get_config()
# Notebook config: this is where you saved your pem cert
c.NotebookApp.certfile = u'/home/ec2-user/certs/mycert.pem'
# Run on all IP addresses of your instance
c.NotebookApp.ip = '*'
# Don't open browser by default
c.NotebookApp.open_browser = False
# Fix port to 8888
c.NotebookApp.port = 8888
Save the file and cd out of the .jupyter folder:
$ cd
You can set up a separate notebook folder for Jupyter, or launch Jupyter from anywhere you choose:
$ jupyter notebook
In your browser, go to this URL, using your VM's IP address and the token given in your terminal:
http://VM-IPAddress:8888/?token=72385d6d854bb78b9b6e675f171b90afad47b3edcbaa414b
If you get an SSL error, use https instead of http.
https://VM-IPAddress:8888/?token=72385d6d854bb78b9b6e675f171b90afad47b3edcbaa414b
ISSUE # 3 - Unable to launch a Python interpreter
If you intend to run Python 2 or 3, you need to upgrade IPython. If you don't, once you launch Jupyter you will not see an option to launch a Python interpreter; you will only see options for Text, Folder, and Terminal.
Upgrade IPython: shut down Jupyter and run this upgrade command.
$ sudo pip install ipython --upgrade
Relaunch Jupyter.
On Amazon Deep Learning AMIs this occurs sometimes. Do
jupyter notebook password
to set a notebook password, then do:
sudo jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root
--allow-root is not necessary, but it allows copy/paste as the root user.

Framework with ID x does not exist on slave with ID y

I keep getting this error on my marathon dashboard
Framework with ID 'a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001' does not exist on slave with ID '9959ba51-f6f7-448f-99d2-289767f12179-S2'.
The path to make this error occur is to click "Sandbox" next to a task on the main marathon dashboard.
The path looks something like this
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/frameworks/a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001/executors/rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3/browse
However, if I go to the slave through the slave panel, and click the framework from there, I am able to access the sandbox. The link in this case looks like the following
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/browse?path=%2Ftmp%2Fmesos%2Fslaves%2Fc223b6b1-cef8-4599-8cea-b402bf20afc5-S0%2Fframeworks%2F20160108-205802-16842879-5050-1210-0001%2Fexecutors%2Frabbitmq.91b8bbf6-ceba-11e5-8047-0242ffdabb3e%2Fruns%2Fc66eb4d5-ea6d-451d-982f-6a0d29b25441
Any ideas on what I have misconfigured?
The Mesos Web UI does not proxy logs through the mesos-master (although it would be nice). Basically, you need to be able to resolve the slave's name from your browser (computer), and port 5051 needs to be open for you:
$ nc -z -w5 mesos.dev.internal 5051; echo $?
0 # port is open
It's not a good idea to leave Mesos ports open to the public, so you can either:
connect via VPN
whitelist your public IP on all slaves (see the sketch after this list)
use the CLI instead of the Web UI
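For the whitelisting option, one possible sketch with iptables (rule ordering and interaction with existing rules are assumptions; adapt to however your slaves manage their firewall):
# Allow only your public IP to reach the slave port, drop everyone else
sudo iptables -A INPUT -p tcp --dport 5051 -s <your-public-ip>/32 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5051 -j DROP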
Using the CLI is quite easy once you set the master's URI. You can install it:
pip install mesos.cli mesos.interface
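The master's URI goes in a .mesos.json config file; a minimal example (the file location and schema here follow the mesos.cli defaults as I recall them, so verify against its README):
# Point the CLI at the master
cat > ~/.mesos.json <<'EOF'
{
    "master": "mesos.dev.internal:5050"
}
EOF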
Then you can list all tasks using mesos ps, or fetch stdout:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3
and stderr:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3 stderr
Note that mesos-cli is no longer developed; similar features and much more are available in Mesosphere's DCOS CLI.

Attempting to save a snapshot complains Application not found

I'm trying to save an app snapshot on OpenShift; however, it complains that my application isn't found. When I type rhc apps, my application is correctly listed, so I'm not sure what I could be doing wrong.
For example:
appname # http://appname-domain.rhcloud.com
when I run rhc snapshot save -a appname, I get:
Application 'appname' not found.
If the application is not in your default namespace, then you will need to add the -n option to your rhc snapshot save command. That could be your issue.
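For example (the namespace name here is hypothetical; rhc domain show lists yours):
rhc snapshot save -a appname -n mynamespace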

"No system SSH available" error on OpenShift rhc snapshot save on Windows 8

Steps To Replicate
On Windows 8.
In shell (with SSH connection active):
rhc snapshot save [appname]
Error
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
Suggested Solution
From this post:
Usage: rhc snapshot-save <application> [--filepath FILE] [--ssh path_to_ssh_executable]
Pass '--help' to see the full list of options
Question
The path to keys on PC is:
C:\Users\[name]\.ssh
How do I define this in the rhc snapshot command?
Solution
rhc snapshot save [appname] --filepath FILE --ssh "C:\Users\[name]\.ssh"
This will show the message:
Pulling down a snapshot of application '[appname]' to FILE ...
... then after a while
Pulling down a snapshot of application '[appname]' to FILE ... DONE
Update
That saved the backup in a file called "FILE" without an extension, so I'm guessing that in the future I should define the filename as something like "my_app_backup.tar.gz", i.e.:
rhc snapshot save [appname] --filepath "my_app_backup.tar.gz" --ssh "C:\Users\[name]\.ssh"
It will save in the repo directory, so make sure you move it out of this directory before you git add, commit, push, etc., otherwise you will upload your backup too.
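To avoid that, you can point --filepath outside the repo in the first place (the C:\backups folder here is just an example):
rhc snapshot save [appname] --filepath "C:\backups\my_app_backup.tar.gz" --ssh "C:\Users\[name]\.ssh"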