OpenShift server crashes while executing JIRA - openshift

I'm using OpenShift with a DIY cartridge. I set up Tomcat 8 (with JDK 8) and deployed JIRA 7 on it, but each time JIRA tries to load its add-ons, the server crashes.
Here are the Tomcat logs: http://pastebin.com/6NMgZ1VQ
I even tried deleting jira_home/plugins, but the problem persists.
What's causing this problem, and is there a possible fix?

Most likely you are using a small gear to run JIRA in Tomcat 8 and are running out of memory on that gear, especially if you are also running a database as part of the application. You can check this by sshing into your gear and running the following commands:
oo-cgroup-read memory.failcnt
oo-cgroup-read memory.memsw.failcnt
You can learn more about these commands here.
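For example, a minimal check might look like this (a hedged sketch: the app name is a placeholder, and my understanding is that a non-zero count means the gear has hit its memory limit at least once):
rhc ssh -a jira
oo-cgroup-read memory.failcnt        # number of times the memory limit was reached
oo-cgroup-read memory.memsw.failcnt  # number of times the memory+swap limit was reached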
If you are having memory issues, the only real solution is to use a larger gear size, along with possibly using a scaled application to put your database on its own gear.
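If you do go that route, the gear size is chosen at application-creation time; something along these lines might work (a hedged sketch, not a verified command line; check rhc app create --help for the exact flag names on your client version):
rhc app create jira diy-0.1 --gear-size medium -s    # -s makes the app scaled, so a database cartridge gets its own gear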

Related

OpenShift OKD Excessive Logging

I installed a single-host OpenShift OKD v3.11 cluster on a VM running CentOS 7.8.2003.
It seems to have installed OK, except that it continually streams verbose logs to /var/log/messages, around five per second, all apparently about throttled requests. Here is a typical log message:
Jun 13 15:49:13 centos7 journal: I0613 14:49:13.011402 1 request.go:485] Throttling request took 196.341689ms, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-service-cert-signer/serviceaccounts/service-serving-cert-signer-sa
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets.
https://access.redhat.com/solutions/3348921
I assume these logs are nothing to worry about, so my main question is: what is the "best"/cleanest/simplest/easiest way to ensure the OpenShift cluster doesn't keep filling up /var/log/messages while still logging any important messages there?
I would recommend looking at the root cause of this behaviour. These messages indicate that a lot of requests are coming to your API; typically this is due to some application performing calls in a tight loop, which leads to this many messages. In your case, check the openshift-service-cert-signer to see whether it shows any warnings or an abnormal number of log messages.
If you want to get rid of the throttling messages, you can increase the queries per second (QPS) allowed for the API server; see Recommended Practices for OKD Master Hosts (lower part).
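For orientation, that knob lives in /etc/origin/master/master-config.yaml; the excerpt below is only a hedged sketch from memory (treat the key names and numbers as placeholders and take the real values from the linked page), and the master API/controller services need a restart afterwards:
masterClients:
  openshiftLoopbackClientConnectionOverrides:
    qps: 300
    burst: 600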
The only reference I have managed to find is a question here, but access to the discussion is only available to those with deep pockets. https://access.redhat.com/solutions/3348921
I do not understand why you're saying that, as I can access that document with my free Red Hat account without any subscription. Have you tried with a free account, as it says on the site?
Simon's answer was helpful but I've finally got to the bottom of this.
The problem was simply that the version of Docker I had installed was old. At the time of writing the latest version of CentOS is 7.8.2003, and if you install that and then simply run "yum install docker", hoping that you'll get something at least reasonably new and certainly compatible with the rest of the Linux installation, you'll probably be making a mistake.
The right thing to do is to follow the simple steps here:
https://docs.docker.com/engine/install/centos/
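In case that page moves, the steps there boil down to roughly the following (a condensed, hedged summary from memory; check the linked page for the current package names):
sudo yum remove docker docker-common docker-engine          # drop the old CentOS-packaged Docker
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io   # install Docker CE from Docker's own repository
sudo systemctl enable --now docker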
The reason I found the problem was that the excessive logging of my OpenShift cluster wasn't the only issue: I started seeing strange behaviour in other containers. A process of trial and error narrowed the issue down to the default CentOS version of Docker. Once I followed the page above, all my problems vanished, including the original problem of /var/log/messages getting hammered by OpenShift containers.
The main reason I decided to answer my own question was that surely someone else is going to be as impatient/thick as me and simply install CentOS 7 then try "yum install docker", without knowing they're about to enter a world of pain.

How to test OpenShift action_hooks prior to git push to the OpenShift server

I have been looking at the OpenShift docs and on Stack Overflow for a while now and I can't seem to get any answers.
I want to know what the standard pattern is for developing applications for deployment on OpenShift. I am especially concerned with testing action_hooks prior to deployment. I found this particularly troublesome recently when using a DIY cartridge, where I had to download dependencies in my build script prior to starting my application; the application kept failing to start every time I made a change and pushed it (I only did this as an initial test of the OpenShift service, I would never develop like this). I ended up having to ssh onto my instance and resolve the issue by trial and error (not really ideal).
I appreciate any help anyone can offer.
Thanks
The only way that I am aware of to test action hooks on OpenShift is to ssh into an application and run them manually from the command line. This way you can quickly debug & update them. Then copy them to your git repository and do a git push to deploy the new code.
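In practice that loop looks something like this (a hedged sketch: the app name is a placeholder and the on-gear path is from memory, so adjust it to wherever the repo actually lives on your gear):
rhc ssh myapp
cd ~/app-root/repo/.openshift/action_hooks
./build                                   # run the hook by hand and watch its output
exit
git add .openshift/action_hooks/build     # back on your machine, commit the fixed hook
git commit -m "fix build hook"
git push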
The only other way I can think of would be to run OpenShift Origin (v2) locally and use that to test with.

Sikuli with Jenkins setup for continuous integration

I have my tests written in Sikuli. If I RDP into my Jenkins machine and have an active session, then all Sikuli tests pass.
However, for overnight runs my Jenkins machine does get locked. I want to know whether anyone has encountered and solved this issue before. Thanks!
Note: I cannot leave my Jenkins slave unlocked due to security reasons.
It's a known limitation of RDP.
Two optional solutions:
Install a VNC server (like UltraVNC) and run it as a Windows service (make sure it is launched during Windows logon).
OR
Create a batch file that disconnects Remote Desktop, and use it instead of closing the RDP session with the regular X button. The batch command is:
%windir%\system32\tscon.exe %SESSIONNAME% /dest:console
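Wrapped in a batch file it looks roughly like this (a hedged sketch; in my experience tscon needs administrator rights, so the file may have to be launched elevated):
@echo off
rem disconnect.bat - redirect the current RDP session to the physical console
rem so the desktop stays unlocked and Sikuli can keep driving the UI
%windir%\system32\tscon.exe %SESSIONNAME% /dest:console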

Is it possible to use OpenShift without using rhc?

I am trying to get an application running on OpenShift, but after trying to create an SSH key on Ubuntu using ssh-keygen I ran into permission problems. I am doing it this way because I find I have no need for the rhc client if it only automates this process but bloats my computer (laptop) with a Ruby installation.
I think it would be best to have an alternative for Ubuntu (Linux) users. Is it possible to make this happen, or do I have to go the rhc way?
You can get a long way without the rhc command line tool. Obviously you can create your SSH key yourself and add/manage it via the OpenShift website. You can also create your application there and add cartridges. When it comes to starting the app, you can usually do that by just pushing your git repository. Last but not least, you can ssh onto your OpenShift gear and do a lot from there, for example view the log files.
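Put together, that rhc-free workflow might look like this (a hedged sketch: the gear ID, app name, and namespace are placeholders you read off the OpenShift website):
ssh-keygen -t rsa -f ~/.ssh/openshift_rsa            # create a key, then paste the .pub contents into the website
git clone ssh://<gear-id>@myapp-mynamespace.rhcloud.com/~/git/myapp.git/
cd myapp
git push                                             # pushing builds and (re)starts the app on the gear
ssh <gear-id>@myapp-mynamespace.rhcloud.com          # log onto the gear, e.g. to view the log files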
That said, the rhc client is your one-stop client for all of this (and more). So even if you might not need it right now, and some tasks are in fact done more easily without it, I would still recommend installing it. A lot of information/tutorials use rhc, and without enough experience you will not know how to achieve a certain task in a different way.

Development environment for Sinatra & MySQL on 2 computers

I'm developing a Sinatra and MySQL application. My development environment is a MacBook Air and an iMac. The server is a FreeBSD VPS running Unicorn behind nginx.
I'd like to somehow automate the whole procedure. I develop on both the iMac and the MBA, depending on the time I have free in the office (MBA) or the time I spend writing code at home (iMac). I have set up MySQL on both Macs.
I manually dump and restore the database in order to be able to test my application locally before making any change to the server.
I'd like to automate the process of syncing the MySQL database, if possible, and keeping the code up to date in all locations, without using cloud storage if possible.
Best Regards,
I think there are many ways to solve this problem, so this is just one idea of how to achieve it.
Create a git repo on your server and write a small shell script which syncs your DB from somewhere. You can trigger this script from a git hook: http://git-scm.com/book/en/Customizing-Git-Git-Hooks#Client-Side-Hooks
For your syncing script you may have a look at this -> https://github.com/xssnark/mysql-db-sync or I'm sure you'll find something.
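As a very small example of that idea (a hedged sketch: host, user, and database names are placeholders, and dumping over ssh is just one way to do the sync), a client-side hook saved as .git/hooks/post-merge and made executable would refresh the local database after every pull:
#!/bin/sh
# fetch a fresh dump from the server and load it into the local MySQL
ssh deploy@myserver.example.com "mysqldump --single-transaction myapp_db" > /tmp/myapp_db.sql
mysql myapp_db < /tmp/myapp_db.sql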