OpenShift Cron Job Not Running

I have tried to set up a simple cron job on OpenShift, but after I push the file and log in to look for it, it does not seem to be there, and there is no log output.
I created an application from: https://github.com/smarterclayton/openshift-go-cart
I then installed the cron 1.4 cartridge.
I created a file at .openshift/cron/minutely/awesome_job and set its permissions to 755.
I added the following contents:
#! /bin/bash
date > $OPENSHIFT_LOG_DIR/last_date_cron_ran
I pushed to the server
Logged in via SSH and ran find /var/lib/openshift/53760892e0b8cdb5e9000b22 -name awesome_job, which finds nothing.
Any ideas would help, as I am at a loss as to why it is not working.

Make sure the execution bit is set on your cron file.
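For example, a minimal way to set that bit and push the change, using the path from the question:
# Mark the cron job as executable and commit the permission change
chmod 755 .openshift/cron/minutely/awesome_job
git add .openshift/cron/minutely/awesome_job
git commit -m "Make awesome_job executable"
git push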

The issue was not with cron but with the golang cartridge I was using, which was removing the .openshift directory.
https://github.com/smarterclayton/openshift-go-cart/issues/10

You should also put a file named "jobs.allow" under your .openshift/cron/minutely/ so that your cron jobs will be executed (see the sketch below).
For your reference: https://forums.openshift.com/daily-cron-jobs-not-getting-triggered-automatically
And the reason you cannot find your awesome_job via SSH login is that it sits under /var/lib/openshift/53760892e0b8cdb5e9000b22/app-root/runtime/repo/.openshift, and the find command does not search files under folders whose names are prefixed with a dot.
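A minimal sketch of adding that jobs.allow file, assuming it simply lists the job filenames (one per line) that the cron cartridge should run:
# Allow the minutely job from the question to run, then push the change
echo "awesome_job" > .openshift/cron/minutely/jobs.allow
git add .openshift/cron/minutely/jobs.allow
git commit -m "Add jobs.allow for the minutely cron job"
git push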

Related

Cannot find module "mysql" while running on heroku

I have been trying to deploy my app on heroku.
I first used the command git push heroku master
and then ran the node server using heroku run node server.js.
However, I get this error. Can anyone help me with this?
[screenshot of the error: Cannot find module 'mysql']
I've run into this error before using IBM DB2. What you have to do is make a new directory locally and log in to Heroku using git in that new directory. Once you do that, copy each folder and file from your old directory into the new one (you can cp ../olddirectory/app.js, etc.), including the package.json and package-lock.json. Once you do all that, push it, and it should work.
This sounds stupid, but it works every single time. I do this with all my projects, as they all use DB2. Let me know if you need any more help.
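A rough shell sketch of that workflow; the app name (your-app-name) and the extra folders (routes, models) are placeholders for whatever your project actually contains:
# Fresh directory wired to the existing Heroku app
mkdir fresh-deploy && cd fresh-deploy
git init
heroku git:remote -a your-app-name

# Copy the project files over from the old directory, including the lockfile
cp ../olddirectory/app.js ../olddirectory/package.json ../olddirectory/package-lock.json .
cp -r ../olddirectory/routes ../olddirectory/models .

# Commit and redeploy
git add .
git commit -m "Redeploy from a clean directory"
git push heroku master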

Attempting to save a snapshot complains Application not found

I'm trying to save an app snapshot on OpenShift; however, it complains that my application isn't found. When I type rhc apps my application is correctly listed, so I'm not sure what I could be doing wrong.
For example:
appname # http://appname-domain.rhcloud.com
when I run rhc snapshot save -a appname, I get:
Application 'appname' not found.
If the application is not in your default namespace, then you will need to add the -n option to your rhc snapshot save command. That could be your issue.
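For example (yournamespace is a placeholder; rhc domain show will print the actual one):
# Look up the namespace (domain) the app belongs to, then pass it explicitly
rhc domain show
rhc snapshot save -a appname -n yournamespace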

Clone OpenShift application as scalable

I have an application on the OpenShift free plan with only one gear. I want to change it to scalable and take advantage of all 3 free gears.
I read this blog post from OpenShift and found that there is a way to do it: I should clone my current application to a new, scalable one which will use the 2 remaining gears, and then delete the original application. Thus, the new one will have 3 free gears.
The way the blog suggests is: rhc create-app <clone> --from-app <existing> --scaling
I get the following error: invalid option --from-app
Update
After running the command gem update rhc, I no longer get the error above, but... a new application with the given name has been created with the same starting package (Python 2.7) as the existing one, and all the files are missing. It actually creates a blank application, not a clone of the existing one.
Update 2
Here is the structure of the folder:
-.git
-.openshift
-wsgi
---static
---views
---application
---main.py
-requirements.txt
-setup.py
From what we've talked about on IRC, your problem was a missing SSH configuration on your Windows machine:
Creating application xxx ... done
Waiting for your DNS name to be available ...done
Setting deployment configuration ... done
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
I've double-checked it, and it appears to be working without any problem.
The only requirement is to have the latest rhc client and PuTTY or any other
SSH client. I'd recommend going through this tutorial once again and double-checking everything to make sure it all works properly.
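If you hit that "No system SSH available" message, pointing rhc at an SSH executable explicitly looks roughly like this; the ssh.exe path below is a placeholder for wherever your SSH client is actually installed:
rhc create-app myclone --from-app existingapp --scaling --ssh "C:\Program Files\Git\usr\bin\ssh.exe"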
Make sure you are using the newest version of the rhc gem ("gem update rhc") so that you have access to that feature from the command line.
The --from-app option will essentially do an rhc snapshot save and snapshot restore (among other things), as you can see here from the source:
if from_app
  say "Setting deployment configuration ... "
  rest_app.configure({:auto_deploy => from_app.auto_deploy, :keep_deployments => from_app.keep_deployments, :deployment_branch => from_app.deployment_branch, :deployment_type => from_app.deployment_type})
  success 'done'
  snapshot_filename = temporary_snapshot_filename(from_app.name)
  save_snapshot(from_app, snapshot_filename)
  restore_snapshot(rest_app, snapshot_filename)
  File.delete(snapshot_filename) if File.exist?(snapshot_filename)
  paragraph { warn "The application '#{from_app.name}' has aliases set which were not copied. Please configure the aliases of your new application manually." } unless from_app.aliases.empty?
end
However, this will not copy over anything in your $OPENSHIFT_DATA_DIR directory, so if you're storing files there, you'll need to copy them over manually.
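A rough sketch of copying that data over by hand; the UUID@host values are placeholders for the SSH URLs of your old and new gears (rhc app show prints them):
# Pull the data directory down from the old gear, then push it to the new one
mkdir data-backup
scp -r OLD_UUID@oldapp-domain.rhcloud.com:'app-root/data/*' ./data-backup/
scp -r ./data-backup/* NEW_UUID@newapp-domain.rhcloud.com:app-root/data/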

Supervisord on Fedora configuration error

I am trying to install supervisor on my Fedora 17 machine, using this link as a reference:
https://www.digitalocean.com/community/articles/how-to-install-and-manage-supervisor-on-ubuntu-and-debian-vps
It works fine except for one trivial problem (I believe).
A simple configuration for our script, saved at /etc/supervisor/conf.d/long_script.conf, would look like so (...)
My problem is that I only have these files/directories:
/etc/supervisord/
/etc/supervisord.conf
I have tried to create a conf.d directory under /etc/supervisord/. I have put my config file into /etc/supervisord/conf.d/, /etc/supervisord/, and also /etc/supervisor/.
Despite my efforts, when issuing the command:
supervisorctl reread
I receive:
No config updates to processes
Does anyone have a clue what I might be doing wrong? Thanks in advance.
(I'm on Fedora 20.) If you look at the bottom of /etc/supervisord.conf you will see:
[include]
files = supervisord.d/*.ini
So on Fedora your configuration files should go under /etc/supervisord.d/ and end in .ini instead of .conf. I had this same problem, and running supervisorctl reread after this picks up the new configuration.
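For instance, a minimal program definition dropped into /etc/supervisord.d/ with the .ini extension; the long.sh path and the long_script name are placeholders for your own script:
# Write a minimal [program] section (adjust the command to your script)
sudo tee /etc/supervisord.d/long_script.ini > /dev/null <<'EOF'
[program:long_script]
command=/usr/local/bin/long.sh
autostart=true
autorestart=true
stdout_logfile=/var/log/long_script.out.log
stderr_logfile=/var/log/long_script.err.log
EOF

# Pick up the new file and start the program
sudo supervisorctl reread
sudo supervisorctl update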
There is no need to create a folder specifically for your configuration file; you can specify its path with the -c /path/to/your/file or --configuration=/path/to/your/file option.
Source: http://supervisord.org/running.html
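For example, pointing both the daemon and the control tool at the config file the questioner already has:
supervisord -c /etc/supervisord.conf
supervisorctl -c /etc/supervisord.conf reread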

Why can't I run my Perl job in Hudson?

I tried to do this in Hudson but:
I have a Perl script on my (Windows) server, let's say d:\util\demo.pl, and I want to have it running in Hudson. So I go to Hudson, create a new job, go to Build > Execute Windows batch command, and add: perl.exe d:\util\demo.pl
I got this error: 'perl.exe' is not recognized as an internal or external command, operable program or batch file.
please help!
It can't find perl.exe in the path of the agent that is running the task. Verify that Perl is properly installed AND that the path where perl.exe was installed is in your system path on EVERY agent that will run this task.
Can you run that command from any folder on the server?
If yes, then the Hudson server definitely runs under a different user account. Make sure that the user account Hudson is running under has all the necessary environment variables set.
If not, then add the fully qualified path to perl.exe (e.g. C:\program files\perl\bin\perl.exe d:\util\demo.pl). If this doesn't help, you also have to set all the environment variables (see "if yes").