Long-running process in OpenShift - like Solr with sunspot-rails

I know there are quite a few OpenShift fans out there, and given the list of features OpenShift gives you, I don't blame them. So this question is for those folks: I have the Solr full-text search engine running on OpenShift.
I'm using sunspot_rails to connect to Solr and create indexes.
Everything works well for a while and all the indexes get created properly, but we've found that after a while the connection to Solr keeps getting lost.
So I'm assuming that OpenShift does not allow long-running processes like Solr and kills them after some period of time.
Am I correct in this belief?

You can use nohup to launch your script from post_start_* and post_stop_* scripts in .openshift/action_hooks.
The question "Redhat Openshift - Cron Runtime - Is there a default time for how long cron executes?" shows how to do it with cron.
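For illustration, a post_start hook might look something like the sketch below; the hook file name, the Solr location under $OPENSHIFT_DATA_DIR, and the start command are assumptions that depend on how you bundled Solr into your gear:

```bash
#!/bin/bash
# .openshift/action_hooks/post_start_ruby-1.9  (hypothetical name; match your cartridge)
# Start Solr detached with nohup so it keeps running after the hook script exits.

SOLR_DIR="$OPENSHIFT_DATA_DIR/solr"        # assumed location of the bundled Solr
LOG_FILE="$OPENSHIFT_DATA_DIR/solr.log"

cd "$SOLR_DIR" || exit 1
nohup java -jar start.jar > "$LOG_FILE" 2>&1 &
echo $! > "$OPENSHIFT_DATA_DIR/solr.pid"   # remember the PID for the stop hook
```

A matching post_stop_* hook would read the PID file back and kill the process, so gear restarts don't leave orphaned Solr instances behind.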

Related

Can I run a Docker container to use in another Google Cloud Build step?

I'd like to run a fresh MySQL instance in a Docker container as a Cloud Build step, and then access that MySQL DB in a later step to run unit tests against. Is this possible? It appears I can run a Docker container in a build step, but the step doesn't complete until the container exits. I'd like this MySQL container to remain running until after the final build step completes.
FWIW I'd like to use this on a Ruby on Rails project to run rspec tests. I currently use a CloudSQL instance to run tests against, but it's pretty slow, even though the same tests run quickly locally. Changing the machine-type for the Cloud Builder to something powerful didn't help, so I assume latency is my biggest killer, which is why I want to try a peer Container MySQL instance instead.
It turns out there are at least 2 ways to skin this cat:
1. Use the docker-compose cloud builder to spin up multiple containers in one step: MySQL and a test runner. The downside here is that the step will never complete, since MySQL runs in the background and never exits. I suppose one could write a wrapper to make it exit after a few minutes.
2. You can actually start a container with -d in an early build step and make sure it's on the cloudbuild Docker network; later steps can then connect to it as long as they're also on the cloudbuild network. Essentially the MySQL step "completes" quickly because it just starts the server in daemon mode and moves on to the next build step. Later, the test runner runs the tests against the fresh DB, and its build step completes when the tests are actually done (see the sketch below).
I went with option 2, and my 16-minute unit tests (previously run against CloudSQL in the same region) shrank to about 1.5 minutes using the dockerized MySQL server.
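A rough cloudbuild.yaml sketch of option 2 is shown below; the image tags, database credentials, and the rspec invocation are illustrative assumptions, not taken from the answer above:

```yaml
steps:
  # Start MySQL detached on the "cloudbuild" network; this step returns as soon
  # as the daemonized container is up, so the build moves on immediately.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['run', '-d', '--network=cloudbuild', '--name=testdb',
           '-e', 'MYSQL_ROOT_PASSWORD=root', '-e', 'MYSQL_DATABASE=app_test',
           'mysql:5.7']

  # Run the tests from a container on the same network, reaching the database
  # at the hostname "testdb". This step ends when the test run ends.
  - name: 'ruby:2.6'
    entrypoint: 'bash'
    args: ['-c', 'bundle install && DB_HOST=testdb bundle exec rspec']
```

In practice you may also need a short wait loop (or a mysqladmin ping retry) at the start of the test step, since MySQL takes a few seconds to accept connections after the container starts.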
AFAIK, you can't do this. Each step is independent and you can't run a background step.
My solution here is to run MySQL as a background process in the same step as your unit tests. Quite boring (because you have to install and run MySQL in that step), but I don't have a better solution.
For easier reuse, you can create your own custom builder for Cloud Build.

Flask Internal Web App Dies after a while running on Python

I have a web app meant for internal use only.
I wrote this app in Python 2.7 using Flask and flask-mysql. The app pulls data from our MySQL database and displays some fields in an HTML table. (It is very simple.)
Now, I am not a programmer and don't know much about debugging, but I have this problem: the app runs perfectly, but when left alone for a while it dies (mostly at night or on weekends, when I guess it is not being used).
Is there any way to keep it running, or to have it restart automatically when it dies?
I will paste my code if needed.
Arnoux
MySQL invalidates stale connections after 8 hours. Try adding pool_recycle=3600 to your engine configuration. Also, make sure you are properly managing your session. See this previous thread about the "MySQL server has gone away" error:
Does this thread-local Flask-SQLAlchemy session cause a "MySQL server has gone away" error?
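For reference, a minimal sketch of that setting with SQLAlchemy looks like this (the connection URI is a placeholder; with Flask-SQLAlchemy the equivalent config key is SQLALCHEMY_POOL_RECYCLE):

```python
from sqlalchemy import create_engine

# Recycle pooled connections after an hour, well under MySQL's default 8-hour
# wait_timeout, so the app never tries to reuse a connection the server has
# already dropped overnight or over a weekend.
engine = create_engine(
    "mysql://user:password@localhost/mydb",  # placeholder URI
    pool_recycle=3600,
)
```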

Does crontab leave MySQL connections open?

When I run report jobs (getting net sales data from MySQL) via crontab (a bash script), I establish a connection to MySQL and get the required data.
Does this leave MySQL connections open, or are all connections closed on job completion?
UPDATE - I use a bash script to connect to MySQL.
In basic scenarios the MySQL connections will be closed when the mysql client within the bash script finishes running.
If you are running the mysql command line Linux client within your bash script then bash would normally wait for the mysql process to exit before continuing to the end of the script.
There are ways to persist processes beyond the life of the bash script but you haven't mentioned using those in your question.
If you are using a MySQL library that has a close function (most MySQL libraries have this in their API), then you should use it. Although the default process behaviour will probably clean up open connections for you, it helps to get into the habit of closing resources you are not going to need again in your code; this makes it more scalable and also informs other developers of your intended behaviour.
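To make the first case concrete, a report job along these lines (host, credentials, and schema are hypothetical) opens a single connection for the lifetime of the mysql client process and closes it when the client exits:

```bash
#!/bin/bash
# Hypothetical nightly report script invoked from crontab.
# The mysql client connects, runs the query, writes the result, and the
# connection is closed when the client process exits.

mysql -h db.example.com -u report_user -p"$REPORT_DB_PASSWORD" sales <<'SQL' > /tmp/net_sales.tsv
SELECT DATE(order_date) AS day, SUM(amount) AS net_sales
FROM orders
GROUP BY DATE(order_date);
SQL
```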

How to execute MySQL scripts in Cloud Foundry

I have a .sql file (initial SQL scripts). I have recently deployed an application to Cloud Foundry, and I want to run these scripts to make the application work. The scripts will update more than 5 DB tables.
Is there a way to run the MySQL scripts from the Grails application on startup, or is there any provision for running the scripts in Cloud Foundry?
You have several options here.
The first one (which I recommend) is to use something like http://liquibase.org/ (there is a Grails plugin for it: http://grails.org/plugin/liquibase). This tool will make sure that any script you give it runs prior to the app starting, without running the same script twice, etc. This is great for keeping track of your database changes.
This works independently of Cloud Foundry and would help anyone installing your app to have an up-to-date schema.
The second option would be to tunnel to the Cloud Foundry database and run the script against the DB. Have a look at http://docs.cloudfoundry.com/tools/vmc/caldecott.html or, even easier with STS: http://blog.cloudfoundry.com/2012/07/31/cloud-foundry-integration-for-eclipse-now-supports-tunneling-to-services/
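As a rough sketch of the tunnel route (the service name, port, and file name are placeholders), the flow looks something like this:

```bash
# Open a tunnel to the bound MySQL service (vmc prints the local host, port and
# credentials, and can offer to launch a mysql client for you).
vmc tunnel mysql-db

# In another shell, feed your script through the tunnel endpoint it printed:
mysql --protocol=TCP --host=127.0.0.1 --port=10000 \
      --user=<tunnel_user> --password=<tunnel_password> <db_name> < initial_scripts.sql
```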
Yup, what ebottard said! :-) Although personally I would opt for using the tunnel feature in VMC, but that said, I am a Ruby guy!
Be wary of the fact that there are timeouts on queries in MySQL if you are bootstrapping your database with large datasets!

Rails 3.1 - moving data from a local data-processing server to a Heroku production server

I'm building a Rails 3.1 app that requires a load of data to be crunched and processed on a local server (using a bunch of non-Rails tools and writing to MySQL), and then the refined results to be pushed up to a Heroku production server (the front end). Because the data-crunching part of the process needs to run in batches, my first instinct was simply to upload the results table to production using something like "heroku db:push --tables data" - but the problem is that it is slow, and the app is without data for about 40 minutes at a time. The crunching batches need to run about 4 times per day, so it looks like this approach isn't really going to work. Any suggestions for how to speed this process up, or any alternative schemes for getting the data up to the production server less obtrusively? Thanks!
Sounds like you may have to rearchitect. Or what about running your Rails app on EC2 and ditching Heroku? I think Heroku is great if your app is simple or you can make do with the plugins they have, but when the app gets complex it can end up being more of a constraint than a help.
Heroku makes it clear that you cannot access their databases from outside. What you can do, however, if you want to stay on Heroku, is use another database (such as Amazon RDS, or one you roll yourself) and have your application connect to it. Then upload the data to that database directly.
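One way that direct upload could look, as a sketch (the RDS endpoint, credentials, and table name are placeholders): dump only the refined results table locally and pipe it straight into the external database the Heroku app reads from.

```bash
#!/bin/bash
# Hypothetical batch-publish step run after each local crunching batch.
# mysqldump emits DROP TABLE/CREATE TABLE plus INSERTs for just the "data"
# table, so the remote copy is replaced in one pass instead of a 40-minute
# db:push of the whole table over Heroku's tooling.

mysqldump --single-transaction local_crunch_db data \
  | mysql -h mydb.example.rds.amazonaws.com -u app_user -p"$RDS_PASSWORD" production_db
```

If even that brief reload window is too long, load into a staging table first and swap it in with RENAME TABLE, which is close to atomic.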