Sidekiq server is not processing scheduled jobs when started using systemd - JRuby

I have a Cuba application that I want to use Sidekiq with.
This is how I set up the config.ru:
require './app'
require 'sidekiq'
require 'sidekiq/web'
environment = ENV['RACK_ENV'] || "development"
config_vars = YAML.load_file("./config.yml")[environment]
Sidekiq.configure_client do |config|
  config.redis = { :url => config_vars["redis_uri"] }
end
Sidekiq.configure_server do |config|
  config.redis = { url: config_vars["redis_uri"] }
  config.average_scheduled_poll_interval = 5
end
# run Cuba
run Rack::URLMap.new('/' => Cuba, '/sidekiq' => Sidekiq::Web)
I started Sidekiq using systemd. This is the systemd unit file, which I adapted from the sidekiq.service example on the Sidekiq site:
#
# systemd unit file for CentOS 7, Ubuntu 15.04
#
# Customize this file based on your bundler location, app directory, etc.
# Put this in /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# See Inspeqtor's Systemd wiki page for more detail about Systemd:
# https://github.com/mperham/inspeqtor/wiki/Systemd
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
# See these pages for lots of options:
# http://0pointer.de/public/systemd-man/systemd.service.html
# http://0pointer.de/public/systemd-man/systemd.exec.html
[Service]
Type=simple
Environment=RACK_ENV=development
WorkingDirectory=/media/temp/bandmanage/repos/fall_prediction_verification
# If you use rbenv:
#ExecStart=/bin/bash -lc 'pwd && bundle exec sidekiq -e production'
ExecStart=/home/froy001/.rvm/wrappers/fall_prediction/bundle exec "sidekiq -r app.rb -L log/sidekiq.log -e development"
# If you use the system's ruby:
#ExecStart=/usr/local/bin/bundle exec sidekiq -e production
User=root
Group=root
UMask=0002
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
The code calling the worker is:
raw_msg = JSON.parse(req.body.read, {:symbolize_names => true})
if raw_msg
  ts = raw_msg[:ts]
  waiting_period = (1000*60*3) # wait 3 min before checking
  perform_at_time = Time.at((ts + waiting_period)/1000).utc
  FallVerificationWorker.perform_at((0.5).minute.from_now, raw_msg)
  my_res = { result: "success", status: 200}.to_json
  res.status = 200
  res.write my_res
else
  my_res = { result: "not found", status: 404}.to_json
  res.status = 404
  res.write my_res
end
I am only using the default queue.
My problem is that the job is not being processed at all.

After you run systemctl enable sidekiq (so that it starts at boot) and systemctl start sidekiq (so that it starts immediately), you should have logs to review that will provide some detail about any failure to start:
sudo journalctl -u sidekiq
Review the logs, review the systemd docs, and adjust your unit file as needed. You can find all the installed systemd documentation with apropos systemd. Some of the most useful man pages to review are systemd.service, systemd.exec, and systemd.unit.
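For example, a minimal debugging loop (a sketch, assuming the unit is named sidekiq as above) is to follow the journal in one terminal while restarting the service in another, reloading unit definitions after every edit:
# follow the unit's log output live
sudo journalctl -u sidekiq -f
# pick up unit-file edits, then restart and check the result
sudo systemctl daemon-reload
sudo systemctl restart sidekiq
systemctl status sidekiq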

Related

OpenShift upgrade error 4.11.x -> 4.12.2 Marking Degraded due to: unexpected on-disk state validating against rendered-worker

I'm administering a Red Hat OpenShift cluster and upgrading from 4.10.x -> 4.11.x -> 4.12.2.
There are 3 masters and 7 worker nodes.
All 3 masters are updated.
3 of the 8 workers are updated.
The upgrade is now stuck on worker0 with:
oc logs machine-config-daemon-4bs9x -n openshift-machine-config-operator
< snip >
I0216 21:00:08.555947 3136 daemon.go:1255] Current config: rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.555986 3136 daemon.go:1256] Desired config: rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
I0216 21:00:08.555992 3136 daemon.go:1258] state: Degraded
I0216 21:00:08.566365 3136 update.go:2089] Running: rpm-ostree cleanup -r
Deployments unchanged.
I0216 21:00:08.647332 3136 update.go:2104] Disk currentConfig rendered-worker-263c6ea5fafb6f1da35a31749a1180d7 overrides node's currentConfig annotation rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.651201 3136 daemon.go:1564] Validating against pending config rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
E0216 21:00:10.291740 3136 writer.go:200] Marking Degraded due to: unexpected on-disk state validating against rendered-worker-263c6ea5fafb6f1da35a31749a1180d7: expected target osImageURL "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee", have "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17" ("b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f")
I've had this problem before and followed the Red Hat solutions by running the following commands, but this is now failing:
oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Version: 412.86.202301311551-0 (2023-01-31T15:54:05Z)
sh-4.4#
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee"
I0216 21:02:54.449270 3962714 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-821872843 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee
I0216 21:03:48.349962 3962714 rpm-ostree.go:209] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0216 21:03:49.926169 3962714 rpm-ostree.go:246] No com.coreos.ostree-commit label found in metadata! Inspecting...
I0216 21:03:49.926234 3962714 rpm-ostree.go:412] Running captured: ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo
error: error running ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo: exit status 1
error: opening repo: opendir(/run/mco-machine-os-content/os-content-821872843/srv/repo): No such file or directory
sh-4.4#
After a reboot and a retry, I'm now getting:
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
I0217 19:10:06.928154 1443914 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee
error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
W0217 19:10:07.176459 1443914 run.go:45] nice failed: running nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee failed: error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
: exit status 1; retrying...
^C
I tried this:
/run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
expecting this result (from a previous upgrade problem):
sh-4.4# chroot /host
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17"
I0208 21:50:00.408235 2962835 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-3432684387 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0208 21:50:29.727695 2962835 rpm-ostree.go:353] Running captured: rpm-ostree status --json
I0208 21:50:29.780350 2962835 rpm-ostree.go:261] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:7c252d64354d207cd7fb2a6e2404e611a29bf214f63a97345dee1846055c15d8
I0208 21:50:31.456928 2962835 rpm-ostree.go:293] Pivoting to: 411.86.202301242231-0 (b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f)
I0208 21:50:31.456966 2962835 rpm-ostree.go:325] Executing rebase from repo path /run/mco-machine-os-content/os-content-3432684387/srv/repo with customImageURL pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 and checksum b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f
I0208 21:50:31.457048 2962835 update.go:1972] Running: rpm-ostree rebase --experimental /run/mco-machine-os-content/os-content-3432684387/srv/repo:b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f --custom-origin-url pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 --custom-origin-description Managed by machine-config-operator
0 metadata, 0 content objects imported; 0 bytes content written
Staging deployment... done
Upgraded:
NetworkManager 1:1.30.0-16.el8_4 -> 1:1.36.0-12.el8_6
< snip >
zlib 1.2.11-18.el8_4 -> 1.2.11-19.el8_6
Removed:
ModemManager-glib-1.10.8-2.el8.x86_64
libmbim-1.20.2-1.el8.x86_64
libqmi-1.24.0-1.el8.x86_64
openvswitch2.16-2.16.0-108.el8fdp.x86_64
redhat-release-coreos-410.84-2.el8.x86_64
Added:
WALinuxAgent-udev-2.3.0.2-2.el8_6.3.noarch
glibc-gconv-extra-2.28-189.5.el8_6.x86_64
libbpf-0.4.0-3.el8.x86_64
openvswitch2.17-2.17.0-67.el8fdp.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64
shadow-utils-subid-2:4.6-16.el8.x86_64
Run "systemctl reboot" to start a reboot
sh-4.4# systemctl reboot
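A starting point for untangling the "Disk currentConfig overrides node's currentConfig annotation" message above (a sketch, not an official Red Hat procedure) is to compare what the Machine Config Operator thinks the node is running with the rendered configs and pool status:
# the MCO's view of this node: current/desired config and state annotations
oc describe node worker0.xx.com | grep machineconfiguration.openshift.io
# confirm both rendered-worker configs from the log actually exist
oc get machineconfig | grep rendered-worker
# overall pool and operator status
oc get mcp worker
oc get co machine-config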

run realize start for a Go program, outputs exec: not started

When I run realize start for a Go program, I get this error:
[14:55:13][V2-USER-API.YUMMY.ID] : Watching 159 file/s 118 folder/s
[14:55:13][V2-USER-API.YUMMY.ID] : Install started
[14:55:13][V2-USER-API.YUMMY.ID] : Install
exec: not started
I have set up my .realize.yaml file like this:
settings:
  legacy:
    force: false
    interval: 0s
schema:
  - name: v2-user-api.yummy.id
    path: ./cmd/server
    commands:
      run:
        status: true
    watcher:
      extensions:
        - go
      paths:
        - ../../
      ignored_paths:
        - .git
        - .realize
        - vendor
but I get the error after running realize start.
This script works for me:
#!/usr/bin/env bash
export GO111MODULE=off
cd ~/
go get github.com/oxequa/realize
cd /go/src/github.com/oxequa/realize && \
git fetch && \
git checkout v2.0.2 && \
go get github.com/oxequa/realize
RV=$(realize --version)
echo "Realize installed #: $RV"
export GO111MODULE=on
Use realize version v2.0.2.
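A usage sketch, assuming the snippet above is saved as install-realize.sh (a file name chosen here for illustration):
chmod +x install-realize.sh
./install-realize.sh    # should end with "Realize installed #: ..."
cd /path/to/your/project
realize start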

Why is Celery not working on Elastic Beanstalk?

I have an application that runs well with Celery locally, but when I deploy it to Elastic Beanstalk, Celery seems to shut down or not run my task. I am using supervisord to run Celery.
This is my configuration for supervisord.
I also set a global env var C_FORCE_ROOT=true.
Error: 2020-12-21 04:49:56,076 INFO waiting for app, celery-worker to die [2020-12-21 04:49:57,732: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
Unrecoverable error: WorkerLostError('Could not start worker processes')
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/celery/worker/worker.py", line 208, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 369, in start
    return self.obj.start()
  File "/usr/local/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 318, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.8/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 599, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.8/site-packages/celery/worker/loops.py", line 59, in asynloop
    raise WorkerLostError('Could not start worker processes')
billiard.exceptions.WorkerLostError: Could not start worker processes
[supervisord]
nodaemon=true
[program:app]
command = gunicorn -b 0.0.0.0:5000 --worker-class gevent application.app:app
user=root
directory = /usr/src/app/restful
priority = 900
autostart=true
autorestart = true
stopsignal = TERM
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
[program:celery-worker]
command= python -m celery worker -A application.libs.celery_config.celery --loglevel=DEBUG --uid=nobody --gid=nogroup
user=root
directory = /usr/src/app/restful
autostart=true
autorestart = false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
[program:celery-beat]
command= python -m celery beat -A application.libs.celery_config.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG
user=root
directory = /usr/src/app/restful
autostart=true
autorestart = false
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
stdin_open = true
tty=true
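A quick way to see the real failure (a sketch, reusing the command and directory from the config above) is to check supervisord's view of the process and then run the worker command by hand inside the instance or container:
# is celery-worker RUNNING, EXITED, or FATAL?
supervisorctl status
# run the worker interactively to see the full error on the console
cd /usr/src/app/restful
python -m celery worker -A application.libs.celery_config.celery --loglevel=DEBUG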

lighttpd daemon failed to start after adding python support to read from cgi-bin

Getting Python to run from cgi-bin causes the lighttpd daemon to fail to start:
$HTTP["url"] =~ "^/cgi-bin/" {
  alias.url += ( "/cgi-bin/" => "/var/www/cgi-bin" )
  cgi.assign = (".py" => "/usr/bin/python")
}
Am I doing something wrong?
I also have the following added at the beginning of /etc/lighttpd/lighttpd.conf:
server.modules = (
"mod_indexfile",
"mod_setenv",
"mod_access",
"mod_alias",
"mod_redirect",
"mod_cgi"
)
Look in the lighttpd error log and review the trace for what you broke.
Alternatively, run lighttpd pre-flight tests on your config:
lighttpd -tt -f /etc/lighttpd/lighttpd.conf
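If the pre-flight test passes but the service still will not start, the service manager's log and the error log usually show the actual startup failure (a sketch; the error-log path is the typical Debian/Ubuntu default and may differ on your system):
# on systemd-based systems
sudo journalctl -u lighttpd -e
# lighttpd's own error log
sudo tail -n 50 /var/log/lighttpd/error.log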

Better script to restart mysql on Ubuntu 8.04

When I run sudo /etc/init.d/mysql restart on Ubuntu 8.04.2, sometimes a mysqld_safe process remains, eating 99% of CPU and making the machine practically unusable.
Is there a better way to restart mysql? I thought about writing a script:
sudo /etc/init.d/mysql stop
sleep 10
sudo killall mysqld_safe
sudo /etc/init.d/mysql start
But this would be an evil workaround (and the script is just a quick sketch).
I googled and found that mysqld_safe is a wrapper script which starts mysqld and makes sure it gets restarted if it should die, so there should be a better way to restart the thing.
I also found that this is a common problem on this Ubuntu version. Is Debian/Ubuntu doing it wrong at this point? The /etc/init.d script looks quite sophisticated, and it deals with mysqld_safe as well, but my skills are not good enough to understand it fully; still, this seems like the best place to improve. This is a paste of the version on my machine (which is untouched):
#!/bin/bash
#
### BEGIN INIT INFO
# Provides: mysql
# Required-Start: $remote_fs $syslog mysql-ndb
# Required-Stop: $remote_fs $syslog mysql-ndb
# Should-Start: $network $named $time
# Should-Stop: $network $named $time
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start and stop the mysql database server daemon
# Description: Controls the main MySQL database server daemon "mysqld"
# and its wrapper script "mysqld_safe".
### END INIT INFO
#
set -e
set -u
${DEBIAN_SCRIPT_DEBUG:+ set -v -x}
test -x /usr/sbin/mysqld || exit 0
. /lib/lsb/init-functions
SELF=$(cd $(dirname $0); pwd -P)/$(basename $0)
CONF=/etc/mysql/my.cnf
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
# priority can be overriden and "-s" adds output to stderr
ERR_LOGGER="logger -p daemon.err -t /etc/init.d/mysql -i"
# Safeguard (relative paths, core dumps..)
cd /
umask 077
# mysqladmin likes to read /root/.my.cnf. This is usually not what I want
# as many admins e.g. only store a password without a username there and
# so break my scripts.
export HOME=/etc/mysql/
## Fetch a particular option from mysql's invocation.
#
# Usage: void mysqld_get_param option
mysqld_get_param() {
/usr/sbin/mysqld --print-defaults \
| tr " " "\n" \
| grep -- "--$1" \
| tail -n 1 \
| cut -d= -f2
}
## Do some sanity checks before even trying to start mysqld.
sanity_checks() {
# check for config file
if [ ! -r /etc/mysql/my.cnf ]; then
log_warning_msg "$0: WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz"
echo "WARNING: /etc/mysql/my.cnf cannot be read. See README.Debian.gz" | $ERR_LOGGER
fi
# check for diskspace shortage
datadir=`mysqld_get_param datadir`
if LC_ALL=C BLOCKSIZE= df --portability $datadir/. | tail -n 1 | awk '{ exit ($4>4096) }'; then
log_failure_msg "$0: ERROR: The partition with $datadir is too full!"
echo "ERROR: The partition with $datadir is too full!" | $ERR_LOGGER
exit 1
fi
}
## Checks if there is a server running and if so if it is accessible.
#
# check_alive insists on a pingable server
# check_dead also fails if there is a lost mysqld in the process list
#
# Usage: boolean mysqld_status [check_alive|check_dead] [warn|nowarn]
mysqld_status () {
ping_output=`$MYADMIN ping 2>&1`; ping_alive=$(( ! $? ))
ps_alive=0
pidfile=`mysqld_get_param pid-file`
if [ -f "$pidfile" ] && ps `cat $pidfile` >/dev/null 2>&1; then ps_alive=1; fi
if [ "$1" = "check_alive" -a $ping_alive = 1 ] ||
[ "$1" = "check_dead" -a $ping_alive = 0 -a $ps_alive = 0 ]; then
return 0 # EXIT_SUCCESS
else
if [ "$2" = "warn" ]; then
echo -e "$ps_alive processes alive and '$MYADMIN ping' resulted in\n$ping_output\n" | $ERR_LOGGER -p daemon.debug
fi
return 1 # EXIT_FAILURE
fi
}
#
# main()
#
case "${1:-''}" in
'start')
sanity_checks;
# Start daemon
log_daemon_msg "Starting MySQL database server" "mysqld"
if mysqld_status check_alive nowarn; then
log_progress_msg "already running"
log_end_msg 0
else
/usr/bin/mysqld_safe > /dev/null 2>&1 &
# 6s was reported in #352070 to be too few when using ndbcluster
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
sleep 1
if mysqld_status check_alive nowarn ; then break; fi
log_progress_msg "."
done
if mysqld_status check_alive warn; then
log_end_msg 0
# Now start mysqlcheck or whatever the admin wants.
output=$(/etc/mysql/debian-start)
[ -n "$output" ] && log_action_msg "$output"
else
log_end_msg 1
log_failure_msg "Please take a look at the syslog"
fi
fi
# Some warnings
if $MYADMIN variables | egrep -q have_bdb.*YES; then
echo "BerkeleyDB is obsolete, see /usr/share/doc/mysql-server-5.0/README.Debian.gz" | $ERR_LOGGER -p daemon.info
fi
if [ -f /etc/mysql/debian-log-rotate.conf ]; then
echo "/etc/mysql/debian-log-rotate.conf is obsolete, see /usr/share/doc/mysql-server-5.0/NEWS.Debian.gz" | $ERR_L
fi
;;
'stop')
# * As a passwordless mysqladmin (e.g. via ~/.my.cnf) must be possible
# at least for cron, we can rely on it here, too. (although we have
# to specify it explicit as e.g. sudo environments points to the normal
# users home and not /root)
log_daemon_msg "Stopping MySQL database server" "mysqld"
if ! mysqld_status check_dead nowarn; then
set +e
shutdown_out=`$MYADMIN shutdown 2>&1`; r=$?
set -e
if [ "$r" -ne 0 ]; then
log_end_msg 1
[ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
log_daemon_msg "Killing MySQL database server by signal" "mysqld"
killall -15 mysqld
server_down=
for i in 1 2 3 4 5 6 7 8 9 10; do
sleep 1
if mysqld_status check_dead nowarn; then server_down=1; break; fi
done
if test -z "$server_down"; then killall -9 mysqld; fi
fi
fi
if ! mysqld_status check_dead warn; then
log_end_msg 1
log_failure_msg "Please stop MySQL manually and read /usr/share/doc/mysql-server-5.0/README.Debian.gz!"
exit -1
else
log_end_msg 0
fi
;;
'restart')
set +e; $SELF stop; set -e
$SELF start
;;
'reload'|'force-reload')
log_daemon_msg "Reloading MySQL database server" "mysqld"
$MYADMIN reload
log_end_msg 0
;;
'status')
if mysqld_status check_alive nowarn; then
log_action_msg "$($MYADMIN version)"
else
log_action_msg "MySQL is stopped."
exit 3
fi
;;
*)
echo "Usage: $SELF start|stop|restart|reload|force-reload|status"
exit 1
;;
esac
I found many hints, but I would like this resolved to a certain degree of reliability for production servers.
Edit: It seems to be exactly this unsolved bug.
Maybe it is this bug from the MySQL site.
This seems related or identical.
Some people talk of a race condition with two instances of mysqld_safe. Others suggest commenting out the sanity check in the startup script.
I would try to figure out what is causing the CPU issue, rather than investigate how to re-write the startup script. The startup script is fairly standard and should work well in a production environment.
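A minimal way to start that investigation (a sketch; the log paths are the usual Ubuntu defaults and may differ) is to find out whether it is mysqld_safe itself or a respawning mysqld that is spinning, and what it is doing:
# which process is actually burning the CPU?
ps -eo pid,ppid,pcpu,etime,cmd | grep -E '[m]ysqld(_safe)?'
# watch what the busy process is doing (replace 1234 with the PID from above)
sudo strace -p 1234 -f
# look for a crash/restart loop in the logs
sudo tail -n 100 /var/log/mysql/error.log
sudo grep mysqld /var/log/syslog | tail -n 50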