nginx error message while deploying rails app - mysql

This is my first time deploying an application.
I am working on a Ruby on Rails app (latest version) and following this tutorial: Deploy Ruby On Rails on Ubuntu 16.04 Xenial Xerus.
Everything was going fine, but when I restart the application with touch my_app_name/current/tmp/restart.txt, I get the nginx error attached below.
I pulled the last lines of the nginx error log with:
sudo tail -n 20 /var/log/nginx/error.log
and got the following:
[ N 2017-10-08 10:02:46.2189 29260/T6 Ser/Server.h:531 ]: [ServerThr.1] Shutdown finished
[ N 2017-10-08 10:02:46.2192 29260/T1 age/Cor/CoreMain.cpp:917 ]: Checking whether to disconnect long-running connections for process 30514, application /home/deploy/myapp/current/public (production)
[ N 2017-10-08 10:02:46.2274 29266/T3 age/Ust/UstRouterMain.cpp:430 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ N 2017-10-08 10:02:46.2279 29266/T1 age/Ust/UstRouterMain.cpp:500 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ N 2017-10-08 10:02:46.2281 29266/T5 Ser/Server.h:886 ]: [UstRouterApiServer] Freed 0 spare client objects
[ N 2017-10-08 10:02:46.2282 29266/T5 Ser/Server.h:531 ]: [UstRouterApiServer] Shutdown finished
[ N 2017-10-08 10:02:46.2313 29266/T3 Ser/Server.h:531 ]: [UstRouter] Shutdown finished
[ N 2017-10-08 10:02:46.3166 29266/T1 age/Ust/UstRouterMain.cpp:531 ]: Passenger UstRouter shutdown finished
[ N 2017-10-08 10:02:46.7083 29260/T1 age/Cor/CoreMain.cpp:1068 ]: Passenger core shutdown finished
2017/10/08 10:02:47 [info] 30632#30632: Using 32768KiB of shared memory for nchan in /etc/nginx/nginx.conf:71
[ N 2017-10-08 10:02:47.8959 30639/T1 age/Wat/WatchdogMain.cpp:1283 ]: Starting Passenger watchdog...
[ N 2017-10-08 10:02:47.9446 30642/T1 age/Cor/CoreMain.cpp:1083 ]: Starting Passenger core...
[ N 2017-10-08 10:02:47.9459 30642/T1 age/Cor/CoreMain.cpp:248 ]: Passenger core running in multi-application mode.
[ N 2017-10-08 10:02:47.9815 30642/T1 age/Cor/CoreMain.cpp:830 ]: Passenger core online, PID 30642
[ N 2017-10-08 10:02:48.0532 30648/T1 age/Ust/UstRouterMain.cpp:537 ]: Starting Passenger UstRouter...
[ N 2017-10-08 10:02:48.0571 30648/T1 age/Ust/UstRouterMain.cpp:350 ]: Passenger UstRouter online, PID 30648
[ N 2017-10-08 10:02:50.4687 30642/T8 age/Cor/SecurityUpdateChecker.h:374 ]: Security update check: no update found (next check in 24 hours)
App 30667 stdout:
App 30737 stdout:

@dstull, I do not know how to thank you, brother; you got the point. It was an issue with my Rails app: I had only finished the app at the development level, and I was using a Bootstrap theme that I bought. The app was trying to call methods on nil values, since nothing had been initialized yet.
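For anyone hitting the same symptom: the Passenger messages above just show a normal restart; the actual exception lands in the Rails application's own log. A quick way to see it (assuming the standard Rails layout under the deploy path from the log above):

# The nginx/Passenger log only shows the restart; the app's exception
# is in the Rails production log:
tail -n 50 /home/deploy/myapp/current/log/production.log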

Related

How to manually recreate the bootstrap client certificate for OpenShift 3.11 master?

Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
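The journal names the client certificate file; its validity window can be confirmed directly with openssl (a quick check, assuming openssl is available on the master):

# Print the notBefore/notAfter dates of the kubelet client certificate
openssl x509 -noout -dates -in /etc/origin/node/certificates/kubelet-client-current.pem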
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contain an expired certificate, and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation (Redeploying Certificates), but this fails while detecting an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4, Recovering from expired control plane certificates, but that does not apply to 3.11, and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate has expired; I "cheated" by changing the OS date back to before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sáb 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
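For completeness, the "cheat" is just winding the system clock back to before the expiry. This is only a stop-gap, and time synchronization has to be stopped first or the clock is immediately corrected again (a sketch):

# Stop time sync so the manual date sticks (whichever daemon is in use)
systemctl stop chronyd 2>/dev/null || systemctl stop ntpd 2>/dev/null
# Set a date before the certificate expiry (2020-02-20 in this case)
date -s "2020-02-19 12:00:00"
systemctl start origin-node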
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml

Server denies request due to wrong Domain coming from Fritzbox

I am trying to reach my local server via IPv6, which fails due to certificate issues.
For example, the Nextcloud client gives the following error:
$ nextcloudcmd --trust --logdebug Nextcloud https://nextcloud.domain.de
10-20 12:47:43:798 [ info nextcloud.sync.accessmanager ]: 2 "" "https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json" has X-Request-ID "19a2a694-1912-4813-b3f5-2d4d5720fa80"
10-20 12:47:43:799 [ info nextcloud.sync.networkjob ]: OCC::JsonApiJob created for "https://nextcloud.domain.de" + "ocs/v1.php/cloud/capabilities" ""
10-20 12:47:43:955 [ info nextcloud.sync.account ]: "SSL-Errors happened for url \"https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\" \tError in QSslCertificate(\"3\", \"f9:8e:0f:4f:bd:4b:a3:5f\", \"hkXxG7tBu+SGaRSBZ9gRyw==\", \"<hostname>.domain.de\", \"<hostname>.domain.de\", QMap((1, \"www.fritz.nas\")(1, \"fritz.nas\")(1, \"<WiFi-Name>\")(1, \"www.myfritz.box\")(1, \"myfritz.box\")(1, \"www.fritz.box\")(1, \"fritz.box\")(1, \"<hostname>.domain.de\")), QDateTime(2019-10-19 12:32:25.000 UTC Qt::UTC), QDateTime(2038-01-15 12:32:25.000 UTC Qt::UTC)) : \"The host name did not match any of the valid hosts for this certificate\" ( \"The host name did not match any of the valid hosts for this certificate\" ) \n \tError in QSslCertificate(\"3\", \"f9:8e:0f:4f:bd:4b:a3:5f\", \"hkXxG7tBu+SGaRSBZ9gRyw==\", \"<hostname>.domain.de\", \"<hostname>.domain.de\", QMap((1, \"www.fritz.nas\")(1, \"fritz.nas\")(1, \"<WiFi-Name>\")(1, \"www.myfritz.box\")(1, \"myfritz.box\")(1, \"www.fritz.box\")(1, \"fritz.box\")(1, \"<hostname>.domain.de\")), QDateTime(2019-10-19 12:32:25.000 UTC Qt::UTC), QDateTime(2038-01-15
12:32:25.000 UTC Qt::UTC)) : \"The certificate is self-signed, and untrusted\" ( \"The certificate is self-signed, and untrusted\" ) \n " Certs are known and trusted! This is not an actual error.
10-20 12:47:43:964 [ warning nextcloud.sync.networkjob ]: QNetworkReply::ProtocolInvalidOperationError "Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\"" QVariant(int, 400)
10-20 12:47:43:964 [ info nextcloud.sync.networkjob.jsonapi ]: JsonApiJob of QUrl("https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json") FINISHED WITH STATUS "ProtocolInvalidOperationError Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\""
10-20 12:47:43:964 [ warning nextcloud.sync.networkjob.jsonapi ]: Network error: "ocs/v1.php/cloud/capabilities" "Server replied \"400 Bad Request\" to \"GET https://nextcloud.domain.de/ocs/v1.php/cloud/capabilities?format=json\"" QVariant(int, 400)
10-20 12:47:43:964 [ debug default ] [ main(int, char**)::<lambda ]: Server capabilities QJsonObject()
Error connecting to server
I wonder why the Fritzbox answers the request with its own certificate for <hostname>.domain.de instead of nextcloud.domain.de.
Can anyone point me in the right direction?
Okay, I got information from this site (German: https://avm.de/service/fritzbox/fritzbox-7580/wissensdatenbank/publication/show/3525_Zugriff-auf-HTTPS-Server-im-Heimnetz-nicht-moglich#zd), which led me to the following conclusion:
Since there is no NAT for IPv6 addresses and the Fritzbox cannot translate them either, the IPv6 address published in DNS has to be the server's own. One solution I found is ddclient: installed on the GNU/Linux server, it updates the IPv6 address at your DynDNS provider.
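A minimal /etc/ddclient.conf might look like this (a sketch only: the provider, hostname, and interface are placeholders, and the exact IPv6 directives vary between ddclient versions, so check man ddclient):

# /etc/ddclient.conf -- update the AAAA record with the server's own IPv6
protocol=dyndns2                   # most DynDNS providers speak dyndns2
server=update.example-provider.de  # placeholder update server
login=myuser
password='mypassword'
usev6=ifv6, ifv6=eth0              # take the IPv6 address from this interface
nextcloud.domain.de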
But one thing is still open: I cannot get both IPv4 and IPv6 updated.

MySQL shutdown issue on Magento

We have a Magento website. Sometimes it shows the error below:
There has been an error processing your request
Exception printing is disabled by default for security reasons.
Error log record number: 855613014442
Based on our logs, MySQL is going down, as shown below:
2019-06-24T04:44:49.542168Z 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.7.26' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
2019-06-24T04:44:50.594943Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190624 4:44:50
2019-06-24T04:45:11.103402Z 0 [Note] Giving 0 client threads a chance to die gracefully
2019-06-24T04:45:11.103429Z 0 [Note] Shutting down slave threads
2019-06-24T04:45:11.103438Z 0 [Note] Forcefully disconnecting 0 remaining clients
2019-06-24T04:45:11.103444Z 0 [Note] Event Scheduler: Purging the queue. 0 events
2019-06-24T04:45:11.103484Z 0 [Note] Binlog end
We have increased innodb_buffer_pool_size, but we are still facing the same issue.
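(For reference, that change was made in the MySQL configuration; the value below is just an example, not a recommendation:)

# /etc/my.cnf
[mysqld]
innodb_buffer_pool_size = 2G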
I have executed the commands below on the server; here are the outputs:
1) free -m
Output:
              total        used        free      shared  buff/cache   available
Mem:           7819        1430        4688          81        1701        6009
Swap:             0           0           0
2) dmesg | tail -30
Output:
[ 6.222373] [TTM] Initializing pool allocator
[ 6.241079] [TTM] Initializing DMA pool allocator
[ 6.255768] [drm] fb mappable at 0xF0000000
[ 6.259225] [drm] vram aper at 0xF0000000
[ 6.262574] [drm] size 33554432
[ 6.265475] [drm] fb depth is 24
[ 6.268473] [drm] pitch is 3072
[ 6.289079] fbcon: cirrusdrmfb (fb0) is primary device
[ 6.346169] Console: switching to colour frame buffer device 128x48
[ 6.347151] loop: module loaded
[ 6.357709] cirrus 0000:00:02.0: fb0: cirrusdrmfb frame buffer device
[ 6.364646] [drm] Initialized cirrus 1.0.0 20110418 for 0000:00:02.0 on minor 0
[ 6.722341] input: PC Speaker as /devices/platform/pcspkr/input/input4
[ 6.788110] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 6.802845] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
[ 6.841332] cryptd: max_cpu_qlen set to 1000
[ 6.871200] AVX2 version of gcm_enc/dec engaged.
[ 6.873349] AES CTR mode by8 optimization enabled
[ 6.936609] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 6.949717] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null)
[ 6.964446] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 6.984659] alg: No test for __generic-gcm-aes-aesni (__driver-generic-gcm-aes-aesni)
[ 7.084148] intel_rapl: Found RAPL domain package
[ 7.086591] intel_rapl: Found RAPL domain dram
[ 7.088788] intel_rapl: DRAM domain energy unit 15300pj
[ 7.102115] EDAC sbridge: Seeking for: PCI ID 8086:6fa0
[ 7.102119] EDAC sbridge: Ver: 1.1.2
[ 7.175339] ppdev: user-space parallel port driver
[ 10.728980] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 10.772307] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
3) ps auxw | grep mysql
Output:
mysql 5056 2.9 10.8 7009056 871240 ? Sl 12:29 0:12 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
root 5538 0.0 0.0 112708 976 pts/0 S+ 12:36 0:00 grep --color=auto mysql
Does anyone have an idea how to resolve this issue?
Thanks

spring mvc + mybatis + mysql 5.7: saved time is 8 hours behind

spring mvc + mybatis + mysql 5.7 + jdk8
I use MySQL 5.7 to save JSON data.
I use the JDK 8 time API, LocalDateTime now = LocalDateTime.now();, to get the current datetime and save it to the database.
But I found that the time stored in the database is about 8 hours behind.
My troubleshooting process:
1. Code problem? I debugged it, and the time in the object is correct right before it is saved to the database.
2. System time zone problem? I checked the local machine, the server, the database system, and the other machines involved; the time zones are all UTC+8 (I am in China), so no problem there.
3. Checked the SQL printed on the console:
DEBUG [ 2017-08-23 12:42:35 970 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Preparing: insert into log_user_operation (pk_id, user_code, user_name, login_ip, url, operation_type, operation_content, remark, create_time) values (?, ?, ?, ?, ?, ?, ?, ?, ?)
DEBUG [ 2017-08-23 12:42:36 005 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Parameters: null, admin(String), Admin(String), 127.0.0.1(String), http://localhost:8080/bi/login-check(String), SELECT(String), 用户登录(String), (String), 2017-08-23 12:42:32.9(Timestamp)
DEBUG [ 2017-08-23 12:42:36 016 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - <== Updates: 1
DEBUG [ 2017-08-23 12:42:36 020 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Preparing: SELECT LAST_INSERT_ID()
DEBUG [ 2017-08-23 12:42:36 021 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Parameters:
TRACE [ 2017-08-23 12:42:36 039 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.trace(BaseJdbcLogger.java:151) - <== Columns: LAST_INSERT_ID()
TRACE [ 2017-08-23 12:42:36 039 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.trace(BaseJdbcLogger.java:151) - <== Row: 47
DEBUG [ 2017-08-23 12:42:36 042 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - <== Total: 1
DEBUG [ 2017-08-23 12:42:36 047 ]: org.mybatis.spring.SqlSessionUtils.closeSqlSession(SqlSessionUtils.java:193) - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#516c8654]
DEBUG [ 2017-08-23 12:42:36 048 ]: org.springframework.jdbc.datasource.DataSourceUtils.doReleaseConnection(DataSourceUtils.java:332) - Returning JDBC Connection to DataSource
Everything looks OK to me, but what the database saves is wrong.
The problem that had been bothering me for two days was finally settled this evening.
It is due to the URL specification of the newer MySQL driver package:
jdbc:mysql://localhost:3306/ss?characterEncoding=utf8&useSSL=true&serverTimezone=UTC&nullNamePatternMatchesAll=true
Changing serverTimezone=UTC to serverTimezone=Hongkong solves the problem.
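The working URL then looks like this (Hongkong is the zone name from the fix above; any UTC+8 zone such as Asia/Shanghai should behave the same):

jdbc:mysql://localhost:3306/ss?characterEncoding=utf8&useSSL=true&serverTimezone=Hongkong&nullNamePatternMatchesAll=true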

Could not initialize corosync configuration API error 12

I am unable to initialize corosync running inside a Docker container. The corosync-cfgtool -s command yields the following:
Could not initialize corosync configuration API error 12
The /etc/corosync/corosync.conf file has the following:
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 127.0.0.1
        mcastaddr: 239.255.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
The /var/log/corosync.log file shows the following:
May 02 20:13:22 corosync [MAIN ] Could not set SCHED_RR at priority 99: Operation not permitted (1)
May 02 20:13:22 corosync [MAIN ] Could not lock memory of service to avoid page faults: Cannot allocate memory (12)
May 02 20:13:22 corosync [MAIN ] Corosync Cluster Engine ('1.4.6'): started and ready to provide service.
May 02 20:13:22 corosync [MAIN ] Corosync built-in features: nss
May 02 20:13:22 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
May 02 20:13:22 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
May 02 20:13:22 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
I was running the following in a bash script:
service corosync start
service corosync status
corosync-cfgtool -s
Apparently it was running too quickly and not giving corosync enough time to initialize. Changing the script to the following seems to have worked:
service corosync start
service corosync status
sleep 5
corosync-cfgtool -s
I now see the following output from corosync-cfgtool -s:
Printing ring status.
Local node ID 16777343
RING ID 0
id = 127.0.0.1
status = ring 0 active with no faults
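As a side note, a fixed sleep 5 can still race on a slow start; polling until the tool succeeds is slightly more robust (a sketch, assuming the same bash script):

service corosync start

# Poll instead of sleeping a fixed time: retry up to 30 times, one second
# apart, until corosync-cfgtool can reach the corosync IPC socket.
for i in $(seq 1 30); do
    corosync-cfgtool -s >/dev/null 2>&1 && break
    sleep 1
done
corosync-cfgtool -s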