What does this error mean? [Erlang, mochiweb, MySQL]

I built a comet chat server with Erlang and MochiWeb, and I run "./start-dev.sh" to start the server. After about a month of running I got the following error:
=ERROR REPORT==== 26-Sep-2009::09:21:06 ===
{mochiweb_socket_server,235,
{child_error,
{badmatch,
{error,
[70,97,105,108,101,100,32,115,101,110,100,105,110,103,32,100,
97,116,97,32,111,110,32,115,111,99,107,101,116,32,58,32,
"closed"]}}}}
mysql: fetch "SELECT appKey FROM applications WHERE appID = 1" (id p1)
=CRASH REPORT==== 26-Sep-2009::09:21:10 ===
crasher:
initial call: mochiweb_socket_server:acceptor_loop/1
pid: <0.4271.23>
registered_name: []
exception error: no match of right hand side value
{error,[70,97,105,108,101,100,32,115,101,110,100,105,110,
103,32,100,97,116,97,32,111,110,32,115,111,99,
107,101,116,32,58,32,"closed"]}
in function moonwalker_web:loop/2
in call from mochiweb_http:headers/5
ancestors: [moonwalker_web,moonwalker_sup,<0.52.0>]
messages: []
links: [<0.54.0>,#Port<0.792854>]
dictionary: [{mochiweb_request_body,
<<"appID=1&appKey=keyy&userID=8048943&nickName=bill&buddies=N%3B&timestamp=1253928070154">>},
{mochiweb_request_recv,true},
{mochiweb_request_post,
[{"appID","1"},
{"appKey","key"},
{"userID","8048943"},
{"nickName",[143,229,167,144]},
{"buddies","N;"},
{"timestamp","1253928070154"}]},
{mochiweb_request_path,"/online"}]
trap_exit: false
status: running
heap_size: 2584
stack_size: 24
reductions: 1368
neighbours:
=ERROR REPORT==== 26-Sep-2009::09:21:10 ===
{mochiweb_socket_server,235,
{child_error,
{badmatch,
{error,
[70,97,105,108,101,100,32,115,101,110,100,105,110,103,32,100,
97,116,97,32,111,110,32,115,111,99,107,101,116,32,58,32,
"closed"]}}}}
And if I turn the following numbers into characters
[70,97,105,108,101,100,32,115,101,110,100,105,110,103,32,100,
97,116,97,32,111,110,32,115,111,99,107,101,116,32,58,32,
"closed"]}}}}
they read
Failed sending data on socket : "closed"
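(For reference, this conversion is a one-liner in the Erlang shell, because Erlang strings are just lists of character codes; Bytes below stands in for the full list from the report:

io:format("~s~n", [Bytes]).

The ~s directive prints a possibly nested character list as text.)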
Does that mean I have a problem with the MySQL connection or with the socket?
I don't know whether this error has something to do with "./start-dev.sh" or whether I just have some settings wrong.
What other information should I provide to help diagnose this?
Thanks, and I'm looking forward to your reply.

It looks like somewhere in the loop/2 function you don't handle an {error, Error} return from a function call. The unhandled return value fails a pattern match (the badmatch above), which crashes the process. Without the code it is difficult to say what caused the error return.
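For illustration, a minimal sketch of the pattern the crash suggests (gen_tcp:send/2 is only a stand-in here; without your code I can't say which call actually returns the error). Matching only the success case, like

ok = gen_tcp:send(Socket, Data)

crashes with exactly this kind of badmatch as soon as the call returns {error, Reason}. Handling both outcomes avoids the crash, or at least lets the process exit cleanly:

case gen_tcp:send(Socket, Data) of
    ok ->
        ok;
    {error, Reason} ->
        %% e.g. the peer closed the socket; log it and
        %% stop this process without a crash report
        error_logger:error_msg("send failed: ~p~n", [Reason]),
        exit(normal)
end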

Related

Database connection error while Celery worker remains idle for 24 hours

I have a Django-based web application where I use Kafka to process some orders. I use Celery workers to assign a Kafka consumer to each topic: each consumer is assigned to its Kafka topic as a Celery task. However, after a day or so, when I submit a task I get the following error:
_mysql.connection.query(self, query)
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
The above exception was the direct cause of the following exception:
Below is what my tasks.py file looks like:
# Imports implied by the snippet but not shown in the original excerpt
# (the Kafka client is presumably confluent_kafka, given Consumer/poll/KafkaException;
# logger, BuyOrder, INFURA_MAIN_NET_ETH_URL and processBuyerPayout are defined
# elsewhere in the app):
from threading import Thread

from celery import shared_task
from confluent_kafka import Consumer, KafkaException
from django.core import serializers

@shared_task
def init_kafka_consumer(topic):
    try:
        if topic is None:
            raise Exception("Topic is none, unable to initialize kafka consumer")
        logger.info("Spawning new task to subscribe to topic")
        params = []
        params.append(topic)
        background_thread = Thread(target=sunscribe_consumer, args=params)
        background_thread.start()
    except Exception:
        logger.exception("An exception occurred while reading message from kafka")

def sunscribe_consumer(topic):
    try:
        if topic is None:
            raise Exception("Topic is none, unable to initialize kafka consumer")
        conf = {'bootstrap.servers': "localhost:9092", 'group.id': 'test',
                'session.timeout.ms': 6000, 'auto.offset.reset': 'earliest'}
        c = Consumer(conf)
        logger.info("Subscribing consumer to topic " + str(topic[0]))
        c.subscribe(topic)
        # Read messages from Kafka
        try:
            while True:
                msg = c.poll(timeout=1.0)
                if msg is None:
                    continue
                if msg.error():
                    raise KafkaException(msg.error())
                else:
                    try:
                        objs = serializers.deserialize("json", msg.value())
                        for obj in objs:
                            order = obj.object
                            order = BuyOrder.objects.get(id=order.id)  # Getting an error while accessing DB
                            if order.is_pushed_to_kafka:
                                return
                            order.is_pushed_to_kafka = True
                            order.save()
                            from web3 import HTTPProvider, Web3, exceptions
                            w3 = Web3(HTTPProvider(INFURA_MAIN_NET_ETH_URL))
                            processBuyerPayout(order, w3)
                    except Exception:
                        logger.exception("An exception occurred while de-serializing message")
        except Exception:
            logger.exception("An exception occurred while reading message from kafka")
        finally:
            c.close()
    except Exception:
        logger.exception("An exception occurred while reading message from kafka")
Is there any way I could check whether the database connection is alive as soon as a task is received and, if not, re-establish it?
According to https://github.com/celery/django-celery-results/issues/58#issuecomment-418413369
and the comments above it, putting this code inside your task should help:
from django.db import close_old_connections
close_old_connections()
It closes any stale connection and lets Django open a fresh one on the next query.
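A minimal sketch of where that call could go, reusing init_kafka_consumer from the question (the placement is my assumption, not a tested fix; since the ORM query actually runs in the background thread, the same call at the top of sunscribe_consumer may be needed as well):

from celery import shared_task
from django.db import close_old_connections

@shared_task
def init_kafka_consumer(topic):
    # Discard connections the MySQL server has already dropped
    # (after its wait_timeout), so Django opens a fresh one on
    # the next ORM query instead of failing with error 2006.
    close_old_connections()
    ...  # spawn the consumer thread as in the question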

Couchbase rebalance failing with error - Rebalance exited with reason {badmatch,failed}

I am setting up a cluster. I joined 3 nodes, but while rebalancing I got the error below. I extracted some info from debug.log but am unable to identify the exact issue. I'd appreciate any help.
=========================CRASH REPORT=========================
crasher:
initial call: service_agent:-spawn_connection_waiter/2-fun-0-/0
pid: <0.18486.7>
registered_name: []
exception exit: {no_connection,"index-service_api"}
in function service_agent:wait_for_connection_loop/3 (src/service_agent.erl, line 305)
ancestors: ['service_agent-index',service_agent_children_sup,
service_agent_sup,ns_server_sup,ns_server_nodes_sup,
<0.170.0>,ns_server_cluster_sup,<0.89.0>]
messages: []
links: [<0.18481.7>,<0.18490.7>]
dictionary: []
trap_exit: false
status: running
heap_size: 987
stack_size: 27
reductions: 1195
neighbours:
[ns_server:error,2018-02-12T13:54:43.531-05:00,ns_1#xuodf9.firebrand.com:service_agent-index<0.18481.7>:service_agent:terminate:264]Terminating abnormally
[ns_server:debug,2018-02-12T13:54:43.531-05:00,ns_1#xuodf9.firebrand.com:<0.18487.7>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.18481.7>} exited with reason {linked_process_died,
<0.18486.7>,
{no_connection,
"index-service_api"}}
[error_logger:error,2018-02-12T13:54:43.531-05:00,ns_1#xuodf9.firebrand.com:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server 'service_agent-index' terminating
** Last message in was {'EXIT',<0.18486.7>,
{no_connection,"index-service_api"}}
** When Server state == {state,index,
{dict,6,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
{{[[{uuid,<<"55a14ec6b06d72205b3cd956e6de60e7">>}|
'ns_1#xuodf7.firebrand.com']],
[],
[[{uuid,<<"c5e67322a74826bef8edf27d51de3257">>}|
'ns_1#xuodf8.firebrand.com']],
[],
[[{uuid,<<"3b55f7739e3fe85127dcf857a5819bdf">>}|
'ns_1#xuodf9.firebrand.com']],
[],
[[{node,'ns_1#xuodf7.firebrand.com'}|
<<"55a14ec6b06d72205b3cd956e6de60e7">>],
[{node,'ns_1#xuodf8.firebrand.com'}|
<<"c5e67322a74826bef8edf27d51de3257">>],
[{node,'ns_1#xuodf9.firebrand.com'}|
<<"3b55f7739e3fe85127dcf857a5819bdf">>]],
[],[],[],[],[],[],[],[],[]}}},
undefined,undefined,<0.18626.7>,#Ref<0.0.5.56873>,
<0.18639.7>,
{[{<0.18646.7>,#Ref<0.0.5.56891>}],[]},
undefined,undefined,undefined,undefined,undefined}
** Reason for termination ==
** {linked_process_died,<0.18486.7>,{no_connection,"index-service_api"}}
[error_logger:error,2018-02-12T13:54:43.532-05:00,ns_1#xuodf9.firebrand.com:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: service_agent:init/1
pid: <0.18481.7>
registered_name: 'service_agent-index'
exception exit: {linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [service_agent_children_sup,service_agent_sup,ns_server_sup,
ns_server_nodes_sup,<0.170.0>,ns_server_cluster_sup,
<0.89.0>]
messages: [{'EXIT',<0.18639.7>,
{linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}}]
links: [<0.18487.7>,<0.4805.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 28690
stack_size: 27
reductions: 6334
neighbours:
[error_logger:error,2018-02-12T13:54:43.533-05:00,ns_1#xuodf9.firebrand.com:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,service_agent_children_sup}
Context: child_terminated
Reason: {linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}
Offender: [{pid,<0.18481.7>},
{name,{service_agent,index}},
{mfargs,{service_agent,start_link,[index]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:error,2018-02-12T13:54:43.533-05:00,ns_1#xuodf9.firebrand.com:service_rebalancer-index<0.18626.7>:service_rebalancer:run_rebalance:80]Agent terminated during the rebalance: {'DOWN',#Ref<0.0.5.56860>,process,
<0.18481.7>,
{linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}}
[error_logger:info,2018-02-12T13:54:43.534-05:00,ns_1#xuodf9.firebrand.com:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,service_agent_children_sup}
started: [{pid,<0.20369.7>},
{name,{service_agent,index}},
{mfargs,{service_agent,start_link,[index]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
[ns_server:error,2018-02-12T13:54:43.534-05:00,ns_1#xuodf9.firebrand.com:service_agent-index<0.20369.7>:service_agent:handle_call:186]Got rebalance-only call {if_rebalance,<0.18626.7>,unset_rebalancer} that doesn't match rebalancer pid undefined
[ns_server:error,2018-02-12T13:54:43.534-05:00,ns_1#xuodf9.firebrand.com:service_rebalancer-index<0.18626.7>:service_agent:process_bad_results:815]Service call unset_rebalancer (service index) failed on some nodes:
[{'ns_1#xuodf9.firebrand.com',nack}]
[ns_server:warn,2018-02-12T13:54:43.534-05:00,ns_1#xuodf9.firebrand.com:service_rebalancer-index<0.18626.7>:service_rebalancer:run_rebalance:89]Failed to unset rebalancer on some nodes:
{error,{bad_nodes,index,unset_rebalancer,
[{'ns_1#xuodf9.firebrand.com',nack}]}}
[error_logger:error,2018-02-12T13:54:43.535-05:00,ns_1#xuodf9.firebrand.com:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
crasher:
initial call: service_rebalancer:-spawn_monitor/6-fun-0-/0
pid: <0.18626.7>
registered_name: 'service_rebalancer-index'
exception exit: {linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}
in function service_rebalancer:run_rebalance/7 (src/service_rebalancer.erl, line 92)
ancestors: [cleanup_process,ns_janitor_server,ns_orchestrator_child_sup,
ns_orchestrator_sup,mb_master_sup,mb_master,<0.4893.0>,
ns_server_sup,ns_server_nodes_sup,<0.170.0>,
ns_server_cluster_sup,<0.89.0>]
messages: [{'EXIT',<0.18640.7>,
{linked_process_died,<0.18486.7>,
{no_connection,"index-service_api"}}}]
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 2586
stack_size: 27
reductions: 6359
neighbours:
[ns_server:error,2018-02-12T13:54:43.536-05:00,ns_1#xuodf9.firebrand.com:cleanup_process<0.18625.7>:service_janitor:maybe_init_topology_aware_service:84]Initial rebalance for `index` failed: {error,
{initial_rebalance_failed,index,
{linked_process_died,<0.18486.7>,
{no_connection,
"index-service_api"}}}}
[ns_server:debug,2018-02-12T13:54:43.536-05:00,ns_1#xuodf9.firebrand.com:menelaus_cbauth<0.4796.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"projector-cbauth",<0.5099.0>} needs_update
[ns_server:debug,2018-02-12T13:54:43.538-05:00,ns_1#xuodf9.firebrand.com:menelaus_cbauth<0.4796.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.479.0>} needs_update
[ns_server:debug,2018-02-12T13:54:43.539-05:00,ns_1#xuodf9.firebrand.com:menelaus_cbauth<0.4796.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"cbq-engine-cbauth",<0.5124.0>} needs_update
[ns_server:debug,2018-02-12T13:54:43.540-05:00,ns_1#xuodf9.firebrand.com:menelaus_cbauth<0.4796.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"fts-cbauth",<0.5129.0>} needs_update
This is a blocker for cluster creation at this point.
The rebalance error is caused by the index service. You can check indexer.log to see whether there are any errors and whether the process is able to bootstrap correctly.
Please make sure the communication ports are open, as documented here: https://developer.couchbase.com/documentation/server/current/install/install-ports.html
In particular, projector_port 9999 being blocked can lead to this.

ejabberd ODBC error + unable to figure out exact source

My ejabberd server is constantly crashing. It seems somehow related to the ODBC module, but I am not able to understand the issue. Below are the logs. Can anyone help me interpret them?
I have copy-pasted a few messages below.
=ERROR REPORT==== 14-Oct-2015::00:27:51 ===
** State machine <0.27422.5> terminating
** Last message in was {'$gen_sync_event',
                           {<0.27896.5>,#Ref<0.0.10.246367>},
                           {sql_cmd,
                               {sql_query,<<"SELECT 1;">>},
                               {1444,782471,512104}}}
** When State == session_established
** Data == {state,<0.27423.5>,odbc,30000,<<"abchost.com">>,1000,
               {0,{[],[]}}}
** Reason for termination =
** {function_clause,
       [{odbc,sql_query,
            [<0.27423.5>,<<"SELECT 1;">>,59000],
            [{file,"odbc.erl"},{line,183}]},
        {ejabberd_odbc,sql_query_internal,1,
            [{file,"src/ejabberd_odbc.erl"},{line,468}]},
        {ejabberd_odbc,run_sql_cmd,4,
            [{file,"src/ejabberd_odbc.erl"},{line,374}]},
        {p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},
        {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}
and
00:27:51.573 [error] CRASH REPORT Process <0.27434.5> with 0 neighbours exited with reason: no function clause matching odbc:sql_query(<0.27435.5>, <<"SELECT 1;">>, 59000) line 183 in p1_fsm:terminate/8 line 760
and
00:27:53.965 [error] gen_fsm <0.27439.5> in state session_established terminated with reason: no function clause matching odbc:sql_query(<0.27442.5>, <<"SELECT 1;">>, 59000) line 183
and
=ERROR REPORT==== 14-Oct-2015::00:27:51 ===
** Generic server <0.27435.5> terminating
** Last message in was {'DOWN',#Ref<0.0.10.239386>,process,<0.27434.5>,
       {function_clause,
           [{odbc,sql_query,
                [<0.27435.5>,<<"SELECT 1;">>,59000],
                [{file,"odbc.erl"},{line,183}]},
            {ejabberd_odbc,sql_query_internal,1,
                [{file,"src/ejabberd_odbc.erl"},{line,468}]},
            {ejabberd_odbc,run_sql_cmd,4,
                [{file,"src/ejabberd_odbc.erl"},{line,374}]},
            {p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},
            {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}}
** When Server state == {state,#Port<0.2314388>,undefined,<0.27434.5>,
       undefined,on,false,false,off,connected,
       undefined,0,
       [#Port<0.2314379>,#Port<0.2314376>],
       #Port<0.2314386>,#Port<0.2314366>}
** Reason for termination ==
** {stopped,
       {'EXIT',<0.27434.5>,
           {function_clause,
               [{odbc,sql_query,
                    [<0.27435.5>,<<"SELECT 1;">>,59000],
                    [{file,"odbc.erl"},{line,183}]},
                {ejabberd_odbc,sql_query_internal,1,
                    [{file,"src/ejabberd_odbc.erl"},{line,468}]},
                {ejabberd_odbc,run_sql_cmd,4,
                    [{file,"src/ejabberd_odbc.erl"},{line,374}]},
                {p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},
                {proc_lib,init_p_do_apply,3,
                    [{file,"proc_lib.erl"},{line,237}]}]}}}
and
00:27:51.552 [error] Supervisor odbc_sup had child [] started with {odbc,start_link_sup,undefined} at <0.27432.5> exit with reason {stopped,{'EXIT',<0.27429.5>,{function_clause,[{odbc,sql_query,[<0.27432.5>,<<"SELECT 1;">>,59000],[{file,"odbc.erl"},{line,183}]},{ejabberd_odbc,sql_query_internal,1,[{file,"src/ejabberd_odbc.erl"},{line,468}]},{ejabberd_odbc,run_sql_cmd,4,[{file,"src/ejabberd_odbc.erl"},{line,374}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}}} in context child_terminated
I think you are referring to a bug that has already been fixed in the ejabberd master branch: https://github.com/processone/ejabberd/commit/7d99484859df7c33a73da92d84b5cb5bd27a244e

ESB Mule Client starting with xml-properties fails

I use Mule 3.x
I have code that tests MuleClient connectivity.
This test is OK and works properly:
public void testHello() throws Exception
{
    MuleClient client = new MuleClient(muleContext);
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    // TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
But this test is not OK - it fails with a connection exception:
public void testHello_with_Spring() throws Exception {
    MuleClient client = new MuleClient("mule-config-test.xml");
    client.getMuleContext().start();
    // it fails here
    MuleMessage result = client.send("http://127.0.0.1:8080/hello", "some data", null);
    assertNotNull(result);
    assertNull(result.getExceptionPayload());
    assertFalse(result.getPayload() instanceof NullPayload);
    // TODO Assert the correct data has been received
    assertEquals("hello", result.getPayloadAsString());
}
The 'mule-config-test.xml' file is used in both tests; the path to this file is OK, I checked.
This is the error message I get in the end:
Exception stack is:
1. Address already in use (java.net.BindException) java.net.PlainSocketImpl:-2 (null)
2. Failed to bind to uri "http://127.0.0.1:8080/hello" (org.mule.transport.ConnectException)
org.mule.transport.tcp.TcpMessageReceiver:81
(http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/ConnectException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.net.BindException: Address already in use
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
    at java.net.ServerSocket.bind(ServerSocket.java:328)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
[10-05 16:33:37] ERROR HttpConnector [main]: org.mule.transport.ConnectException: Failed to bind to uri "http://127.0.0.1:8080/hello"
[10-05 16:33:37] ERROR ConnectNotifier [main]: Failed to connect/reconnect: HttpConnector { name=connector.http.mule.default lifecycle=stop this=7578a7d9 numberOfConcurrentTransactedReceivers=4 createMultipleTransactedReceivers=true connected=false supportedProtocols=[http] serviceOverrides= }. Root Exception was: Address already in use. Type: class java.net.BindException
[10-05 16:33:37] ERROR DefaultSystemExceptionStrategy [main]: Failed to bind to uri "http://127.0.0.1:8080/hello"
org.mule.api.lifecycle.LifecycleException: Cannot process event as "connector.http.mule.default" is stopped
I think the problem is in what you're not showing: testHello_with_Spring() is probably executing while Mule is already running. The second Mule instance you start inside it then port-conflicts with the first one.
Are testHello() and testHello_with_Spring() in the same test suite? If so, seeing that testHello() relies on an already-running Mule, I'd say that is the cause of the port conflict for testHello_with_Spring().
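If you want to confirm the port conflict, a quick probe run just before the failing test shows whether something already owns the port. This is plain Java, not a Mule API (8080 is the port from your config):

import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {
    public static void main(String[] args) {
        // Try to bind the port the test wants; failure means it is taken.
        try (ServerSocket probe = new ServerSocket(8080)) {
            System.out.println("port 8080 is free");
        } catch (IOException e) {
            // "Address already in use" => most likely the first Mule instance
            System.out.println("port 8080 is taken: " + e.getMessage());
        }
    }
}

Alternatively, point mule-config-test.xml at a different port so the two instances can't collide.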

OpenShift domain status failing

So I created an account at OpenShift, created an app, and installed the command-line tool. When I run the command rhc domain status, it fails:
Loaded suite /usr/bin/rhc-chk
Started
.E
===============================================================================
Error: test_connectivity(Test1_Connectivity)
ArgumentError: too few arguments
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:204:in `sprintf'
201: message = sprintf(get_message(:errors,name),*(args.shift || ''))
202: solution = get_message(:solutions,name)
203: if solution
=> 204: message << "\n" << sprintf(solution,*(args.shift || ''))
205: end
206: message
207: end
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:204:in `error_for'
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:270:in `test_connectivity'
===============================================================================
F
===============================================================================
Failure:
You need to be able to connect to the server in order to test authentication.
<false> is not true.
test_authentication(Test2_Authentication)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:280:in `test_authentication'
277: # Checking Authentication
278: #
279: def test_authentication
=> 280: assert $connectivity, error_for(:cant_connect)
281:
282: data = {'rhlogin' => $rhlogin}
283: response = fetch_url_json("/broker/userinfo", data)
===============================================================================
..F
===============================================================================
Failure: You must have an account on the server in order to test: whether you have a valid key loaded in your agent.
test_03_remote_ssh_keys(Test3_SSH)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:317:in `require_login'
314: end
315:
316: def require_login(test)
=> 317: flunk(error_for(:no_account,test)) if $user_info.nil?
318: end
319:
320: def require_remote_keys(test)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:321:in `require_remote_keys'
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:376:in `test_03_remote_ssh_keys'
===============================================================================
F
===============================================================================
Failure: You must have an account on the server in order to test: connecting to your applications.
test_04_ssh_connect(Test3_SSH)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:317:in `require_login'
314: end
315:
316: def require_login(test)
=> 317: flunk(error_for(:no_account,test)) if $user_info.nil?
318: end
319:
320: def require_remote_keys(test)
/Library/Ruby/Gems/1.8/gems/rhc-0.94.8/bin/rhc-chk:383:in `test_04_ssh_connect'
===============================================================================
Finished in 2.403595 seconds.
7 tests, 8 assertions, 3 failures, 1 errors, 0 pendings, 0 omissions, 0 notifications
42.8571% passed
I don't really understand why it's not able to connect. I was able to run rhc domain show with no problems.
Does anyone have suggestions on how to fix this?
It's a bug that should be fixed in the upcoming release. Even though you see this error, it shouldn't affect any other behaviour.