ERROR Session: Error creating pool to /127.0.0.1:9042

I am trying to insert values into Cassandra when I come across this error:
15/08/14 10:21:54 INFO Cluster: New Cassandra host /a.b.c.d:9042 added
15/08/14 10:21:54 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/08/14 10:21:54 ERROR Session: Error creating pool to /127.0.0.1:9042
com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect
at com.datastax.driver.core.Connection.<init>(Connection.java:109)
at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:32)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:586)
at com.datastax.driver.core.SingleConnectionPool.<init>(SingleConnectionPool.java:76)
at com.datastax.driver.core.HostConnectionPool.newInstance(HostConnectionPool.java:35)
at com.datastax.driver.core.SessionManager.replacePool(SessionManager.java:271)
at com.datastax.driver.core.SessionManager.access$400(SessionManager.java:40)
at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:308)
at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:300)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9042
My replication factor is 1. There are 5 nodes in the Cassandra cluster (they're all up), with rpc_address: 0.0.0.0 and broadcast_rpc_address: 127.0.0.1.
I would think that I should see one of those "INFO Cluster: New Cassandra host..." lines from above for each of the 5 nodes. But instead I see 127.0.0.1, and I am not sure why.
I also noticed that in the cassandra.yaml file, all 5 nodes are listed as seeds (which I know is not advised, but I did not set up this cluster):
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "ip1, ip2, ip3, ip4, ip5"
where ipx is the IP address of node x.
And cassandra-topology.properties just contains the following, without mentioning any of the 5 nodes:
# default for unknown nodes
default=DC1:r1
Can someone explain why I am seeing the "ERROR Session: Error creating pool to /127.0.0.1:9042" error?
I'm kind of new to Cassandra... thanks in advance!

I think the problem is that your broadcast_rpc_address is set to 127.0.0.1. Is there a particular reason you are doing this?
The Java driver uses the system.peers table to look up the IP address to use when connecting to hosts. If broadcast_rpc_address is set, that is what will be present in system.peers, and the driver will try to use it. If broadcast_rpc_address is not set, rpc_address will be used. In either case, you'll want to set one of these addresses to an address that is accessible by your client. If you set rpc_address to a concrete, client-reachable address, you can remove broadcast_rpc_address entirely.
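You can check exactly what the driver will see by querying the system.peers table from cqlsh on any node:
SELECT peer, rpc_address FROM system.peers;
If that query returns 127.0.0.1 for the other nodes, that confirms the diagnosis. Since rpc_address here is 0.0.0.0 (which requires a broadcast address to be set), the smaller change is to point broadcast_rpc_address at each node's real address; a minimal cassandra.yaml sketch per node, where 10.0.0.1 is a placeholder for that node's client-reachable IP:
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.0.0.1    # must be an address your clients can actually reach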

Related

IBM MQ doesn't run as mqm on Openshift 4

Hi guys.
I have an IBM MQ image deployed on OpenShift 4 and, for some reason, the processes don't run as the mqm user but as the one randomly generated by OpenShift itself.
As a result, my Java application that tries to connect to the queues fails, because authentication fails when it uses mqm as the user.
The exact same image running on OpenShift 3 behaves as expected. For more details:
Custom image:
FROM ibmcom/mq
ENV HOME /root
COPY config.mqsc /etc/mqm/
and, in the config.mqsc:
DEFINE CHANNEL(PASSWORD.SVRCONN) CHLTYPE(SVRCONN)
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody') DESCR('Allow privileged users on this channel')
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('BackStop rule')
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) ADOPTCTX(YES)
REFRESH SECURITY TYPE(CONNAUTH)
DEFINE QLOCAL(MYQUEUE.IN) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(MYQUEUE.OUT) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(CS.ERROR) DEFPSIST(YES) MAXDEPTH(500000)
ALTER QMGR CHLAUTH(DISABLED) CONNAUTH(' ')
ALTER CHANNEL('SYSTEM.DEF.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('mqm')
REFRESH SECURITY TYPE(CONNAUTH)
The process running on OpenShift 4 looks like:
1000790+ 232 0.0 0.1 2308688 45776 ? Ssl 09:39 0:00 /opt/mqm/bin/amqzxma0 -m QM1 -x -u 1000790000
but on OpenShift 3 it looks like:
1000100+ 152 0.0 0.0 2324200 33812 ? Ssl May03 0:06 /opt/mqm/bin/amqzxma0 -m QM1 -x -u mqm
Another difference is the "capabilities" and the security attributes that the MQ container has on startup.
On OpenShift 3:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot,audit_write,setfcap
Process security attributes: system_u:system_r:container_t:s0:c0,c15
On OpenShift 4:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot
Process security attributes: system_u:system_r:container_t:s0:c17,c28
Stacktrace produced by the application:
Caused by: org.springframework.jms.JmsSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.; nested exception is com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:286)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:185)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:507)
at org.springframework.jms.core.JmsTemplate.browseSelected(JmsTemplate.java:1029)
at org.springframework.jms.core.JmsTemplate.browse(JmsTemplate.java:991)
... 78 more
Caused by: com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:531)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:424)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7815)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl._createConnection(JmsConnectionFactoryImpl.java:303)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:236)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6016)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:111)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:187)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:494)
... 80 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
... 90 more
Any idea on what the issue could be?
To ensure compliance with the security constraints required in a multi-tenant containerized environment, the IBM MQ certified containers do not support the use of IDs that are defined in the operating system libraries inside a container. There is no mqm user ID or group defined in the container.
For more details, read User authentication and authorization for IBM MQ in containers.
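You can confirm which UID the queue manager is actually running as with a quick check inside the pod (a sketch; the pod name is a placeholder):
oc rsh pod/my-mq-pod id
# e.g. uid=1000790000(1000790000) gid=0(root) groups=0(root) -- no mqm user exists in the container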

Hono adapters cannot connect to enmasse

I'm currently installing Hono together with EnMasse on top of OpenShift/OKD. Everything goes fine except for the connection between the adapters and EnMasse. When I deploy the AMQP adapter, for example (it happens with the HTTP and MQTT adapters as well), I get the following logging from the Hono adapter:
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - starting attempt [#5] to connect to server [messaging-hono-default.enmasse-infra.svc:5672]
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
EnMasse logs the following:
2019-01-07 12:36:24.962160 +0000 SERVER (info) [160]: Accepted connection to 0.0.0.0:5672 from 10.128.0.1:44664
2019-01-07 12:36:24.962258 +0000 SERVER (info) [160]: Connection from 10.128.0.1:44664 (to 0.0.0.0:5672) failed: amqp:connection:framing-error No valid protocol header found
Additional info:
Hono version: 0.8.x
Enmasse version: 0.24.1
Can somebody tell me what I'm missing?
Thanks!
PS: if somebody with enough reputation could add a new "enmasse" tag, that would be nice.
I've found the solution to this problem.
First of all: the framing errors are not caused by incoming connections from Hono. I already see this logging when EnMasse is installed without Hono, and I don't know where they come from. If somebody has an idea, please tell me.
As for the real problem: it seems I needed to allow communication between the two projects (enmasse-infra and hono). This is documented in the OpenShift documentation.
TL;DR
Solution used: oc adm pod-network make-projects-global enmasse-infra. I used this because the EnMasse infrastructure needs to be reachable from all projects (including hono, but also ditto and our custom backend application).
This should also work (not tested): oc adm pod-network join-projects --to=enmasse-infra hono
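After opening the network between the projects, a quick reachability check from the hono project can confirm the fix (a sketch; the deployment name is a placeholder, and it assumes nc is available in the adapter image):
oc -n hono rsh deploy/hono-adapter-amqp-vertx nc -vz messaging-hono-default.enmasse-infra.svc 5672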

Connecting to CloudSQL Mysql over ssl from external application

I am trying to get a sample Java application to connect to a MySQL gen2 instance I have in GCP. I use SSL, and the IP address is whitelisted. I have confirmed connectivity to the instance using the mysql command line, passing in client-cert.pem, client-key.pem and server-ca.pem. Now, in order to connect to it from the Spring Boot Java application, I did the following:
created a p12 file from the client cert and key and added it to keystore.jks
created a truststore with the server-ca.pem file (the commands are sketched below)
Added this code in main before the connection is created:
System.setProperty("javax.net.debug", "all");
System.setProperty("javax.net.ssl.trustStore", TRUST_STORE_PATH);
System.setProperty("javax.net.ssl.trustStorePassword", "fake_password");
System.setProperty("javax.net.ssl.keyStore", KEY_STORE_PATH);
System.setProperty("javax.net.ssl.keyStorePassword", "fake_password");
For the JDBC URL, I used: jdbc:mysql://1.1.1.1:3306/sampledb?useSSL=true&requireSSL=true
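The commands I used to build the keystore and truststore were roughly these (a sketch; file names, aliases and passwords are placeholders):
openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem -out client.p12 -name mysql-client -passout pass:fake_password
keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststorepass fake_password
keytool -importcert -alias mysql-server-ca -file server-ca.pem -keystore truststore.jks -storepass fake_password -noprompt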
However I am unable to connect to the instance and see this error from the java ssl debug:
restartedMain, RECV TLSv1.1 ALERT: fatal, unknown_ca
%% Invalidated: [Session-2, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
restartedMain, called closeSocket()
restartedMain, handling exception: javax.net.ssl.SSLHandshakeException: Received fatal alert: unknown_ca
restartedMain, called close()
restartedMain, called closeInternal(true)
I also tried to run
openssl verify -CAfile server-ca.pem client-cert.pem
and got this output:
error 20 at 0 depth lookup:unable to get local issuer certificate
Any ideas on what I might be doing wrong?

Couchbase server connection is giving an authentication error

All I want to do is an upsert operation. I have a JsonDocument, and I have a Couchbase server "123.456.789.1011" with a bucket inside called "testbucket". Now, when I open the server using the IP address with port 8091, it asks me for a username and password, say "uname" and "pwd", and after entering them it opens. There is no password on my bucket.
cluster = CouchbaseCluster.create("123.456.789.101");
cluster.clusterManager("testuser","testuser123");
bucket = cluster.openBucket("testbucket");
jsonObject = JsonObject.create()
.put("Order",map);
jsonDocument = JsonDocument.create("Hello",jsonObject);
jsonDocumentResponse = bucket.upsert(jsonDocument);
This is my code, but the problem is that whenever I run it I get an error saying:
ERROR spark.webserver.MatcherFilter -
com.couchbase.client.java.error.InvalidPasswordException: Passwords for bucket "testbucket" do not match.
at com.couchbase.client.java.CouchbaseAsyncCluster$1.call(CouchbaseAsyncCluster.java:156)
at com.couchbase.client.java.CouchbaseAsyncCluster$1.call(CouchbaseAsyncCluster.java:146)
at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$1.onError(OperatorOnErrorResumeNextViaFunction.java:77)
at rx.internal.operators.OperatorMap$1.onError(OperatorMap.java:49)
at rx.internal.operators.NotificationLite.accept(NotificationLite.java:147)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.pollQueue(OperatorObserveOn.java:177)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.access$000(OperatorObserveOn.java:65)
at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber$2.call(OperatorObserveOn.java:153)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:47)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am new to Couchbase, and I really don't know what to do. I Googled it, but there is nothing on the web, and even their documentation doesn't suggest anything. I hope someone on Stack Overflow will have an answer for me. Thanks.
It would seem you need to pass a bucket password (which is different from the cluster password) in the openBucket method: http://docs.couchbase.com/sdk-api/couchbase-java-client-2.0.0/com/couchbase/client/java/Cluster.html#openBucket%28java.lang.String,%20java.lang.String%29
It looks like you're trying to connect to the bucket using the cluster credentials. Try instead connecting to the bucket with the bucket name and an empty password:
cluster = CouchbaseCluster.create("123.456.789.101");
bucket = cluster.openBucket("testbucket", "");
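Putting that together with the upsert from the question, a complete sketch against the Java SDK 2.x (the "Order" value is a placeholder for the question's map):
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

CouchbaseCluster cluster = CouchbaseCluster.create("123.456.789.101");
Bucket bucket = cluster.openBucket("testbucket", "");             // bucket name + empty bucket password
JsonObject content = JsonObject.create().put("Order", "example"); // placeholder for the question's map
JsonDocument doc = JsonDocument.create("Hello", content);
JsonDocument stored = bucket.upsert(doc);                         // upsert returns the stored document
cluster.disconnect();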

Hadoop: Data node not started, Logs show "Java bind exception (port in use)"

The data node service is not started on one of my Hadoop cluster nodes.
The data node log has the following information.
Exception details on the PC where the datanode service is not started:
2015-08-12 15:51:09,331 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
at org.apache.hadoop.http.HttpSe
...........................
On PCs where the data node started successfully, the log looks like this:
2015-08-12 15:43:57,520 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 34958
2015-08-12 15:43:57,520 INFO org.mortbay.log: jetty-6.1.26
2015-08-12 15:43:57,619 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34958
I have tried fixing the ports in hdfs-site.xml as explained in the link, but this did not work. Please throw some light on fixing this issue.
Thanks
"localhost:0 "
please check your /etc/hosts ,most likely this file not set well
I have uncommented the following line in /etc/hosts and everything worked fine.
127.0.0.1 localhost
This problem is due to the port already being in use, hence the BindException being thrown. To fix this issue, follow the steps below:
1. Run the netstat -np command to find out which process ID is using the port.
2. Kill the process for the port which is already bound.
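For example (a sketch; the port and PID shown are placeholders):
netstat -np | grep 50075    # find the PID holding the datanode port
kill 12345                  # stop that process, then restart the datanode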