Couchbase: Error while writing object to bucket

I have a web app where the Couchbase bucket is deleted and then recreated in order to clear the cache (a flush didn't help in this scenario). I am using the REST API with auth type sasl and proxy port 11211 to create the bucket. The _mcache variable is re-initialized from the information in the config file after the bucket is recreated.
I get an error while trying to cache an object after the recreation. I suspected a timing issue and added a sleep after the recreation, but that didn't help either. I know I am missing something here and am seeking advice. Here's the log file snippet:
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Releasing socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Are we alive? True
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Acquiring stream from pool. 192.168.70.156:11210
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.PooledSocket - Socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241 was reset
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Socket was reset. 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Couchbase.VBucketAwareOperationFactory.VBGet - Key egfWeo2Xrr1enrI/0gxiqvsNXOe2vHkfNCoh4Lq6UFv0uqAwg+MAvcTYrGMeCBf0KTPL/wEFA7iQqbCWWYbWTw== was mapped to 124
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Releasing socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Are we alive? True
2012-11-30 11:27:20 [ERROR] 6 Couchbase.MessageStreamListener - The infinite loop just finished, probably the server closed the connection without errors. (?)
2012-11-30 11:27:20 [DEBUG] 6 Couchbase.MessageStreamListener - ReadMessage failed with exception: - System.IO.IOException: Remote host closed the streaming connection
at Couchbase.MessageStreamListener.ReadMessages(Uri heartBeatUrl, Uri configUrl)
at Couchbase.MessageStreamListener.ProcessPool()
2012-11-30 11:27:20 [DEBUG] 6 Couchbase.MessageStreamListener - Reached the retry limit, rethrowing. - System.IO.IOException: Remote host closed the streaming connection
at Couchbase.MessageStreamListener.ReadMessages(Uri heartBeatUrl, Uri configUrl)
at Couchbase.MessageStreamListener.ProcessPool()

Without more information it is hard to provide a complete answer, but here are some ideas.
As you said, it could be that the bucket/node has not come back online yet after deleting and recreating the bucket. Have you tried, at least for testing, waiting "longer"? (I know that is not a viable workaround, but it would help identify the source of the issue.)
I think it is important to understand why the flush does not work, since it is the proper approach for what you need.
Once again, it would be great if you could provide more information, and also check that you are using the latest client library.
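One way to rule out the timing theory is to poll the cluster's REST API until the recreated bucket is reported again, instead of sleeping for a fixed time, and only then re-initialize the client. Below is a rough sketch of that idea; it is written in Java purely for illustration (your app uses the .NET client), and the host, admin port 8091, and credential handling are assumptions rather than details from your setup.

import java.net.HttpURLConnection;
import java.net.URL;

public class BucketReadiness {

    // Poll the Couchbase REST API until the recreated bucket shows up again.
    // Admin credentials may be required on some setups; they are omitted in this sketch.
    public static boolean waitForBucket(String host, String bucket, long timeoutMs) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        URL url = new URL("http://" + host + ":8091/pools/default/buckets/" + bucket);
        while (System.currentTimeMillis() < deadline) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            try {
                if (conn.getResponseCode() == 200) {
                    return true; // the bucket is visible again; re-initialize the cache client now
                }
            } catch (java.io.IOException ignored) {
                // the node may still be restarting the bucket; fall through and retry
            } finally {
                conn.disconnect();
            }
            Thread.sleep(1000); // back off before polling again
        }
        return false;
    }
}

Even after the REST API reports the bucket, the node may still be warming up, so keeping a small retry around the first writes after recreation is still a good idea.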

http-outgoing: Shutdown connection

Performing a JSON POST to a URL results in an 'http-outgoing: Shutdown connection' message.
For the life of me I can't figure out what's wrong. The correct id/password are set in the headers, and there are no firewall issues. I'm leaning towards a DNS setting of some sort, but I'm out of ideas on what it could be.
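The call is roughly of this shape (a simplified sketch rather than the exact code, using Apache HttpClient 4.x to match the org.apache.http logging; the URL, path, payload, and credentials are placeholders):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class JsonPost {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost("https://foobar.com/api/endpoint"); // placeholder URL/path
            post.setHeader("Authorization", "Basic ..."); // id/password set in the headers, as described
            post.setEntity(new StringEntity("{\"key\":\"value\"}", ContentType.APPLICATION_JSON));
            try (CloseableHttpResponse response = client.execute(post)) {
                System.out.println(response.getStatusLine());
            }
        }
    }
}

The debug output from the failing request is below: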
2020-06-27 16:16:33,398 - DEBUG [org.apache.http.client.protocol.RequestAuthCache:77] - - Auth cache not set in the context
2020-06-27 16:16:33,399 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:255] - Connection request: [route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 100]
2020-06-27 16:16:33,399 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:288] - Connection leased: [id: 22][route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 100]
2020-06-27 16:16:33,400 - DEBUG [org.apache.http.impl.execchain.MainClientExec:235] - Opening connection {s}->https://foobar.com:443
2020-06-27 16:16:33,402 - DEBUG [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator:139] - Connecting to foobar.com/10.00.00.001:443
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.conn.DefaultManagedHttpClientConnection:96] -- http-outgoing-22: Shutdown connection
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.execchain.MainClientExec:129] -- Connection discarded
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:326] - Connection released: [id: 22][route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 100]
The SSL cipher suite required by the target was not supported by the JDK on the source system.
A tcpdump narrowed the problem down to SSL, and subsequently increasing the SSL debug logging confirmed the issue.
Updating to the latest JDK resolved it.
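To verify this on your own systems, run the JVM with -Djavax.net.debug=ssl:handshake to see the handshake failure in detail, and compare the protocols and cipher suites the local JDK offers against what the target server requires. A small, generic sketch for listing them (nothing here is specific to the original setup):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherCheck {
    public static void main(String[] args) throws Exception {
        // Print what this JDK can offer; run it on the source system and compare
        // with the target server's SSL/TLS requirements.
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters supported = ctx.getSupportedSSLParameters();
        System.out.println("Protocols: " + String.join(", ", supported.getProtocols()));
        for (String suite : supported.getCipherSuites()) {
            System.out.println("Supported cipher suite: " + suite);
        }
    }
}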

SendGrid misconfiguration on Google Cloud (535 Authentication failed)

So I've installed SendGrid on Google Compute Engine with a CentOS base, following the documented instructions from Google:
https://cloud.google.com/compute/docs/tutorials/sending-mail/using-sendgrid#before-you-begin
Testing from the command line (with various accounts):
echo 'MESSAGE' | mail -s 'SUBJECT' GJ******@gmail.com
the /var/log/maillog shows the following (several lines like it, 50 or so attempts in 1 second):
postfix/error[32324]: A293210062D7: to=<GJ********#gmail.com>, relay=none, delay=145998, delays=145997/1.2/0/0, dsn=4.0.0, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.sendgrid.net[167.89.115.53] said: 535 Authentication failed: The provided authorization grant is invalid, expired, or revoked)
The message is queued up and retried every few hours. While experimenting, I could change the port setting from 2525 to one of the regular ports that isn't blocked by Google, and then the email gets bounced right away to the user account used in the mail test message.
I made sure to use the generated API key; the SendGrid dashboard says no attempts have been made, bounced, or anything else.
There were other errors in the maillog as well, pages of them, since it retries every second. I changed the permissions on that directory so they no longer appear, but maybe they give a clue to how it's misconfigured?
Oct 31 19:04:14 beadc postfix/pickup[15119]: fatal: chdir("/var/spool/postfix"): Permission denied
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/qmgr pid 15118 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/qmgr: bad command startup -- throttling
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/pickup pid 15119 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/pickup: bad command startup -- throttling
The only information I can find when searching for the error is that it indicates a SendGrid misconfiguration.
Any ideas as to what the misconfiguration might be?
I've determined the 535 error was a port/firewall issue, which means the 550 error I had on the other port still exists.
Check your firewall and port settings with regard to the 535 error:
https://cloud.google.com/compute/docs/tutorials/sending-mail/
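As a quick way to check which outbound SMTP ports the VM can actually reach (Compute Engine always blocks outbound port 25, while SendGrid also listens on 587 and 2525), a sketch along these lines can be run on the instance; the host and port list are just illustrative defaults:

import java.net.InetSocketAddress;
import java.net.Socket;

public class SmtpPortCheck {
    public static void main(String[] args) {
        String host = "smtp.sendgrid.net";
        int[] ports = {25, 587, 2525}; // 25 is always blocked for outbound traffic on Compute Engine
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 5000);
                System.out.println(port + ": reachable");
            } catch (Exception e) {
                System.out.println(port + ": blocked or unreachable (" + e.getMessage() + ")");
            }
        }
    }
}

If 587 and 2525 both show as blocked, the firewall rules, not the Postfix/SendGrid configuration, are the first thing to fix.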

Hono adapters cannot connect to enmasse

I'm currently installing Hono together with EnMasse on top of OpenShift/OKD. Everything goes fine except for the connection between the adapters and EnMasse. When I deploy the AMQP adapter, for example (the same happens with the HTTP and MQTT adapters), I get the following logging from the Hono adapter:
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - starting attempt [#5] to connect to server [messaging-hono-default.enmasse-infra.svc:5672]
12:25:45.404 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqp://messaging-hono-default.enmasse-infra.svc:5672]: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
12:25:47.720 [vert.x-eventloop-thread-0] DEBUG o.e.hono.client.impl.HonoClientImpl - connection attempt failed
io.netty.channel.ConnectTimeoutException: connection timed out: messaging-hono-default.enmasse-infra.svc.cluster.local/172.30.83.158:5672
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:125)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
EnMasse logs the following:
2019-01-07 12:36:24.962160 +0000 SERVER (info) [160]: Accepted connection to 0.0.0.0:5672 from 10.128.0.1:44664
2019-01-07 12:36:24.962258 +0000 SERVER (info) [160]: Connection from 10.128.0.1:44664 (to 0.0.0.0:5672) failed: amqp:connection:framing-error No valid protocol header found
Additional info:
Hono version: 0.8.x
Enmasse version: 0.24.1
Can somebody tell me what I'm missing?
Thanks!
PS: if somebody with enough reputation could add a new "enmasse" tag, that would be nice.
I've found the solution to this problem.
First of all: the framing errors are not caused by incoming connections from Hono. I already see this logging when EnMasse is installed without Hono, and I don't know where they come from. If somebody has an idea, please tell me.
As for the real problem: it turned out I needed to allow communication between the two projects (enmasse-infra and hono). This is covered in the OpenShift documentation.
TL;DR
Solution used: oc adm pod-network make-projects-global enmasse-infra. I went with this because the EnMasse infrastructure needs to be reachable from all projects (not only hono, but also ditto and our custom backend application).
This should also work (not tested): oc adm pod-network join-projects --to=enmasse-infra hono

Ejabberd disco items not loading

Version: 17.11
Platform: Ubuntu 16.04
With the mod_muc configuration below, sometimes disco items do not load at all.
Here is the configuration I have used for disco items; further down is a crash log captured when the crash happens.
mod_muc:
  db_type: sql
  default_room_options:
    - allow_subscription: true
    - mam: true
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 1000
  max_user_conferences: 50
  max_users_presence: 50
Also, joining the same MUC that was available earlier sometimes fails to connect. If I restart the server things work well, and again after a certain time the MUCs stop coming up.
Error Log:
Stopping MUC room x#conference.host.com
2018-07-27 12:57:39.972 [error] <0.32056.26> gen_fsm <0.32056.26> in state normal_state terminated with reason: bad return value: ok
2018-07-27 12:57:39.972 [error] <0.32056.26>#p1_fsm:terminate:760 CRASH REPORT Process <0.32056.26> with 0 neighbours exited with reason: bad return value: ok in p1_fsm:terminate/8 line 760
2018-07-30 05:12:12 =ERROR REPORT====
** State machine <0.9190.27> terminating
** Last event in was {route,<<>>,{iq,<<"qM1F3-119">>,set,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},[{xmlel,<<"query">>,[{<<"xmlns">>,<<"urn:xmpp:mam:2">>}],[{xmlel,<<"set">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/rsm">>}],[{xmlel,<<"max">>,[],[{xmlcdata,<<"30">>}]},{xmlel,<<"after">>,[],[]}]},{xmlel,<<"x">>,[{<<"xmlns">>,<<"jabber:x:data">>},{<<"type">>,<<"submit">>}],[{xmlel,<<"field">>,[{<<"var">>,<<"FORM_TYPE">>},{<<"type">>,<<"hidden">>}],[{xmlel,<<"value">>,[],[{xmlcdata,<<"urn:xmpp:mam:2">>}]}]}]}]}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}
** When State == normal_state
** Data == {state,<<"planet_discovery1532511384">>,
<<"conference.x.y.com">>,<<"x.y.com">>,{all,muc_create,[{allow,
[{acl,admin}]}],muc_create},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},{config,<<"Planet Discovery">>,<<>>,true,true,true,anyone,true,true,false,true,true,true,false,true,true,true,true,false,<<>>,true,[moderator,participant,visitor],true,1800,200,false,<<>>,{0,nil},true},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}|{x.y.com,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},<<"usr_name#x.y.com/1140">>,moderator,{presence,<<"qM1F3-116">>,available,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>},undefined,[],undefined,[{xmlel,<<"c">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/caps">>},{<<"hash">>,<<"sha-1">>},{<<"node">>,<<"http://www.igniterealtime.org/projects/smack">>},{<<"ver">>,<<"p801v5l0jeGbLCy09wmWvQCQ7Ok=">>}],[]},{vcard_xupdate,{<<>>,<<>>},undefined}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}]],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},nil,{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[[<<"usr_name#x.y.com/1140">>,{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}]],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[[{<<"miga8747b6">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[],[[{<<"ruba32cc6e">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[]}}},{lqueue,{{[],[]},0,unlimited},1000},[],<<>>,false,nil,none,undefined}
** Reason for termination =
** {bad_return_value,ok}
2018-07-30 05:12:12 =CRASH REPORT====
crasher:
initial call: mod_muc_room:init/1
pid: <0.9190.27>
registered_name: []
exception exit: {{bad_return_value,ok},[{p1_fsm,terminate,8,[{file,"src/p1_fsm.erl"},{line,760}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: ['mod_muc_x.y.com',ejabberd_gen_mod_sup,ejabberd_sup,<0.32330.26>]
messages: []
links: []
dictionary: [{'$internal_queue_len',0}]
trap_exit: true
status: running
heap_size: 6772
stack_size: 27
reductions: 3310
neighbours:
2018-07-30 12:41:56 =ERROR REPORT====
Which ejabberd version are you using, and how did you install it?
The syntax of your default_room_options is wrong; did you really use the config exactly like that?
And what changes have you made from a stock installation? I mean: did you set up a cluster of several nodes, did you enable other modules that may interfere with mod_muc...?
And most importantly: you have set max_rooms_discoitems to 1000. How many rooms does the service have? That option should be set to a small value, because requesting disco items for 1,000 rooms means requesting information from each individual room, which means 1,000 queries and can have unpredictable consequences. Does your problem reproduce if you set a lower value, like 100?

HikariCP debug output, is this normal?

I've just started using HikariCP in a Swing application with Hibernate. I'm maintaining an old project, so there is a lot of crazy stuff going on in there. The connection leak detection feature helped me understand that sessions are closed only on certain events, for example when a user clicks the "Save" button; in other cases there is a leak. I think the previous developers were trying to implement the "long conversations" unit-of-work pattern, but they missed some (most) cases.
So my goal now is to find all the leaks and fix them, and I'm planning to use the HikariCP debug output to help me do that. I don't know whether there is a wiki page in the HikariCP documentation that explains the debug output, but I was wondering whether this output, taken while the application is idle, is normal, or whether there is something strange going on that I should investigate further:
2015-09-14 01:12:51 DEBUG HikariPool - After fill pool stats HikariPool-0 (total=10, inUse=3, avail=7, waiting=0)
2015-09-14 01:13:21 DEBUG HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=3, avail=7, waiting=0)
2015-09-14 01:13:21 DEBUG HikariPool - After cleanup pool stats HikariPool-0 (total=6, inUse=3, avail=3, waiting=0)
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#4fb38272
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#417465f4
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#454be902
2015-09-14 01:13:21 DEBUG PoolUtilities - Closing connection com.mysql.jdbc.JDBC4Connection#496fcf
If this is normal behaviour, I would also like to know what those 4 connections were for and why they are being closed at that point. Thanks.
This is all normal output. These connections are likely being closed because they were idle for the idleTimeout period, or because they reached their maximum lifetime (maxLifetime). I do recommend updating to the latest version (2.4.x), which typically logs the reason a connection was closed in the debug message.
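For reference, here is a minimal sketch of where those settings live in code; the JDBC URL, credentials, and values are placeholders rather than your actual configuration:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/app"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setMinimumIdle(3);                  // the pool may shrink to this many idle connections
        config.setMaximumPoolSize(10);
        config.setIdleTimeout(600000);             // idle connections above minimumIdle are retired after 10 min
        config.setMaxLifetime(1800000);            // every connection is retired after 30 min regardless
        config.setLeakDetectionThreshold(60000);   // warn if a connection is held longer than 60 s
        return new HikariDataSource(config);
    }
}

With leakDetectionThreshold set, HikariCP logs a warning with a stack trace for any connection held longer than the threshold, which is the quickest way to locate the code paths that never close their sessions.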