http-outgoing: Shutdown connection - json

Performing a JSON POST to a URL results in an 'http-outgoing: Shutdown connection' message.
For the life of me I can't figure out what's up. The correct id/password are set in the headers, and there are no firewall issues. I'm leaning towards some sort of DNS setting, but I'm out of ideas on what it could be.
2020-06-27 16:16:33,398 - DEBUG [org.apache.http.client.protocol.RequestAuthCache:77] - Auth cache not set in the context
2020-06-27 16:16:33,399 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:255] - Connection request: [route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 100]
2020-06-27 16:16:33,399 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:288] - Connection leased: [id: 22][route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 100]
2020-06-27 16:16:33,400 - DEBUG [org.apache.http.impl.execchain.MainClientExec:235] - Opening connection {s}->https://foobar.com:443
2020-06-27 16:16:33,402 - DEBUG [org.apache.http.impl.conn.DefaultHttpClientConnectionOperator:139] - Connecting to foobar.com/10.00.00.001:443
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.conn.DefaultManagedHttpClientConnection:96] - http-outgoing-22: Shutdown connection
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.execchain.MainClientExec:129] - Connection discarded
2020-06-27 16:16:33,528 - DEBUG [org.apache.http.impl.conn.PoolingHttpClientConnectionManager:326] - Connection released: [id: 22][route: {s}->https://foobar.com:443][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 100]

The SSL cipher used by the target was not supported by the JDK on the source system.
Performing a tcpdump narrowed the problem down to SSL.
Subsequently, increasing the debug logging on SSL confirmed the issue.
Updating to the latest JDK resolved the issue.
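One way to confirm a cipher/protocol mismatch like this is to enable JSSE handshake debugging and dump what the local JDK is willing to offer, then compare that against what the server requires. A minimal sketch using only standard JDK APIs (no HttpClient specifics):

```java
import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public class TlsSupportCheck {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Must be set before the first TLS connection is initialized.
        System.setProperty("javax.net.debug", "ssl:handshake");

        // List the protocols and cipher suites this JDK offers by default,
        // to compare against what the target server accepts.
        SSLContext ctx = SSLContext.getDefault();
        for (String p : ctx.getDefaultSSLParameters().getProtocols())
            System.out.println("protocol: " + p);
        for (String c : ctx.getDefaultSSLParameters().getCipherSuites())
            System.out.println("cipher:   " + c);
    }
}
```

With the `javax.net.debug` property set, the failing handshake itself is logged in detail, which is usually enough to see which side rejected the negotiation.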

Related

Connection reset by Cloudflare when building Packer image

I am trying to build a Packer image for a DigitalOcean droplet, however when the build process finishes, it fails to create the image (from what I can tell, the address it is talking to is a Cloudflare IP).
Any idea why this is happening, or what I can do to investigate it further?
==> digitalocean: Gracefully shutting down droplet...
==> digitalocean: Error shutting down droplet: Post https://api.digitalocean.com/v2/droplets/198964166/actions: read tcp 10.0.2.15:44558->104.16.181.15:443: read: connection reset by peer
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' errored: Error shutting down droplet: Post https://api.digitalocean.com/v2/droplets/198964166/actions: read tcp 10.0.2.15:44558->104.16.181.15:443: read: connection reset by peer

Ambari Admin login hung with correct credentials, throws correct error with invalid credentials

I have set up Ambari 2.7.0 on a CentOS 7 server with MySQL 5.7.29 as the backend.
All services are running fine, but when I try to log in using the admin:admin credentials it seems to hang and shows the following logs in the ambari-server.log file:
2020-03-18 06:55:28,031 INFO [MessageBroker-1] WebSocketMessageBrokerStats:113 - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
2020-03-18 06:55:28,678 INFO [MessageBroker-1] WebSocketMessageBrokerStats:113 - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
The interesting thing is, the moment I enter an invalid user or a wrong password for the admin user, it throws the correct Invalid credentials error, but on entering the correct credentials it hangs and writes the above lines to the ambari-server.log file.
Previously I used to see the following in the MySQL log file:
[Note] Aborted connection 4 to db: '' user: '' host: '' (Got an error reading communication packets)
As suggested in some questions, I increased max_connections to 400 and max_allowed_packet to 656MB, close to 2% of the RAM available on the server.
After increasing the above configs, I no longer see any "error reading communication packets" entries in the MySQL log file, but I still can't get the Ambari login working.
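For reference, the settings described above correspond to something like the following in my.cnf (the 656M figure is from the question; the file location varies by distribution):

```ini
[mysqld]
max_connections    = 400
max_allowed_packet = 656M
```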
When I turn the Ambari Server logging level to debug and enter a random user, it gives these logs:
2020-03-18 11:24:16,116 DEBUG [ambari-client-thread-47] FilterChainProxy:325 - /api/v1/users/esrdf?fields=*,privileges/PrivilegeInfo/cluster_name,privileges/PrivilegeInfo/permission_name&_=1599955834198 at position 1 of 13 in additional filter chain; firing Filter: 'WebAsyncManagerIntegrationFilter'
But when I enter the correct admin:admin credentials, I don't see any API even being called in the logs. What could be the reason for this?
I tried debugging this using API calls, e.g. /api/v1/clusters.
When I make the API call from the server itself, it gives the correct response, but when the call is made from my local machine it fails with Recv failure: Connection was reset.
Can someone point out where I could be going wrong?
Help appreciated!
Found the issue. It was a VPN connectivity problem that was preventing login: the Ambari UI could be accessed, but login failed.
Login works when connected from a machine on the same network.
So basically what may be happening is that once you deploy to the service, all your paths are a little off. Say you have a link that says home: it will work locally but not on the server, because your application URL is now host:8080/appName/, and when you refer to anything with /home, instead of taking you to host:8080/appName/home it takes you to host:8080/home, which is a broken URL.
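The usual fix for that class of problem is to build links against the application's context path instead of hard-coding root-relative URLs. A minimal sketch in plain Java, where link() and the paths are illustrative stand-ins for however your framework exposes the context path:

```java
public class ContextPathLinks {
    // contextPath is "" when the app runs at the server root,
    // and "/appName" when it is deployed under a name.
    static String link(String contextPath, String target) {
        return contextPath + target;
    }

    public static void main(String[] args) {
        System.out.println(link("", "/home"));         // local: /home
        System.out.println(link("/appName", "/home")); // deployed: /appName/home
    }
}
```

In a servlet environment the deployed prefix is available at runtime as request.getContextPath(), so links built that way keep working wherever the app is deployed.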

Ejabberd Disco Items not coming

Version: 17.11
Platform: Ubuntu 16.04
With the mod_muc configuration, sometimes disco items do not load at all.
Here is the configuration I have used for disco items, along with a crash log I found when it crashes:
mod_muc:
  db_type: sql
  default_room_options:
    - allow_subscription: true
    - mam: true
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 1000
  max_user_conferences: 50
  max_users_presence: 50
Also, joining the same MUC that was available earlier sometimes does not get a connection. If I restart the server, things work well, and again after a certain time the MUCs stop coming.
Error Log:
Stopping MUC room x#conference.host.com
2018-07-27 12:57:39.972 [error] <0.32056.26> gen_fsm <0.32056.26> in state normal_state terminated with reason: bad return value: ok
2018-07-27 12:57:39.972 [error] <0.32056.26>#p1_fsm:terminate:760 CRASH REPORT Process <0.32056.26> with 0 neighbours exited with reason: bad return value: ok in p1_fsm:terminate/8 line 760
2018-07-30 05:12:12 =ERROR REPORT====
** State machine <0.9190.27> terminating
** Last event in was {route,<<>>,{iq,<<"qM1F3-119">>,set,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},[{xmlel,<<"query">>,[{<<"xmlns">>,<<"urn:xmpp:mam:2">>}],[{xmlel,<<"set">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/rsm">>}],[{xmlel,<<"max">>,[],[{xmlcdata,<<"30">>}]},{xmlel,<<"after">>,[],[]}]},{xmlel,<<"x">>,[{<<"xmlns">>,<<"jabber:x:data">>},{<<"type">>,<<"submit">>}],[{xmlel,<<"field">>,[{<<"var">>,<<"FORM_TYPE">>},{<<"type">>,<<"hidden">>}],[{xmlel,<<"value">>,[],[{xmlcdata,<<"urn:xmpp:mam:2">>}]}]}]}]}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}
** When State == normal_state
** Data == {state,<<"planet_discovery1532511384">>,
<<"conference.x.y.com">>,<<"x.y.com">>,{all,muc_create,[{allow,
[{acl,admin}]}],muc_create},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},{config,<<"Planet Discovery">>,<<>>,true,true,true,anyone,true,true,false,true,true,true,false,true,true,true,true,false,<<>>,true,[moderator,participant,visitor],true,1800,200,false,<<>>,{0,nil},true},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}|{x.y.com,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},<<"usr_name#x.y.com/1140">>,moderator,{presence,<<"qM1F3-116">>,available,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>},undefined,[],undefined,[{xmlel,<<"c">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/caps">>},{<<"hash">>,<<"sha-1">>},{<<"node">>,<<"http://www.igniterealtime.org/projects/smack">>},{<<"ver">>,<<"p801v5l0jeGbLCy09wmWvQCQ7Ok=">>}],[]},{vcard_xupdate,{<<>>,<<>>},undefined}],#{ip => 
{0,0,0,0,0,65535,46291,27829}}}}]],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},nil,{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[[<<"usr_name#x.y.com/1140">>,{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}]],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[[{<<"miga8747b6">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[],[[{<<"ruba32cc6e">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[]}}},{lqueue,{{[],[]},0,unlimited},1000},[],<<>>,false,nil,none,undefined}
** Reason for termination =
** {bad_return_value,ok}
2018-07-30 05:12:12 =CRASH REPORT====
crasher:
initial call: mod_muc_room:init/1
pid: <0.9190.27>
registered_name: []
exception exit: {{bad_return_value,ok},[{p1_fsm,terminate,8,[{file,"src/p1_fsm.erl"},{line,760}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: ['mod_muc_x.y.com',ejabberd_gen_mod_sup,ejabberd_sup,<0.32330.26>]
messages: []
links: []
dictionary: [{'$internal_queue_len',0}]
trap_exit: true
status: running
heap_size: 6772
stack_size: 27
reductions: 3310
neighbours:
2018-07-30 12:41:56 =ERROR REPORT====
What ejabberd version, and how did you install it?
The syntax of your default_room_options is wrong; did you really use the config like that?
And what changes did you make from a stock installation? I mean: did you set up a cluster of several nodes, did you enable other modules that may interfere with mod_muc...?
And most importantly: you have set max_rooms_discoitems to 10000. How many rooms does the service have? That option should be set to a small value, because requesting disco items for 10,000 rooms means requesting information from each single room, and that means 10,000 queries, which can have unknown consequences. Does your problem reproduce if you set a low value, like 100?
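As a point of comparison, ejabberd's documentation writes default_room_options as a plain mapping rather than a list of one-key maps. A sketch of that form, reusing the option names from the question (check the documentation for your exact version, as the accepted syntax has changed across releases):

```yaml
modules:
  mod_muc:
    db_type: sql
    default_room_options:
      allow_subscription: true
      mam: true
    max_rooms_discoitems: 100
```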

502 error nginx + ruby on rails application

Application details :
Rails 3.1.0
Ruby 1.9.2
unicorn 4.2.0
resque 1.20.0
nginx/1.0.14
redis 2.4.8
I am using the active_admin gem. For all URLs I get a 200 response,
but one URL gives a 502 error in production.
rake routes :
admin_links GET /admin/links(.:format) {:action=>"index", :controller=>"admin/links"}
And it's working locally (development).
localhost log: response code 200
Started GET "/admin/links" for 127.0.0.1 at 2013-02-12 11:05:21 +0530
Processing by Admin::LinksController#index as */*
Parameters: {"link"=>{}}
Geokit is using the domain: localhost
AdminUser Load (0.2ms) SELECT `admin_users`.* FROM `admin_users` WHERE `admin_users`.`id` = 3 LIMIT 1
(0.1ms) SELECT 1 FROM `links` LIMIT 1 OFFSET 0
(0.1ms) SELECT COUNT(*) FROM `links`
(0.2ms) SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM `links` LIMIT 10 OFFSET 0) subquery_for_count
CACHE (0.0ms) SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM `links` LIMIT 10 OFFSET 0) subquery_for_count
Link Load (0.6ms) SELECT `links`.* FROM `links` ORDER BY `links`.`id` desc LIMIT 10 OFFSET 0
Link Load (6677.2ms) SELECT `links`.* FROM `links`
Rendered /usr/local/rvm/gems/ruby-1.9.2-head/gems/activeadmin-0.4.2/app/views/active_admin/resource/index.html.arb (14919.0ms)
Completed 200 OK in 15663ms (Views: 8835.0ms | ActiveRecord: 6682.8ms | Solr: 0.0ms)
production log : 502 response
Started GET "/admin/links" for 103.9.12.66 at 2013-02-12 05:25:37 +0000
Processing by Admin::LinksController#index as */*
Parameters: {"link"=>{}}
NGinx error log
2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed connection while reading response header from upstream
I don't know what's happening; could somebody help me out?
You have a timeout problem.
Tackling it
HTTP/1.1 502 Bad Gateway
This indicates that nginx had a problem talking to its configured upstream.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes#502
2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed connection while reading response header from upstream
The nginx error log tells you that nginx was actually able to connect to the configured upstream, but the upstream process closed the connection before the answer was (fully) received.
Your development environment:
Completed 200 OK in 15663ms
Apparently you need around 15 seconds to generate the response on your development machine.
In contrast to proxy_connect_timeout, this timeout will catch a server that puts you in its connection pool but does not respond to you with anything beyond that. Be careful though not to set this too low, as your proxy server might take a longer time to respond to requests on purpose (e.g. when serving you a report page that takes some time to compute). You are able though to have a different setting per location, which enables you to have a higher proxy_read_timeout for the report page's location.
http://wiki.nginx.org/HttpProxyModule#proxy_read_timeout
On the nginx side, proxy_read_timeout defaults to 60 seconds, so that should be safe.
I have no idea how Ruby (on Rails) works internally, but check its error log: the timeout happens in that part of your stack.
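If the slow index page is genuinely expected to take that long, the per-location override mentioned in the quote above would look something like this (a sketch; the upstream name is a placeholder for whatever your nginx config calls the unicorn socket/upstream):

```nginx
# Hypothetical location block for the slow admin index page.
location /admin/links {
    proxy_pass          http://unicorn_upstream;  # placeholder upstream name
    proxy_read_timeout  120s;                     # default is 60s
}
```

Note also that if the upstream is unicorn, unicorn has its own worker timeout (60 seconds by default, configurable via `timeout` in unicorn.rb); a worker killed mid-request by that timeout would produce exactly the "upstream prematurely closed connection" error seen here. The better long-term fix is of course to make the 15-second query faster rather than to raise timeouts.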

Couchbase: Error while writing object to bucket

I have a web app where a Couchbase bucket is deleted and then recreated while clearing the cache (as flush didn't help in this scenario). I am using the REST API with auth type sasl and proxy port 11211 for creating it. The _mcache variable is re-initialized with the information in the config file after the bucket is recreated.
I get an error while trying to cache the object after recreation. I thought it was a timing issue and added a sleep after recreation, but that didn't help either. Here's a snippet of the log file.
I know I am missing something here; seeking advice.
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Releasing socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Are we alive? True
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Acquiring stream from pool. 192.168.70.156:11210
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.PooledSocket - Socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241 was reset
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Socket was reset. 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Couchbase.VBucketAwareOperationFactory.VBGet - Key egfWeo2Xrr1enrI/0gxiqvsNXOe2vHkfNCoh4Lq6UFv0uqAwg+MAvcTYrGMeCBf0KTPL/wEFA7iQqbCWWYbWTw== was mapped to 124
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Releasing socket 99103fd0-e03d-4fb8-b2b3-089ce27fc241
2012-11-30 11:27:19 [DEBUG] 5 Enyim.Caching.Memcached.MemcachedNode.InternalPoolImpl - Are we alive? True
2012-11-30 11:27:20 [ERROR] 6 Couchbase.MessageStreamListener - The infinite loop just finished, probably the server closed the connection without errors. (?)
2012-11-30 11:27:20 [DEBUG] 6 Couchbase.MessageStreamListener - ReadMessage failed with exception: - System.IO.IOException: Remote host closed the streaming connection
at Couchbase.MessageStreamListener.ReadMessages(Uri heartBeatUrl, Uri configUrl)
at Couchbase.MessageStreamListener.ProcessPool()
2012-11-30 11:27:20 [DEBUG] 6 Couchbase.MessageStreamListener - Reached the retry limit, rethrowing. - System.IO.IOException: Remote host closed the streaming connection
at Couchbase.MessageStreamListener.ReadMessages(Uri heartBeatUrl, Uri configUrl)
at Couchbase.MessageStreamListener.ProcessPool()
Without more information it is hard to provide a complete answer, but here are some ideas.
As you said, it could be because the bucket/node has not come back online yet after deleting/recreating the bucket. Have you tried, at least for testing, waiting "longer"? (I know that is not a viable workaround, but it helps identify the source of the issue.)
I think it is important to understand why the flush does not work, since it is the proper approach for what you need.
Once again, it would be great if you could provide more information, and also check that you are using the latest client library.
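A fixed sleep is fragile because bucket warm-up time varies. One way to make the "wait longer" test more robust is to poll with exponential backoff until the first operation succeeds. An illustrative sketch in Java (not the Enyim/.NET client used above; retryWithBackoff is a hypothetical helper):

```java
import java.util.function.Supplier;

public class RetryAfterRecreate {
    // Hypothetical helper: retry an operation, doubling the delay after each
    // failure, until it succeeds or maxAttempts is exhausted.
    static <T> T retryWithBackoff(Supplier<T> op, int maxAttempts, long initialDelayMs)
            throws InterruptedException {
        long delay = initialDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;  // exponential backoff
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a bucket that only becomes ready on the third attempt.
        final int[] calls = {0};
        String result = retryWithBackoff(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("bucket warming up");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

In the real app, the supplier would wrap the first cache write after recreation; once it succeeds, the bucket is known to be back online and normal operation can resume.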