Couchbase not restarting properly after disk extension

I'm using Couchbase. We reached the disk limit three days ago. We extended the disk space, but Couchbase isn't starting properly: the web console is not accessible. The debug logs show the lines below:
crasher:
initial call: application_master:init/4
pid: <0.86.0>
registered_name: []
exception exit: {{shutdown,
{failed_to_start_child,ns_server_nodes_sup,
{shutdown,
{failed_to_start_child,start_couchdb_node,
{{badmatch,{error,duplicate_name}},
[{ns_server_nodes_sup,
'-start_couchdb_node/0-fun-0-',0,
[{file,"src/ns_server_nodes_sup.erl"},{line,129}]},
{ns_port_server,init,1,
[{file,"src/ns_port_server.erl"},{line,73}]},
{gen_server,init_it,6,
[{file,
"c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/stdlib/src/gen_server.erl"},
{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,
"c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/stdlib/src/proc_lib.erl"},
{line,239}]}]}}}}},
{ns_server,start,[normal,[]]}}
in function application_master:init/4 (c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/kernel/src/application_master.erl,
line 133)
ancestors: [<0.85.0>]
messages: [{'EXIT',<0.87.0>,normal}]
links: [<0.85.0>,<0.7.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 1598
stack_size: 27
reductions: 202
neighbours:
[error_logger:info,2019-11-18T20:44:58.184+01:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
I'm using Windows NT version 6.2, Server Edition, build 9200.
I extended the disk space through a private cloud provider's system. The disk wasn't replaced; we just extended it.
Has anyone faced this kind of issue?

Related

Puppet agent: "Error: Failed to apply catalog: Could not render to json: source sequence is illegal/malformed utf-8"

We have a Puppet server running that services a couple of hundred Windows servers. The installed Puppet agent is 6.x. On almost all of the servers 'puppet agent -t' works fine, with a few exceptions exhibiting the same issue.
When I start clean, the Puppet agent connects with the server, receives a certificate and downloads all of the facts and what not. This works. Then the agent loads the facts and after a while I get an error message:
C:\>puppet agent -t
Info: Using configured environment 'windows'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Error: Failed to apply catalog: Could not render to json: source sequence is illegal/malformed utf-8
C:\>
If I run the Puppet agent in debug mode, all it shows is that it's resolving facts (although I could have missed something, since there's a lot of output); then the above message appears and the agent run stops. According to the debug output, the last fact being resolved is consistently:
Debug: Facter: resolving processor facts.
Debug: Facter: fact "hardwareisa" has resolved to "x64".
Debug: Facter: fact "processorcount" has resolved to 2.
Debug: Facter: fact "physicalprocessorcount" has resolved to 1.
Debug: Facter: fact "processor0" has resolved to "Intel(R) Xeon(R) CPU E5-2643 v2 # 3.50GHz".
Debug: Facter: fact "processors" has resolved to {
count => 2,
isa => "x64",
models => [
"Intel(R) Xeon(R) CPU E5-2643 v2 # 3.50GHz"
],
physicalcount => 1
}.
Error: Failed to apply catalog: Could not render to json: source sequence is illegal/malformed utf-8
However, I doubt that this is the culprit, because IIRC Puppet does not really run things sequentially.
I don't understand how the same thing can work on one server but not on another, even with the same agent version. How can I find out the source of the error message?
I know this is an ancient topic, but the resolution in my case was confirming that each of the custom facts was encoded in UTF-8. We discovered that a single fact file was encoded differently, and re-encoding it as UTF-8 fixed our issue.
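One way to do that check (a sketch only, not from the original answer: the module path is an assumption for a typical Puppet 6 server layout, and file's encoding detection is heuristic) is to scan the custom-fact files on the Puppet server for anything that is not UTF-8/ASCII:
# Flag custom fact files whose detected encoding is not UTF-8 or plain ASCII
find /etc/puppetlabs/code/environments/windows/modules -path '*/lib/facter/*.rb' \
  -exec file --mime-encoding {} + | grep -vE 'utf-8|us-ascii'
# Re-encode an offending file (assuming it was ISO-8859-1) and replace the original
iconv -f ISO-8859-1 -t UTF-8 bad_fact.rb -o bad_fact.rb.utf8 && mv bad_fact.rb.utf8 bad_fact.rb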

Chain head is not set yet. Permit all

I installed Hyperledger Sawtooth using this guide:
https://sawtooth.hyperledger.org/docs/core/releases/latest/sysadmin_guide/installation.html
[2018-11-04 02:35:13.204 DEBUG selector_events] Using selector: ZMQSelector
[2018-11-04 02:35:13.205 INFO interconnect] Listening on tcp://127.0.0.1:4004
[2018-11-04 02:35:13.205 DEBUG dispatch] Added send_message function for connection ServerThread
[2018-11-04 02:35:13.206 DEBUG dispatch] Added send_last_message function for connection ServerThread
[2018-11-04 02:35:13.206 DEBUG genesis] genesis_batch_file: /var/lib/sawtooth/genesis.batch
[2018-11-04 02:35:13.206 DEBUG genesis] block_chain_id: not yet specified
[2018-11-04 02:35:13.207 INFO genesis] Producing genesis block from /var/lib/sawtooth/genesis.batch
[2018-11-04 02:35:13.207 DEBUG genesis] Adding 1 batches
[2018-11-04 02:35:13.208 DEBUG executor] no transaction processors registered for processor type sawtooth_settings: 1.0
[2018-11-04 02:35:13.209 INFO executor] Waiting for transaction processor (sawtooth_settings, 1.0)
[2018-11-04 02:35:13.311 INFO processor_handlers] registered transaction processor: connection_id=014a2086c9ffe773b104d8a0122b9d5f867a1b2d44236acf4ab097483dbe49c2ad33d3302acde6f985d911067fe92207aa8adc1c9dbc596d826606fe1ef1d4ef, family=intkey, version=1.0, namespaces=['1cf126']
[2018-11-04 02:35:18.110 INFO processor_handlers] registered transaction processor: connection_id=e615fc881f8e7b6dd05b1e3a8673d125a3e759106247832441bd900abae8a3244e1507b943258f62c458ded9af0c5150da420c7f51f20e62330497ecf9092060, family=xo, version=1.0, namespaces=['5b7349']
[2018-11-04 02:35:21.908 DEBUG permission_verifier] Chain head is not set yet. Permit all.
[2018-11-04 02:35:21.908 DEBUG permission_verifier] Chain head is not set yet. Permit all.
Then:
ubuntu@ip-172-31-42-144:~$ sudo intkey-tp-python -vv
[2018-11-04 02:42:05.710 INFO core] register attempt: OK
Then:
ubuntu@ip-172-31-42-144:~$ intkey create_batch
Writing to batches.intkey...
ubuntu@ip-172-31-42-144:~$ intkey load
batches: 2 batch/sec: 160.14600713999351
REST-API works, too.
I followed all the steps exactly as shown in the guide. This older question didn't help me either: hyperledger sawtooth validator node permissioning issue
ubuntu@ip-172-31-42-144:~$ curl http://localhost:8008/blocks
{
  "error": {
    "code": 15,
    "message": "The validator has no genesis block, and is not yet ready to be queried. Try your request again later.",
    "title": "Validator Not Ready"
  }
}
The genesis batch was attached?!
MARiE
As the log shows, the genesis batch is waiting on the sawtooth_settings TP. If you start that up, just like you start up intkey and xo, it will process the genesis batch and will then be able to handle your intkey transactions.
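For example (a sketch only; the binary name and flag follow the Sawtooth 1.0 installation guide linked above, so adjust them to your version and packaging):
ubuntu@ip-172-31-42-144:~$ sudo -u sawtooth settings-tp -v
Once it registers, the validator log should report the genesis block being produced, and the /blocks query should stop returning "Validator Not Ready".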

Ejabberd disco items not coming

Version: 17.11
Platform: Ubuntu 16.04
With the mod_muc configuration below, sometimes disco items do not load at all.
Here is the configuration I have used for disco items, followed by a crash log I found when it crashes.
mod_muc:
  db_type: sql
  default_room_options:
  - allow_subscription: true
  - mam: true
  access_admin:
  - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 1000
  max_user_conferences: 50
  max_users_presence: 50
Also, joining the same MUC that was available earlier does not get a connection. If I restart the server, things work well, and then after a certain time the MUCs stop coming up again.
Error Log:
Stopping MUC room x@conference.host.com
2018-07-27 12:57:39.972 [error] <0.32056.26> gen_fsm <0.32056.26> in state normal_state terminated with reason: bad return value: ok
2018-07-27 12:57:39.972 [error] <0.32056.26>#p1_fsm:terminate:760 CRASH REPORT Process <0.32056.26> with 0 neighbours exited with reason: bad return value: ok in p1_fsm:terminate/8 line 760
2018-07-30 05:12:12 =ERROR REPORT====
** State machine <0.9190.27> terminating
** Last event in was {route,<<>>,{iq,<<"qM1F3-119">>,set,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},[{xmlel,<<"query">>,[{<<"xmlns">>,<<"urn:xmpp:mam:2">>}],[{xmlel,<<"set">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/rsm">>}],[{xmlel,<<"max">>,[],[{xmlcdata,<<"30">>}]},{xmlel,<<"after">>,[],[]}]},{xmlel,<<"x">>,[{<<"xmlns">>,<<"jabber:x:data">>},{<<"type">>,<<"submit">>}],[{xmlel,<<"field">>,[{<<"var">>,<<"FORM_TYPE">>},{<<"type">>,<<"hidden">>}],[{xmlel,<<"value">>,[],[{xmlcdata,<<"urn:xmpp:mam:2">>}]}]}]}]}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}
** When State == normal_state
** Data == {state,<<"planet_discovery1532511384">>,
<<"conference.x.y.com">>,<<"x.y.com">>,{all,muc_create,[{allow,
[{acl,admin}]}],muc_create},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},{config,<<"Planet Discovery">>,<<>>,true,true,true,anyone,true,true,false,true,true,true,false,true,true,true,true,false,<<>>,true,[moderator,participant,visitor],true,1800,200,false,<<>>,{0,nil},true},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}|{x.y.com,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},<<"usr_name#x.y.com/1140">>,moderator,{presence,<<"qM1F3-116">>,available,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>},undefined,[],undefined,[{xmlel,<<"c">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/caps">>},{<<"hash">>,<<"sha-1">>},{<<"node">>,<<"http://www.igniterealtime.org/projects/smack">>},{<<"ver">>,<<"p801v5l0jeGbLCy09wmWvQCQ7Ok=">>}],[]},{vcard_xupdate,{<<>>,<<>>},undefined}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}]],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},nil,{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[[<<"usr_name#x.y.com/1140">>,{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}]],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[[{<<"miga8747b6">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[],[[{<<"ruba32cc6e">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[]}}},{lqueue,{{[],[]},0,unlimited},1000},[],<<>>,false,nil,none,undefined}
** Reason for termination =
** {bad_return_value,ok}
2018-07-30 05:12:12 =CRASH REPORT====
crasher:
initial call: mod_muc_room:init/1
pid: <0.9190.27>
registered_name: []
exception exit: {{bad_return_value,ok},[{p1_fsm,terminate,8,[{file,"src/p1_fsm.erl"},{line,760}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: ['mod_muc_x.y.com',ejabberd_gen_mod_sup,ejabberd_sup,<0.32330.26>]
messages: []
links: []
dictionary: [{'$internal_queue_len',0}]
trap_exit: true
status: running
heap_size: 6772
stack_size: 27
reductions: 3310
neighbours:
2018-07-30 12:41:56 =ERROR REPORT====
What ejabberd version is it, and how did you install it?
The syntax of your default_room_options is wrong; did you really use the config like that?
And what changes did you make from a stock installation? I mean: did you set up a cluster of several nodes, did you enable other modules that may interfere with mod_muc...?
And most importantly: you have set max_rooms_discoitems to 1000. How many rooms does the service have? That option should be set to a small value, because requesting disco items for 1,000 rooms means requesting information from every single room, and that means 1,000 queries, which can have unknown consequences. Does your problem reproduce if you set a low value, like 100 (see the sketch below)?
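For instance, something along these lines (a sketch only; double-check the exact default_room_options syntax against the mod_muc documentation for your ejabberd release):
mod_muc:
  db_type: sql
  default_room_options:
    allow_subscription: true
    mam: true
  max_rooms_discoitems: 100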

Instance doesn't boot correctly, hangs on "A start job is running for LSB: Raise network interfaces..."

My VM was shut down at the end of the trial. However, I have since made a payment and started other instances.
The GCE UI shows this system as successfully booted; however, the serial port shows the following (screenshot transcribed below).
Any ideas how to fix this?
Screenshot of the boot error:
[ 6.895575] ppdev: user-space parallel port driver
[ 6.951588] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 6.993046] AVX version of gcm_enc/dec engaged.
[ 6.996351] alg: No test for __gcm-aes-aesni (__driver-gcm-aes-aesni)
[ 7.001659] alg: No test for crc32 (crc32-pclmul)
[ OK ] Started LSB: start firewall.
[***] A start job is running for LSB: Raise network interf...17s / no limit)

Unable to access Google Compute Engine instance using external IP address

I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears that init is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with higher capacity using your PD; however, if a script is causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project= getserialportoutput
If the issue still continues, you can either:
- Make a snapshot of your PD and make a copy of the PD, or
- Delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk so you can access it and find out what is causing the issue; a rough sketch follows below. Visit https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inode troubleshooting.
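For example, with the current gcloud CLI (a sketch only; the original answer used the older gcutil tool, and the instance, disk, zone, and device names here are placeholders):
# Attach the old persistent disk to a rescue VM as a second disk
gcloud compute instances attach-disk rescue-vm --disk old-instance-pd --zone us-central1-a
# On the rescue VM: mount it read-only and check inode usage
sudo mkdir -p /mnt/rescue
sudo mount -o ro /dev/sdb1 /mnt/rescue   # device name depends on your setup
df -i /mnt/rescue                        # IUse% near 100% confirms inode exhaustion
# Count files per directory to find what is creating them
sudo find /mnt/rescue -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head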
Make sure the "Allow HTTP traffic" setting on the VM is still enabled.
Then check which network firewall rules you are using and what they allow.
If your network is set up to use an ephemeral IP, it will periodically be released back, which causes your IP to change over time. In that case, set it to a static/reserved address (on the Networks page); a sketch with the current CLI follows the link below.
https://developers.google.com/compute/docs/instances-and-network#externaladdresses
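Roughly, with the current gcloud CLI (a sketch only; the rule listing, address name, IP, and region are placeholders, and the original answer predates gcloud):
# Confirm a firewall rule still allows HTTP (tcp:80) to the instance's network/tags
gcloud compute firewall-rules list
# Promote the instance's current ephemeral external IP to a static (reserved) address
gcloud compute addresses create my-static-ip --addresses 203.0.113.10 --region us-central1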