Ejabberd disco items not loading - MySQL
Version: 17.11
Platform: Ubuntu 16.04
With the mod_muc configuration below, disco items sometimes do not load at all.
Here is the configuration I have used, followed by a crash log captured when a crash occurs:
mod_muc:
  db_type: sql
  default_room_options:
    - allow_subscription: true
    - mam: true
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  history_size: 100
  max_rooms_discoitems: 1000
  max_user_conferences: 50
  max_users_presence: 50
Also, joining a MUC that was previously available fails to connect. If I restart the server, everything works well again, but after some time the MUCs stop responding again.
Error Log:
Stopping MUC room x#conference.host.com
2018-07-27 12:57:39.972 [error] <0.32056.26> gen_fsm <0.32056.26> in state normal_state terminated with reason: bad return value: ok
2018-07-27 12:57:39.972 [error] <0.32056.26>#p1_fsm:terminate:760 CRASH REPORT Process <0.32056.26> with 0 neighbours exited with reason: bad return value: ok in p1_fsm:terminate/8 line 760
2018-07-30 05:12:12 =ERROR REPORT====
** State machine <0.9190.27> terminating
** Last event in was {route,<<>>,{iq,<<"qM1F3-119">>,set,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},[{xmlel,<<"query">>,[{<<"xmlns">>,<<"urn:xmpp:mam:2">>}],[{xmlel,<<"set">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/rsm">>}],[{xmlel,<<"max">>,[],[{xmlcdata,<<"30">>}]},{xmlel,<<"after">>,[],[]}]},{xmlel,<<"x">>,[{<<"xmlns">>,<<"jabber:x:data">>},{<<"type">>,<<"submit">>}],[{xmlel,<<"field">>,[{<<"var">>,<<"FORM_TYPE">>},{<<"type">>,<<"hidden">>}],[{xmlel,<<"value">>,[],[{xmlcdata,<<"urn:xmpp:mam:2">>}]}]}]}]}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}
** When State == normal_state
** Data == {state,<<"planet_discovery1532511384">>,
<<"conference.x.y.com">>,<<"x.y.com">>,{all,muc_create,[{allow,
[{acl,admin}]}],muc_create},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<>>},{config,<<"Planet Discovery">>,<<>>,true,true,true,anyone,true,true,false,true,true,true,false,true,true,true,true,false,<<>>,true,[moderator,participant,visitor],true,1800,200,false,<<>>,{0,nil},true},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}|{x.y.com,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},<<"usr_name#x.y.com/1140">>,moderator,{presence,<<"qM1F3-116">>,available,<<"en">>,{jid,<<"usr_name">>,<<"x.y.com">>,<<"1140">>,<<"usr_name">>,<<"x.y.com">>,<<"1140">>},{jid,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>,<<"planet_discovery1532511384">>,<<"conference.x.y.com">>,<<"usr_name#x.y.com/1140">>},undefined,[],undefined,[{xmlel,<<"c">>,[{<<"xmlns">>,<<"http://jabber.org/protocol/caps">>},{<<"hash">>,<<"sha-1">>},{<<"node">>,<<"http://www.igniterealtime.org/projects/smack">>},{<<"ver">>,<<"p801v5l0jeGbLCy09wmWvQCQ7Ok=">>}],[]},{vcard_xupdate,{<<>>,<<>>},undefined}],#{ip => {0,0,0,0,0,65535,46291,27829}}}}]],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},nil,{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[[<<"usr_name#x.y.com/1140">>,{<<"usr_name">>,<<"x.y.com">>,<<"1140">>}]],[],[],[],[],[],[],[],[],[],[],[],[]}}},{dict,3,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[[{<<"usr_name">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[[{<<"miga8747b6">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[],[],[],[[{<<"ruba32cc6e">>,<<"x.y.com">>,<<>>}|{owner,<<>>}]],[]}}},{lqueue,{{[],[]},0,unlimited},1000},[],<<>>,false,nil,none,undefined}
** Reason for termination =
** {bad_return_value,ok}
2018-07-30 05:12:12 =CRASH REPORT====
crasher:
initial call: mod_muc_room:init/1
pid: <0.9190.27>
registered_name: []
exception exit: {{bad_return_value,ok},[{p1_fsm,terminate,8,[{file,"src/p1_fsm.erl"},{line,760}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
ancestors: ['mod_muc_x.y.com',ejabberd_gen_mod_sup,ejabberd_sup,<0.32330.26>]
messages: []
links: []
dictionary: [{'$internal_queue_len',0}]
trap_exit: true
status: running
heap_size: 6772
stack_size: 27
reductions: 3310
neighbours:
2018-07-30 12:41:56 =ERROR REPORT====
What ejabberd version, and how did you install it?
The syntax of your default_room_options is wrong; did you really use the config exactly like that?
And what changes did you make from a stock installation? I mean: did you set up a cluster of several nodes, did you enable other modules that may interfere with mod_muc...?
And most importantly: you have set max_rooms_discoitems to 10000. How many rooms does the service have? That option should be set to a small value, because requesting disco items for 10,000 rooms means requesting information from each single room, and that means 10,000 queries, which can have unknown consequences. Does your problem reproduce if you set a low value, like 100? A corrected sketch follows.
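For reference, a minimal sketch of how the mod_muc section might look with both points addressed, assuming the map syntax ejabberd's documentation uses for default_room_options (plain key: value pairs, no list dashes) and max_rooms_discoitems lowered to 100; the other values are carried over unchanged from the question:

mod_muc:
  db_type: sql
  access_admin:
    - allow: admin
  access_create: muc_create
  access_persistent: muc_create
  default_room_options:
    # map syntax: no leading dashes before the option names
    allow_subscription: true
    mam: true
  history_size: 100
  # keep this small: every room listed in disco#items is queried individually
  max_rooms_discoitems: 100
  max_user_conferences: 50
  max_users_presence: 50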
Related
ESPHome not working for MH-Z19 with Lolin D1 mini
I connected the MH-Z19 sensor to the D1 mini and want to flash it via ESPHome. I followed this guide: https://esphome.io/components/sensor/mhz19.html

I used the following configuration:

esphome:
  name: co2-sensor

esp8266:
  board: esp01_1m

# Enable logging
#logger:

# Enable Home Assistant API
api:

ota:
  password: "xxxxxx"

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Co2-Sensor Fallback Hotspot"
    password: "xxxxx"

captive_portal:

uart:
  rx_pin: GPIO3
  tx_pin: GPIO1
  baud_rate: 9600

sensor:
  - platform: mhz19
    co2:
      name: "CO2 Value"
    temperature:
      name: "MH-Z19 Temperature"
    update_interval: 60s
    automatic_baseline_calibration: false

But I cannot flash it; I get the following error:

======================== [SUCCESS] Took 305.85 seconds ========================
INFO Successfully compiled program.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received. For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received. For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting

I can, however, flash it and it comes online if I disconnect the sensor; of course it then publishes no data. So I assume it is something to do with UART. I also tried disabling the logging, which did nothing.
It seems the UART pins I chose (GPIO1/GPIO3) are used by default for debug logging, even though I thought I had disabled the logging option. I used pins 4 and 5 instead and it worked: https://esphome.io/components/uart.html#uart

Note that the value I got was 5000 ppm at first, when the board was plugged into the Pi on which I'm running HA. When connecting it to another PSU I got normal-looking values. I assume it simply did not get enough power from the Pi.
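A minimal sketch of the changed uart block, assuming "pins 4 and 5" means GPIO4/GPIO5 (D2/D1 on the D1 mini); the rest of the configuration stays exactly as in the question:

uart:
  # GPIO1/GPIO3 are the hardware UART used for flashing and, by default,
  # debug logging, so the sensor is moved to free pins instead
  rx_pin: GPIO4
  tx_pin: GPIO5
  baud_rate: 9600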
Couchbase not restarting properly after disk extension
I'm using Couchbase. We reached the disk limit three days ago. We extended the disk space, but Couchbase doesn't start properly: the web console is not accessible. The debug logs show the lines below:

crasher:
  initial call: application_master:init/4
  pid: <0.86.0>
  registered_name: []
  exception exit: {{shutdown,
      {failed_to_start_child,ns_server_nodes_sup,
        {shutdown,
          {failed_to_start_child,start_couchdb_node,
            {{badmatch,{error,duplicate_name}},
             [{ns_server_nodes_sup,
               '-start_couchdb_node/0-fun-0-',0,
               [{file,"src/ns_server_nodes_sup.erl"},{line,129}]},
              {ns_port_server,init,1,
               [{file,"src/ns_port_server.erl"},{line,73}]},
              {gen_server,init_it,6,
               [{file,"c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/stdlib/src/gen_server.erl"},
                {line,304}]},
              {proc_lib,init_p_do_apply,3,
               [{file,"c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/stdlib/src/proc_lib.erl"},
                {line,239}]}]}}}}},
      {ns_server,start,[normal,[]]}}
    in function application_master:init/4 (c:/tools/cygwin/home/ADMINI~1/OTP_SR~2/lib/kernel/src/application_master.erl, line 133)
  ancestors: [<0.85.0>]
  messages: [{'EXIT',<0.87.0>,normal}]
  links: [<0.85.0>,<0.7.0>]
  dictionary: []
  trap_exit: true
  status: running
  heap_size: 1598
  stack_size: 27
  reductions: 202
  neighbours:
[error_logger:info,2019-11-18T20:44:58.184+01:00,ns_1#127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]

I'm using Windows NT version 6.2 Server Edition, Build 9200. I extended the disk space using a private cloud provider's system. The disk wasn't replaced; we just extended it. Has anyone faced this kind of issue?
SendGrid misconfiguration on Google Cloud (535 Authentication failed)
So I've installed SendGrid on Google Compute Engine with a CentOS base, following the documented instructions from Google: https://cloud.google.com/compute/docs/tutorials/sending-mail/using-sendgrid#before-you-begin

Testing from the command line (various accounts):

echo 'MESSAGE' | mail -s 'SUBJECT' GJ******#gmail.com

the /var/log/maillog shows, in several runs of 50 or so attempts within 1 second:

postfix/error[32324]: A293210062D7: to=<GJ********#gmail.com>, relay=none, delay=145998, delays=145997/1.2/0/0, dsn=4.0.0, status=deferred (delivery temporarily suspended: SASL authentication failed; server smtp.sendgrid.net[167.89.115.53] said: 535 Authentication failed: The provided authorization grant is invalid, expired, or revoked)

And the message is queued up and retried every few hours.

Now, messing around, I could change the port setting from 2525 to one of the regular ports that isn't blocked by Google, and the email gets bounced right away to the user account in the test message. I made sure to use the generated API key; the SendGrid system says no attempts have been made, bounced, or otherwise.

There were other errors in the maillog (pages of them, since it retries every second), but I changed the permissions on that directory so they no longer appear. Maybe they give a clue to how it's misconfigured?

Oct 31 19:04:14 beadc postfix/pickup[15119]: fatal: chdir("/var/spool/postfix"): Permission denied
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/qmgr pid 15118 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/qmgr: bad command startup -- throttling
Oct 31 19:04:15 beadc postfix/master[1264]: warning: process /usr/libexec/postfix/pickup pid 15119 exit status 1
Oct 31 19:04:15 beadc postfix/master[1264]: warning: /usr/libexec/postfix/pickup: bad command startup -- throttling

The only info I can find when searching for the error is that it indicates a SendGrid misconfiguration. Any ideas as to what the misconfiguration might be?
I've determined the 535 error was a port/firewall issue, which means the 550 error I had on the other port still exists. If you see a 535, check your firewall settings: https://cloud.google.com/compute/docs/tutorials/sending-mail/
Ejabberd: cannot enable mod_mam
I'm running ejabberd 17.01 on Ubuntu 16.04, and I need to log all messages in a database. With the module mod_mam this should be a straightforward task. What I've done so far, according to the official documentation:

- created a MySQL database
- imported the schema that comes with the ejabberd installation
- made the following changes in the ejabberd.yml conf file

Code snippet:

auth_method: sql
default_db: sql
...
## MySQL server:
##
sql_type: mysql
sql_server: "localhost"
sql_database: "ejabberd"
sql_username: "someuser"
sql_password: "somepassword"
...

In principle, this seems to work. When I create new users, or users create their own accounts, I can see the additions in the corresponding MySQL tables. So I presume the database is used and can be accessed.

But now I want to enable mod_mam. According to the documentation and other sources on the Web, this should look something like this:

modules:
  mod_mam:   % <-- I had "mom_mam" here. Didn't notice the typo in the error message.
    iqdisc: no_queue
    db_type: sql
    default: always

However, with this, ejabberd fails to start, throwing the following error:

2017-01-27 08:44:50.242 [critical] <0.39.0>#gen_mod:start_module:162 Problem starting the module mom_mam for host <<"xxx.xxx.xxx.xxx">>
 options: [{iqdisc,no_queue},{db_type,sql},{default,always}]
 error: undef
 [{mom_mam,start,
   [<<"xxx.xxx.xxx.xxx">>,
    [{iqdisc,no_queue},{db_type,sql},{default,always}]],
   []},
  {gen_mod,start_module,3,[{file,"src/gen_mod.erl"},{line,154}]},
  {lists,foreach,2,[{file,"lists.erl"},{line,1337}]},
  {ejabberd_app,start,2,[{file,"src/ejabberd_app.erl"},{line,77}]},
  {application_master,start_it_old,4,
   [{file,"application_master.erl"},{line,273}]}]
2017-01-27 08:44:50.242 [critical] <0.39.0>#gen_mod:maybe_halt_ejabberd:170 ejabberd initialization was aborted because a module start failed.

I cannot find anything online; it usually just seems to work.
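The "error: undef" on {mom_mam,start,...} is the whole story: ejabberd was asked to start a module named mom_mam, which does not exist, so initialization aborts. With the module name spelled correctly, the same options load; a minimal sketch of the corrected entry:

modules:
  mod_mam:            # was "mom_mam": an undefined module, hence "error: undef"
    iqdisc: no_queue
    db_type: sql
    default: always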
ERROR Session: Error creating pool to /127.0.0.1:9042
I am trying to insert values into Cassandra when I come across this error:

15/08/14 10:21:54 INFO Cluster: New Cassandra host /a.b.c.d:9042 added
15/08/14 10:21:54 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
15/08/14 10:21:54 ERROR Session: Error creating pool to /127.0.0.1:9042
com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect
    at com.datastax.driver.core.Connection.<init>(Connection.java:109)
    at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:32)
    at com.datastax.driver.core.Connection$Factory.open(Connection.java:586)
    at com.datastax.driver.core.SingleConnectionPool.<init>(SingleConnectionPool.java:76)
    at com.datastax.driver.core.HostConnectionPool.newInstance(HostConnectionPool.java:35)
    at com.datastax.driver.core.SessionManager.replacePool(SessionManager.java:271)
    at com.datastax.driver.core.SessionManager.access$400(SessionManager.java:40)
    at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:308)
    at com.datastax.driver.core.SessionManager$3.call(SessionManager.java:300)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9042

My replication factor is 1. There are 5 nodes in the Cassandra cluster (they're all up), with:

rpc_address: 0.0.0.0
broadcast_rpc_address: 127.0.0.1

I would expect to see one of those "INFO Cluster: New Cassandra host..." lines for each of the 5 nodes, but instead I see 127.0.0.1, and I am not sure why.

I also noticed that in the cassandra.yaml file all 5 nodes are listed as seeds (which I know is not advised, but I did not set up this cluster):

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "ip1, ip2, ip3, ip4, ip5"

where ipx is the IP address of node x. And cassandra-topology.properties just contains the following and does not mention any of the 5 nodes:

# default for unknown nodes
default=DC1:r1

Can someone explain why I am seeing the "ERROR Session: Error creating pool to /127.0.0.1:9042" error? Kind of new to Cassandra... thanks in advance!
I think the problem is that your broadcast_rpc_address is set to 127.0.0.1. Is there a particular reason you are doing this?

The Java driver uses the system.peers table to look up the IP address it should use to connect to each host. If broadcast_rpc_address is set, that is what appears in system.peers and what the driver will try to use. If broadcast_rpc_address is not set, rpc_address is used instead.

In either case, you'll want to set one of these addresses to an address that is accessible by your client. If you set rpc_address, you will want to remove broadcast_rpc_address. A sketch of that change follows.
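A minimal cassandra.yaml sketch of that last suggestion, applied on each node (10.0.0.1 is a placeholder for that node's own client-reachable address):

# Bind the client-facing interface to an address the driver can actually
# reach, instead of 0.0.0.0 plus a 127.0.0.1 broadcast override.
rpc_address: 10.0.0.1

# broadcast_rpc_address is removed entirely: with a concrete rpc_address,
# that address is what ends up in system.peers, so the driver no longer
# tries to create a connection pool to 127.0.0.1.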