ejabberd MUC-Sub trouble

I am running an ejabberd server (version 18.3.0) with the following config for mod_muc:
mod_muc:
  host: "conference.@HOST@"
  ...
  default_room_options:
    allow_subscription: true
    persistent: true
    mam: true
I am trying to access the MUC rooms on this server from various clients (iOS with XMPPFramework, JS with node-xmpp-client), crafting the XMPP stanzas manually. The clients are able to receive messages, but ONLY if I send a presence stanza after connecting.
However, without sending a presence, I don't receive any messages on the client (even though the subscribe succeeds). My understanding was that a presence stanza is not required to receive messages with MUC-Sub.
Any help is highly appreciated!
Additional details in response to Badlop's reply
I compared the XML messages on my client/server to the ones you posted. Continuing the user1/user2 example you used, I see a success for user2's subscribe:
<iq xmlns="jabber:client"
    lang="en"
    to="rk3@localhost/abcd"
    from="tr21@conference.localhost"
    type="result" id="D7550060-E2AE-4369-878C-261A02BA48A2">
  <subscribe xmlns="urn:xmpp:mucsub:0" nick="rk3n">
    <event node="urn:xmpp:mucsub:nodes:messages"/>
    <event node="urn:xmpp:mucsub:nodes:presence"/>
  </subscribe>
</iq>
Also, a query of the MUC service from user2 returns the following:
<iq xmlns="jabber:client"
    lang="en"
    to="rk3@localhost/abcd"
    from="conference.localhost" type="result" id="B28A237A-5D54-4AE2-821A-195272B05A88">
  <subscriptions xmlns="urn:xmpp:mucsub:0">
    <subscription jid="tr21@conference.localhost"/>
  </subscriptions>
</iq>
However, when I send a groupchat message from user1:
<message
    from="rk1@localhost" to="tr21@conference.localhost"
    type="groupchat">
  <body> hi there777hi there778</body>
</message>
User2 still does not receive the above message.
I turned on logging level 5 on the ejabberd server and can see that it is trying to send the above message to user2 (rk3). However, the last log I see for this message on the server is below (I don't see any 'Send XML on stream' log for this message).
2018-05-14 16:28:57.808 [debug] <0.646.0>@ejabberd_sm:do_route:656 processing message to bare JID:
#message{
id = <<>>,type = normal,lang = <<>>,
from =
#jid{
user = <<"tr21">>,server = <<"conference.localhost">>,resource = <<>>,
luser = <<"tr21">>,lserver = <<"conference.localhost">>,lresource = <<>>},
to =
#jid{
user = <<"rk3">>,server = <<"localhost">>,resource = <<>>,
luser = <<"rk3">>,lserver = <<"localhost">>,lresource = <<>>},
subject = [],body = [],thread = undefined,
sub_els =
[#ps_event{
items =
#ps_items{
xmlns = <<>>,node = <<"urn:xmpp:mucsub:nodes:messages">>,
items =
[#ps_item{
xmlns = <<>>,id = <<"15241958194312511749">>,
sub_els =
[#message{
id = <<>>,type = groupchat,lang = <<"en">>,
from =
#jid{
user = <<"tr21">>,server = <<"conference.localhost">>,
resource = <<"rk1">>,luser = <<"tr21">>,
lserver = <<"conference.localhost">>,lresource = <<"rk1">>},
to =
#jid{
user = <<"rk3">>,server = <<"localhost">>,resource = <<>>,
luser = <<"rk3">>,lserver = <<"localhost">>,lresource = <<>>},
subject = [],
body = [#text{lang = <<>>,data = <<"hi there777hi there778">>}],
thread = undefined,
sub_els =
[#mam_archived{
by =
#jid{
user = <<"tr21">>,server = <<"conference.localhost">>,
resource = <<>>,luser = <<"tr21">>,
lserver = <<"conference.localhost">>,lresource = <<>>},
id = <<"1526283878998040">>},
#stanza_id{
by =
#jid{
user = <<"tr21">>,server = <<"conference.localhost">>,
resource = <<>>,luser = <<"tr21">>,
lserver = <<"conference.localhost">>,lresource = <<>>},
id = <<"1526283878998040">>}],
meta =
#{ip => {172,17,0,1},
mam_archived => true,stanza_id => 1526283878998040}}],
node = <<>>,publisher = <<>>}],
max_items = undefined,subid = <<>>,retract = undefined},
purge = undefined,subscription = undefined,delete = undefined,
create = undefined,configuration = undefined}],
meta = #{stanza_id => 1526283879010097}}
I am probably missing something very basic (w.r.t. user/nick/MUC room, etc.) but have no idea what.
Can you please give me the steps you used to create user1/user2, register their nicks, etc. on the server (using ejabberdctl)?
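(For reference, creating the accounts and the room with ejabberdctl typically looks like the following; user names, passwords and the room name are placeholders, and create_room requires the mod_muc_admin module:)
ejabberdctl register user1 localhost pass1
ejabberdctl register user2 localhost pass2
ejabberdctl create_room room2 conference.localhost localhost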

My understanding was that a presence stanza is not required to receive messages with MUC-Sub.
You're right, there is something strange. So I tried it myself, and below are the exact stanzas sent and received, so you can compare; maybe you'll see something relevant.
You can also try to send those stanzas manually using the XML console of a desktop Jabber client, like Gajim, Psi or Tkabber, so you don't have to write code for this testing.
I configure the module like you do. Then user1 joins room2 (so it gets created).
And user2 subscribes to the room:
<iq to='room2@conference.localhost'
    type='set'
    id='E6E10350-76CF-40C6-B91B-1EA08C332FC7'>
  <subscribe xmlns='urn:xmpp:mucsub:0'
             nick='mynick'
             password='roompassword'>
    <event node='urn:xmpp:mucsub:nodes:messages' />
    <event node='urn:xmpp:mucsub:nodes:affiliations' />
    <event node='urn:xmpp:mucsub:nodes:subject' />
    <event node='urn:xmpp:mucsub:nodes:config' />
  </subscribe>
</iq>
<iq xml:lang='es'
    to='user2@localhost/tka1'
    from='room2@conference.localhost'
    type='result'
    id='E6E10350-76CF-40C6-B91B-1EA08C332FC7'>
  <subscribe nick='mynick'
             xmlns='urn:xmpp:mucsub:0'>
    <event node='urn:xmpp:mucsub:nodes:messages'/>
    <event node='urn:xmpp:mucsub:nodes:affiliations'/>
    <event node='urn:xmpp:mucsub:nodes:subject'/>
    <event node='urn:xmpp:mucsub:nodes:config'/>
  </subscribe>
</iq>
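If you are crafting these stanzas manually, as the question mentions, a small helper along these lines can generate the same subscribe IQ (a sketch using Python's ElementTree; the JIDs, nick, and node list are placeholders):
import uuid
import xml.etree.ElementTree as ET

def mucsub_subscribe_iq(room_jid, nick, nodes):
    # Build an <iq type='set'> carrying a MUC-Sub <subscribe/> element.
    iq = ET.Element('iq', {'to': room_jid, 'type': 'set', 'id': str(uuid.uuid4())})
    sub = ET.SubElement(iq, 'subscribe', {'xmlns': 'urn:xmpp:mucsub:0', 'nick': nick})
    for node in nodes:
        ET.SubElement(sub, 'event', {'node': node})
    return ET.tostring(iq, encoding='unicode')

print(mucsub_subscribe_iq('room2@conference.localhost', 'mynick',
                          ['urn:xmpp:mucsub:nodes:messages',
                           'urn:xmpp:mucsub:nodes:subject']))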
Immediately after that, user1 sends a message to the room, and user2 receives it, without having sent any presence stanza:
<message to='user2@localhost/tka1'
         from='room2@conference.localhost'>
  <event xmlns='http://jabber.org/protocol/pubsub#event'>
    <items node='urn:xmpp:mucsub:nodes:messages'>
      <item id='1625407893684208871'>
        <message xml:lang='es'
                 to='user2@localhost'
                 from='room2@conference.localhost/user1'
                 type='groupchat'
                 id='53:939858'
                 xmlns='jabber:client'>
          <archived by='room2@conference.localhost'
                    id='1526291787755131'
                    xmlns='urn:xmpp:mam:tmp'/>
          <stanza-id by='room2@conference.localhost'
                     id='1526291787755131'
                     xmlns='urn:xmpp:sid:0'/>
          <body>hi allll</body>
        </message>
      </item>
    </items>
  </event>
</message>
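Note that the actual groupchat message arrives wrapped in a pubsub event notification, so a client has to unwrap it before displaying anything. A minimal sketch of that unwrapping (Python ElementTree; namespaces as in the stanza above):
import xml.etree.ElementTree as ET

EVENT_NS = '{http://jabber.org/protocol/pubsub#event}'
CLIENT_NS = '{jabber:client}'

def unwrap_mucsub(notification_xml):
    # Return (sender, body) of the groupchat <message> wrapped in a
    # MUC-Sub notification, or None if no message is wrapped.
    outer = ET.fromstring(notification_xml)
    for item in outer.iter(EVENT_NS + 'item'):
        inner = item.find(CLIENT_NS + 'message')
        if inner is not None:
            body = inner.find(CLIENT_NS + 'body')
            return inner.get('from'), (body.text if body is not None else None)
    return None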
Just to be sure, user2 queries the MUC service for the list of his subscriptions, and MUC returns room2, plus another room he was also subscribed to:
<iq to='conference.localhost'
    type='get'
    id='E6E10350-76CF-40C6-B91B-1EA08C332FC7'>
  <subscriptions xmlns='urn:xmpp:mucsub:0' />
</iq>
<iq xml:lang='es'
    to='user2@localhost/tka1'
    from='conference.localhost'
    type='result'
    id='E6E10350-76CF-40C6-B91B-1EA08C332FC7'>
  <subscriptions xmlns='urn:xmpp:mucsub:0'>
    <subscription jid='room2@conference.localhost'/>
    <subscription jid='room3@conference.localhost'/>
  </subscriptions>
</iq>


Sent a message to group from ejabberd server but I get "Hook user_receive_packet crashed when running mod_mam:user_receive_packet"
send_message(Type, From, To, Subject, Body, StaticNumber) ->
    CodecOpts = ejabberd_config:codec_options(),
    try xmpp:decode(
          #xmlel{name = <<"message">>,
                 attrs = [{<<"to">>, To},
                          {<<"from">>, From},
                          {<<"type">>, Type},
                          {<<"id">>, p1_rand:get_string()}],
                 children =
                     [#xmlel{name = <<"subject">>,
                             children = [{xmlcdata, Subject}]},
                      #xmlel{name = <<"groupcontent">>,
                             attrs = [{<<"sendername">>, <<"Admin">>},
                                      {<<"acknowStatus">>, <<"0">>},
                                      {<<"fromadmin">>, StaticNumber}],
                             children = []},
                      #xmlel{name = <<"body">>,
                             children = [{xmlcdata, Body}]}]},
          ?NS_CLIENT, CodecOpts) of
        #message{from = JID} = Msg ->
            State = #{jid => JID},
            ejabberd_hooks:run_fold(user_send_packet, JID#jid.lserver, {Msg, State}, []),
            ejabberd_router:route(Msg)
    catch _:{xmpp_codec, Why} ->
        {error, xmpp:format_error(Why)}
    end.
Function call:
send_message("normal",
    list_to_binary("123456789@xmpp.designcafe.com"),
    list_to_binary("6ff3d0a4-c281-41bd-a262-c65bd767014d@mix.xmpp.designcafe.com"),
    list_to_binary("text"), <<"test">>, <<"123456789">>);
I could not fix the above issue.
send_message("normal",
Instead of "normal", you must provide groupchat as a binary, that is:
send_message(<<"groupchat">>,
With that change it works for me using ejabberd 22.05. It's important that From is an existing account and that it has joined the MIX channel. Of course, the MIX channel must exist too.

Can't map Samba share from CentOS to Win10 - error 67 "The network name cannot be found"

I've just followed a guide on installing Samba, adding a Samba user, and configuring the smb.conf file:
[global]
workgroup = SAMBA
security = user
passdb backend = tdbsam
printing = cups
printcap name = cups
load printers = yes
cups options = raw
[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes
[printers]
comment = All Printers
path = /var/tmp
printable = Yes
create mask = 0600
browseable = No
[print$]
comment = Printer Drivers
path = /var/lib/samba/drivers
write list = @printadmin root
force group = @printadmin
create mask = 0664
directory mask = 0775
[Stuff]
path = /mystuff
guest ok = no
available = yes
valid users = livingroom
read only = no
browsable = yes
writeable = yes
On my Win10 machine I can browse to \\192.168.100.6 and get prompted for login (username livingroom), which it accepts. I then see two folders in Explorer: 'livingroom' and 'Stuff'.
However, when I double-click either of them, it tries for a while before eventually failing with Error code: 0x80070043 - The network name cannot be found.
Any ideas why it's saying the network name cannot be found when I'm using the IP address to access it?
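(Before digging further on the Windows side, two quick checks on the CentOS box usually narrow this kind of failure down; the share and user names here are taken from the config above:)
testparm -s                                 # validate smb.conf and print the effective share definitions
smbclient -L localhost -U livingroom        # list the shares the server actually exports
smbclient //localhost/Stuff -U livingroom   # try opening the share locally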

Confluent Kafka Python client Avro producer.produce() executes without error but no data in topic

My producer isn't throwing any errors, but data is not being sent to the destination topic. Can you recommend any techniques to debug this situation?
I have a call to a Confluent Python Avro producer inside a synchronous loop to send data to a topic, like so:
self.producer.produce(topic="test2", value=msg_dict)
After this call I have a piece of code like so to flush the queue:
num_messages_in_queue = self.producer.flush(timeout = 2.0)
print(f"flushed {num_messages_in_queue} messages from producer queue in iteration {num_iterations} ")
This executes without any error, but no callback is fired after this code executes. My producer is initialized as follows:
def __init__(self, broker_url=None, topic=None, schema_registry_url=None, schema_path=None):
    try:
        with open(schema_path, 'r') as content_file:
            schema = avro.loads(content_file.read())
    except Exception as e:
        print(f"Error when trying to read avro schema file: {schema_path} ({e})")
        raise  # without a schema the producer cannot be built
    self.conf = {
        'bootstrap.servers': broker_url,
        'on_delivery': self.delivery_report,
        'schema.registry.url': schema_registry_url,
        'acks': -1,  # the record is not lost as long as at least one in-sync replica remains alive
        'enable.idempotence': False,
        'error_cb': self.error_cb
    }
    self.topic = topic
    self.schema_path = schema_path
    self.producer = AvroProducer(self.conf, default_key_schema=schema, default_value_schema=schema)
My callback method is as follows:
def delivery_report(self, err, msg):
    print("began delivery_report")
    if err is None:
        print(f"delivery_report --> Delivered msg.value = {msg.value()} to topic = {msg.topic()} offset = {msg.offset()} without err.")
    else:
        print(f"conf_worker AvroProducer failed to deliver message {msg.value()} to topic {self.topic}. got error = {err}")
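One detail worth knowing when reading the code above: in confluent-kafka, delivery callbacks are only ever invoked from poll() or flush(), and flush() returns the number of messages still in the queue, not the number delivered. So the produce loop should look roughly like this (names follow the question's code):
self.producer.produce(topic=self.topic, value=msg_dict)
self.producer.poll(0)  # serve delivery callbacks for previously produced messages
remaining = self.producer.flush(timeout=2.0)
if remaining > 0:
    print(f"{remaining} messages still queued or in flight after flush() timed out")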
After this code is executed, I look at my topic on the schema registry container like so:
docker exec schema_registry_container kafka-avro-console-consumer --bootstrap-server kafka:29092 --topic test2 --from-beginning
I see this output:
[2020-04-03 15:48:38,064] INFO Registered kafka:type=kafka.Log4jController MBean
(kafka.utils.Log4jControllerRegistration$)
[2020-04-03 15:48:38,742]
INFO ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [kafka:29092]
check.crcs = true
client.dns.lookup = default
client.id =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = console-consumer-49056
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2020-04-03 15:48:38,887] INFO Kafka version : 2.1.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-03 15:48:38,887] INFO Kafka commitId : bda8715f42a1a3db (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-03 15:48:39,221] INFO Cluster ID: KHKziPBvRKiozobbwvP1Fw (org.apache.kafka.clients.Metadata)
[2020-04-03 15:48:39,224] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Discovered group coordinator kafka:29092 (id: 2147483646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:39,231] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Revoking previously assigned partitions []
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2020-04-03 15:48:39,231] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:42,264] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Successfully joined group with generation 1
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2020-04-03 15:48:42,267] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Setting newly assigned partitions [test2-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2020-04-03 15:48:42,293] INFO [Consumer clientId=consumer-1, groupId=console-consumer-49056] Resetting offset for partition test2-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher)
So the answer is so trivial that it's embarrassing! But it does point to the fact that in a multilayered infrastructure, a single incorrectly set value can result in a silent failure that is very tedious to track down.
The issue came from an incorrect parameter in my docker-compose.yml file: the env variable for broker_url was not set. The application code needed this variable to reference the Kafka broker, yet no exception was thrown for the missing param and it failed silently.
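A cheap guard against this class of silent failure is to validate required settings at startup instead of passing None into the producer config. A sketch (the environment variable name here is hypothetical):
import os

broker_url = os.environ.get("KAFKA_BROKER_URL")
if not broker_url:
    # Fail fast: an unset broker URL otherwise yields a producer that queues
    # messages it can never deliver, without raising any exception.
    raise RuntimeError("KAFKA_BROKER_URL is not set; refusing to start the producer")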

CAS authentication with Shiro for Zeppelin

I am unsuccessfully trying to get the shiro.ini in Zeppelin to use CAS.
I followed these instructions
http://shiro.apache.org/cas.html
casFilter = org.apache.shiro.cas.CasFilter
casFilter.failureUrl = /error.html
casRealm = org.apache.shiro.cas.CasRealm
casRealm.defaultRoles = USER
casRealm.casServerUrlPrefix = https://ticketserver.com
casRealm.casService = https://tickettranslater.com/j_spring_cas_security_check
casSubjectFactory = org.apache.shiro.cas.CasSubjectFactory
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.subjectFactory = $casSubjectFactory
securityManager.realms = $casRealm
### If caching of user is required then uncomment below lines
#cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
#securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
#securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/interpreter/** = authc, roles[USER]
/api/configurations/** = authc, roles[USER]
/api/credential/** = authc, roles[SOMEOTHER]
/api/login = casFilter
/** = authc
#/** = anon
#/** = authc
The casService is what should translate the ticket to a user.
The casServerUrlPrefix is where one gets the tickets.
If I set shiro.loginUrl = https://ticketserver.com?service=https://tickettranslater.com/j_spring_cas_security_check it works, except that the Origin header gets lost along the way and the login fails.
Both ticketserver.com and tickettranslater.com are on the network and work for plenty of other applications.
How do I set up the shiro.ini so the CAS login chain is handled correctly?
This configuration works with Apache Zeppelin 0.6.2.
If you are already authenticated against a CAS server, you will be authenticated automatically into Apache Zeppelin.
You need to compile zeppelin-web, but first you need to add the shiro-cas Maven dependency to zeppelin-web/pom.xml:
<dependencies>
  <dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-cas</artifactId>
    <version>1.2.3</version>
  </dependency>
</dependencies>
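After adding the dependency, rebuild the web module so it gets bundled, with something like the following (the exact module name or path may differ between Zeppelin versions):
mvn clean package -pl zeppelin-web -DskipTests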
Then configure the file conf/shiro.ini with this:
[main]
casFilter = org.apache.shiro.cas.CasFilter
casFilter.failureUrl = /404.html
casRealm = org.apache.shiro.cas.CasRealm
casRealm.defaultRoles = ROLE_USER
casRealm.casServerUrlPrefix = http://<cas-server>:<port>/cas/p3
casRealm.casService = http://localhost:8080/api/shiro-cas
casSubjectFactory = org.apache.shiro.cas.CasSubjectFactory
securityManager.subjectFactory = $casSubjectFactory
securityManager.realms = $casRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
[urls]
/api/shiro-cas = casFilter
/api/version = anon
/** = authc

Vista UAC Issues with samba and Admin Credentials

We have Samba set up for our shared drive; I have pasted the smb.conf file below. Everything is working well except when we try to run an EXE file using Windows Vista. When we run an EXE file, it first asks for UAC elevation, then it pops up the username and password prompt, and you must type your username and password in again before it will run.
I think the issue is that UAC is now running the application as Admin instead of as the logged-in user, so the first username and password that is cached is not seen by the admin user. Does anyone know of a workaround for this?
smb.conf:
[global]
passdb backend = tdbsam
security = user
encrypt passwords = yes
preferred master = Yes
workgroup = Workgroup
netbios name = Omni
bind interfaces only = True
interfaces = lo eth2
;max disk size = 990000 ;some programs (like PS7) can't deal with more than 1TB
socket options = TCP_NODELAY
server string = Omni
;smb ports = 139
debuglevel = 1
syslog = 0
log level = 2
log file = /var/log/samba/%U.log
max log size = 61440
vfs objects = omnidrive recycle
recycle:repository = RecycleBin/%U
recycle:keeptree = Yes
recycle:touch = No
recycle:versions = Yes
recycle:maxsize = 0
recycle:exclude = *.temp *.mp3 *.cat
omnidrive:log = 2
omnidrive:com_log = 1
omnidrive:vscan = 1
omnidrive:versioningState = 1
omnidrive:versioningMaxFileSize = 0
omnidrive:versioningMaxRevSize = 7168
omnidrive:versioningMaxRevNum = 1000
omnidrive:versioningMinRevNum = 0
omnidrive:versioningfilesInclude = /*.doc/*.docx/*.xls/*.xlsx/*.txt/*.bmp/
omnidrive:versioningfilesExclude = /*.tmp/*.temp/*.exe/*.com/*.jarr/*.bat/.*/
full_audit:failure = none
full_audit:success = mkdir rename unlink rmdir write open close
full_audit:prefix = %u|%I|%m|%S
full_audit:priority = NOTICE
full_audit:facility = LOCAL6
;dont descend = RecycleBin
veto files = /.subversion/*.do/*.do/*.bar/*.cat/
client ntlmv2 auth = yes
[netlogon]
path = /var/lib/samba/netlogon
read only = yes
[homes]
read only = yes
browseable = no
[share1]
path = /share1
read only = no
browseable = yes
writable = yes
admin users = clinton1
public = no
create mask = 0770
directory mask = 0770
nt acl support = no
;acl map full control = no
hide unreadable = yes
store dos attributes = yes
map archive = no
map readonly = Permissions
If anyone cares, this is how I fixed the issue on Vista:
I set a registry key to link the UAC account and the non-UAC account.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
EnableLinkedConnections = (DWORD) 1
The password prompt goes away.
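For reference, the same value can be set from an elevated command prompt (reboot afterwards; this mirrors the registry key above):
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f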
I think that you can also address this by turning off UAC in Vista or Windows 7. Here's a link for doing that: Turn User Account Control on or off