ejabberd mod_register allow registration only from a specified IP - configuration

I'm trying to get ejabberd to allow in-band registration only from a specific IP, using mod_register's ip_access option.
To do this, I added this line to the mod_register block in my ejabberd.cfg:
{ip_access, [{allow, "the.allowed.ip.address"}]}
... And restarted ejabberd via ejabberdctl restart. The server came back online with no warnings or errors logged in /var/log/ejabberd/ejabberd.log.
Unfortunately, with this line in place, I could still perform in-band registration from a non-whitelisted IP using Adium as a client. I decided to amend the line above to:
{ip_access, [{allow, "the.allowed.ip.address"}, {deny, all}]}
... Running the risk of causing all registrations to throw a 403 "Forbidden" error. Strangely, now, when I try to register from any IP, including the whitelisted one, I get a 503 "Service unavailable" error instead.
How can I get ejabberd to allow in-band registration from a specific IP, and that IP alone?

That's not how the restriction is supposed to be used. You can find an example (in YAML) in the ejabberd documentation for mod_register.
In ejabberd 15.x you need three things to limit registration by IP (a sketch follows the list):
- An ACL definition for the IP.
- An access rule defining which ACL is allowed or denied access.
- A mod_register configuration linking ip_access to your access rule.
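A minimal sketch of those three pieces in ejabberd 15.x YAML syntax; the ACL name, access rule name and IP address below are placeholders, not values from the original question:

acl:
  trusted_ip:
    ip:
      - "203.0.113.5/32"

access:
  register_from_trusted:
    trusted_ip: allow
    all: deny

modules:
  mod_register:
    ip_access: register_from_trusted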
For the (very old) 2.1.11 release the syntax is quite different, but it definitely does not use the all keyword for deny.
The ejabberd 2.1.11 documentation shows this example:
{acl, shortname, {user_glob, "??"}}.
%% The same using regexp:
%%{acl, shortname, {user_regexp, "^..?$"}}.
{access, register, [{deny, shortname},
                    {allow, all}]}.
{modules,
 [
  ...
  {mod_register, [{access, register},
                  {ip_access, [{allow, "127.0.0.0/8"},
                               {deny, "0.0.0.0/0"}]}
                 ]},
  ...
 ]}.
As you can see, the deny entry should specify the IP address blocks to deny (in CIDR notation), not the all keyword.
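Applied to your original 2.1.11 line (keeping your placeholder address as-is), that would presumably become something like:

{ip_access, [{allow, "the.allowed.ip.address"},
             {deny, "0.0.0.0/0"}]}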

Related

Unable to resolve .local domains with getent even though avahi-resolve-host-name succeeds

Trying to set up a network printer with CUPS.
Followed online documentation that stated:
To discover or share printers using DNS-SD/mDNS, setup .local hostname
resolution with Avahi and restart cups.service.
Followed directions for setting up Avahi to the point where avahi-browse --all --ignore-local --resolve --terminate and avahi-resolve-host-name my-domain.local are both working.
But getent hosts my-domain.local fails to resolve. This results in CUPS failing to print because it can't find my-printer.local.
I read the nss-mdns GitHub page and saw a note that made me think I didn't need a /etc/mdns.allow file.
nss-mdns has a simple configuration file /etc/mdns.allow for enabling
name lookups via mDNS in other domains than .local.
Note: The "minimal" version of nss-mdns does not read /etc/mdns.allow under any circumstances. It behaves as if the file
does not exist.
In the recommended configuration, no /etc/mdns.allow file is present.
But then I saw the last note in that section:
If, during a request, the system-configured unicast DNS (specified in
/etc/resolv.conf) reports an SOA record for the top-level local name,
the request is rejected. Example: host -t SOA local returns something
other than Host local not found: 3(NXDOMAIN). This is the unicast SOA
heuristic.
I tested that out on my machine and sure enough, I was getting something OTHER than Host local not found....
Adding a /etc/mdns.allow file with a line for .local. and one for .local fixed it; now I can ping my-printer.local.
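For reference, the resulting /etc/mdns.allow is just those two entries, one domain suffix per line, which is the allow-file format the nss-mdns README describes:

.local.
.local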

How to set up mod_http_upload in ejabberd

In ejabberd 18.01-2, installed in an LXC container running Ubuntu 18.04 Bionic LTS using apt, I'm trying to set up mod_http_upload.
In the listen section, I have
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs
2019-11-11 21:02:35.287 [warning] <0.367.0>@ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error Access denied by service policy; see the complete log:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
  <request xmlns='urn:xmpp:http:upload'>
    <filename>a.out</filename>
    <size>27232</size>
    <content-type>application/octet-stream</content-type>
  </request>
  <error code='403' type='auth'>
    <forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
    <text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
  </error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think the relevant part, which is not redundant with the client message, is
2019-11-11 20:53:08.329 [debug] <0.501.0>@mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried with ejabberd 18.01, a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means the account attempting to use the upload feature is not allowed by the "local" access rule of the vhost it sent the request to. Probably it's a remote account, that is, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas, I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mess with hosts, add access rights so that accounts in that vhost are considered "local" to the upload service.
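A minimal sketch of idea B, reusing the port and docroot from your configuration; ejabberd expands @HOST@ to each virtual host, so the upload service stays local to the account's vhost:

modules:
  mod_http_upload:
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"
    max_size: infinity
    thumbnail: true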

Fiware AuthZForce error: "AZF domain not created for application"

I'm trying to protect Orion Context Broker using the KeyRock IdM, Wilma PEP Proxy and AuthZForce PDP over Docker. For now, level 1 security works well and I can deny access to non-logged-in users, but I get this error on Wilma when trying to add level 2.
AZF domain not created for application <applicationID>
Here is my azf configuration in Wilma's config.js file:
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'azfcontainer',
    port: 8080,
    custom_policy: undefined
};
And this is how I set the access control configuration on KeyRock:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://azfcontainer:8080'
ACCESS_CONTROL_MAGIC_KEY = None
I have created the custom policies on Keyrock, but AuthZForce logs don't show any request from KeyRock or Wilma, so no domain is created on the PDP. I have checked that all containers can see and reach each other and that all ports are up. I may be missing some configuration.
These are the versions I'm using:
keyrock=5.4.1
wilma=5.4
autzforce=6.0.0/5.4.1
This question is the same as "AZF domain not created for application" AuthZForce, but my problem persists even with the AuthZForce GE configuration shown there.
I found the cause of this problem. It appears when AuthZForce is not behind a PEP Proxy and therefore the variable ACCESS_CONTROL_MAGIC_KEY is left unmodified (None by default).
It seems Horizon reads both the ACCESS_CONTROL_URL and ACCESS_CONTROL_MAGIC_KEY parameters in openstack_dashboard/local/local_settings.py when it needs to connect to AuthZForce. Theoretically, the second parameter is optional (it introduces an 'X-Auth-Token' header for the PEP Proxy), but if Horizon detects it is None (the default value in local_settings.py) or an empty string, it logs a warning and returns immediately from the function policyset_update in openstack_dashboard/fiware_api/access_control_ge.py. So the communication with AuthZForce never takes place.
The easiest way to solve the problem is to set some text as the magic key in openstack_dashboard/local/local_settings.py:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://authzforce_url:port'
ACCESS_CONTROL_MAGIC_KEY = '1234567890' # DO NOT LEAVE None OR EMPTY
Thus, an 'X-Auth-Token' header will be generated, but it shouldn't affect the communication when AuthZForce isn't behind a PEP Proxy (the header is simply ignored).
Note: Remember to delete the cached bytecode file openstack_dashboard/local/local_settings.pyc when making changes, to make sure the new config is picked up after restarting the Horizon service.
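A quick sketch of that clean-up step; the restart command is an assumption, since it depends on how Horizon is served (Apache mod_wsgi, a development server, etc.):

# remove the stale bytecode so the edited settings are re-read
rm openstack_dashboard/local/local_settings.pyc
# then restart whatever serves Horizon, for example with Apache:
sudo service apache2 restart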
PS: I sent a pull request to https://github.com/ging/horizon with a simple modification that fixes the problem.

ejabberd contribution mod_apns does not work

I have added mod_apns to my ejabberd server. You can find this module here.
My ejabberd.yml configuration is like this:
mod_apns:
  address: "gateway.sandbox.push.apple.com"
  port: 2195
  certfile: "/Applications/ejabberd-15.10/conf/cert.pem"
  keyfile: "/Applications/ejabberd-15.10/conf/key.pem"
  password: "myPassword"
The address is the sandbox one since I am still in the development phase. I have tested my cert.pem and key.pem and they are valid and working.
I send my device token to the ejabberd server like this:
<iq type="set" to="myEjabberdServer.com">
<register xmlns="https://apple.com/push">
<token>myDeviceTokenWithoutAnySpace</token>
</register>
</iq>
I can see that my device token is saved in the apns_users database.
But I still do not get notifications when my user is offline.
Am I doing anything wrong?
Does it work with gateway.sandbox.push.apple.com?
Should my device token be without spaces, only the characters?
I appreciate your help.
You have asked for an alternative approach. This alternative approach takes the process of triggering push notifications out of the ejabberd server.
1. Use the mod_interact library. This gives you the ability to forward your messages to another URL.
2. From there on, you can make a direct HTTP call to the push notification service, as sketched below.
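For the second step, a hedged sketch of such a direct HTTP call against Apple's HTTP/2 provider API, assuming certificate-based authentication with the same cert.pem/key.pem pair; the device token and the apns-topic value (your app's bundle id) are placeholders:

curl -v --http2 \
  --cert /Applications/ejabberd-15.10/conf/cert.pem \
  --key /Applications/ejabberd-15.10/conf/key.pem \
  -H "apns-topic: com.example.myapp" \
  -d '{"aps":{"alert":"New message while you were offline"}}' \
  https://api.sandbox.push.apple.com/3/device/<device-token>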

HTTP password setup on mod_register_web ejabberd

I have configured the mod_register_web module in ejabberd in the following way. I added this configuration in the listen part:
{5281, ejabberd_http, [
    %%tls, %% currently https not implemented
    %%{certfile, "/etc/ejabberd/certificate.pem"},
    {request_handlers, [
        {["register"], mod_register_web}
    ]}
]},
I added the module in the modules part:
{mod_register_web, []}
Then I tried
http://localhost:5281/register/
and the page becomes available without any authentication, meaning anyone can access it and add users. Then I tried to make it secure with different combinations like:
{5281, ejabberd_http, [
    http_bind,
    http_poll,
    web_admin,
    {access, configure, [{allow, admin}]}, %% actually admin has password
    {request_handlers, [
        {["register"], mod_register_web}
    ]}
]},
But it is still not asking for any password, while port 5280, for the admin pages, is password protected. Can anyone explain how I can secure the mod_register_web module so that whenever anyone accesses it, it prompts for a username and password?
It can be done by modifying the source code (mod_register_web.erl).
As ejabberd_web_admin.erl does, call get_auth_admin() and check its result in the process() function.