In ejabberd 18.01-2, installed via apt in an Ubuntu 18.04 Bionic LTS LXC container, I'm trying to set up mod_http_upload.
In the listen section, I have:
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out port was 5444; however, the current documentation uses 5443, so I am not sure which one is right.
In the modules section, I have:
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs:
2019-11-11 21:02:35.287 [warning] <0.367.0>#ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
  - "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error Access denied by service policy; see the complete stanza:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
  <request xmlns='urn:xmpp:http:upload'>
    <filename>a.out</filename>
    <size>27232</size>
    <content-type>application/octet-stream</content-type>
  </request>
  <error code='403' type='auth'>
    <forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
    <text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
  </error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think the relevant part, which is not redundant with the client error, is:
2019-11-11 20:53:08.329 [debug] <0.501.0>#mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried ejabberd 18.01 with a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" access rule in the vhost it sent the request to. It's probably a remote account, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas; I hope one of them suits your specific setup:
A) Don't mess with vhosts: if it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options, as in the sketch below.
C) Or, if you really want to mix hosts, add access rights so accounts in that vhost are considered "local" to the upload service.
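A minimal sketch of option B, keeping the paths and port from your config (@HOST@ expands to each served vhost):

modules:
  mod_http_upload:
    host: "upload.@HOST@"
    max_size: infinity
    thumbnail: true
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"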
I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation or tutorial describing how to set up this remote server properly, or on Puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to be sure that an SSH server is running on your localhost and that keys have been generated.
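On macOS (as in the question), that amounts to something like this sketch; systemsetup requires admin rights:

# Enable the built-in SSH server ("Remote Login") on macOS
sudo systemsetup -setremotelogin on
# Generate a key pair if ~/.ssh/id_rsa does not exist yet (accept the defaults)
ssh-keygen -t rsa -b 4096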
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause to this problem. The SSH daemon is compiled with a default path. If you ssh to a machine with a specific command (other than a shell), you get that default path. This does not include /usr/local/bin for example, where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need to edit this with sudo)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
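Put together, the steps look roughly like this sketch (the PATH is the one from above; the restart command varies by OS):

# 1. Allow users to set environment variables over SSH (requires root)
sudo sh -c 'echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config'
# 2. Restart the SSH daemon so the setting takes effect
sudo service ssh restart
# 3. Provide the desired PATH for non-interactive SSH commands
echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment
# 4. Verify: the printed PATH should match ~/.ssh/environment
ssh localhost env | grep '^PATH='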
I'm trying to protect Orion Context Broker using the KeyRock IdM, Wilma PEP Proxy and AuthZForce PDP over Docker. For now, level 1 security works well and I can deny access to non-logged-in users, but I get this error on Wilma when trying to add level 2:
AZF domain not created for application <applicationID>
Here is my azf configuration in Wilma's config.js file:
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'azfcontainer',
    port: 8080,
    custom_policy: undefined
};
And this is how I set the access control configuration on KeyRock:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://azfcontainer:8080'
ACCESS_CONTROL_MAGIC_KEY = None
I have created the custom policies on Keyrock, but AuthZForce logs don't show any request from KeyRock or Wilma, so no domain is created on the PDP. I have checked that all containers can see and reach each other and that all ports are up. I may be missing some configuration.
These are the versions I'm using:
keyrock=5.4.1
wilma=5.4
authzforce=6.0.0/5.4.1
This question is the same as “AZF domain not created for application” AuthZforce, but my problem persists even with the AuthZForce GE configuration shown there.
I found the cause of this problem: it appears when AuthZForce is not behind a PEP Proxy and therefore the variable ACCESS_CONTROL_MAGIC_KEY is not modified (None by default).
Horizon reads both the ACCESS_CONTROL_URL and ACCESS_CONTROL_MAGIC_KEY parameters in openstack_dashboard/local/local_settings.py when it needs to connect to AuthZForce. Theoretically, the second parameter is optional (it introduces an 'X-Auth-Token' header for the PEP Proxy), but if horizon detects it is None (the default value in local_settings.py) or an empty string, the log shows a warning and the function "policyset_update" in openstack_dashboard/fiware_api/access_control_ge.py returns immediately. So the communication with AuthZForce never takes place.
The easiest way to solve the problem is to write some text as the magic key in openstack_dashboard/local/local_settings.py:
# ACCESS CONTROL GE
ACCESS_CONTROL_URL = 'http://authzforce_url:port'
ACCESS_CONTROL_MAGIC_KEY = '1234567890' # DO NOT LEAVE None OR EMPTY
Thus, an 'X-Auth-Token' header will be generated, but it shouldn't affect the communication when AuthZForce isn't behind a PEP Proxy (the header is simply ignored).
Notice: remember to delete the cached bytecode file openstack_dashboard/local/local_settings.pyc when making changes, to ensure the new config is picked up after restarting the horizon service.
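For example (the restart command assumes horizon runs under Apache, which may differ in your deployment):

# Remove the stale bytecode, then restart horizon
rm -f openstack_dashboard/local/local_settings.pyc
sudo service apache2 restart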
PS: I sent a pull request to https://github.com/ging/horizon with a simple modification that fixes the problem.
I have added mod_apns to my ejabberd server. You can find this module here.
My ejabberd.yml configuration is like this:
mod_apns:
  address: "gateway.sandbox.push.apple.com"
  port: 2195
  certfile: "/Applications/ejabberd-15.10/conf/cert.pem"
  keyfile: "/Applications/ejabberd-15.10/conf/key.pem"
  password: "myPassword"
The address is the sandbox gateway since I am still in the development phase, and I have tested my cert.pem and key.pem and they are valid and working.
I send my device token to ejabberd server like this:
<iq type="set" to="myEjabberdServer.com">
<register xmlns="https://apple.com/push">
<token>myDeviceTokenWithoutAnySpace</token>
</register>
</iq>
I can see my device token is saved in the apns_users database, but I still do not get notifications when my user is offline.
Am I doing anything wrong?
Does it work with gateway.sandbox.push.apple.com?
Should my device token contain no spaces, only characters?
I appreciate your help.
You have asked for an alternate approach. This alternate approach moves the process of triggering push notifications out of the ejabberd server.
1. Use the mod_interact library. This will give you the ability to forward your messages to another URL.
2. From there you can use a direct HTTP call for push notifications, as in the sketch below.
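As an illustration only: the endpoint and payload here are hypothetical and depend entirely on the push service you put behind that URL:

# Hypothetical push-relay endpoint; replace with your actual service
curl -X POST https://push.example.com/notify \
     -H "Content-Type: application/json" \
     -d '{"token": "myDeviceToken", "body": "New message while offline"}'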
OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3, tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption. It's for plain HTTP requests. The openssl extension and function support is unrelated. It does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to reach your webserver.
There will be a cert error and you'll have to click through a browser warning, but that gets you to the point where you can hit localhost with HTTPS requests during development.
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and also the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight Unix line endings when dealing with SSL certs. Sometimes you can go through the steps correctly but still get ruined by cert validation issues. I find the trick is to make the certs in Ubuntu or Mac and email them to yourself, or to use the Linux subsystem.
In my case, I kept running into an issue where I had declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to an https:// URL. Changing it to http:// fixed it.
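For illustration, the change amounted to something like this sketch (the endpoint path and payload are placeholders):

// Assuming an axios import and a placeholder payload
import axios from 'axios';
const data = { file: 'a.txt' };

// Before: the dev server logs "Invalid request (Unsupported SSL request)"
// axios.post('https://localhost:8000/api/endpoint', data);

// After: the scheme matches the plain-HTTP dev server
axios.post('http://localhost:8000/api/endpoint', data);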
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve only speaks HTTP, but it received a request that expected an underlying SSL layer.
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with the ngrok's secure public address (the one with https).
Note: though it works like a charm, it seems like overkill since it requires an internet connection; I would appreciate better recommendations.
I'm trying to get ejabberd to allow in-band registration only from a specific IP, using mod_register's ip_access option.
To do this, I added this line to the mod_register block in my ejabberd.cfg:
{ip_access, [{allow, "the.allowed.ip.address"}]}
... And restarted ejabberd via ejabberdctl restart. The server came back online with no warnings or errors logged in /var/log/ejabberd/ejabberd.log.
Unfortunately, with this line of code, I could still perform in-band registration from a non-whitelisted IP using Adium as a client. I decided to amend the line above by adding:
{ip_access, [{allow, "the.allowed.ip.address"}, {deny, all}]}
... running the risk of causing all registrations to throw a 403 "Forbidden" status. Strangely, now, when I try to register from any IP, including the whitelisted one, I get a 503 "Service unavailable" status message.
How can I get ejabberd to allow in-band registration from a specific IP, and that IP alone?
That's not how the restriction is supposed to be used. You can find an example (in YAML) in the ejabberd documentation for mod_register.
That's not how it works in ejabberd 15.x. You need three things to be able to limit registration by IP (see the sketch after this list):
- an ACL definition for the IP;
- an access rule defining which ACL you want to allow or deny;
- a mod_register configuration linking ip_access to your access rule.
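A minimal sketch of those three pieces in ejabberd 15.x YAML syntax; the IP and the names trusted_ip and register_ip are placeholders:

acl:
  trusted_ip:
    ip:
      - "203.0.113.5/32"

access:
  register_ip:
    trusted_ip: allow
    all: deny

modules:
  mod_register:
    ip_access: register_ip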
For the (very old) 2.1.11, it is very different, but definitely does not use the all keyword for deny.
The ejabberd 2.1.11 documentation shows the example as:
{acl, shortname, {user_glob, "??"}}.
%% The same using regexp:
%%{acl, shortname, {user_regexp, "^..?$"}}.

{access, register, [{deny, shortname},
                    {allow, all}]}.

{modules,
 [
  ...
  {mod_register, [{access, register},
                  {ip_access, [{allow, "127.0.0.0/8"},
                               {deny, "0.0.0.0/0"}]}
                 ]},
  ...
 ]}.
As you can see, deny should match the IP address blocks to deny, not the all keyword.