I have installed Tinyproxy on a CentOS 7 machine and changed the port to 8080 in tinyproxy.conf.
Whenever I send a request, I get the following entries in tinyproxy.log:
CONNECT Mar 15 08:14:42 [22148]: Connect (file descriptor 6): <IP> [<IP>]
NOTICE Mar 15 08:14:42 [22148]: Unauthorized connection from "<IP>" [<IP>].
INFO Mar 15 08:14:42 [22148]: Read request entity of 1200 bytes
My request is reaching the proxy, but the proxy is not forwarding it to the destination.
In the Tinyproxy config file (/etc/tinyproxy/tinyproxy.conf) you can use the Allow directive to explicitly specify the host(s) that are allowed to connect to the proxy. You can also comment out or remove all Allow <host> lines to allow connections from all hosts. See the description below from the config file (here I've commented out Allow 127.0.0.1, and since there are no other entries, all connections will be allowed):
# Allow: Customization of authorization controls. If there are any
# access control keywords then the default action is to DENY. Otherwise,
# the default action is ALLOW.
#
# The order of the controls are important. All incoming connections are
# tested against the controls based on order.
#
#Allow 127.0.0.1
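Conversely, if you want to keep access control enabled, list the allowed clients explicitly. A minimal sketch, assuming your clients sit on a 192.168.0.0/16 network (adjust the addresses to your own setup; Allow accepts single IPs and CIDR ranges):
Allow 127.0.0.1
Allow 192.168.0.0/16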
Hi guys.
I have an IBM MQ image deployed on OpenShift 4, and for some reason the processes don't run as the user mqm but as the one randomly generated by OpenShift itself.
As a result, I have a Java application that tries to connect to the queues, and it fails because authentication fails since it uses mqm as the user.
The exact same image running on OpenShift 3 behaves as expected. For more details:
Custom image:
FROM ibmcom/mq
ENV HOME /root
COPY config.mqsc /etc/mqm/
and, in the config.mqsc:
DEFINE CHANNEL(PASSWORD.SVRCONN) CHLTYPE(SVRCONN)
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody') DESCR('Allow privileged users on this channel')
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('BackStop rule')
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) ADOPTCTX(YES)
REFRESH SECURITY TYPE(CONNAUTH)
DEFINE QLOCAL(MYQUEUE.IN) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(MYQUEUE.OUT) DEFPSIST(YES) MAXDEPTH(500000)
DEFINE QLOCAL(CS.ERROR) DEFPSIST(YES) MAXDEPTH(500000)
ALTER QMGR CHLAUTH(DISABLED) CONNAUTH(' ')
ALTER CHANNEL('SYSTEM.DEF.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('mqm')
REFRESH SECURITY TYPE(CONNAUTH)
The process running on OpenShift 4 looks like
1000790+ 232 0.0 0.1 2308688 45776 ? Ssl 09:39 0:00 /opt/mqm/bin/amqzxma0 -m QM1 -x -u 1000790000
but on OpenShift 3 it looks like
1000100+ 152 0.0 0.0 2324200 33812 ? Ssl May03 0:06 /opt/mqm/bin/amqzxma0 -m QM1 -x -u mqm
Another difference is the "capabilities" and the security attributes that the MQ container has on startup.
On OpenShift 3:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot,audit_write,setfcap
Process security attributes: system_u:system_r:container_t:s0:c0,c15
On OpenShift 4:
Capabilities (bounding set): chown,dac_override,fowner,fsetid,setpcap,net_bind_service,net_raw,sys_chroot
Process security attributes: system_u:system_r:container_t:s0:c17,c28
Stacktrace produced by the application:
Caused by: org.springframework.jms.JmsSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.; nested exception is com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:286)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:185)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:507)
at org.springframework.jms.core.JmsTemplate.browseSelected(JmsTemplate.java:1029)
at org.springframework.jms.core.JmsTemplate.browse(JmsTemplate.java:991)
... 78 more
Caused by: com.ibm.msg.client.jms.DetailedJMSSecurityException: JMSWMQ2013: The security authentication was not valid that was supplied for QueueManager 'QM1' with connection mode 'Client' and host name 'my-mq(1414)'.
Please check if the supplied username and password are correct on the QueueManager to which you are connecting.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:531)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:424)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7815)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl._createConnection(JmsConnectionFactoryImpl.java:303)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:236)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6016)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:111)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:187)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:494)
... 80 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2035' ('MQRC_NOT_AUTHORIZED').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
... 90 more
Any idea on what the issue could be?
To ensure compliance with the security constraints required in a multi-tenant containerized environment, the IBM MQ certified containers do not support the use of IDs that are defined in the operating system libraries inside a container. There is no mqm user ID or group defined in the container.
For more details, read User authentication and authorization for IBM MQ in containers.
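As a hedged sketch of one way to adapt (not the official procedure): instead of relying on the OS-level mqm identity, pin the channel's MCAUSER to an identity the queue manager has been told about and grant that identity the required authorities. The user name app below is a placeholder, not something the image defines:
* assumption: 'app' is a user you have configured for this queue manager
ALTER CHANNEL(PASSWORD.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('app')
REFRESH SECURITY TYPE(CONNAUTH)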
In ejabberd 18.01-2, installed via apt in an LXC container running Ubuntu 18.04 Bionic LTS, I'm trying to set up mod_http_upload.
In the listen section, I have
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the configuration file, the commented-out example used port 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs
2019-11-11 21:02:35.287 [warning] <0.367.0>#ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error Access denied by service policy; see the complete log:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
<request xmlns='urn:xmpp:http:upload'>
<filename>a.out</filename>
<size>27232</size>
<content-type>application/octet-stream</content-type>
</request>
<error code='403' type='auth'>
<forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
</error>
</iq>
I had to enable debug logging in order to see anything. It is quite verbose, but I think the relevant part, which is not redundant with the client message, is
2019-11-11 20:53:08.329 [debug] <0.501.0>#mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried ejabberd 18.01 with a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means that the account attempting to use the upload feature is not allowed by the "local" Access rule in the vhost it sent the request to. Probably it's a remote account, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas, I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mix hosts, add access rights so that accounts in that vhost are considered "local" to the upload service.
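For option B, a minimal sketch (ejabberd expands @HOST@ to each served vhost, so the upload host then matches the accounts' domain; port and docroot are taken from your config and may need adjusting):
modules:
  mod_http_upload:
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    docroot: "/ejabberd/upload"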
I have a local SMTP email server I use for testing purposes running on my machine. It listens for SMTP on port 25. I am able to send and receive emails to it using a regular email client.
When I build a Node-RED flow that contains an e-mail output node and configure its properties with:
to: <email address>
server: localhost
port: 25
and submit a flow, I get the error:
25 Feb 16:43:24 - [error] [e-mail:<email address>] Error: 101057795:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:openssl\ssl\s23_clnt.c:794:
I am at a loss as to how to proceed. Looking at the messages, it almost appears that there is some form of SSL negotiation at play here. Switching on tracing on my SMTP server, I see the following logs each time I try to run the flow:
"TCPIP" 10708 "2016-02-25 16:43:08.294" "TCP - 127.0.0.1 connected to 127.0.0.1:25."
"DEBUG" 10708 "2016-02-25 16:43:08.298" "Creating session 22"
"SMTPD" 10708 22 "2016-02-25 16:43:08.298" "127.0.0.1" "SENT: 220 WIN7-X64 ESMTP"
"DEBUG" 9772 "2016-02-25 16:43:08.299" "Ending session 22"
It appears that the Node-RED node is sending a connection request, getting back the SMTP 220 response and then failing immediately after that.
I came across the same problem and have a nasty hack that lets mail go via my local Exchange server's plain SMTP, with no auth.
Edit the .../61-email.js file and change it like this:
var smtpTransport = nodemailer.createTransport({
    host: node.outserver,
    port: node.outport,
    secure: false,   // don't use TLS when connecting
    ignoreTLS: true, // don't upgrade the session via STARTTLS either
    // auth disabled, for a server that accepts unauthenticated mail:
    // auth: {
    //     user: node.userid,
    //     pass: node.password
    // }
});
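For context: secure: false makes nodemailer skip TLS on connect, and ignoreTLS: true stops it from upgrading via STARTTLS, so the node talks plain SMTP, which is what a bare test or internal server on port 25 expects.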
I see Dave has replied to the GitHub issue, but just to close the loop on this question:
At this time (Feb 2016) the node assumes SSL is always available and enabled. At some point we need to go back to the email node and find a simple way to expose a lot more of the nodemailer options, to allow connections to a wider range of email providers, both public and private.
I need to open TCP port 9997 on OpenShift so Splunk is able to listen for incoming data from forwarders on other servers.
I've set up Splunk using this guide: http://www.kelvinism.com/2013/11/free-splunk-hosting.html but I can't figure out how to add another TCP port to the manifest.yml file. I tried the following for a new OpenShift instance, but with no luck.
- Private-IP-Name: IP
  Private-Port-Name: PORT_FORWARDER
  Private-Port: 9997
  Public-Port-Name: PROXY_PORT_FORWARDER
  Options: { "ssl_to_gear": true }
Do I need to configure other parts of the cartridge to read my new port and set up some configuration elsewhere?
You will only be able to listen publicly on ports 80/443/8000/8443; no other TCP or UDP ports are allowed in (except 22 for ssh/scp/sftp). The private port that you have configured is for internal access only (either on the same gear, or on its own gear as part of a scaled application). Having remote agents connect to your application on port 9997 just won't work.
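If you only need to reach that private port yourself (rather than from remote forwarders), one workaround is to tunnel it over SSH with the rhc client tools; a hedged example, assuming your application is named splunk:
rhc port-forward splunk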
Alternatively, you can write a very simple Splunk add-on to listen on that port; that's very straightforward.
Splunk has an SDK available for several languages; here is a skeleton for Python. For more information, you can see a full example of a UDP receiver: link to the example. It's not an English post, but you can read the code from there.
import sys
from splunklib.modularinput import *

class MyScript(Script):
    def get_scheme(self):
        # Describe this modular input to Splunk.
        scheme = Scheme("My TCP receiver")
        scheme.add_argument(Argument(name="port"))
        return scheme

    def validate_input(self, validation_definition):
        # Validate the configured input (e.g. check the port number).
        pass

    def stream_events(self, inputs, ew):
        # Splunk Enterprise calls the modular input,
        # streams XML describing the inputs to stdin,
        # and waits for XML on stdout describing events.
        # TODO: open a socket to listen and receive the
        # message, then emit it with ew.write_event(Event(...))
        pass

if __name__ == "__main__":
    sys.exit(MyScript().run(sys.argv))
I just tried to start Spread for communication between some of the tools that I use to integrate different sensor data processes.
Just after startup, Spread exits with the following message:
Conf_load_conf_file: using file: spread.conf
Successfully configured Segment 0 [127.0.0.255:4803] with 2 procs:
localhost: 127.0.0.1
boron: 127.0.1.1
Finished configuration file.
Hash value for this configuration is: 913193717
Conf_load_conf_file: My proc id (129.70.129.5) is not in configuration
Exit caused by Alarm(EXIT)
As seen in the message, I use the following spread.conf file to configure my local Spread segment.
Spread_Segment 127.0.0.255:4803 {
localhost 127.0.0.1
boron 127.0.1.1
}
The problem seems to be that, according to Spread, the local machine I'm working on does not appear in the config file. Spread looks for the actual IP 129.70.129.5, not localhost, in the .conf file.
Changing my .conf file to:
Spread_Segment 127.0.0.255:4803 {
localhost 127.0.0.1
boron 129.70.129.5
}
or starting spread with
spread -n localhost
does the trick.