OpenShift egress router not working

I configured an egress router as described here:
https://docs.openshift.com/container-platform/3.3/admin_guide/managing_pods.html#admin-guide-controlling-egress-traffic
But it does not work.
My understanding is that the options resolve like this:
name: EGRESS_SOURCE <-- the network where the nodes live (in my case, the VM the containers are running on)
value: 192.168.12.99
name: EGRESS_GATEWAY <-- the gateway through which the destination IP address is routable
value: 192.168.12.1
name: EGRESS_DESTINATION <-- the destination IP of the application I want to reach; in my case, a MongoDB living in a classical VM
value: 203.0.113.25
Am I right, or am I missing something?
How would I be able to reach the target?
Do I need to address the source IP to access the MongoDB, or do I simply address the IP of my MongoDB and the traffic gets NAT'd on its way through my egress router? (This is how I understood the traffic flow, by the way.)
How can I troubleshoot this kind of problem?
Best Regards,
Marcus

OK, it works now. I created a service and addressed the IP of this service to reach my destination.
The alternative is to address the IP of the egress pod directly.
So, from inside a container, don't use the original destination IP to reach your target; use the egress pod's IP, or preferably the IP of the created service.
!! Attention: the destination IP must be outside of the host/node IP range, otherwise it will not work. It seems that if you use a destination IP from your host/node range, the standard gateway gets the request and, I think, discards it. !!
I would also suggest using the egress router image from Red Hat instead of the Origin one, as stated in the official Red Hat documentation:
image: registry.access.redhat.com/openshift3/ose-egress-router
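
For reference, here is a sketch of the kind of pod and service definitions this setup ends up with, loosely following the linked documentation. The IPs are the ones from the question; the object names, the macvlan annotation, and the MongoDB port 27017 are assumptions to adapt:

apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  labels:
    name: egress-router
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  containers:
  - name: egress-router
    image: registry.access.redhat.com/openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99
    - name: EGRESS_GATEWAY
      value: 192.168.12.1
    - name: EGRESS_DESTINATION
      value: 203.0.113.25
---
apiVersion: v1
kind: Service
metadata:
  name: egress-mongodb
spec:
  ports:
  - name: mongodb
    port: 27017
  type: ClusterIP
  selector:
    name: egress-router

Clients inside the cluster would then connect to the egress-mongodb service (or the egress pod IP) on port 27017, and the egress router NATs the traffic out to 203.0.113.25.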

Related

Varnish: multiple IPs compared to an ACL using the tilde operator

What would happen in Varnish if multiple IPs are in an X-Forward-For header which is compared to an ACL using the tilde operator?
Dummy example:
The request has the following HTTP header:
X-Forward-For: 160.12.34.56, 10.10.10.10
The Varnish config looks like this:
acl internal {
    "10.10.10.10";
}

if (std.ip(req.http.X-Forward-For, "0.0.0.0") ~ internal) {
    # THIS CODE
} else {
    # OR THIS CODE
}
Which code block is executed?
Also, does the order of the IPs matter in the X-Forward-For header?
Does it change if there are 2 X-Forward-For headers, each with one of the two IPs?
Will it work?
The short answer to your question is no, it won't work.
std.ip() expects to receive a single IP address, not a collection. The conversion will fail, and the fallback value (second argument of the function) will be returned.
Here's a quick test script that illustrates this:
vcl 4.0;
import std;

backend default none;

sub vcl_recv {
    set req.http.x-f = "1.2.3.4, 5.6.7.8";
    return (synth(200, std.ip(req.http.x-f, "0.0.0.0")));
}
This example will return 0.0.0.0.
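
If you do want to run the ACL check against the left-most address in such a header, one workaround, sketched here under the assumption that the left-most entry really is the original client, is to strip everything after the first comma before converting:

vcl 4.0;
import std;

backend default none;

acl internal {
    "10.10.10.10";
}

sub vcl_recv {
    # Keep only the left-most address (everything before the first comma),
    # then convert it to an IP for the ACL match.
    if (std.ip(regsub(req.http.X-Forwarded-For, ",.*$", ""), "0.0.0.0") ~ internal) {
        return (synth(200, "internal"));
    }
    return (synth(403, "external"));
}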
Does X-Forwarded-For need multiple IP addresses?
It makes sense to ask whether your X-Forwarded-For header actually needs multiple IP addresses.
The idea is to indicate to the origin server what the IP address of the original client was.
In your case there is more than 1 proxy in front of the webserver, so a natural reaction is to chain the IP addresses in the X-Forwarded-For header.
A better solution would be to figure out what the IP address of the original client was, and set that value in X-Forwarded-For.
The best way to get this done is by leveraging the PROXY protocol, which Varnish supports.
Leverage the PROXY protocol
The PROXY protocol is capable of transporting HTTP while additionally keeping track of the connection parameters of the original client.
Varnish supports this and allows you to set an extra listening port that listens for PROXY requests.
Here's an example of how you can start varnishd with PROXY support:
varnishd -a :80 -a :8443,PROXY -f /etc/varnish/default.vcl -s malloc,256m
As you can see, port 80 is still available for regular HTTP, but port 8443 was allocated for PROXY support.
If the proxy servers in front of Varnish support PROXY, Varnish will take the value from the original client and automatically set X-Forwarded-For with that value.
This way you always know who the client was, and you can safely perform your ACL check.
Additionally, there is also a PROXY module (vmod_proxy) for Varnish that can give you information about potential TLS termination that took place in front of Varnish.
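
As a hedged illustration, assuming Varnish 6.0+ (where vmod_proxy ships with Varnish) and an upstream proxy such as hitch that sends the TLS TLVs, you could expose those details like this:

vcl 4.0;
import proxy;

backend default none;

sub vcl_recv {
    # Only meaningful for connections accepted on a PROXY listener.
    if (proxy.is_ssl()) {
        set req.http.X-SSL-Version = proxy.ssl_version();
        set req.http.X-SSL-Cipher = proxy.ssl_cipher();
    }
}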

Can two domain URLs connect to a single reporting service? Is it possible? How to achieve this?

I am facing issues with my SSRS configuration:
A. I have two domain URLs: https://xyz.domain1.com and https://abc.domain2.com.
B. I have a certificate for each domain:
xyz.domain1.com - certificate one (*.domain1.com) -- 443
abc.domain2.com - second certificate (*.domain2.com) -- 443
C. In SSRS, I have one virtual directory in the Web Service URL:
SSRS -> Web Service URL -> virtual directory name: "Report Service"
D. In the Advanced settings (screenshot omitted).
E. In the Report Manager URL, I am trying to bind both domains on port 443, but I cannot.
When I bind both URLs on port 443, I get this error:
Microsoft.ReportingServices.WmiProvider.WMIProviderException: An SSL binding already exists for the specified IP address and port combination. The existing binding uses a different certificate from the current request. Only one certificate can be used for each IP address and port combination. To correct the problem, either use the same certificate as the existing binding, or remove the existing SSL binding and create a new binding using the certificate of the current request.
Question:
I need to connect to my report server using two different URLs, with a unique SSL certificate for each URL.
But I can't bind these two URLs on port 443 to connect to the report server.
I can bind one URL and certificate, and then it works for that one URL only.
How do I bind two URLs and certificates to one report server and make it work for both URLs?
Please help with this issue.
I suggest you try ignoring the error on the first URL ('Web Service URL') and proceed to bind the certs to the 'Report Manager URL' as well. You may have to manually edit the bindings in Advanced Settings, but once you get them looking right in Advanced Settings, SSRS should work.
And a second suggestion, though it looks like you have already done this: be sure the common names (CN) of the wildcard certs are *.domain1.com and *.domain2.com. SSRS will only accept host names that match the CN, and in your case, where you're binding two certs to the same port, the CNs must be different.
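
If the bindings get into a bad state, it can also help to inspect what http.sys actually has registered, outside of SSRS Configuration Manager. A hedged sketch using the standard netsh commands, where the thumbprint and GUID are placeholders you must fill in:

netsh http show sslcert
netsh http delete sslcert ipport=0.0.0.0:443
netsh http add sslcert ipport=0.0.0.0:443 certhash=<thumbprint> appid={<guid>}

Be careful with delete/add: removing a binding that SSRS still expects will break that URL until you re-create it.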
Here's a related point for anyone trying to make the multiple-hosts-in-a-single-subdomain case work, e.g. https://foo.localdomain/reports and https://bar.localdomain/reports.
Request your SSL cert with Common Name (CN) = *, not the server name or anything specific. Then list all the permutations of DNS names that you want to support in the Subject Alternative Name (SAN) field. The URL looks funny in SSRS Configuration Manager (https:+:443), but it Works on the Wire(tm).
If you specify some non-wildcard CN, you'll get a 'resource not found' error when trying to connect, although the SSL handshake will work.
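
As a hedged sketch of requesting such a certificate with OpenSSL (assuming OpenSSL 1.1.1+ for the -addext flag; the host names are the examples above):

openssl req -new -newkey rsa:2048 -nodes \
  -keyout reports.key -out reports.csr \
  -subj "/CN=*" \
  -addext "subjectAltName=DNS:foo.localdomain,DNS:bar.localdomain"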
To achieve the objective you need a multi-domain SSL or a wildcard SSL certificate (note that a wildcard only covers sub-domains of a single parent domain, so it fits your case only if both hosts share one domain), for example:
Multi-domain SSL (multiple domains):
xyz.domain1.com
abc.domain2.com
Wildcard SSL (sub-domains):
xyz.domain1.com
abc.domain1.com
Reference: Multiple Domain (UCC) SSL - secure multiple domains and sub-domains on one certificate

Can I use "new ServerSocket(0)" with OpenShift?

I have developed an application that allows multiple players to play together online at various games such as shifumi, poker, chess, and so on. It works very well on my localhost, and I would like to publish it, so I decided to use OpenShift.
But there is a problem.
It seems to come from this statement: new ServerSocket(0). I do this inside the doPost method of an HttpServlet.
Could it be that I don't have permission to do this (new ServerSocket(0)) on an OpenShift server?
I think you have a couple of issues going on here.
The first is that when you call new ServerSocket(0), it is going to try to find a socket that it can bind to, probably on either 0.0.0.0 (all IP addresses/interfaces) or 127.0.0.1, neither of which is allowed on OpenShift.
According to the documentation (located here: http://download.java.net/jdk7/archive/b123/docs/api/java/net/ServerSocket.html), you can use one of the overloaded constructors to provide an IP address to bind to, which should be your OPENSHIFT_<cartridge>_IP (where <cartridge> could be jbosseap, jbossas, wildfly, jbossews, etc.):
ServerSocket(int port, int backlog, InetAddress bindAddr)
Your second issue is a bit more complicated: basically, which ports you can bind to. OpenShift allows user code to bind to ports 15000-20000, depending on which ports are not being used by other applications or services. However, none of those ports are open to the public internet; they are all internal ports for internal communication, so if you are trying to let a client connect to them, it won't work. The only ports that are publicly available are 80/443/8000/8443, and your application must bind to port 8080 on your OPENSHIFT_<cartridge>_IP to be reachable through your app-domain.rhcloud.com public URL. You can check out this article to read more about how all of the binding and routing works: https://developers.openshift.com/en/managing-port-binding-routing.html
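
To make that concrete, here is a minimal sketch of binding an internal socket; the jbossews cartridge name and port 15000 are assumptions, so adjust both to your gear:

import java.net.InetAddress;
import java.net.ServerSocket;

public class InternalSocket {
    public static void main(String[] args) throws Exception {
        // Assumption: a Tomcat (jbossews) cartridge; use your cartridge's variable.
        String ip = System.getenv("OPENSHIFT_JBOSSEWS_IP");
        int port = 15000; // must fall in the allowed internal range 15000-20000

        // Bind explicitly to the gear's IP; 0.0.0.0 and 127.0.0.1 are not allowed.
        ServerSocket server = new ServerSocket(port, 50, InetAddress.getByName(ip));
        System.out.println("Listening internally on " + server.getLocalSocketAddress());
    }
}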
Hopefully that answers the question about why that piece of code is not working.

CNAME value instead of HOST value

If we have a customer with a CNAME record, sub1.notourserver.com, pointing to something like abcdefg.ourserver.com, we read the host as sub1.notourserver.com. Is it possible for us to somehow get the value abcdefg.ourserver.com from this request? We have a subdomain route set up, but it is not picking it up because the host does not match our SERVER_NAME config setting.
HTTP does not provide that information, and so neither can Flask/Werkzeug. You need to configure Flask/Werkzeug with the (sub)domain names actually used by clients.
If you really cannot do that, you need to write a WSGI middleware that maintains an explicit mapping (or makes DNS requests) and patches environ['HTTP_HOST'].
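
A minimal sketch of that middleware idea, assuming you maintain the CNAME-to-canonical-host mapping by hand (the entries below are hypothetical):

CNAME_MAP = {
    "sub1.notourserver.com": "abcdefg.ourserver.com",
}

class CnameRewriteMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Rewrite the Host header to the canonical name before Flask routes it.
        host = environ.get("HTTP_HOST", "").split(":")[0]
        if host in CNAME_MAP:
            environ["HTTP_HOST"] = CNAME_MAP[host]
        return self.app(environ, start_response)

# Usage with a Flask app:
# app.wsgi_app = CnameRewriteMiddleware(app.wsgi_app)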

Does a certificate have to be valid to mail using CDOSYS and SMTPS?

Due to a limitation on our SMTP provider's side, we're having to use System.Web.Mail (deprecated), which is a wrapper around CDOSYS.
Because we'd like to avoid having to change multiple configurations if we switch providers at a later date, we set up an internal alias for our provider's FQDN.
So, mailrelay.ourdomain.com -> mailrelay.provider.com.
When I try to connect to either our alias or the provider's IP, a COM error bubbles up: "The transport failed to connect to the server." If I connect to the provider's true FQDN, everything works as expected.
I've looked in Wireshark, and I can see the certificate being requested, but not much happens after that.
I'm wondering if anyone knows whether CDOSYS checks that the requested host name matches the FQDN on the certificate and fails if it doesn't match.
I've tried searching for an answer to this question, but I can't seem to find it.
I can't find a definitive answer, but from what I can tell, yes: CDOSYS does require the certificate to match the SMTP server's FQDN when using SSL.
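
For context, a minimal sketch of the System.Web.Mail/CDOSYS configuration involved; the addresses and the SMTPS port 465 are assumptions, and the point is that the server name must be the provider's true FQDN (matching the certificate's CN), since the internal alias fails as described above:

using System.Web.Mail;

class MailTest
{
    static void Main()
    {
        MailMessage msg = new MailMessage();
        msg.From = "sender@ourdomain.com";
        msg.To = "recipient@example.com";
        msg.Subject = "TLS test";
        msg.Body = "Hello";
        // CDO configuration fields: use the real FQDN here, not the alias.
        msg.Fields["http://schemas.microsoft.com/cdo/configuration/smtpserver"] = "mailrelay.provider.com";
        msg.Fields["http://schemas.microsoft.com/cdo/configuration/smtpserverport"] = 465;
        msg.Fields["http://schemas.microsoft.com/cdo/configuration/smtpusessl"] = true;
        msg.Fields["http://schemas.microsoft.com/cdo/configuration/sendusing"] = 2; // cdoSendUsingPort
        SmtpMail.SmtpServer = "mailrelay.provider.com";
        SmtpMail.Send(msg);
    }
}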