Subnet mask /23 (255.255.254.0) - ping

My subnet mask is currently /23 (255.255.254.0).
My IP address is 10.11.17.111, and the broadcast IP is 10.11.17.254.
So when pinging a server, is it correct if we ping
10.11.16.255, since it is not the broadcast IP?

What I think you're asking is whether you can use 10.11.16.255 for a host, given your subnet mask. Please try re-wording your question if that's not the case.
With your netmask, it is totally acceptable to use 10.11.16.255 for a host. It's not a broadcast address, nor a network ID; it's the same as any other IP in this case. (Note that the broadcast address of 10.11.16.0/23 is actually 10.11.17.255, not 10.11.17.254.)
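A quick way to verify this yourself is Python's `ipaddress` module (an illustrative check, not part of the original answer):

```python
import ipaddress

# 10.11.17.111 with a /23 mask sits in the 10.11.16.0/23 network
net = ipaddress.ip_network("10.11.17.111/23", strict=False)
print(net)                    # 10.11.16.0/23
print(net.broadcast_address)  # 10.11.17.255

# hosts() excludes the network ID and the broadcast address,
# so 10.11.16.255 is a perfectly ordinary host address here
usable = list(net.hosts())
print(ipaddress.ip_address("10.11.16.255") in usable)  # True
print(len(usable))            # 510 usable hosts in a /23
```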

Can non-rfc1918 ranges in peered VPCs be overlapping if they are not exported?

If two VPCs are peered and both use non-RFC1918 subnetworks, can those subnetworks have overlapping CIDR ranges? Does it depend on whether import-subnet-routes-with-public-ip / export-subnet-routes-with-public-ip are used?
https://cloud.google.com/vpc/docs/vpc-peering says "A subnet CIDR range in one peered VPC network cannot overlap with a static route in another peered network. This rule covers both subnet routes and static routes." This mentions nothing about whether the routes are exported or not. Neither does https://cloud.google.com/vpc/docs/vpc-peering#interaction-subnet-subnet. So it seems that subnetworks cannot overlap. However, it's not explicitly called out, so it's unclear.
Yes, non-RFC1918 ranges in peered VPCs can overlap if neither side of the peering imports the non-RFC1918 subnet routes, i.e.:
import_subnet_routes_with_public_ip: false
All the ranges defined in this documentation will be treated as non-public IPs. The rules are:
1. Non-public IP addresses from this documentation are always exchanged, regardless of the value of allow_subnet_routes_with_public_ip. If those ranges overlap, then VPC peering is not allowed.
2. If allow_subnet_routes_with_public_ip is false, then the user can have overlapping public IP subnets, but those won't be exchanged.
3. If allow_subnet_routes_with_public_ip is true, then the user cannot have overlapping public IP subnets; otherwise the peering will be rejected.
If a non-public IP subnet, say 198.18.0.0/15, is expanded to 198.18.0.0/14 into the public range, then it will be treated as a public IP range subnet, and rules (2) and (3) above will apply to it.
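To make the expansion example concrete, here's an illustrative check with Python's `ipaddress` module (not part of the original answer). 198.18.0.0/15 is the RFC 2544 benchmarking range; widening the prefix to /14 pulls the network boundary back to 198.16.0.0, into publicly routable space:

```python
import ipaddress

# RFC 2544 benchmarking range, treated as non-public for peering purposes
bench = ipaddress.ip_network("198.18.0.0/15")

# Expanding the prefix to /14 moves the network boundary to 198.16.0.0,
# which spills into publicly routable address space
expanded = ipaddress.ip_network("198.18.0.0/14", strict=False)
print(expanded)                  # 198.16.0.0/14
print(expanded.overlaps(bench))  # True - this is the kind of overlap
                                 # that gets a peering rejected
```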

Varnish: Multiple IPs compare to ACL using Tilde

What would happen in Varnish if multiple IPs are in an X-Forwarded-For header that is compared to an ACL using the tilde operator?
Dummy example:
The request has the following HTTP header:
X-Forwarded-For: 160.12.34.56, 10.10.10.10
The Varnish config looks like this:
acl internal {
    "10.10.10.10";
}
if (std.ip(req.http.X-Forwarded-For, "0.0.0.0") ~ internal) {
    # THIS CODE
}
else {
    # OR THIS CODE
}
Which code block is executed?
Also, does the order of the IPs matter in the X-Forwarded-For header?
Does it change if there are two X-Forwarded-For headers, each with one of the two IPs?
Will it work?
The short answer to your question is no, it won't work.
std.ip() expects to receive a single IP address, not a collection. The conversion will fail, and the fallback value (second argument of the function) will be returned.
Here's a quick test script that illustrates this:
vcl 4.0;
import std;
backend default none;
sub vcl_recv {
    set req.http.x-f = "1.2.3.4, 5.6.7.8";
    return (synth(200, std.ip(req.http.x-f, "0.0.0.0")));
}
This example will return 0.0.0.0.
Does X-Forwarded-For need multiple IP addresses?
It's worth asking whether your X-Forwarded-For header really needs multiple IP addresses.
The idea of the header is to tell the origin server what the IP address of the original client was.
In your case there is more than one proxy in front of the webserver, so a natural reaction is to chain the IP addresses in the X-Forwarded-For header.
A better solution is to figure out what the IP address of the original client was, and set only that value in X-Forwarded-For.
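If you do end up with a comma-separated chain, the conventional reading is that the left-most entry is the original client. A minimal sketch in Python (illustrative only; inside VCL you would extract it with regsub() instead):

```python
def client_ip_from_xff(header: str) -> str:
    """Return the left-most entry of a comma-separated X-Forwarded-For
    chain, i.e. the original client address (this assumes the proxies
    in front of you are trusted and set the header honestly)."""
    return header.split(",")[0].strip()

print(client_ip_from_xff("160.12.34.56, 10.10.10.10"))  # 160.12.34.56
```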
The best way to get this done is by leveraging the PROXY protocol, which Varnish supports.
Leverage the PROXY protocol
The PROXY protocol transports the HTTP traffic, but additionally keeps track of the connection parameters of the original client.
Varnish supports this and allows you to set an extra listening port that listens for PROXY requests.
Here's an example of how you can start varnishd with PROXY support:
varnishd -a :80 -a :8443,PROXY -f /etc/varnish/default.vcl -s malloc,256m
As you can see, port 80 is still available for regular HTTP, but port 8443 was allocated for PROXY support.
If the proxy servers in front of Varnish support PROXY, Varnish will take the value from the original client and automatically set X-Forwarded-For with that value.
This way you always know who the client was, and you can safely perform your ACL check.
Additionally, there's also a PROXY module for Varnish, which can give you information about any TLS termination that took place in front of Varnish.

OpenShift egress router not working

I configured an egress router as described here:
https://docs.openshift.com/container-platform/3.3/admin_guide/managing_pods.html#admin-guide-controlling-egress-traffic
But it does not work.
In my understanding, the options resolve like this:
name: EGRESS_SOURCE <-- an IP on the network where the nodes live (in my case the VM the containers are running on)
value: 192.168.12.99
name: EGRESS_GATEWAY <-- the gateway over which the destination IP address is routable
value: 192.168.12.1
name: EGRESS_DESTINATION <-- the destination IP of the application I want to reach; in my case a MongoDB living in a classical VM
value: 203.0.113.25
Am I right, or am I missing something?
How would I be able to reach the target?
Do I need to address the source IP to access the MongoDB, or do I simply address the IP of my MongoDB and the traffic gets NAT'd over my egress router (this is how I understood the traffic flow, btw)?
How can I troubleshoot this kind of problem?
Best Regards,
Marcus
OK, it works now. I created a service and addressed the IP of this service to reach my destination.
The alternative is to address the IP of the container.
So, to reach your original destination from inside a container, don't use the original IP; use the egress pod IP or, preferably, the IP of the created service.
Attention: the destination IP must be outside of the host/node IP range, otherwise it will not work. It seems that if you use a destination IP from your host/node range, the standard gateway gets the request and, I think, discards it.
I would also suggest using the egress router image from Red Hat instead of the origin one, as stated in the official Red Hat documentation:
image: registry.access.redhat.com/openshift3/ose-egress-router

PostgreSQL Allow Connections From a MAC Address

Is there a configuration directive in PostgreSQL 9.1 to allow connections from a client by writing its MAC address instead of its IP address in pg_hba.conf?
For instance; instead of doing this;
host all all 192.168.2.1/32 trust
I'd like to write this;
host all all 00:08:C7:1B:8C:02 trust
No.
As the docs say:
This field can contain either a host name, an IP address range, or one of the special key words mentioned below.

Does a certificate have to be valid to mail using CDOSYS and SMTPS?

Due to a limitation on our SMTP provider's side, we're having to use System.Web.Mail (deprecated), which is a wrapper around CDOSYS.
Because we'd like to avoid having to change multiple configurations if we switch providers at a later date, we set up an internal alias for our provider's FQDN.
So, mailrelay.ourdomain.com -> mailrelay.provider.com.
When I try to connect to either our alias or the provider's IP, a COM error bubbles up: "The transport failed to connect to the server." If I connect to the provider's true FQDN, everything works as expected.
I've looked in Wireshark, and I can see the certificate being requested, but not much happens after that.
I'm wondering if anyone knows whether CDOSYS checks that the requested host name matches the FQDN on the certificate and fails if it doesn't match.
I've tried searching for an answer to this question, but I can't seem to find it.
I can't find a definitive answer, but from what I can tell, yes, CDOSYS does require the certificate to match the SMTP server's FQDN when using SSL.
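That behavior matches standard TLS hostname verification: a verifying client checks both that the certificate chain validates and that the certificate's name matches the host it connected to, which is why the internal alias fails while the provider's true FQDN works. As an illustration only (Python's ssl module, not CDOSYS itself), this is the default for verifying TLS clients:

```python
import ssl

# Default verifying-client behavior, analogous to what CDOSYS appears
# to do: the chain must validate AND the certificate's subject/SAN must
# match the host name the client connected to (the alias, in this case,
# rather than the name actually on the provider's certificate).
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```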