Storing a list of IP addresses in my network with their TCP state transitions - libpcap

After sniffing packets using libpcap, I want to store a list of IP addresses (connections) in my network and for each connection I want to store its TCP state transitions.
Is it possible using a linked list? If so, how?

Yes, it's possible.
libpcap is an open-source C library that gives access to your NIC in promiscuous mode to capture packets. Off the top of my head, you have these options:
- write your own capture program against libpcap
- use tcpdump with a capture filter and export the result to a file
- use Wireshark with a capture filter and export the result to a file

The sniffex.c example is a good start (http://www.tcpdump.org/pcap.html)
In the callback function:
got_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet);
the pointer ip points to the start of the IP header:
ip = (struct sniff_ip*)(packet + SIZE_ETHERNET);
There you can find the IP address information. Note that you have to keep the calls to inet_ntoa separate, as it uses a static buffer; use two printf calls if you plan to print the output to the console or a file. That is why they are on separate lines in sniffex.c:
printf(" From: %s\n", inet_ntoa(ip->ip_src));
printf(" To: %s\n", inet_ntoa(ip->ip_dst));
For the TCP information you can use the tcp pointer:
tcp = (struct sniff_tcp*)(packet + SIZE_ETHERNET + size_ip);
Use the TCP flags (tcp->th_flags) to find out the connection state, e.g. tcp->th_flags == TH_SYN.
Once you know the flags, check the TCP state diagram in RFC 793 to determine the state of the connection.
In terms of implementation, you could use a hash table keyed by the 4-tuple (srcIP, dstIP, srcPort, dstPort) to get O(1) lookups in the best case. Note that there may be cases in which you see midstream traffic, half-open TCP connections, etc.; the RFC describes in detail how to handle them.
Finally, if you do not wish to implement the TCP state machine yourself, you can use the Libnids library, which emulates the IP stack of Linux 2.0.x and offers IP defragmentation and TCP stream reassembly.


Varnish: Multiple IPs compared to an ACL using Tilde

What would happen in Varnish if multiple IPs are in an X-Forward-For header which is compared to an ACL using the tilde operator?
Dummy example:
The request has the following HTTP header:
X-Forward-For: 160.12.34.56, 10.10.10.10
The Varnish config looks like this:
acl internal {
"10.10.10.10"
}
if (std.ip(req.http.X-Forward-For, "0.0.0.0") ~ internal) {
# THIS CODE
}
else {
# OR THIS CODE
}
Which code block is executed?
Also, does the order of the IPs matter in the X-Forward-For header?
Does it change if there are 2 X-Forward-For headers, each with one of the two IPs?
Will it work?
The short answer to your question is no, it won't work.
std.ip() expects to receive a single IP address, not a collection. The conversion will fail, and the fallback value (second argument of the function) will be returned.
Here's a quick test script that illustrates this:
vcl 4.0;

import std;

backend default none;

sub vcl_recv {
    set req.http.x-f = "1.2.3.4, 5.6.7.8";
    return (synth(200, std.ip(req.http.x-f, "0.0.0.0")));
}
This example will return 0.0.0.0.
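If you only care about the first (leftmost) address, one workaround is to cut the header down to a single IP before calling std.ip(). This is only a sketch, assuming the header is a comma-separated list as in the question, and keeping in mind that a client-supplied header can be spoofed:

```
sub vcl_recv {
    # keep only the part before the first comma, then convert it
    if (std.ip(regsub(req.http.X-Forward-For, ",.*", ""), "0.0.0.0") ~ internal) {
        # first IP matched the ACL
    }
}
```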
Does X-Forwarded-For need multiple IP addresses?
It is worth asking whether your X-Forwarded-For header needs to carry multiple IP addresses at all.
The idea is to indicate to the origin server what the IP address of the original client was.
In your case there is more than one proxy in front of the webserver, so a natural reaction is to chain the IP addresses in the X-Forwarded-For header.
A better solution would be to figure out what the IP address of the original client was, and set that value in X-Forwarded-For.
The best way to get this done is by leveraging the PROXY protocol, which Varnish supports.
Leverage the PROXY protocol
The PROXY protocol is capable of transporting the HTTP traffic while additionally keeping track of the connection parameters of the original client.
Varnish supports this and allows you to set an extra listening port that listens for PROXY requests.
Here's an example of how you can start varnishd with PROXY support:
varnishd -a :80 -a :8443,PROXY -f /etc/varnish/default.vcl -s malloc,256m
As you can see, port 80 is still available for regular HTTP, but port 8443 was allocated for PROXY support.
If the proxy servers in front of Varnish support PROXY, Varnish will take the value from the original client and automatically set X-Forwarded-For with that value.
This way you always know who the client was, and you can safely perform your ACL check.
Additionally, there's also a PROXY module for Varnish, that can give you information about potential TLS termination that took place in front of Varnish.

How to use the option Arbitration=WaitExternal in MySQL Cluster?

I'm currently reading the MySQL Reference Manual and noticed that there is an NDB config option, Arbitration=WaitExternal. The question is how to use this option and how to implement an external cluster manager.
The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be 2 times as long as the interval required by the external cluster manager to perform arbitration.
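Concretely, the quoted recommendation corresponds to a config.ini fragment like this (the 9000 ms value is an assumption; use roughly twice your external manager's arbitration interval):

```
[ndbd default]
Arbitration=WaitExternal
ArbitrationTimeout=9000
```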
A bit of git annotate and some searching of the original design docs says the following:
When a node would normally send an arbitration message to the arbitrator, it will instead issue the following log message:
case ArbitCode::WinWaitExternal: {
    char buf[8*4*2+1];
    sd->mask.getText(buf);
    BaseString::snprintf(m_text, m_text_len,
        "Continuing after wait for external arbitration, "
        "nodes: %s", buf);
    break;
}
So e.g.
Continuing after wait for external arbitration, nodes: 1,2
The external cluster manager should check for this message at the same interval as ArbitrationTimeout. When it discovers the message, it should kill the data node that it decides should lose the arbitration. The NDB data nodes will notice the kill, and that settles which node survives.
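A shell sketch of that external check; the log file path is an assumption, so adapt it to where your management node writes the cluster log:

```shell
# Hypothetical external-arbitrator helper: scan a cluster log file for
# the WaitExternal message emitted by the code above.
MSG="Continuing after wait for external arbitration"

check_arbitration() {
    grep -q "$MSG" "$1"
}

# A real manager would call check_arbitration in a loop, sleeping for the
# same interval as ArbitrationTimeout, and on a hit decide which data
# node loses and stop it (e.g. via ndb_mgm).
if check_arbitration /var/lib/mysql-cluster/ndb_1_cluster.log 2>/dev/null; then
    echo "arbitration message found"
fi
```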

OpenShift egress router not working

I configured an egress router as described here:
https://docs.openshift.com/container-platform/3.3/admin_guide/managing_pods.html#admin-guide-controlling-egress-traffic
But it does not work.
In my understanding, the options resolve like this:
name: EGRESS_SOURCE <-- an IP from the network the nodes live in (in my case, the VM the containers are running on)
value: 192.168.12.99
name: EGRESS_GATEWAY <-- the gateway through which the destination IP address is routable
value: 192.168.12.1
name: EGRESS_DESTINATION <-- the destination IP of the application I want to reach; in my case a MongoDB living in a classical VM
value: 203.0.113.25
Am I right, or am I missing something?
How would I be able to reach the target?
Do I need to address the source IP to access the MongoDB, or do I simply address the IP of my MongoDB and the traffic gets NAT'd through my egress router (this is how I understood the traffic flow, btw)?
How can I troubleshoot this kind of problem?
Best Regards,
Marcus
OK, it works now. I created a service and addressed the IP of this service to reach my destination.
The alternative is to address the IP of the egress pod itself.
So, from inside a container, to reach your original destination, don't use the original IP; use the egress pod IP, or preferably the IP of the created service.
Attention: the destination IP must be outside of the host/node IP range, otherwise it will not work. It seems that if you use a destination IP from your host/node range, the standard gateway gets the request and, I think, discards it.
I would also suggest using the egress router image from Red Hat instead of the Origin one; it is the image referenced in the official Red Hat documentation:
image: registry.access.redhat.com/openshift3/ose-egress-router
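For reference, the service in front of the egress pod can be a plain ClusterIP service. This is only a sketch; the name, selector label, and MongoDB port are assumptions and must match your egress pod's metadata:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: egress-1
spec:
  type: ClusterIP
  ports:
  - port: 27017        # MongoDB port, assumed
  selector:
    name: egress-1     # must match the label on the egress-router pod
```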

Python 3.4 Sockets sendall function

import socket

def functions():
    print("hello")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)
conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client.
I know that conn.sendall(b"Welcome to the server") will send data to the client, which can be decoded.
I would like to know how to send a function to a client, like conn.sendall(function()), but this does not work.
I would also like to know what function would allow the client to receive the function I am sending.
I have looked on the Python website for a function that could do this, but I have not found one.
The functionality you are requesting is impossible in principle unless it is explicitly coded on the client side: if it were possible, one could write a virus that easily spreads to any remote machine. It is the client's own responsibility to decode incoming data however it sees fit.
Considering the case where the client really wants to receive code to execute, the issue is that the code must be represented in a form which, at the same time,
- is detached from the server context and its specifics, so it can be serialized and executed anywhere, and
- allows secure execution in some kind of sandbox, because very few clients will allow arbitrary server code to do anything on the client side.
The latter is an extremely complex topic; read the security history of any web browser: most of its fixed vulnerabilities are issues in exactly this kind of sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything at side A.)
You should start by searching for an execution environment (a high-level virtual machine) that meets your needs for functionality and security. For Python, look at the multiprocessing module: its worker-pool implementation doesn't pass the code itself, but simplifies passing data for execution requests. Passing arbitrary Python data (without functions) is covered by the marshal and pickle modules.
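To illustrate the last point: instead of sending a function, send plain data describing what to do, and let the client map that data onto code it already has. A minimal sketch, with made-up command names, showing both sides of the wire in one script:

```python
import pickle

# Server side: serialize a plain, data-only request.
payload = {"command": "greet", "args": ["world"]}
wire_bytes = pickle.dumps(payload)       # this is what conn.sendall() would send

# Client side: deserialize and dispatch to code the client already owns.
received = pickle.loads(wire_bytes)      # this is what the client does after recv()

handlers = {"greet": lambda name: "hello " + name}   # client-defined behavior
result = handlers[received["command"]](*received["args"])
print(result)    # hello world
```

Note that unpickling can itself execute code during deserialization, so for genuinely untrusted peers a data-only format such as JSON is the safer wire format.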

PostgreSQL Allow Connections From a MAC Address

Is there a configuration directive in PostgreSQL 9.1 to allow connections from a client by specifying its MAC address instead of its IP address in pg_hba.conf?
For instance; instead of doing this;
host all all 192.168.2.1/32 trust
I'd like to write this;
host all all 00:08:C7:1B:8C:02 trust
No.
As the docs say:
This field can contain either a host name, an IP address range, or one
of the special key words mentioned below.