How can we add flows to switches to ping hosts in OpenFlow Manager when we have duplicate paths?

I'm working with Mininet and the OpenDaylight controller; my topology looks like this:
h1 - s1 ---- s2 - h2
       \    /
        \  /
         s3
(s1, s2, and s3 are connected to each other; h1 is connected to s1, and h2 to s2.)
I'm using the OpenFlow Manager app, and I want h1 and h2 to be able to ping each other. How can I add flows to each switch to do that?
(I've tried specifying the output port on each switch for one specific path that I want, but it doesn't work.)
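For illustration, a minimal sketch of what such per-switch entries could look like if pushed with ovs-ofctl from the Mininet host instead of through OpenFlow Manager; the port numbers are assumptions and depend on how Mininet wired the links (check with ovs-ofctl show s1):
# Assumed ports: on s1, port 1 goes to h1 and port 2 to s2; on s2, port 1 goes to h2 and port 2 to s1.
# Forward everything (ARP and ICMP included) in both directions along the s1-s2 path.
ovs-ofctl add-flow s1 in_port=1,actions=output:2
ovs-ofctl add-flow s1 in_port=2,actions=output:1
ovs-ofctl add-flow s2 in_port=2,actions=output:1
ovs-ofctl add-flow s2 in_port=1,actions=output:2
OpenFlow Manager would need equivalent entries (match on the ingress port, action: output to the next port on the chosen path) on s1 and s2; s3 needs none if the path avoids it.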

Related

How to get external IPs of specific instance group on GCE - Google Compute Engine?

$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list
This command currently returns the IPs of ALL active instances, but suppose I have multiple instance groups, say one called Office and another called Home.
How do I get only the IPs of the instances in the "Office" instance group?
Unfortunately there is no easy way to do it. Ideally it would be part of the gcloud instance-groups list-instances command, but that command returns only instance names, not IP addresses.
So far, I've managed to get the desired response by executing 2 different commands.
To get names of all instances
instances=$(gcloud beta compute instance-groups list-instances <Enter Your Instance Group Name Here> | awk -v ORS=, '{if(NR>1)print $1}')
To get External IPs
gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="name=( $instances )"
A breakdown / explanation of the 1st command:
gcloud beta compute instance-groups list-instances <Enter Your Instance Group Name Here> returns all instances in that instance group.
awk -v ORS=, replaces each newline with a comma, producing a single comma-separated string.
if(NR>1) excludes the first line of the output, which is the NAME header.
print $1 keeps only the 1st column, which contains the instance names.
instances=$(<entire gcloud command with awk>) captures the output in a variable.
The 2nd command should be self-explanatory.
It would be great if someone could combine these 2 commands into a single command.
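For what it's worth, a minimal sketch of such a combined command, simply nesting the first command inside the second (here "Office" stands in for your instance group name):
gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list \
  --filter="name=( $(gcloud beta compute instance-groups list-instances Office | awk -v ORS=, '{if(NR>1)print $1}') )"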

Apache "server-status" (mod_status) output as in JSON or XML format?

Apache "mod_status":
We know, by using "mod_status", we can check Apache's current status. It returns a lot of information, something like in this sample page (provided by Apache):
https://www.apache.org/server-status
What I need to do:
I need to parse and then process these results, especially the detailed connections section produced by the ExtendedStatus flag (in httpd.conf). The section looks something like:
Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
0-24 23433 0/94/338163 _ 208.04 2 0 0.0 1.85 22068.75 221.254.46.37
0-24 23433 0/99/337929 _ 208.93 1 1141 0.0 2.23 19373.00 197.89.161.5
0-24 23433 0/94/337834 _ 206.04 4 0 0.0 3.46 22065.36 114.31.251.82
0-24 23433 0/95/338139 _ 198.94 2 7 0.0 2.74 21101.66 122.252.253.242
0-24 23433 0/111/338215 _ 206.21 3 0 0.0 3.89 19496.71 186.5.109.211
My Question:
Is it possible to get this page (information) in a structured data format, like JSON? (I need to parse it with PHP and then do some further processing.)
I cannot take an easy route like a JavaScript DOM parser (e.g. jQuery), because the script needs to run locally on the server's Linux command line, not in a client browser.
So parsing this via JavaScript (jQuery, etc.) is not really an option. I would much rather receive structured data that I can parse easily from PHP, triggering the PHP script from the terminal, like:
# php /www/docroots/parse-server-status.php
Or, at least:
# curl -I http://localhost/parse-server-status.php
Question:
Any idea how to get the JSON or XML out of Apache's Server Status (mod_status), please?
Thanks all.
I don't think there is a way to get JSON out of the standard Apache mod_status.
But there was a discussion on the developer mailing list about this topic.
In short: there is another script that you have to install on your server, and you need mod_lua on the server. Here is the project page:
https://github.com/Humbedooh/server-status
After installing that Lua script, you can get the JSON output.
Daniel installed a sample script here:
HTML view: http://httpd.apache.org/server-status
JSON: http://httpd.apache.org/server-status?view=json
Extended JSON: http://httpd.apache.org/server-status?view=json&extended=true (LOT OF DATA :p)
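Once that Lua handler is installed on your own server, the JSON view can be fetched from the command line as well; a minimal sketch (assuming the handler is mapped to /server-status on localhost):
curl -s 'http://localhost/server-status?view=json'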
In JavaScript/jQuery (ES6) we can fetch Apache's machine-readable status with ?auto and parse the content with regular expressions:
// Fetch mod_status's machine-readable output (?auto) and split it into key/value pairs.
$.get('http://localhost/server-status?auto', (d) => {
  const o = {};
  // The first line (before the "Key: value" pairs) is treated as the host name.
  const host = d.substring(0, d.indexOf('\n'));
  // Each remaining "Key: value" line becomes a property on o.
  Array.from(d.replace(host, '').matchAll(/^([\w\s]+)\:\s(.*)+/gm))
    .forEach(l => o[l[1].replace(/\s/, '')] = l[2]);
  console.log(host, o);
});
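And for the PHP-from-the-terminal use case above, the same machine-readable ?auto view can be fetched directly from the shell; a minimal sketch (assuming mod_status is reachable at /server-status on localhost):
curl -s 'http://localhost/server-status?auto'
# Example: pull a single field out of the "Key: value" lines
curl -s 'http://localhost/server-status?auto' | awk -F': ' '$1 == "BusyWorkers" {print $2}'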

Find function's start offset in ELF

Suppose I have function fn somewhere within the .text section of an ELF64 executable. Is there a way to know at which offset (in bytes) from the start of the ELF file the fn function is located? Note that I don't need to know at which VA it was relocated at linking time, but its position within the ELF file.
Generally yes, if you can parse the ELF file directly or combine output from tools like objdump and readelf.
More specifically: you can get the offset and virtual address of your .text section with 'readelf -S file' - write those down.
Further, you can list symbols with 'readelf -s file'. As long as your executable is not stripped and your function is visible (not static or in an anonymous namespace), you should find your function and its virtual address.
Thus you can calculate the offset via
fn symbol offset = fn symbol VA - .text VA + .text offset
That's assuming you want to do it "offline" with common tools. It's more difficult if you don't have access to the unstripped ELF file, and since only a part of the ELF file remains in memory, it is probably not possible without adding some information with "offline" tricks.
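Put together as commands, a minimal sketch (assuming an unstripped binary ./a.out and a function named fn; the addresses in the arithmetic example are made up):
readelf -S ./a.out | grep '\.text'    # note the Address (VA) and Offset columns
readelf -s ./a.out | grep -w 'fn'     # note the symbol's Value (its VA)
# file offset of fn = (fn VA) - (.text VA) + (.text offset), e.g.:
printf '0x%x\n' $(( 0x4011a0 - 0x401050 + 0x1050 ))    # prints 0x11a0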
Simply use objdump's -F option:
user@phoenix-amd64:~$ objdump -D -F /opt/phoenix/i486/heap-xxx | grep main
08048630 <__libc_start_main@plt> (File Offset: 0x630):
 8048679: e8 b2 ff ff ff  call 8048630 <__libc_start_main@plt> (File Offset: 0x630)
080487d5 <main> (File Offset: 0x7d5):
The answer by Norbert Lange works for the functions that are listed in the symbol table of the ELF file. But static functions will not be present there, so even if e.g. GDB could find them (by using DWARF debug info), readelf -s won't.
In this case, you can use GDB. For example, let's find the offset of xfce_displays_helper_normalize_crtc in /usr/bin/xfsettingsd (that was my actual use case, thus this obscure choice of an example).
$ gdb -q -ex 'p &xfce_displays_helper_normalize_crtc' -ex q xfsettingsd
Reading symbols from xfsettingsd...
Reading symbols from /usr/lib/debug/.build-id/b2/2ad9713642253d4d7a6f94acf0174ccfe3d487.debug...
$1 = (void (*)(XfceRRCrtc *, XfceDisplaysHelper *)) 0x11e80 <xfce_displays_helper_normalize_crtc>
Note that here we only load the file with GDB and don't let it run. Then we use the p command (print, in its full form) to get the address. So in my case, the function is at offset 0x11e80.
In some cases GDB will resolve the offset to a virtual address even before we start or starti the program. This happens, in particular, on x86-32. In that case we can simply subtract the virtual address of the file image, given by readelf -l:
$ readelf -l /bin/sleep | grep ' VirtAddr \|\<LOAD *0x[0-9a-f]\+\>'
  Type           Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
  LOAD           0x000000 0x08048000 0x08048000 0x05230 0x05230 R E 0x1000
In the example above, the virtual address of the file image is 0x8048000, which would have to be subtracted from virtual address of the function if GDB happens to output it instead of the offset.

SSL Certs acceptance using AutoIT

Is it possible to accept SSL certificates in Chrome/Firefox using the AutoIt tool?
https://www.autoitscript.com/site/autoit/
Thanks!
The short answer is yes. You have 3 options depending on what you want to do. They are:
1. Use the FF.au3 and the Chrome.au3 UDFs plus other AutoIt automations. (Hard)
2. Use iUIAutomation plus other AutoIt automations. (A little less hard)
3. If you just need to get some certificate info you can use this script. (Pretty easy)
If you go with option 3 you will need to download this UDF and update the WinINetConstants.Au3 file on line 5 from:
Global Const $AU3_UNICODE = Number($AU3_VERSION[2] & "." & $AU3_VERSION[3]) >= 2.13 Or @AutoItUnicode
To
Global Const $AU3_UNICODE = Number($AU3_VERSION[2] & "." & $AU3_VERSION[3]) >= 2.13 Or @AutoItVersion

How can I invoke a shell or Perl script from iptables?

We're using CentOS and would like to ban several Asian countries from accessing the entire server. Almost every IP we check which has tried to hack into our server is allocated to an Asian country (Russia, China, Pakistan, etc.)
We have an IP to country MySQL database we can efficiently query and would like to try something like:
-A INPUT -p tcp -m tcp --dport 80 -j /path/to/perlscript.pl
The script would need the IP passed in as an argument, then it would return either an ACCEPT or DROP target?
Thanks for the answers, here's my follow up.
Do you know if it is possible though? Having a rule point to a script which returns a target? (ACCEPT/DROP)
Not entirely sure how ipset works, will have to experiment I guess, but it looks like it creates a single rule. How would it handle Russia for example, which has over 6000 ranges assigned to it? And we want to add probably 20 - 40 countries in total, so we could end up needing to add in excess of 100,000 ranges. Wouldn't the overhead of a single MySQL query be less taxing?
SELECT country FROM ip_countries WHERE $VAR{ip} >= range1 && $VAR{ip} <= range2
The database we use is freely available here : http://software77.net/geo-ip/
It represents IPs in the database by converting each IP to a number using this formula:
$VAR{numberedIP} = $octs[3] + ($octs[2] * 256) + ($octs[1] * 256 * 256) + ($octs[0] * 256 * 256 * 256);
It will store the start of the range in the "range1" column, and the end of the range in the "range2" column.
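As a quick sanity check of that formula, the same conversion in shell arithmetic (10.20.30.40 is just an example address):
IFS=. read -r a b c d <<< "10.20.30.40"
echo $(( d + c*256 + b*256*256 + a*256*256*256 ))    # prints 169090600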
So you can see how we'd look up an IP using the above query. Literally takes less than a hundredth of a second to get a result and it's quite accurate. We have one website on a dedicated server, quite low traffic. But as with all servers I have ever checked, this one is hit daily by hackers' robots, checking email accounts, FTP accounts etc. And just about every web server I've ever worked on is compromised sooner or later. In our case, 99.99% of traffic from Asian countries has criminal intent attached to it.
We'd like this to run via iptables so that all ports are covered, not just HTTP (as it would be with, say, directives in .htaccess).
Do you think ipset would still be faster and more efficient?
It would be far too slow to launch perl for every matching packet. The right tool for this sort of thing is ipset, and there is much more information and documentation available on the ipset man page.
In CentOS you can install it with yum. Naturally, all of these commands and the script need to run as root:
# yum install ipset
Next load the kernel modules (you'll want this to happen at boot as well):
# modprobe -a -v ipset ip_set_hash_netport
And then use a script like the following to populate an ipset and block IPs from its ranges using iptables:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('... your DSN ...',...);

# I have no knowledge of your schema, but if you can pull the
# address ranges in the form: AA.BB.CC.DD/NN
my $ranges = $dbh->selectcol_arrayref(
    q{SELECT cidr FROM your_table WHERE country_code IN ('CN',...)});

`ipset create geoblock hash:net,port`;
for (@$ranges) {
    # to match on port 80:
    `ipset add geoblock $_,80`;
}
`iptables -I INPUT -m set --set geoblock src -j DROP`;
If you would like to block all ports rather than just 80, use the ip_set_hash_net module instead of ip_set_hash_netport, change hash:net,port to hash:net, and remove ,80 from the ipset add command.
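A minimal sketch of that all-ports variant (the set name and the range are placeholder examples; the ipset add line is repeated for each range from your database):
# modprobe -a -v ipset ip_set_hash_net
# ipset create geoblock hash:net
# ipset add geoblock 1.2.3.0/24
# iptables -I INPUT -m set --match-set geoblock src -j DROP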