How to capture BitTorrent info_hash IDs on a network using tcpdump or any other open source tool?

I am working on a project where we need to collect the BitTorrent info_hash IDs seen on our small ISP network. Using port mirroring we can send all WAN traffic to a server and run tcpdump or some other tool to find the info_hash IDs downloaded by BitTorrent clients. For example:
tcpflow -p -c -i eth1 tcp | grep -oE '(GET) .* HTTP/1.[01].*'
This command shows results like this:
GET /announce?info_hash=N%a1%94%17%2c%11%aa%90%9c%0a%1a0%9d%b2%cfy%08A%03%16&peer_id=-BT7950-%f1%a2%d8%8fO%d7%f9%bc%f1%28%15%26&port=19211&uploaded=55918592&downloaded=0&left=0&corrupt=0&key=21594C0B&numwant=200&compact=1&no_peer_id=1 HTTP/1.1
Now we need to capture only the info_hash and store it in a log file or a MySQL database.
Can you please tell me which tool can do something like this?

Depending on how rigorous you want to be, you'll have to decode the following protocol layers:
TCP: reassemble the packets of a flow. You're already doing that with tcpflow; tshark (Wireshark's CLI) could do it too.
HTTP: extract the request URI from the GET request line. A simple regex does the job here.
URI: extract the query string.
application/x-www-form-urlencoded: extract the info_hash key/value pair and undo its percent-encoding.
For the last two steps I would look for tools or libraries in your programming language of choice to handle them; a sketch follows below.
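For illustration, here is a minimal Python sketch of those last two steps feeding MySQL. The database name, table, and credentials are made-up placeholders, and it assumes the mysql-connector-python package; it reads the tcpflow/grep output from stdin:

import re
import sys
import urllib.parse

import mysql.connector  # assumption: pip install mysql-connector-python

# Hypothetical schema: CREATE TABLE infohashes (hash_hex CHAR(40));
db = mysql.connector.connect(host="localhost", user="btuser",
                             password="secret", database="btlog")
cur = db.cursor()

for line in sys.stdin:
    match = re.search(r'info_hash=([^&\s]+)', line)
    if not match:
        continue
    # Percent-decode to the raw 20-byte SHA-1 digest, then store it as hex.
    raw = urllib.parse.unquote_to_bytes(match.group(1))
    if len(raw) != 20:
        continue  # not a valid BitTorrent info-hash
    cur.execute("INSERT INTO infohashes (hash_hex) VALUES (%s)", (raw.hex(),))
    db.commit()

You would pipe your existing command into it, e.g. tcpflow -p -c -i eth1 tcp | grep --line-buffered -oE 'GET .* HTTP/1.[01].*' | python3 extract_infohash.py (the script name is hypothetical).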

Postgraphile "Only `POST` requests are allowed." error

I have Postgres running locally. I can access the database locally with psql postgres:///reviewapp and with \dt I can see a few tables.
If I run npx postgraphile -c "postgres:///reviewapp" I don't get any errors in the terminal:
PostGraphile v4.12.4 server listening on port 5000 🚀
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
‣ Postgres connection: postgres:///reviewapp
‣ Postgres schema(s): public
‣ Documentation: https://graphile.org/postgraphile/introduction/
‣ Node.js version: v14.15.5 on darwin x64
‣ Join Mark in supporting PostGraphile development: https://graphile.org/sponsor/
However, when I go to http://localhost:5000/graphql I get an error on the screen:
{"errors":[{"message":"Only POST requests are allowed."}]}
You're visiting the /graphql endpoint, which speaks GraphQL (over POST requests), but you're sending it a web request (over GET). Instead, use the /graphiql endpoint to view the GraphiQL GraphQL IDE; this endpoint speaks web, and will give you a nice interface for communicating with the /graphql endpoint. See this output from the PostGraphile command:
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
I recommend you add the --enhance-graphiql option to the PostGraphile CLI to get an even more powerful IDE in the browser.
This happens because typing an address into your browser's address bar sends a GET request, while your PostGraphile instance only accepts POST requests. So you either avoid sending GET requests, or ensure that PostGraphile accepts GET requests as well.
A simple workaround would be to create a small website that acts as a proxy and, upon load, sends a POST request to http://localhost:5000/graphql.
There is a GitHub ticket where a middleware is suggested, read this for further information: https://github.com/graphile/postgraphile/issues/442
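To see the difference in practice, here is a minimal sketch using Python's requests package (an assumption on my part; any HTTP client works) that sends a proper POST to the endpoint:

import requests  # assumption: pip install requests

# A browser sends GET and gets the error; a POST with a JSON body succeeds.
resp = requests.post(
    "http://localhost:5000/graphql",
    json={"query": "{ __typename }"},  # trivial query, needs no schema knowledge
)
print(resp.json())  # e.g. {'data': {'__typename': 'Query'}}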

How to do a website preview using a subdomain?

I'm trying to make a website preview with a subdomain.
e.g.
I have https://www.sub.example.com with a CNAME to https://www.sub2.example2.com.
When I run a PING command, sub2.example2.com answers, but a browser won't open sub2.example2.com.
Both domains use a different wildcard and I do not want to use a .htaccess.
What options do I have?
Don't get confused and mix things up: PING behaves differently from HTTP. That's why your ping may always get a response; the requests are going to the same web server or load balancer.
Regarding the HTTP request, what could be missing is a server block/virtual host to handle the request for your defined Host: sub2.example2.com.
Once you have your vhosts's defined you could test using curl with something like this:
curl -I -H 'Host: sub.example.com' your-web-server.tld
Check the returned headers (option -I); they could give you a hint.
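If you'd rather script that check, here is a rough Python equivalent (hostnames are the placeholders from the question; assumes the requests package):

import socket

import requests  # assumption: pip install requests

# DNS resolution is all that ping exercises.
print(socket.gethostbyname("sub2.example2.com"))

# HTTP additionally needs a server block/vhost answering for this Host header.
resp = requests.get("http://your-web-server.tld/",
                    headers={"Host": "sub2.example2.com"},
                    allow_redirects=False)
print(resp.status_code)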

Questions on starting a Locator using the snappydata/bin ./snappy-shell script

SnappyData v. 0.5
Here's the command I used to start a Locator:
ubuntu@ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on: 0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running
It looks like it starts the DRDA server locally, with no outside interface for a client to connect to. So, I cannot reach my SnappyData Locator using this JDBC URL from an outside client host (e.g. my SquirrelSQL editor).
This does not connect:
jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/
What property do I pass to my ./snappy-shell locator start command to get the DRDA server to start on a public IP address instead of "localhost/127.0.0.1"?
Use the -client-bind-address and -client-port options. For a locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address for other locators/servers/leads (which is the address they pass in their -locators=<address>:<port> setting):
snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
See the output of snappy-shell locator --help for commonly used options.
For SnappyData releases, you may find it much easier to use the global configuration for all of the locators, servers, and leads; see "configuring the cluster" in the documentation.
This lets you specify all options for all JVMs of the cluster in conf/locators, conf/leads, and conf/servers, then start everything with snappy-start-all.sh, check status with snappy-status-all.sh, and stop everything with snappy-stop-all.sh.
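As a sketch, a single conf/locators entry combining the options above might look like the line below (the hostname and IPs are placeholders; check the linked docs for the exact syntax):

locator-host -peer-discovery-address=10.0.0.5 -peer-discovery-port=10334 -client-bind-address=54.12.34.56 -client-port=1527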
On a related note, we at SnappyData Inc. are developing scripts to let users quickly launch a SnappyData cluster on AWS.
If you want to try it out, the steps below will guide you. We would love to hear your feedback on this.
Download its development branch: git clone https://github.com/SnappyDataInc/snappydata.git -b SNAP-864 (you don't need to clone the repo for this otherwise, but I could not find a way to attach the scripts here.)
Go to the ec2 directory: cd snappydata/cluster/ec2
Run snappy-ec2: ./snappy-ec2 -k ec2-keypair-name -i /path/to/keypair/private/key/file launch your-cluster-name
See this README for more details.

Zabbix Trapper: Cannot get data from orabbix

I am using Orabbix to monitor my DB. The data from the queries executed on this DB using Orabbix are sent to the Zabbix server. However, I am not able to see the data reaching Zabbix.
On my zabbix web console, I see this message on the triggers added - "Trigger expression updated. No status update so far."
Any ideas?
My update interval for the trigger is set to 30 sec.
Based on the screenshots you posted, your host is named "wfc1dev1" and you have items with keys "WFC_WFS_SYS_001" and "WFC_WFS_SYS_002". However, in the XML that Orabbix sends to Zabbix, the hostname and item key are different. Here is the XML:
<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>
From this, we can deduce the host:
$ echo V0ZDMURFVg== | base64 -d
WFC1DEV
The key:
$ echo V0ZDX0xFQUZfU1lTXzAwMg== | base64 -d
WFC_LEAF_SYS_002
The data:
$ echo MA== | base64 -d
0
It can be seen that neither the host name nor the item key matches what is configured on the Zabbix server. Once you fix that, it should work.
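For reference, all three fields can be decoded in one go with a short Python sketch:

import base64

# The three base64 fields from the Orabbix request XML above.
for label, value in [("host", "V0ZDMURFVg=="),
                     ("key", "V0ZDX0xFQUZfU1lTXzAwMg=="),
                     ("data", "MA==")]:
    print(label, "=", base64.b64decode(value).decode())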

tcpdump throws PKTAP error

While running tcpdump without providing any interface,
tcpdump -nS
I get the error tcpdump: cannot use data link type PKTAP, so I tried providing the interface option in the command:
tcpdump -i eth0 or even eth1
Then I get the following error:
tcpdump: eth1: No such device exists
(BIOCSETIF failed: Device not configured)
I even tried looking it up on the Internet, but I'm not finding any solution.
Any help?
I can't speak to your problem with PKTAP, but I can speak to the "No such device exists" error: eth0 is a Linux-ism, and macOS isn't Linux. You almost certainly want en0, en1, etc. "ifconfig -a" is your friend or, if you have it installed, "tshark -D".
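If you want to check the available interface names programmatically, here is a small Python sketch (socket.if_nameindex() is available on Unix platforms with Python 3.8+):

import socket

# Rough stand-in for `ifconfig -a`: print (index, name) for each interface.
for index, name in socket.if_nameindex():
    print(index, name)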
Any reason why the PKTAP issue is occurring?
It's probably occurring because you installed your own version of libpcap, which does not know about the DLT_PKTAP link-layer header type, and Apple's tcpdump is somehow using your version rather than their own (Apple's version does know about it). When Apple's tcpdump is run without a -i argument, it uses an OS mechanism to capture on all devices at once, and that mechanism supplies packets with the DLT_PKTAP link-layer header type, which your libpcap cannot handle, so it fails.