How to set a range for DHCP IP address leasing in WiFi softAP()? - arduino-ide

I am trying to make a mesh network using the ESP32 module. WiFi.h's softAPConfig() can be used to set the starting address for leasing, but leasing progresses upward without reusing already-leased addresses that are no longer in use. So I want to limit the leasing range between two addresses.
I found this piece of code in dhcpserver.h:
/* Defined in esp_misc.h */
typedef struct {
    bool enable;
    ip4_addr_t start_ip;
    ip4_addr_t end_ip;
} dhcps_lease_t;
This is the code I compiled and uploaded to the ESP32 module:
#include "WiFi.h"
char *ssid = "AirMesh";
IPAddress local_IP(192,168,1,0);
IPAddress gateway(192,168,1,1);
IPAddress subnet(255,255,255,0);
void setup()
{
Serial.begin(9600);
Serial.println();
Serial.print("Setting soft-AP configuration ... ");
Serial.println(WiFi.softAPConfig(local_IP, gateway, subnet) ? "Ready" : "Failed!");
Serial.print("Setting soft-AP ... ");
Serial.println(WiFi.softAP("ESPsoftAP_01") ? "Ready" : "Failed!");
Serial.print("Soft-AP IP address = ");
Serial.println(WiFi.softAPIP());
WiFi.softAP(ssid);
}
void loop() {}
The first device to connect gets IP 192.168.1.1 and the second device gets 192.168.1.2; when I disconnect the first device and reconnect it, it gets 192.168.1.3 (every connection uses a different physical address).
This progression keeps going.
EDIT:
After digging around, I think I found the code responsible for the IP lease range, but I couldn't figure out what it means.
lease.start_ip.addr = static_cast<uint32_t>(local_ip) + (1 << 24);
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (11 << 24);

After some trial and error, I managed to find the answer.
Change the code in the WiFiAP.cpp file (I have forked the repo and opened a pull request replacing 11 with 10, since the maximum number of simultaneous connections for the ESP32 is 10):
lease.start_ip.addr = static_cast<uint32_t>(local_ip) + (1 << 24);
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (n << 24);
where n is the number of IP addresses to allocate to external devices.
For example:
lease.end_ip.addr = static_cast<uint32_t>(local_ip) + (20 << 24);
means that if the starting IP is 192.168.1.0, DHCP will assign addresses from 192.168.1.1 to 192.168.1.20, while 192.168.1.0 (the starting IP) will be the address of the ESP32 module.
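To see why shifting into the top byte moves the last octet: on the little-endian ESP32, the uint32_t form of an IPAddress stores the first octet in the least significant byte, so the last octet sits in the most significant byte. A minimal illustration (the variable names are made up):

uint32_t base  = static_cast<uint32_t>(IPAddress(192, 168, 1, 0));  // bytes in memory: C0 A8 01 00
uint32_t start = base + (1 << 24);   // last octet becomes 1  -> 192.168.1.1
uint32_t end   = base + (20 << 24);  // last octet becomes 20 -> 192.168.1.20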

Doesn't the access point use the first IP, 192.168.1.1, rather than 192.168.1.0?
So 2-11 is 10 connections.

Related

Storing large arrays on the Ethereum blockchain

In the process of studying Ethereum smart contracts, the question arose of how to store large structures (an array or map) inside a contract. There is a contract:
pragma solidity ^0.8.6;

contract UselessWorker {
    uint32 public successfullyExecutedIterations = 0;

    mapping(uint32 => uint32[6]) items;

    event ItemAdded(uint32 result);

    function doWork(int _iterations) public {
        for (int32 i = 0; i < _iterations; i++) {
            items[successfullyExecutedIterations] = [uint32(block.timestamp), successfullyExecutedIterations, 10, 10, 10, 10];
            successfullyExecutedIterations++;
        }
        emit ItemAdded(successfullyExecutedIterations);
    }
}
The doWork method fills the items map with arrays of numbers; it is called by an external script. At the same time, the more records appear in items, the faster disk space is consumed (for 1,000,000 records the blockchain size is about 350 MB, for 2,000,000 about 1.1 GB, for 19,000,000 about 22 GB; this is the size of the .ethereum/net/geth/chaindata/ folder).
I am testing it on a private network, so the price of gas does not bother me. I run it with the command:
geth --mine --networkid 999 --datadir ~/.ethereum/net --rpccorsdomain "*" --allow-insecure-unlock --miner.gastarget 900000000 --rpc --ws --ws.port 8546 --ws.addr "127.0.0.1" --ws.origins "*" --ws.api "web3,eth" --rpc.gascap 800000000
According to my estimate, one record in the map should take about 224 bytes (7 * 32 bytes), so 19M records should take about 4.2 GB.
It feels like a memory leak is taking place, or I don't understand how storage is allocated for the map.
Can anyone suggest why the blockchain is consuming so much disk space?

MPI - sending messages to processes that run from other functions

I have a process with rank 0 (MASTER) that is running in a function (FUNCA) that does:
...
get_moves_list(node,&moves_list,&moves_len,maximizing);
//for each rank in SLAVES
//MPI_Send a move to a SLAVE
I want the slave processes to receive messages from the MASTER, but the slave processes are running from/inside a different function (FUNCB):
void run_slave(int rank) {
    int move;
    // MPI_Recv a move from MASTER
    // Do some stuff with that move
    // Then return to caller
}
Main looks like this:
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == MASTER) {
        ...
        // CALL FUNCA
        ...
    } else {
        run_slave(rank);
    }
    MPI_Finalize();  // every rank must call MPI_Finalize, not just the slaves
}
Is something like this possible with MPI, sending/receiving messages to processes running in different functions?
If it helps, I am trying to parallelize a minimax function (FUNCA), but the structure of the program must be used as described above.
When the program is started, the MASTER process initiates a game and calls minimax to get an optimal move for the maximizing player.
I have the serial version of minimax working and am currently attempting to parallelize it using MPI, so far with no luck.
To make it clear, MPI is a structured communication library, not some esoteric parallel programming language extension. It simply facilitates the structured exchange of data between entities called ranks. Generally, ranks are processes running on the same computer or on separate computers linked with some kind of network, but those could also be some other kind of communicating entities. What is important is that each rank is on its own when it comes to code execution and it doesn't care where in the program the other ranks are. Even more, it doesn't care if the other ranks are running the same program. In fact, although it is typical for MPI to have all ranks run the same code, the so-called SPMD or Single Program Multiple Data model, you are free to write a separate program for a group of ranks or even for each rank, which is known as MPMD or Multiple Programs Multiple Data. MPI even facilitates the classical client-server mode and allows separate MPI jobs to connect. SPMD is simply easier to program, as you need to write a single program only.
Think of MPI simply as a mediator (middleware) between your code and the system-specific APIs that enables easy interprocess communication and abstracts away things such as locating the actual endpoints of the other communicating partners (e.g., finding out the network addresses and port numbers when communication runs over TCP/IP). When you write a browser that communicates over the network with a web server, you don't care what code the server executes. Conversely, the server doesn't care what code your browser executes. As long as both speak the same protocol, the communication works. The same applies to MPI: as long as two ranks use the same MPI library, they can communicate.
For successful communication to happen in MPI, there are only two things needed (in a typical point-to-point data exchange):
sender: rank A is willing to send data to rank B and so it calls MPI_Send(..., B, tag, MPI_COMM_SOMETHING);
receiver: rank B is willing to receive data from rank A and so it calls MPI_Recv(..., A, tag, MPI_COMM_SOMETHING, ...);
As long as both ranks specify the same tag and communicator and the addresses in both send and receive calls match pairwise (including the ability of the receiver to specify source and tag wildcards), the exchange will happen regardless of where the actual lines of code are located.
The following is a perfectly valid MPI example:
rank_0.c
#include <stdio.h>
#include <mpi.h>

int main(void)
{
    MPI_Init(NULL, NULL);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int a;
    MPI_Recv(&a, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Rank %d got %d from rank 1\n", rank, a);

    MPI_Finalize();
    return 0;
}
rank_1.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int a = 42;
    MPI_Send(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    printf("Rank %d sent %d to rank 0\n", rank, a);

    MPI_Finalize();
    return 0;
}
Compile and run:
$ mpicc -o rank_0 rank_0.c
$ mpicc -o rank_1 rank_1.c
$ mpiexec -n 1 ./rank_0 : -n 1 ./rank_1
Rank 0 got 42 from rank 1
Rank 1 sent 42 to rank 0
As you can see, those are two completely different programs and they still happily run together in one MPI job and are able to exchange messages.
Yes, you can do this. Here is a full toy program that should demonstrate this functionality:
#include <iostream>
#include "mpi.h"
#include "unistd.h"

#define MASTER 0

int pid, pnum;

void func1(void)
{
    int bcastval = 1000;
    MPI_Bcast(&bcastval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << pid << " sent " << bcastval << std::endl;
}

void func2(void)
{
    int recv;
    MPI_Bcast(&recv, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << pid << " received " << recv << std::endl;
}

int main(void)
{
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &pnum);
    if (pid == MASTER) func1();
    else func2();
    if (pid == MASTER) std::cout << "Done!" << std::endl;
    MPI_Finalize();
}
Note that running with mpirun -np 2 ./a.out yields
0 sent 1000
1 received 1000
Done!
However, I strongly recommend avoiding this if you have complex logic within the two functions called by the master process and the other processes. The problem is easy to illustrate with this example:
#include <iostream>
#include "mpi.h"
#include "unistd.h"

#define MASTER 0

int pid, pnum;

void func1(void)
{
    int bcastval = 1000;
    MPI_Bcast(&bcastval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << pid << " sent " << bcastval << std::endl;
    MPI_Barrier(MPI_COMM_WORLD); // any blocking call
}

void func2(void)
{
    int recv;
    MPI_Bcast(&recv, 1, MPI_INT, 0, MPI_COMM_WORLD);
    std::cout << pid << " received " << recv << std::endl;
}

int main(void)
{
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &pnum);
    if (pid == MASTER) func1();
    else func2();
    if (pid == MASTER) std::cout << "Done!" << std::endl;
    MPI_Finalize();
}
Any time a blocking MPI call is made in a branch, and not all processes follow that branch, MPI will halt the program forever, since not all processes can "check in" to the call. Furthermore, if there is a lot going on in these functions, it can be very difficult to troubleshoot, as debugging output might behave unexpectedly.
An anecdote about this: I work on a large multiphysics simulation code that has an MPI layer. Recently, something exactly like the above happened, and it halted the entire code for all 17 developers. Each developer found that the code halted at a different location, sometimes in external, MPI-dependent libraries. It took a long time to troubleshoot.
I would recommend that FUNCA return the information that the master process needs to broadcast, and then broadcast outside the conditional branch, as sketched below.
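A minimal sketch of that restructuring (compute_best_move() is a made-up stand-in for whatever FUNCA actually computes):

#include <mpi.h>
#include <stdio.h>

#define MASTER 0

static int compute_best_move(void) { return 42; }  /* hypothetical FUNCA result */

int main(int argc, char **argv)
{
    int rank, move = -1;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == MASTER)
        move = compute_best_move();  /* only the master computes */

    /* every rank reaches this call, so the broadcast cannot hang */
    MPI_Bcast(&move, 1, MPI_INT, MASTER, MPI_COMM_WORLD);

    if (rank != MASTER)
        printf("Rank %d received move %d\n", rank, move);

    MPI_Finalize();
    return 0;
}

This way every rank executes the same sequence of collective calls, which is exactly the property the buggy example above violates.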

Error in writing/reading to I2C EEPROM + STM32F0 Discovery

I am struggling to write to and read from an AT24C256 I2C EEPROM. I am using an STM32F0 Discovery board to read/write to the EEPROM.
I am using the HAL library and CubeMX for the basic structure. I have written a small program to test the read and write functions. On debugging, the value of Test is always 2, whereas it should be 1 if writing into memory succeeds. Here it is:
#include <string.h> /* for memcmp */

#define ADDR_24LCxx_Write 0x50
#define ADDR_24LCxx_Read 0x50
#define BufferSize 5

uint8_t WriteBuffer[BufferSize], ReadBuffer[BufferSize], Test;
uint16_t i;
I2C_HandleTypeDef hi2c1;

int main(void)
{
    HAL_Init();
    /* Configure the system clock */
    SystemClock_Config();
    /* Initialize all configured peripherals */
    MX_GPIO_Init();
    MX_I2C1_Init();

    for (i = 0; i < 5; i++)
    {
        WriteBuffer[i] = i;
    }

    if (HAL_I2C_Mem_Write(&hi2c1, ADDR_24LCxx_Write, 0, I2C_MEMADD_SIZE_8BIT, WriteBuffer, BufferSize, 0x10) == HAL_OK)
    {
        Test = 1;
    }
    else
    {
        Test = 2;
    }

    HAL_I2C_Mem_Read(&hi2c1, ADDR_24LCxx_Read, 0, I2C_MEMADD_SIZE_8BIT, ReadBuffer, BufferSize, 0x10);
    if (memcmp(WriteBuffer, ReadBuffer, BufferSize) == 0) /* check data */
    {
        Test = 3;
    }
    else
    {
        Test = 4;
    }
}
You should step into the function HAL_I2C_Mem_Write to understand why it does not return HAL_OK. In particular, check exactly what it returns; that will help you.
Looking at your code, I am confident that the issue is with the I2C address. In the AT24C256 datasheet, they say that the I2C address is:
1 0 1 0 0 A1 A2 R/W
Assuming that you connected the pins A1 and A2 to GND, the I2C address is:
1 0 1 0 0 0 0 R/W
In hex, the I2C address is 0xA0. So, change your address definition as follows:
#define ADDR_24LCxx 0xA0
And in the HAL functions:
HAL_I2C_Mem_Write(&hi2c1, ADDR_24LCxx, 0, I2C_MEMADD_SIZE_8BIT,WriteBuffer,BufferSize, 100)
HAL_I2C_Mem_Read(&hi2c1, ADDR_24LCxx, 0, I2C_MEMADD_SIZE_8BIT,ReadBuffer,BufferSize, 100)
Please note that I have also increased the timeout to 100 ms. For testing, you don't really want to run into timeout issues...
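For context, the STM32 HAL documentation states that the 7-bit device address must already be shifted left by one when passed to the HAL_I2C_* functions; HAL manages the R/W bit itself. A minimal sketch of deriving the value (assuming the address pins are tied to GND):

/* 7-bit address from the datasheet: 1 0 1 0 0 0 0 -> 0x50 with the address pins low */
#define EEPROM_7BIT_ADDR  0x50
/* shifted form expected by HAL_I2C_Mem_Write/HAL_I2C_Mem_Read */
#define ADDR_24LCxx       (EEPROM_7BIT_ADDR << 1)  /* == 0xA0 */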

Thrift TThreadPoolServer returns "Segmentation fault; core dumped;" when handling concurrent MySQL database requests

I'm developing a simple assignment using Apache Thrift and the C++ POCO library. The assignment requires me to benchmark the server by creating multiple concurrent threads that make the same request to the Thrift TThreadPoolServer.
This is my client-side code, where I create 10 concurrent threads, all of which make the same GET request (requesting info for a single user, D) to the server:
// define a MyWorker class which inherits Runnable
class MyWorker : public Runnable {
public:
    MyWorker(int k = -1) : Runnable(), n(k) {
    }

    void run() {
        // create connection
        boost::shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
        boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
        boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
        APIsClient client(protocol);
        try {
            transport->open();
            int res = -1;
            // make the request
            res = client.get("D");
            printf("Thread %d, res = %d \n", n, res);
            transport->close();
        } catch (TException& te) {
            cout << te.what() << endl;
        }
    }

private:
    int n;
};
void handleBenchmarkTest(const std::string& name, const std::string& value) {
    //TODO!
    const int N = 10;
    MyWorker w[N];
    for (int i = 0; i < N; i++) w[i] = MyWorker(i);
    Thread t[N];
    for (int i = 0; i < N; i++) t[i].start(w[i]);
    for (int i = 0; i < N; i++) t[i].join(); // wait for all threads to end
    cout << endl << "Threads joined" << endl;
}
I implemented my server using TThreadPoolServer. This is the handler function invoked when the server receives a GET request:
// this function make use of POCO::Data
int getRequest(const std::string& _username) {
    int res = -1;
    Statement select(*mySQLsession);
    std::string match("'" + _username + "'");
    select << "SELECT counter FROM view_count_info WHERE username = " + match + " LIMIT 1;", into(res);
    select.execute();
    return res;
}
That is all of my code. When I run the client-side benchmark app, this is what is returned:
MySQL // from the try-catch block above
MySQL // from the try-catch block above
Thread 2, res = -1 // expected result
MySQL // from the try-catch block above
Thrift: Fri Jun 26 15:54:05 2015 TSocket::read() recv() <Host: localhost Port: 9090>Connection reset by peer
No more data to read.
THRIFT_ECONNRESET
No more data to read.
No more data to read.
Thrift: Fri Jun 26 15:54:05 2015 TSocket::read() recv() <Host: localhost Port: 9090>Connection reset by peer
THRIFT_ECONNRESET
Thrift: Fri Jun 26 15:54:05 2015 TSocket::read() recv() <Host: localhost Port: 9090>Connection reset by peer
THRIFT_ECONNRESET
Threads joined
In the server, the result is:
2015-06-26 08:54:00 : > Server is running
2015-06-26 08:54:05 : handle GET request
D
2015-06-26 08:54:05 : handle GET request
D
2015-06-26 08:54:05 : handle GET request
D
-1
2015-06-26 08:54:05 : handle GET request
2015-06-26 08:54:05 : handle GET request
D
2015-06-26 08:54:05 : handle GET request
D
2015-06-26 08:54:05 : handle GET request
D
D
RUN FINISHED; Segmentation fault; core dumped; real time: 5s; user: 0ms; system: 0ms
I don't know why this happened. One more thing: when I change the server not to make a MySQL request (instead, I just return a random integer for each request), the app runs well without any errors or warnings. So I guess the problem is with the MySQL database. It does work if I only make one request at a time, but something goes wrong when multiple concurrent GET requests are made.
Thanks to @JensG, I found out that the problem was that I used a global variable mySQLsession to handle all the MySQL database requests, which caused the threads to conflict. Thank you all again!
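For reference, a minimal sketch of the fix under that diagnosis: create a Session per request (or per thread) instead of sharing one global mySQLsession across the thread pool. The connection string is made up; adapt it to your setup:

#include "Poco/Data/Session.h"
#include "Poco/Data/Statement.h"
#include "Poco/Data/MySQL/Connector.h"

using namespace Poco::Data::Keywords;

// Call Poco::Data::MySQL::Connector::registerConnector() once at startup.
int getRequest(const std::string& _username) {
    // One Session per request; Session objects are not safe to share across threads.
    Poco::Data::Session session("MySQL", "host=localhost;user=test;password=test;db=testdb");
    int res = -1;
    std::string username = _username;  // use() needs a non-const lvalue
    Poco::Data::Statement select(session);
    select << "SELECT counter FROM view_count_info WHERE username = ? LIMIT 1",
        use(username), into(res);      // bound parameter instead of string concatenation
    select.execute();
    return res;
}

A Poco::Data::SessionPool would also work and avoids reconnecting on every request.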

Capturing user data using pcap in C

I want to send my own data, 1024 bytes in size, through a particular interface. Suppose we have two hosts, one sending and one receiving.
The receiving host uses pcap to receive the data from the other host. As far as I know, pcap receives packets echoed from the interface.
Here I want my own data to be received. How can I achieve that? I'm a beginner, so please help me figure out how to deal with pcap.
Actually, I want to receive all the data on one host, save it, and later forward it to the actual destination.
Is this possible using pcap?
client:
import socket
import select
import sys
import time
import json
import os
import pickle
from time import sleep

c = 1
while c:
    if os.path.exists('/home/mininet/save.txt'):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        count = 5
        while count:
            f1 = open("/home/mininet/save.txt", "r")
            d = f1.read()
            f1.close()
            if d.strip():
                d = int(d)
                f = open("/home/mininet/test1.txt", "rb")
                l = f.read(1024)
                s.sendto(l, ("10.0.0.1", 9999))
                count = count - 1
                time.sleep(d)
        c = 0
        s.close()
This is the client part; the server has a corresponding program to receive the data.
At first, the client and server were connected directly. Then that link was broken and a host was placed in between them to monitor the traffic.
Each piece of data should reach the newly created host, and that host should then redirect the data to the server. I want to achieve this using pcap.
Here is an example:
/*
 * Use pcap_open_live() to open a packet capture device.
 * Use pcap_dump() to output the packet capture data in
 * binary format to a file for processing later.
 */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgen.h>
#include <pcap.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define IFSZ 16
#define FLTRSZ 120
#define MAXHOSTSZ 256
#define PCAP_SAVEFILE "./pcap_savefile"
int
usage(char *progname)
{
    printf("Usage: %s <interface> [<savefile name>]\n", basename(progname));
    exit(11);
}
int
main(int argc, char **argv)
{
    pcap_t *p;                      /* packet capture descriptor */
    struct pcap_stat ps;            /* packet statistics */
    pcap_dumper_t *pd;              /* pointer to the dump file */
    char ifname[IFSZ];              /* interface name (such as "en0") */
    char filename[80];              /* name of savefile for dumping packet data */
    char errbuf[PCAP_ERRBUF_SIZE];  /* buffer to hold error text */
    char lhost[MAXHOSTSZ];          /* local host name */
    char fltstr[FLTRSZ];            /* bpf filter string */
    char prestr[80];                /* prefix string for errors from pcap_perror */
    struct bpf_program prog;        /* compiled bpf filter program */
    int optimize = 1;               /* passed to pcap_compile to do optimization */
    int snaplen = 80;               /* amount of data per packet */
    int promisc = 0;                /* do not change mode; if in promiscuous */
                                    /* mode, stay in it, otherwise, do not */
    int to_ms = 1000;               /* timeout, in milliseconds */
    int count = 20;                 /* number of packets to capture */
    bpf_u_int32 net = 0;            /* network IP address */
    bpf_u_int32 mask = 0;           /* network address mask */
    char netstr[INET_ADDRSTRLEN];   /* dotted decimal form of address */
    char maskstr[INET_ADDRSTRLEN];  /* dotted decimal form of net mask */
    int linktype = 0;               /* data link type */
    int pcount = 0;                 /* number of packets actually read */
    /*
     * For this program, the interface name must be passed to it on the
     * command line. The savefile name may be optionally passed in
     * as well. If no savefile name is passed in, "./pcap_savefile" is
     * used. If there are no arguments, the program has been invoked
     * incorrectly.
     */
    if (argc < 2)
        usage(argv[0]);

    if (strlen(argv[1]) >= IFSZ) {  /* >= leaves room for the terminating NUL */
        fprintf(stderr, "Invalid interface name.\n");
        exit(1);
    }
    strcpy(ifname, argv[1]);

    /*
     * If there is a second argument (the name of the savefile), save it in
     * filename. Otherwise, use the default name.
     */
    if (argc >= 3)
        strcpy(filename, argv[2]);
    else
        strcpy(filename, PCAP_SAVEFILE);
    /*
     * Open the network device for packet capture. This must be called
     * before any packets can be captured on the network device.
     */
    if (!(p = pcap_open_live(ifname, snaplen, promisc, to_ms, errbuf))) {
        fprintf(stderr, "Error opening interface %s: %s\n",
                ifname, errbuf);
        exit(2);
    }

    /*
     * Look up the network address and subnet mask for the network device
     * given on the command line. The network mask will be used later
     * in the call to pcap_compile().
     */
    if (pcap_lookupnet(ifname, &net, &mask, errbuf) < 0) {
        fprintf(stderr, "Error looking up network: %s\n", errbuf);
        exit(3);
    }

    /*
     * Create the filter and store it in the string called 'fltstr.'
     * Here, you want only incoming packets (destined for this host),
     * which use port 69 (tftp), and originate from a host on the
     * local network.
     */

    /* First, get the hostname of the local system */
    if (gethostname(lhost, sizeof(lhost)) < 0) {
        fprintf(stderr, "Error getting hostname.\n");
        exit(4);
    }

    /*
     * Second, get the dotted decimal representation of the network address
     * and netmask. These will be used as part of the filter string.
     */
    inet_ntop(AF_INET, (char *)&net, netstr, sizeof netstr);
    inet_ntop(AF_INET, (char *)&mask, maskstr, sizeof maskstr);

    /* Next, put the filter expression into the fltstr string. */
    sprintf(fltstr, "dst host %s and src net %s mask %s and udp port 69",
            lhost, netstr, maskstr);

    /*
     * Compile the filter. The filter will be converted from a text
     * string to a bpf program that can be used by the Berkeley Packet
     * Filtering mechanism. The fourth argument, optimize, is set to 1 so
     * the resulting bpf program, prog, is compiled for better performance.
     */
    if (pcap_compile(p, &prog, fltstr, optimize, mask) < 0) {
        /*
         * Print out appropriate text, followed by the error message
         * generated by the packet capture library.
         */
        fprintf(stderr, "Error compiling bpf filter on %s: %s\n",
                ifname, pcap_geterr(p));
        exit(5);
    }
    /*
     * Load the compiled filter program into the packet capture device.
     * This causes the capture of the packets defined by the filter
     * program, prog, to begin.
     */
    if (pcap_setfilter(p, &prog) < 0) {
        /* Copy appropriate error text to prefix string, prestr */
        sprintf(prestr, "Error installing bpf filter on interface %s",
                ifname);
        /*
         * Print error to screen. The format will be the prefix string,
         * created above, followed by the error message that the packet
         * capture library generates.
         */
        pcap_perror(p, prestr);
        exit(6);
    }

    /*
     * Open dump device for writing packet capture data. In this sample,
     * the data will be written to a savefile. The name of the file is
     * passed in as the filename string.
     */
    if ((pd = pcap_dump_open(p, filename)) == NULL) {
        /*
         * Print out error message if pcap_dump_open failed. This will
         * be the below message followed by the pcap library error text,
         * obtained by pcap_geterr().
         */
        fprintf(stderr,
                "Error opening savefile \"%s\" for writing: %s\n",
                filename, pcap_geterr(p));
        exit(7);
    }

    /*
     * Call pcap_dispatch() to read and process a maximum of count (20)
     * packets. For each captured packet (a packet that matches the filter
     * specified to pcap_compile()), pcap_dump() will be called to write
     * the packet capture data (in binary format) to the savefile specified
     * to pcap_dump_open(). Note that packet in this case may not be a
     * complete packet. The amount of data captured per packet is
     * determined by the snaplen variable which is passed to
     * pcap_open_live().
     */
    if ((pcount = pcap_dispatch(p, count, &pcap_dump, (u_char *)pd)) < 0) {
        /*
         * Print out appropriate text, followed by the error message
         * generated by the packet capture library.
         */
        sprintf(prestr, "Error reading packets from interface %s",
                ifname);
        pcap_perror(p, prestr);
        exit(8);
    }
printf("Packets received and successfully passed through filter: %d.\n",
pcount);
/*
* Get and print the link layer type for the packet capture device,
* which is the network device selected for packet capture.
*/
if (!(linktype = pcap_datalink(p))) {
fprintf(stderr,
"Error getting link layer type for interface %s",
ifname);
exit(9);
}
printf("The link layer type for packet capture device %s is: %d.\n",
ifname, linktype);
/*
* Get the packet capture statistics associated with this packet
* capture device. The values represent packet statistics from the time
* pcap_open_live() was called up until this call.
*/
if (pcap_stats(p, &ps) != 0) {
fprintf(stderr, "Error getting Packet Capture stats: %s\n",
pcap_geterr(p));
exit(10);
}
/* Print the statistics out */
printf("Packet Capture Statistics:\n");
printf("%d packets received by filter\n", ps.ps_recv);
printf("%d packets dropped by kernel\n", ps.ps_drop);
/*
* Close the savefile opened in pcap_dump_open().
*/
pcap_dump_close(pd);
/*
* Close the packet capture device and free the memory used by the
* packet capture descriptor.
*/
pcap_close(p);
}
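To try the example (assuming libpcap and its headers are installed, eth0 is the capture interface, and the source file is saved as pcap_example.c; live capture normally requires root):
$ gcc -o pcap_example pcap_example.c -lpcap
$ sudo ./pcap_example eth0 ./pcap_savefile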