I'm using libpcap to write a sniffer program. For starters, I referred to tutorials on the net from various programmers on how to write a basic sniffer program using libpcap, which captures packets only from the Ethernet connection.
I've been searching a lot for how to write a program using libpcap to capture packets from the Wi-Fi connection, but I'm not finding anything that helps.
Do I need to change some settings on my system to make sure that libpcap can capture the packets? The function pcap_lookupdev() points to the default device, which is eth0.
You either need to hardcode the name of the Wi-Fi device (probably wlan0) into your program, or give it a UI option (command-line flag, etc.) to let the user specify the device on which to capture traffic.
There is no setting on your system that will change the device returned by pcap_lookupdev().
Tcpdump and Wireshark/TShark/etc. have a -i command-line option to specify the device on which to capture, and Wireshark has a GUI dialog to allow the user to specify it. They don't rely on pcap_lookupdev() if the user specifies the device explicitly.
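As a sketch, a device-from-the-command-line version in C might look like this (the wlan0 fallback is just a guess, as above; error handling kept minimal):

    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* Let the user name the device, tcpdump-style; fall back to wlan0. */
        const char *dev = (argc > 1) ? argv[1] : "wlan0";

        pcap_t *handle = pcap_open_live(dev, 65535, 1, 1000, errbuf);
        if (handle == NULL) {
            fprintf(stderr, "pcap_open_live(%s): %s\n", dev, errbuf);
            return 1;
        }
        /* ... pcap_loop()/pcap_next_ex() as in the Ethernet tutorials ... */
        pcap_close(handle);
        return 0;
    }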
Note that if you're capturing on Wi-Fi, you will, by default, only capture traffic to and from your machine. If you want to capture all the traffic on your network, including traffic to and from other machines, you will need to capture in monitor mode. Newer versions of libpcap have APIs to support that, but they're only guaranteed to work on OS X; for various complicated reasons, they may or may not work on Linux (which, given the device name eth0, I assume you're using). Until that's fixed, you'd need to use something such as aircrack-ng to turn on monitor mode - see the section on Linux in the WLAN capture setup page on the Wireshark Wiki for information on that.
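A sketch of those newer APIs, with the caveats above (pcap_create()/pcap_set_rfmon() are available in libpcap 1.0 and later; whether activation actually succeeds in monitor mode depends on the OS and driver):

    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *handle = pcap_create("wlan0", errbuf);  /* device name is a guess */
        if (handle == NULL) {
            fprintf(stderr, "pcap_create: %s\n", errbuf);
            return 1;
        }
        if (pcap_can_set_rfmon(handle) == 1)
            pcap_set_rfmon(handle, 1);          /* request monitor mode */
        pcap_set_snaplen(handle, 65535);
        pcap_set_timeout(handle, 1000);

        int status = pcap_activate(handle);     /* may fail on Linux, per above */
        if (status < 0) {
            fprintf(stderr, "pcap_activate: %s\n", pcap_statustostr(status));
            pcap_close(handle);
            return 1;
        }
        /* ... capture as usual ... */
        pcap_close(handle);
        return 0;
    }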
I am trying to run some simple SPARC tests on bare-metal QEMU. I am using qemu-sparc64 -g 1234 simple_example and it seems to be working fine (I can connect gdb to localhost:1234, step through, etc.), but I was wondering: what does qemu-system-sparc64 do? I tried running it with the same command-line switches but got some errors. Any help is appreciated, thank you.
For any QEMU architecture target, the qemu-system-foo binary runs a complete system emulation of the CPU and all other devices that make up a machine using that CPU type. It typically is used to run a guest OS kernel, like Linux; it can run other bare-metal guest code too.
The qemu-foo binary (sometimes also named qemu-foo-static if it has been statically linked) is QEMU's "user-mode" or "linux-user" emulation. This expects to run a single Linux userspace binary, and it translates all the system calls that process makes into direct host system calls.
If you're running qemu-sparc64 then you are not running your program in a bare-metal environment -- it's a proper Linux userspace process, even if you're not necessarily using all of the facilities that allows. If you want bare metal then you need qemu-system-sparc64, but your program needs to actually be compiled to run correctly on the specific machine type that you tell QEMU to emulate (e.g. the sun4u hardware, which is the default). Also, by default qemu-system-sparc64 will run the OpenBIOS firmware, so your bare-metal guest code needs to either run under that OpenBIOS environment, or else you need to tell QEMU not to run the BIOS (and then you get to deal with all the hardware setup that the BIOS would do for you).
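For concreteness, the two invocations look roughly like this (a sketch; simple_example is from the question, while the ELF name and the debug flags are assumptions):

    # user-mode emulation: simple_example is a SPARC64 Linux binary, run as an
    # ordinary host process with its system calls translated to host syscalls
    qemu-sparc64 -g 1234 ./simple_example

    # full-system emulation: QEMU models an entire sun4u machine (CPU, devices,
    # OpenBIOS firmware); your code must be built for that hardware
    qemu-system-sparc64 -M sun4u -nographic -kernel ./baremetal.elf -S -s
    # -S freezes the CPU at startup; -s makes QEMU listen for gdb on tcp::1234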
Since QEMU user-mode emulation doesn't support the ptrace system call, I am trying to debug a QEMU user-mode emulated process via QEMU's gdbstub, using another gdb instance to connect to it via target remote :1234.
This works fine for basic commands like si to single-step instructions, but I cannot set breakpoints on symbols in the main emulated executable, such as main. Simply running break main says the breakpoint is set at some raw, un-relocated address (like 0x63a), but if I hit c in the gdb client, the symbols for the main executable are never resolved to their real virtual addresses, and the breakpoint is never hit.
Is this a general issue with debugging QEMU user-mode emulated processes, and is there a way to set the breakpoint correctly?
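For reference, the setup described above looks roughly like this (the binary and cross-gdb names are placeholders):

    # terminal 1: run the process under user-mode emulation, gdbstub on port 1234
    qemu-sparc64 -g 1234 ./simple_example

    # terminal 2: attach gdb to the stub
    sparc64-linux-gnu-gdb ./simple_example
    (gdb) target remote :1234
    (gdb) si            # single-stepping works
    (gdb) break main    # resolves to a raw offset such as 0x63a and is never hit
    (gdb) c

One guess worth checking (an assumption, not something established above): an address like 0x63a suggests the executable is position-independent, so it is loaded at a non-zero base that gdb never learns from the stub; relinking with -no-pie would give main an absolute link-time address.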
I’d like to be able to create a GUI that can be viewed over the network by a remote client.
One approach is to code the whole GUI in HTML5 and run it from a server such as Apache; the main difficulty with this is that the GUI includes at least one, sometimes two, windows containing live video streams (without any sound) and there doesn’t seem to be a good way of streaming live video into HTML5 - especially as it really needs to be live; a few seconds’ latency would be unacceptable.
Another approach (which I’ve done already, and actually works pretty well) is just to code the GUI as a desktop application (for example using Qt), and then to view the desktop remotely using VNC or Windows Remote Desktop. This gives the required responsiveness and lack-of-latency, but has the disadvantage that the whole OS desktop is accessible and not just my one application.
So, here is my question: is there a mechanism or a framework available that would enable me to use RFB (i.e. the protocol underlying VNC) or RDP (the protocol underlying Windows Remote Desktop) to provide remote access to a single GUI application rather than a whole desktop?
Comparing RDP and RFB, the main difference is that RDP mostly sends drawing metadata, whereas RFB sends the whole frame buffer of the screen, so RFB is slower than RDP. VNC uses RFB, while Windows applications like Lync use RDP.
You can see a simple RDP example here: http://sandaruwmp.blogspot.com/2014/05/remote-desktop-application-with-rdp.html
You can in fact create an application that shares only a single application, and you can also combine RDP with other protocols. The application at https://github.com/sandaru/RDAPP uses RDP together with TCP and lets you select a single application to show: it shares the desktop via RDP and listens on a TCP port, over which you can send commands such as "stop selected processes", "focus single application", and "share whole window"; the RDP side reacts according to the TCP requests.
I hope this will be useful for you.
NOTE: the source above does not contain any NAT traversal mechanism.
Assume you want to connect your Ubuntu 13.04 desktop computer via a TTL-232R-3V3 USB cable to the UART interface of an embedded system running an individual Linux flavor that does not belong to a major distribution. Your own machine offers you an interface to the connection via /dev/ttyUSB0. Because you are using a framework for a high-level language (pySerial), you know that you configure some terminal options via the C struct termios.
Now the question is: where is the terminal you are configuring? Is that information sent to the remote device to configure it? Or do you simply configure how the /dev/ttyUSB0 interface is interpreted by your system? Or is there maybe even some configuration happening in the logic of the UART-to-USB converter cable? And if all three are possible, how would you determine which set of parameters was configured by your termios manipulations on /dev/ttyUSB0?
If it makes things easier to explain, consider the example of LF/CR handling, which, depending on the flags you set, can produce either only LF, only CR, or both (as would be typical for Windows). My question is not limited to these options, though.
Note: I came to this question after I realised that I had already seen some options active that the man page declares as not available in POSIX and Linux.
All the configuration options are settings for the device driver. Most of them are implemented entirely in the driver software, such as echoing, CR-to-LF translation, and raw-vs-cooked mode.
Some of them, such as modes related to RS-232 signals, might be implemented in the device hardware, and the device driver will perform the appropriate device control operations to enable those options.
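As an illustration, a minimal C sketch (the device name /dev/ttyUSB0 is from the question; the baud rate and chosen flags are arbitrary) that touches one setting of each kind: ICRNL, the CR-to-LF input translation handled entirely in the driver, and the baud rate, which the driver pushes down to the device:

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }

        /* Handled purely in the tty driver on the host: translate incoming
           CR to LF; nothing is sent to the remote device for this. */
        tio.c_iflag |= ICRNL;

        /* Handled by the hardware: the driver programs the UART (here the
           USB-to-serial converter) to run at 115200 baud. */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);

        if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }
        close(fd);
        return 0;
    }

In neither case is anything configured on the embedded system itself; pySerial's POSIX backend ends up making essentially these termios calls under the hood.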
I have a set of CUDA apps that write to the console via cout. I have a host machine with VS and the Nsight plug-in, and a target machine with the Nsight service. However, when I execute the console app, it actually runs on the target machine (it literally pops up a console there).
So here's the question: how can I get the console to show up on the host and only the GPU stuff to execute on the target? Is this even possible?
Thanks!
The short answer is that it is currently not possible. The application on the target is executed by the Nsight Monitor process, but Nsight Monitor currently does not forward the output back to the host.
Currently your only option is to take care of it yourself by capturing the output of your application on the target and somehow displaying it on the host, as sketched below.
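A minimal sketch of that workaround (the log file name is an assumption; how you get the file to the host - network share, scp, etc. - is up to you): redirect the process's stdout to a file at startup, so everything written via cout lands in the file.

    #include <stdio.h>

    int main(void)
    {
        /* Redirect stdout to a file on the target. std::cout in a C++ app
           goes through stdout as well (with the default stdio sync),
           so its output lands here too. */
        if (freopen("cuda_app_output.log", "w", stdout) == NULL) {
            perror("freopen");
            return 1;
        }

        printf("launching kernels...\n");   /* placeholder for real output */
        fflush(stdout);                     /* make sure it hits the file */
        return 0;
    }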
If this feature is important to you, I suggest you file a feature request via your Nvidia developer account.
The CUDA application runs entirely on the target machine, so the console or UI for the application will be seen on the target machine only. You can set breakpoints in the GPU code on the VS side (your host machine), and it should break there.
If you feel the application quits too quickly and is not launching the kernels as expected (and you are not hitting the breakpoints), it may be that you have not deployed all the required DLLs on the target machine (e.g. CUDART).