Can tcpdump be combined with any DPI library for deep packet flow analysis?
For example: I need Skype protocol flow details and header details.
Yes, I tried Wireshark, but it doesn't analyse deep protocols like Skype, WhatsApp, etc. Please suggest possible solutions.
For Skype: if the Skype dissector already in Wireshark doesn't dissect enough of the protocol, read the documentation written by the people who reverse-engineered the undocumented Skype protocol, and enhance that dissector to cover more of the protocol.
For WhatsApp: add this WhatsApp plugin to Wireshark.
Is there a way to stream video and audio on a website just to the clients, using a camera installed on the server? For instance, like YouTube does?
I've started reading about WebRTC, but if I use WebRTC it seems I should create a STUN/TURN server and other things, which I think is not necessary for a one-way stream (this is just my understanding), because I don't need anything from the clients, literally neither their video nor their audio.
So is there a way to achieve this using HTML5, streaming in just one direction:
server (camera) -> clients
Is there something about this out there, or should I stick with WebRTC?
I'm going to explain one possible solution for this scenario. There might be others, but I hope mine gives you a rough idea of how you could do it and a starting point to explore the amazing possibilities of WebRTC. Please let me know if something is unclear.
So, WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. Sweet. On top of that, WebRTC has quite good browser support (though not in every browser; Safari only started supporting it a month ago, with Safari 11). But in this case we want to use WebRTC on the server side. At the end of the day we can still think of it as peer-to-peer real-time communication, where one of our peers is the server.
I don't know if you are familiar with Node.js, but I recommend writing your server app with it (<3 JavaScript!):
There are a few libraries that wrap WebRTC functionality for use on the server side, like node-webrtc and node-rtc-peer-connection.
But I recommend you take a look at electron-webrtc, since the others might be using deprecated methods or be incomplete. electron-webrtc runs a headless Electron client in the background to use Chromium's built-in WebRTC implementation. So with it you should be able to access the camera on your server and create a stream to be served to the other peer (the browser).
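To give you a rough idea (this is a sketch, not a complete app), creating the server-side peer with node-webrtc (the wrtc package) looks roughly like the snippet below; electron-webrtc exposes an equivalent RTCPeerConnection. How you actually obtain the camera stream and how you exchange messages with the browser are placeholders here, not real APIs of these libraries.

    // Minimal server-side sketch, assuming `npm install wrtc` (node-webrtc).
    const { RTCPeerConnection, RTCSessionDescription } = require('wrtc');

    const pc = new RTCPeerConnection();

    // Placeholder: obtaining the camera MediaStream on the server depends on the
    // library you pick (electron-webrtc can call getUserMedia inside its hidden
    // Electron window, for example).
    // cameraStream.getTracks().forEach(track => pc.addTrack(track, cameraStream));

    // When a browser asks for the stream (over your signaling channel),
    // create an offer and send it back over that same channel.
    async function handleViewerRequest(sendToViewer) {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToViewer({ type: 'offer', sdp: pc.localDescription.sdp });
    }

    // When the viewer's answer comes back through signaling:
    async function handleViewerAnswer(answer) {
      await pc.setRemoteDescription(new RTCSessionDescription(answer));
    }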
All of the above covers the WebRTC-related tasks, in this case: streaming video peer (server) to peer (browser).
Now, let's talk about the signaling process, STUN and TURN.
Signaling: imagine now a peer-to-peer scenario with two browsers that want to establish a direct connection and stream video and audio between each other. But they don't know each other; just as, if I don't know your home address, I can't send you a letter. So they need a service that helps them find each other, so each can get the other's IP. This is done by what is called a "signaling server". If you somehow already know the other peer's IP, you wouldn't need a signaling server.
STUN/TURN: the scheme above works perfectly in a local area network where each peer has its own IP address and there are no firewalls or routers between them. But otherwise you can have peers behind NAT or firewalls, and then your signaling server alone won't be enough for both peers to reach each other. If you have peers behind NAT, you'll need a STUN server, and if you have peers behind restrictive firewalls you'll need a TURN server. This is a bit simplified, but I just want you to have the general picture of when you might need STUN/TURN servers.
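To make that concrete, STUN and TURN servers are just entries in the RTCPeerConnection configuration on each peer. A minimal browser-side sketch (the URLs and credentials below are placeholders, not real servers):

    // STUN/TURN servers are passed to the peer connection as iceServers.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.example.org:3478' },   // lets a peer behind NAT discover its public address
        {
          urls: 'turn:turn.example.org:3478',     // relays media when no direct path can be found
          username: 'user',
          credential: 'secret'
        }
      ]
    });

If there is no NAT or firewall in the way, you can simply omit the iceServers entry.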
To better understand signaling, STUN and TURN, there is a very well-illustrated article that explains them perfectly.
Now, for your scenario:
I think you probably don't need STUN/TURN servers, and you probably don't need to implement the full signaling process either, because the browsers that are supposed to receive the stream from the server will already know that server's address, right? So they can establish a WebRTC connection with it.
EDIT: it is likely that you will still need to implement some sort of handshake between the server and the clients (browsers), and this is the signaling process. It is not part of WebRTC itself, which is why you need to implement it yourself. As I said, it is the way two peers discover each other, but they also exchange information about their local media conditions, such as codecs, resolutions they can handle, etc. For your case, your signaling server could be hosted on the same server you use to stream: you can build a small Node.js app that runs there and manages the whole signaling process easily; it is not a big deal. I recommend you read this article, and especially the section "How can I build a signaling service?". In general, all the WebRTC articles on that site are very helpful.
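As a rough idea of how small that signaling piece can be, here is a sketch of a relay using the ws WebSocket package; it just forwards offers, answers and ICE candidates between the streaming server and the browsers. The message format is entirely up to you, since signaling is not specified by WebRTC.

    // Minimal signaling relay sketch, assuming `npm install ws`.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', (message) => {
        // Relay the signaling message (SDP offer/answer or ICE candidate)
        // to every other connected client.
        wss.clients.forEach((client) => {
          if (client !== socket && client.readyState === WebSocket.OPEN) {
            client.send(message.toString());
          }
        });
      });
    });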
Does this make sense to you? I think with it you can start digging a little bit more and see whether this is enough or you need to implement more stuff. Hope it helps!
I have heard about using voice recognition to authenticate users. I wanted to use it in an HTML5 program right away, but no matter how many Google searches I did, I couldn't figure out how to program it. Could anyone please help me? I have not written any code yet!
Authentication requires that the server shares a secret with the user. This is often an asymmetric secret, where one side can only verify, and not reproduce, the other side's proof.
In the case of voice authentication, this is typically represented as a cryptographic (hard to reverse-engineer) fuzzy (not an exact match - you'll never sound exactly the same) hash (numeric representation) of a spoken passphrase. This offers stronger authentication than a pure password, because in addition to knowing the password you need to produce the right voice. You may find resources for speech-to-text APIs, but using these will not give you any additional security over a password approach.
In the end you're going to need to do the authentication on the server, so if you find a good library to do your hashing, you could stream the sound bytes to the server (lots of ways to do this) and do your hashing & authentication server side.
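To make the "stream the sound bytes to the server and hash there" idea concrete, here is a rough sketch using only Node.js built-ins. The /voice-login endpoint and the plain SHA-256 digest are placeholders of my own: two recordings of the same phrase never produce identical bytes, so a real system needs a fuzzy voiceprint comparison from a voice-biometrics library in place of the hash.

    // Server-side sketch: receive the recorded audio and compute a digest.
    const http = require('http');
    const crypto = require('crypto');

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url === '/voice-login') {
        const hash = crypto.createHash('sha256');
        req.on('data', (chunk) => hash.update(chunk)); // stream the audio bytes
        req.on('end', () => {
          const digest = hash.digest('hex');
          // Placeholder: a real system compares a voiceprint, not this digest.
          res.end(JSON.stringify({ received: true, digest }));
        });
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(3000);

    // Browser side, for reference: record with MediaRecorder and POST the blob.
    //   const recorder = new MediaRecorder(microphoneStream);
    //   recorder.ondataavailable = (e) =>
    //     fetch('/voice-login', { method: 'POST', body: e.data });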
This isn't a step-by-step how-to answer, but coming up with a complete design will depend a lot on other requirements of the system. Search for solutions under "voice biometrics", such as Is there a voice authentication library?
I'm tracing packets between two agents. One is Chrome on a Mac, the other is Chrome Beta on Android. They're communicating via a reference site, apprtc.appspot.com, and I managed to save some logs from it (please download it, or it will only be displayed as source code). While doing so, I also captured packets in Wireshark while the two agents were communicating over WebRTC.
Using the display filter stun || udp, lots of Binding requests and responses can be found.
Basically, the RFC says:
An agent can respond to an initial offer at any point while gathering candidates...
thus allowing the remote party to also start forming checklists and performing
connectivity checks.
But I just can't see any sign of SDP, like an offer or answer being sent between them, which I can find in the JS log above. For cross-reference, I hope to work out the right order of the entire communication.
Here's the Wireshark capture file (it's kind of big).
Chrome uses TLS to encrypt the signaling packets. And if the communication is directly between the peers, the only way to see the signaling is to look at Chrome's console logs. They should show the offer/answer exchange of the SDP. I am assuming it's using SIP as the signaling protocol, and you should be seeing it in the console.
If there is an intermediary between the peers, like FreeSWITCH or any other SIP server, it may be possible to debug it better, as they have the keys to decrypt the traffic and inspect the raw text messages.
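If the page itself doesn't log the SDP, one trick (run it in the DevTools console before starting the call) is to wrap the RTCPeerConnection description setters so every offer and answer gets printed; chrome://webrtc-internals also shows the exchanged SDP once a call is up.

    // Paste into the DevTools console *before* the call starts.
    ['setLocalDescription', 'setRemoteDescription'].forEach((name) => {
      const original = RTCPeerConnection.prototype[name];
      RTCPeerConnection.prototype[name] = function (description) {
        if (description && description.sdp) {
          console.log(name + ' (' + description.type + '):\n' + description.sdp);
        }
        return original.apply(this, arguments);
      };
    });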
I need to trace/sniff HTTP traffic from other machines (for example from my Android phone or iOS device). In the past I used MSSOAPT (described here: http://www.devproconnections.com/article/net-framework2/microsoft-soap-trace-tool) and it was perfect. I need something similar, and now it should have syntax highlighting for JSON and be able to unzip content :).
I would like to tell my Android device to go to http://my.machine.home/Foo?bar, and this proxy should forward the request to another server, such as http://google.com/Foo?bar, and print the complete traffic.
I would prefer a solution that does not require proxy configuration on my device, but simply forwards all requests sent to the process.
And it does not have to be free.
If you are on Windows, Fiddler is a great free tool.
On other platforms, you could use the free and open-source WebScarab. The UI is not as easy to use as Fiddler's, and although it also runs on Windows, I rather prefer Fiddler there.
Look at Charles Proxy; it is very easy to use and fulfils all your requirements.
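If none of these tools fit, the forward-and-log behaviour described in the question can also be approximated in a few lines of Node.js. This is only a sketch: the upstream host is a placeholder, and it ignores HTTPS, gzip and other edge cases.

    // Tiny forward-and-log proxy: point the device at this machine on port 8080.
    const http = require('http');

    const TARGET = 'www.example.com'; // placeholder for the real upstream host

    http.createServer((clientReq, clientRes) => {
      console.log('> ' + clientReq.method + ' ' + clientReq.url);
      console.log(clientReq.headers);

      const upstream = http.request({
        host: TARGET,
        path: clientReq.url,
        method: clientReq.method,
        headers: { ...clientReq.headers, host: TARGET }
      }, (upstreamRes) => {
        console.log('< ' + upstreamRes.statusCode);
        console.log(upstreamRes.headers);
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.on('data', (chunk) => process.stdout.write(chunk)); // log the body
        upstreamRes.pipe(clientRes);                                    // and still forward it
      });

      clientReq.pipe(upstream); // forward the request body, if any
    }).listen(8080);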
I've been playing with HTML5 location lookups recently, and it's relatively straightforward to pull someone's location from a device like an iPhone.
I want to write an app that uses location data, but it's important that the location be factual. In other words, I need to prevent people from authoring a fake post to the backing website / web service with mocked-up GPS coordinates.
Is there any way to collect GPS coordinates from a mobile device using the HTML5 geolocation APIs and securely transmit them back to a web service in a way that someone wouldn't be able to author a post with the same data and "game the system", so to speak?
Not without some serious encryption of the payload on the client, which, if there is money involved, someone will reverse-engineer to figure out how to create valid payloads themselves. Remember, if there is money or fame involved, somebody will think the effort to do something like this is "worth it". If your web service is public and not using some kind of encryption, nothing on the client will ensure that someone with a network connection can't sniff your protocol and fake whatever data they want. And SSL won't cut it: anyone can proxy the SSL connection on their local network, decrypt the payload, and inspect it to their heart's content.
No. Completely agree with the answer from fuzzy lollipop. If you’re talking to a remote machine, the data can always be faked. Always always. What makes you certain you’re even talking to a mobile device at all? The User-Agent string? Pfft, it can be faked. Talking to a GPS? Pfft, could be coming from a predefined path. Talking to a web browser? Pfft, could be a bot, or some other malware.
And don’t think encryption (i.e. HTTPS) is going to help you. The client could edit any of your HTML, CSS, or JavaScript on-the-fly — take Firebug or Greasemonkey for example.
The reasons why you can’t trust the client are the same as the reasons why exploits such as SQL or HTML injection are so common. Ever heard the phrase “the customer is always right”? Well, the customer may be right, but the client is always untrustworthy.
The system is there to be gamed. As flaws are discovered, you patch them one by one. It’s more like leapfrog, rather than achieving the holy grail. Bruce Schneier’s quip “security is a process, not a product” comes to mind. Asking for a system that “can’t be gamed” is missing the point. What you need to be doing is creating a system where the server sanitises the data, and/or rejects bad data — fuzz testing is not a bad idea, either.
That’s about the best you can do without shipping custom untamperable mobiles to your customers with the OS in ROM, and the inside sealed with epoxy.
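As a concrete example of "the server sanitises the data, and/or rejects bad data", here is a sketch of the kind of plausibility checks a server could run on submitted coordinates. The thresholds are made up for illustration, and none of this proves the data is genuine; it only filters out malformed or physically impossible submissions.

    // Server-side plausibility check for reported positions.
    const MAX_SPEED_KMH = 1000; // made-up limit, faster than any car or train

    function isPlausible(prev, next) {
      // Basic range checks on latitude/longitude.
      if (Math.abs(next.lat) > 90 || Math.abs(next.lon) > 180) return false;
      if (!prev) return true; // first report: nothing to compare against

      // Haversine distance between the two reports, in kilometres.
      const toRad = (deg) => (deg * Math.PI) / 180;
      const R = 6371;
      const dLat = toRad(next.lat - prev.lat);
      const dLon = toRad(next.lon - prev.lon);
      const a = Math.sin(dLat / 2) ** 2 +
                Math.cos(toRad(prev.lat)) * Math.cos(toRad(next.lat)) *
                Math.sin(dLon / 2) ** 2;
      const distanceKm = 2 * R * Math.asin(Math.sqrt(a));

      // Reject jumps that would require impossible travel speed.
      const hours = (next.timestamp - prev.timestamp) / 3.6e6; // ms -> hours
      return hours > 0 && distanceKm / hours <= MAX_SPEED_KMH;
    }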