SSH compression for X11 forwarding - configuration

I’m on the East coast of the United States, SSHing into a server on the West coast.
I’ve managed to get X11 forwarding working so I can launch GUI apps for certain tasks where it’s helpful. However, for all the X11-forwarded apps (especially emacs!), there is so much lag between input (keystrokes, mouse clicks, etc.) and response that it sometimes goes from being incredibly frustrating to potentially harmful—when I intend to do A, but B happens because the lag is so great.
Is SSH compression a potential culprit? What kind of compression should I be using?

X11 graphics take up a lot of bandwidth. If your remote host is some distance away (i.e. not on the LAN), then you'll probably suffer sluggishness in your exported X11 applications.
I'm not sure about SSH compression; whether it helps may depend on other factors, such as CPU speed. From the ssh man page:
-C Requests compression of all data (including stdin, stdout,
stderr, and data for forwarded X11 and TCP connections). The
compression algorithm is the same used by gzip(1), and the
“level” can be controlled by the CompressionLevel option for
protocol version 1. Compression is desirable on modem lines and
other slow connections, but will only slow down things on fast
networks. The default value can be set on a host-by-host basis
in the configuration files; see the Compression option.
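For what it's worth, if you want to try compression it can be enabled per invocation or per host; the host name below is just a placeholder, and whether it actually helps is worth benchmarking with and without:

ssh -C -X user@westcoast.example.com

# or persistently, in ~/.ssh/config:
Host westcoast.example.com
    Compression yes
    ForwardX11 yes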
Here are some other workarounds you can use to make things faster:
Instead of interacting with the GUI using X11 forwarding, consider something else that has better optimization/compression, such as VNC or NX/FreeNX.
Use the terminal version of emacs instead of the GUI version.

As you specifically mentioned emacs: there is the command-line option
-nw, --no-window-system
Tell Emacs not to create a graphical frame. If you use
this switch when invoking Emacs from an xterm(1) window,
display is done in that window.
This can be much faster when working over ssh (as it only has to transfer characters, not redraw the whole screen over X11).
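For example (the host name is a placeholder), you can launch the terminal version directly over ssh; -t makes ssh allocate the pseudo-terminal that the text UI needs:

ssh -t user@westcoast.example.com emacs -nw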

Is Google's Webspeech server request-limiting me, and is there a fix?

I've been writing an extension that allows the user to issue voice commands to control their browser, and things were going great until I hit a catastrophic problem. It goes like this:
The speech recognition object is in continuous mode, and whenever the onerror: 'no-speech' or onend events fire, it restarts. This way, the extension is constantly waiting to accept input and reacts whenever a command is issued, even after 5 minutes of silence.
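In outline, the restart logic looks roughly like this (simplified sketch; webkitSpeechRecognition is Chrome's Web Speech API constructor):

// Simplified sketch of the restart pattern; command dispatch and
// error handling are omitted.
declare const webkitSpeechRecognition: { new (): any };

const recognition = new webkitSpeechRecognition();
recognition.continuous = true;

recognition.onresult = (event: any) => {
  // handle the recognized voice command here
};

recognition.onend = () => {
  // onend also fires after errors such as 'no-speech',
  // so restarting here keeps the extension listening indefinitely
  recognition.start();
};

recognition.start();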
After a few days of development, today I reached the point where I was testing it in practical use, and I found that after a little while (and with no change on my part), my onend event started firing constantly. Looking at the console, I would see 18,000 requests being made in the space of three seconds, all being instantly denied, thus triggering onend and restarting the request.
I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that.
Are my suspicions correct? Am I getting request limited?
Are my suspicions correct? Am I getting request limited?
Yes
I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that.
To hide the IP source of your request you can use anonymizer networks like Tor, though it will not be fast.
It's naive to assume Google will spend resources to process all audio being recorded on your system. In your application development it is better to rely on an API which provides at least some guarantees. That could be either a commercial API or an open-source implementation like CMUSphinx.
With CMUSphinx, you can also properly implement command keyword detection and increase accuracy by specifying the grammar of the commands.
You could also use a Voice Activity Detection (VAD) algorithm to detect when a user is talking. This can be done by setting either a volume threshold or a frequency threshold (the fundamental frequency of human speech is usually below 400 Hz, for example). This way, you won't send useless requests to Google unless those conditions are met. I would not recommend using Tor, as this would significantly increase latency. CMUSphinx is probably the best local option, but if you still want to use a web-based service, I would recommend either using a Voice Activity Detection algorithm or finding a different web-based service.
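As a rough sketch of the volume-threshold approach in the browser (using the Web Audio API; the threshold, the poll interval and the onSpeech callback are placeholders to tune or replace):

// Rough sketch: only trigger recognition when the microphone level
// crosses a threshold. THRESHOLD and the interval need tuning.
async function listenWithVad(onSpeech: () => void): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  const THRESHOLD = 0.02;            // RMS level treated as "speech"

  setInterval(() => {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    const rms = Math.sqrt(sum / samples.length);
    if (rms > THRESHOLD) {
      onSpeech();                    // e.g. start the speech recognition here
    }
  }, 200);
}

Only when the RMS level crosses the threshold would you kick off a recognition request, which avoids hammering the server during silence.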

Can I crack a USB security dongle?

I have to develop a plugin for a program that uses a dongle for activation. Just wondering: can I crack the key of the USB dongle, or something else?
I'm sure you can, but you might be running afoul of the various laws regarding the act of reverse engineering content-protection systems. I am, of course, referring to the American DMCA statutes.
In any event, as a pure thought experiment, I might try the following:
Clone the USB firmware image, and load it into a virtual USB port
As you say, crack the key and the USB interface, and short-circuit the check in a virtual USB device.
Locate the part of the code in the program that is doing the security check, and edit the bytecode / machine code to return successful without actually looking for the device.
NOTE: Do not contact me for services related to defeating security systems. I won't do it, and I'll probably lecture you.

Packet capture in RDMA?

Is there any utility like tcpdump in Linux for capturing the traffic going over an RDMA channel? (InfiniBand/RoCE/iWARP)
Old thread, but still:
As Roland pointed out, sniffing RDMA traffic is tricky, because once the endpoints have done the initial handshake, traffic goes through the network card (HCA) directly to memory.
The only way to sniff this traffic without putting a dedicated hardware sniffer on the wire is to have vendor-specific hooks in the network card, and a software tool that uses these hooks.
If you have Mellanox HCAs, you can use the "ibdump" tool. This tool is also part of the Mellanox OFED package.
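Typical usage looks something like this (the device name and port are placeholders; check ibdump --help for the exact options in your MLNX_OFED version):

ibdump -d mlx4_0 -i 1 -w rdma_capture.pcap
# open rdma_capture.pcap in Wireshark afterwards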
If you have another vendor's hardware, you need to check with that vendor - you won't find an open-source packet sniffer that works for all RDMA-capable devices, sorry.
In general, no. One of the main characteristics of RDMA is that all the network processing is done on the adapter, without involving the CPU at all. Typically work requests are queued up directly from userspace to the adapter, without any system call. So there's nowhere for a sniffer to hook in to get traffic.
With that said, for Ethernet protocols, iWARP or IBoE (aka RoCE), you can hook up a system in the middle of a connection and set it up to do forwarding in software (e.g. the Linux bridge module) and then run tcpdump or Wireshark to capture the RDMA traffic that passes through this system. Wireshark even has dissectors for iWARP and IBoE.
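A rough sketch of that setup on the machine in the middle (interface names are placeholders):

# enslave the two ports facing the endpoints to a Linux bridge
ip link add name br0 type bridge
ip link set eth1 master br0
ip link set eth2 master br0
ip link set eth1 up
ip link set eth2 up
ip link set br0 up
# capture the forwarded iWARP/RoCE traffic on the bridge
tcpdump -i br0 -s 0 -w rdma.pcap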
For native InfiniBand it is theoretically possible to build something similar (set up an adapter to capture and forward traffic) but as far as I know, no one has done even the needed firmware or driver work to do basic packet sniffing.
Chelsio's T4 device supports a packet trace feature allowing it to replicate ingress/egress offload packets to one of the device's NIC queues. Then you can use tcpdump or whatever on that ethX interface to see the RDMA or TOE packets.
Wireshark can do it, but the problem is that you need an observation server: with the switch's mirror feature enabled, you should be able to receive the RoCE packets at the observer.
A sure way to capture such traffic is to duplicate it into dedicated capture ports. Those ports might be additional ethernet/IB ports (of additional adapters) in your development machine or they may be located in an additional capture machine.
There are basically two ways to duplicate the traffic:
Configure port mirroring in your switch. Support for port mirroring is pretty common in managed Ethernet switches, even in cheap ones. This feature is also available in some Mellanox InfiniBand switches. You can configure it to mirror both directions of a port into another one, although this oversubscribes the receiver if the mirrored port receives and sends at line speed at the same time (full duplex). In such a situation some frames can't be forwarded to the capture port and are thus dropped. To avoid this limitation you need to mirror each direction into a separate capture port.
Connect your network cable to a TAP (test access point) device that duplicates or splits the signal. With optical networking those TAPs are often constructed in a completely passive way and thus don't add much complexity and are relatively cheap to produce. You need one TAP for each fiber, i.e. you always occupy two capture ports if you want to capture both directions. TAP devices are available for the fibers and connectors commonly used in Ethernet networks. If your InfiniBand hardware uses the same, then you should be able to use the same TAP devices there as well - at least the passive ones.
Once the mirrored/tapped traffic arrives at your capture port(s), you can use standard capture tools such as tcpdump.
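For example, on the capture machine (the interface name is a placeholder):

tcpdump -i eth3 -s 0 -w mirrored_traffic.pcap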
For InfiniBand there is ibdump; however, depending on the InfiniBand software you are using (open-source OFED vs. the proprietary Mellanox OFED) and the host channel adapter (HCA), you might be able to use tcpdump to capture InfiniBand traffic as well.

Tool to test websites

I'm developing on a super-fast fibre-optic connection.
I want a tool that allows me to test web sites at certain preset speeds. For example, I want to feel the experience of my site loading at modem speeds, then perhaps 1 Mbps, 2 Mbps, etc.
Basically I want to be able to set the speed of the connection so that I get the real feel of the site loading remotely from other countries and connections.
Anyone know of such a tool?
WANEM is a nice open-source solution that can simulate network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
It also supports a mode of operation that only uses one network interface, which makes it super quick to set up a test environment.
EDIT
Although WANEM is a Linux application, you only need to burn the bootable CD and start a machine with that CD; there is no need to permanently sacrifice a machine to run WANEM. If even that's not an option, you can also download it as a virtual appliance that runs in VMware Workstation (paid), VMware Player (free) or VMware Server (free).
However, in my opinion (based on real usage of such products) it's really easier to have the "network simulator" on a separate machine instead of loading it on either the server or the client under test. And as explained above, thanks to the bootable CD option that can be any machine you have lying around - we typically use decommissioned desktops and notebooks for this purpose.
There are a lot of tools out there, like:
http://www.netlimiter.com/
http://www.antamediabandwidth.com/
...
Basically, most of them work like proxies.

Design a networking application

Problem:
I need to design a networking application that can handle network failures and switch to another network in case of failure.
Suppose I am using three Ethernet connections and also one wireless connection. At any particular moment only one connection is in use.
How should I design my system so that it can switch to another network in case of failure?
I know this is a very broad question, but any pointers will help!
I'd typically make sure that there's routing on the network and run one (or more) routing protocol instances on the host. That way network failure is (mostly) transparent to the application, as the host OS takes care of sending packets the right way.
On the open-source side, I have good experiences with zebra and quagga, at least on Linux machines.
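To give a flavour of what that looks like with quagga, a minimal OSPF daemon configuration (/etc/quagga/ospfd.conf) might be roughly this; the networks and area are placeholders for your own topology:

hostname ospfd
password zebra
router ospf
 network 10.0.1.0/24 area 0
 network 10.0.2.0/24 area 0

With a routing protocol like this running, a failed link is withdrawn from the routing table and traffic moves to a remaining path without the application having to notice.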
Create a domain model for this, describing the network elements, the kind of failures you want to be able to detect and handle, and demonstrate that it works. Then plug in the network code.
Have one class polling for the connection. If the poll times out, switch the Ethernet settings. For wireless, set the Wi-Fi settings to auto-connect and then just enable/disable the Wi-Fi card.
(But I don't know how you switch the Ethernet connection.)
First thing I would do is look for APIs that will give me network disconnection events.
I'd also find a way to check the state of the network connections.
These would vary depending on the OS and the language used, so you might want to abstract this in your application.
Example:
RegisterDisconnectionEvent(DisconnectionHandler);
function DisconnectionHandler()
{
FindActiveNetworkConnection();
// do something else...
}
A primitive way to do it would be to watch for network disconnection events. Your sequence (sketched in code after this list) would be:
Register for, or poll for, network connection status changes. Maintain a list of all active network connections.
Use the first available network connection (alternatively, you could sort them by interface bandwidth and use the one with the highest bandwidth).
When you detect a down connection, use the next active one.
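For illustration, here is a very rough sketch of that loop using Node.js (the choice of Node and the "active" check are just for the example; a real implementation would also probe reachability before trusting an interface):

import * as os from 'os';

let current: string | null = null;

// "Active" here just means the interface has a non-internal IPv4 address;
// a real implementation would also probe reachability (ping, traceroute, ...).
function activeInterfaces(): string[] {
  const result: string[] = [];
  for (const [name, addrs] of Object.entries(os.networkInterfaces())) {
    if (addrs && addrs.some(a => a.family === 'IPv4' && !a.internal)) {
      result.push(name);
    }
  }
  return result;
}

// Poll every couple of seconds and fail over when the current interface
// drops out of the active list.
setInterval(() => {
  const active = activeInterfaces();
  if (current === null || !active.includes(current)) {
    current = active[0] ?? null;
    console.log('now using interface:', current);
    // re-bind sockets / adjust routes here
  }
}, 2000);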
However, if there are implications for the functionality of your application depending on which network connection you use, you are much better off having either a routing protocol do the job for you, or a tracking component within your application. This tracking component would track network paths (through various methods like ping, traceroute, etc.) across all your available interfaces to see which one can reach the ultimate destination, and use the appropriate network interface.
Also, you could monitor your network interfaces for not just status changes, but also for input/output errors, and change your selection accordingly. This would help you use the most efficient network at any given point of time. But this would need to be balanced with the churn caused by switching a network connection.
If you control all of the involved hosts, Multipath TCP will probe all of your connections and automatically choose the one that works; if multiple connections are working, it will load balance across them.
If you don't control the endpoints, there's no choice but to do the probing in the application. Mosh is an example of an application that does that quite elegantly.
You didn't mention what your application does; perhaps it would be possible to redesign your protocol so that it uses all available connections simultaneously, the way BitTorrent does, and therefore doesn't care about some links being down at any given time?